Solving numerically some nonlinear partial differential equations, for example by using finite elements or finite volumes, often amounts to the resolution of a nonlinear system of equations of the form

\mathcal{F}(\mathbf{u}) = \mathbf{0}, \qquad \mathbf{u} \in \mathbb{R}^N, \qquad (eq:syst)

where N is the number of degrees of freedom and can be large. One of the most popular methods for solving systems of the form (eq:syst) is the celebrated Newton-Raphson method. If this iterative procedure converges, then its limit is necessarily a solution to (eq:syst). However, making Newton's method converge is sometimes difficult and might require great expertise. Nonlinear preconditioning techniques have recently been developed to improve the performance of Newton's method.

Complex multiphase or unsaturated porous media flows are often modeled thanks to degenerate parabolic problems; we refer to the literature for an extensive discussion of models of porous media flows. For such degenerate problems, making Newton's method converge is often very difficult. This has led to the development of several strategies to optimize the convergence properties, like for instance the so-called continuation-Newton method, or trust-region based solvers. An alternative approach consists in solving (eq:syst) thanks to a robust fixed-point procedure with linear convergence speed rather than with the quadratic Newton's method. Comparisons between the fixed-point and Newton strategies have been presented in several works, and combinations of both techniques (performing a few fixed-point iterations before running Newton's algorithm) have also been used. Our strategy consists in reformulating the problem before applying Newton's method. The reformulation consists in changing the primary variable in order to improve the behavior of Newton's method. We apply this strategy to the so-called _Richards equation_ modeling the unsaturated flow of water within a porous medium. The extension to more complex models of porous media flows will be the purpose of a forthcoming contribution.

Denote by Ω some open subset of R^d representing the porous medium (in the sequel, Ω will be supposed to be polyhedral for meshing purposes), by T > 0 a finite time horizon, and by Q = Ω × (0,T) the corresponding space-time cylinder. We are interested in finding a saturation profile s : Q → [0,1] and a pressure profile p : Q → R satisfying the Richards equation

∂_t s − ∇·( η(s)(∇p − g) ) = 0 in Q,

where η : [0,1] → R_+ is a nondecreasing function that satisfies η(0) = 0 and η(1) > 0, and where g stands for the gravity vector. In order to ease the reading, we have set the porosity equal to 1 in Ω and neglected the residual saturation. The pressure and the water content are supposed to be linked by some monotone relation

s = S(p) in Q, (eq:capi)

where S is a non-decreasing function from R to [0,1]. Since S may be flat on part of its domain, we define a generalized inverse by

S^{-1}(a) = min { x ≤ 0 | S(x) = a }, ∀ a ∈ (0,1]. (eq:s-1)

This allows us to define an initial datum for the new unknown such that the prescribed initial saturation is recovered. Choosing as the primary variable a parametrization τ of the graph {(p, S(p))}, with s(τ) and p(τ) denoting the corresponding saturation and pressure, leads to a doubly degenerate parabolic equation, which turns into

∂_t s(τ) + ∇·( η(s(τ)) g − ∇u(τ) ) = 0 in Q, (eq:richards-tau)

where u denotes a Kirchhoff-type transform of the pressure, at least if u(τ) is sufficiently regular (this will be ensured, cf. Theorem [thm:weak]). It is relevant to impose the boundary condition

τ |_{∂Ω × (0,T)} = p^{-1}(p_D) =: τ_D on ∂Ω × (0,T), (eq:dirichlet-tau)

as a counterpart of the Dirichlet condition on the pressure. It is finally assumed that τ_D can be extended to the whole Q in a way such that

τ_D ∈ C^1(Q̄). (dirichlet-tau-reg)

The regularity required on τ_D is not optimal and can be relaxed. However, the treatment of the boundary condition is not central to our purpose, hence we stick to (dirichlet-tau-reg).

[def:weak] A measurable function τ is said to be a weak solution to the problem (eq:richards-tau), (eq:dirichlet-tau) if it satisfies the initial and boundary conditions in the appropriate weak sense, and if the equation holds against all smooth test functions.

The following statement summarizes known results about the weak solutions.

[thm:weak] There exists a unique weak solution τ to the problem (eq:richards-tau), (eq:dirichlet-tau) in the sense of Definition [def:weak]. Moreover, 0 ≤ s(τ) ≤ 1 a.e. in Q, and s(τ) enjoys additional time regularity.
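To make the role of the reformulation concrete, the toy script below contrasts Newton's method in the two formulations on a stiff saturation-pressure relation. Everything in it (the logistic law standing in for S, the target saturation, the initial guesses) is an illustrative assumption of ours, not the scheme analyzed in this paper; it only shows why switching the primary variable can rescue Newton's method where S is nearly flat.

```python
import math

def newton(F, dF, x0, tol=1e-12, max_iter=100):
    # Scalar Newton-Raphson for F(x) = 0; returns (root, #iterations).
    x = float(x0)
    for k in range(max_iter):
        r = F(x)
        if abs(r) <= tol:
            return x, k
        x -= r / dF(x)
    raise RuntimeError(f"no convergence, last iterate {x!r}")

# Toy saturation law (a stand-in for the true S of the paper):
S  = lambda p: 1.0 / (1.0 + math.exp(-p))   # monotone, nearly flat tails
dS = lambda p: S(p) * (1.0 - S(p))
s_target = 0.99

# Pressure formulation: from a dry initial state the tiny slope of S
# produces a huge first step and the iteration breaks down.
try:
    p, k = newton(lambda p: S(p) - s_target, dS, x0=-10.0)
    print("p-formulation converged to", p, "in", k, "iterations")
except (RuntimeError, ZeroDivisionError, OverflowError) as e:
    print("p-formulation failed:", e)

# Parametrized formulation: with tau = S(p) as primary unknown the same
# equation is linear, Newton converges in one step, and p is recovered
# through the (here explicit) inverse of the toy law.
tau, k = newton(lambda t: t - s_target, lambda t: 1.0, x0=0.5)
p = math.log(tau / (1.0 - tau))
print("tau-formulation:", tau, "-> p =", p, "in", k, "iterations")
```

The same mechanism is what the parametrization of the graph {(p, S(p))} exploits: the primary variable is chosen so that the discrete system keeps a well-conditioned Jacobian in both the saturated and unsaturated regimes.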
Thanks to Corollary [coro:column], it suffices to check that the Jacobian J^n(τ) of the discrete system is a column-wise δ-M-matrix, where the relevant constants depend only on the prescribed quantities. First, it is easy to check that

J^n_{lk}(τ) ≤ 0 for l ≠ k, Σ_l J^n_{lk}(τ) ≥ 0, and 0 < δ_1 ≤ J^n_{kk}(τ), (eq:(i))

therefore condition (i) in Definition [def:ddm] is fulfilled. It follows from the non-degeneracy condition on the parametrization that, for all n and all k, either s'(τ_k^n) > 0 or u'(τ_k^n) > 0. A case analysis on the cells where one of these derivatives vanishes then shows that the matrix built from the off-diagonal entries of J^n is irreducible and admits transmissive paths between neighboring cells, which ensures that J^n admits a transmissive path from any cell to any cell. Therefore, condition (ii) of Definition [def:ddm] is fulfilled. J^n is then a row-wise δ-M-matrix, thus its transpose is a column-wise δ-M-matrix.

The following corollary is a straightforward combination of the Newton-Kantorovich theorem [thm:kanto] with Proposition [prop:jac].

[coro:local-conv] There exists r > 0 such that the Newton method converges as soon as the initial guess lies within distance r of the solution.

The radius r appearing in Corollary [coro:local-conv] can be estimated as soon as the second-order derivatives s'' and u'' of s and u are uniformly bounded. It appears in a fairly natural way that r is a non-decreasing function of the time step. Then, choosing the time step small enough, the variation of the solution between the time steps t_n and t_{n+1} is small, and therefore the convergence of the Newton method is ensured if one uses an adaptive time-step algorithm.

Assume that τ^{n-1} is exactly known; then the exact solution τ^n is obtained as the limit of the Newton iterates. Computing the exact value of τ^n is impossible, and a convenient criterion must be adopted in order to stop the iterative procedure. This yields errors that accumulate along time. The goal of this section is to quantify the error induced by the inexact resolution of the nonlinear system. In this section, we assume that τ^{n-1} is exact, and we want to quantify the error corresponding to one single time step. In what follows, we consider the following residual-based stopping criterion:

‖F^n(τ^{n,k})‖_1 = Σ_k |F_k(τ^{n,k})| ≤ ε^n (eq:stopping)

for some prescribed tolerance ε^n. We denote by τ^n the iterate at which (eq:stopping) is fulfilled and the loop is stopped. The non-degeneracy of the parametrization then ensures that the computed iterates remain in the physically admissible range, in view of the initial and boundary conditions. In addition, in view of the local conservativity of the finite volume scheme, the flux contributions globally offset at each step of the inexact Newton method; summing the residuals over all the cells then shows that the mass is exactly conserved (assuming that linear algebraic computations are exact) at each iteration of Newton's method.
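The stopping test (eq:stopping) and the adaptive time-stepping alluded to above combine naturally into the following driver loop. This is only a schematic sketch: the factory callables `make_F`/`make_J`, the growth and shrink factors, and the tolerance are placeholders of ours, not quantities prescribed by the paper.

```python
import numpy as np

def newton_with_l1_stop(F, J, tau0, eps_n, max_iter=30):
    """Newton iterations stopped by the l1 residual criterion
    ||F(tau)||_1 <= eps_n; returns (iterate, success flag)."""
    tau = np.asarray(tau0, dtype=float).copy()
    for _ in range(max_iter):
        r = F(tau)
        if np.abs(r).sum() <= eps_n:
            return tau, True
        tau -= np.linalg.solve(J(tau), r)
    return tau, False

def time_loop(make_F, make_J, tau, t_end, dt, eps_n=1e-8):
    # Halve the step when Newton fails, grow it gently after a success:
    # one simple flavor of the adaptive time-step strategy mentioned above.
    t = 0.0
    while t < t_end:
        dt = min(dt, t_end - t)
        F, J = make_F(tau, dt), make_J(tau, dt)
        tau_new, ok = newton_with_l1_stop(F, J, tau, eps_n)
        if ok:
            tau, t, dt = tau_new, t + dt, 1.25 * dt
        else:
            dt *= 0.5
    return tau
```

The l1 norm aggregates the cell-wise residuals, which is the natural choice here given the mass-balance structure of the scheme.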
References:
K. Brenner, M. Groza, L. Jeannin, R. Masson, and J. Pellerin. Immiscible two-phase Darcy flow model accounting for vanishing and discontinuous capillary pressures: application to the flow in fractured porous media. 2016, to appear.
C. Cancès and C. Guichard. Numerical analysis of a robust free energy-diminishing finite volume scheme for degenerate parabolic equations with gradient structure. Preprint HAL: hal-01119735, accepted for publication in Found. Comput. Math., 2016.
J. Droniou, R. Eymard, T. Gallouët, and R. Herbin. Gradient schemes: a generic framework for the discretisation of linear, nonlinear and nonlocal elliptic and parabolic equations. Math. Models Methods Appl. Sci., 23(13):2395-2432, 2013.
R. Eymard, T. Gallouët, and R. Herbin. Discretization of heterogeneous and anisotropic diffusion problems on general nonconforming meshes SUSHI: a scheme using stabilization and hybrid interfaces. IMA J. Numer. Anal., 30(4):1009-1043, 2010.
F.A. Radu, I.S. Pop, and P. Knabner. Newton-type methods for the mixed finite element discretization of some degenerate parabolic equations. In _Numerical Mathematics and Advanced Applications_, pages 1192-1200. Springer, 2006.

Konstantin Brenner
Laboratoire Jean-Alexandre Dieudonné, Université de Nice Sophia Antipolis,
Team COFFEE, INRIA Sophia Antipolis Méditerranée,
06108 Nice Cedex 02, France.
The nonlinear systems obtained by discretizing degenerate parabolic equations may be hard to solve, especially with Newton's method. In this paper, we apply to Richards equation a strategy that consists in defining a new primary unknown for the continuous equation in order to stabilize Newton's method by parametrizing the graph linking the pressure and the saturation. The resulting form of Richards equation is then discretized thanks to a monotone finite volume scheme. We prove the well-posedness of the numerical scheme. Then we show, under appropriate non-degeneracy conditions on the parametrization, that Newton's method converges locally and quadratically. Finally, we provide numerical evidence of the efficiency of our approach.

Keywords: Richards equation, finite volumes, Newton's method, parametrization.
AMS subject classifications: 65M22, 65M08, 76S05.
The Google Books data set is captivating both for its availability and its incredible size. The first version of the data set, published in 2009, incorporates over 5 million books. These are, in turn, a subset selected for quality of optical character recognition and metadata (e.g., dates of publication) from 15 million digitized books, largely provided by university libraries. These 5 million books contain over half a trillion words, 361 billion of which are in English. Along with separate data sets for American English, British English, and English fiction, the first version also includes Spanish, French, German, Russian, Chinese, and Hebrew data sets. The second version, published in 2012, contains 8 million books with half a trillion words in English alone, and also includes books in Italian. The contents of the sampled books are split into case-sensitive n-grams, which are typically blocks of text separated into n = 1, ..., 5 pieces by whitespace (e.g., "I" is a 1-gram, and "I am" is a 2-gram). While n-gram lengths range from one to five, there are some exceptions to this rule of thumb. For example, in the 2009 version, "I am." is tokenized as the 3-gram "I am ." The period is counted in the corpus as a 1-gram, as are other punctuation marks. Similarly, contractions are often tokenized in pieces, so the 2-gram "I've been" becomes "I 've been," a 3-gram.

A central, if subtle and deceptive, feature of the Google Books corpus, and of others composed in a similar fashion, is that the corpus is a reflection of a library in which only one of each book is available. Ideally, we would be able to apply different popularity filters to the corpus. For example, we could ask to have n-gram frequencies adjusted according to book sales in the UK, library usage data in the US, or how often each page in each book is read on Amazon's Kindle service (all over defined periods of time). Evidently, incorporating popularity in any useful fashion would be an extremely difficult undertaking on the part of Google. We are left with the fact that the Google Books library has ultimately been furnished by the efforts and choices of authors, editors, and publishing houses, who collectively aim to anticipate or dictate what people will read. This adds a further distancing from "true culture," as the ability to predict cultural success is often rendered fundamentally impossible by social influence processes: we have one seed for each tree but no view of the real forest that will emerge. We therefore observe that the Google Books corpus encodes only a small-scale kind of popularity: how often n-grams appear in a library with all books given (in principle) equal importance and tied to their year of publication (new editions and reprints allow some books to appear more than once). The corpus is thus more akin to a lexicon for a collection of texts, rather than the collection itself. But problematically, because Google Books n-grams do have frequency of usage associated with them based on this small-scale popularity, the data set readily conveys an illusion of large-scale cultural popularity. An n-gram which declines in usage frequency over time may in fact become more often read by a particular demographic focused on a specific genre of books.
For example, "Frodo" first appears in the second Google Books English fiction corpus in the mid-1950s and declines thereafter in popularity, with a few resurgent spikes. While this limitation to small-scale popularity tempers the kinds of conclusions we can draw, the evolution of n-grams within the Google Books corpus (their relative abundance, their growth and decay) still gives us a valuable lens into how language use and culture have changed over time. Our contribution here will be to show: 1. a principled approach for exploring word and phrase evolution; 2. how the Google Books corpus is challenged in other, orthogonal respects, particularly by the inclusion of scientific and medical journals; and 3. how future analyses of the Google Books corpus should be considered. For ease of comparison with related work, we focus primarily on 1-grams from selected English data sets between the years 1800 and 2000. In this work, we will use the terms "word" and "1-gram" interchangeably for the sake of convenience. The total volume of (non-unique) English 1-grams grows exponentially between these years, as shown in Fig. [fig:volume], except during major conflicts (e.g., the American Civil War and both World Wars), when the total volume dips substantially. We also observe a slight increase in volume between the first and second versions of the unfiltered English data set. Between the two English fiction data sets, however, the total volume actually decreases considerably, which indicates insufficient filtering was used in producing the first version, and immediately suggests the initial English fiction data set may not be appropriate for any kind of analysis. The simplest possible analysis involving any Google Books data set is to track the relative frequencies of a specific set of words or phrases; a sketch of this computation is given below. Examples of such analyses involve words or phrases surrounding individuality, gender, urbanization, and time, all of which are of profound interest. However, the strength of all conclusions drawn from these must take into account both the number of words and phrases in question (anywhere from two to twenty or more at a time) and the sampling methods used to build the Google Books corpus. Many researchers have carried out broad analyses of the Google Books corpus, examining properties and dynamics of entire languages. These include analyses of Zipf's and Heaps' laws as applied to the corpus, the rates of verb regularization, rates of word introduction and obsolescence and durations of cultural memory, as well as an observed decrease in the need for new words in several languages. However, these studies also appear to take for granted that the data sets sample in a consistent manner from works spanning the last two centuries. Analysis of the emotional content of books suggests a lag of roughly a decade between exogenous events and their effects in literature, complicating the use of the Google Books data sets directly as snapshots of cultural identity. As we will demonstrate, an assumption of unbiased sampling of books is not reasonable during the last century, and especially during recent decades, which is of particular importance to all analyses concerned with recent social change.
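As a concrete illustration of such frequency tracking, the sketch below assembles a normalized yearly frequency series for one word from raw count files. It assumes the tab-separated layout used by the publicly released version 2 n-gram files (ngram, year, match_count, volume_count) together with a simplified per-year totals file; the file names and the totals format are assumptions of ours.

```python
import csv
from collections import defaultdict

def frequency_series(ngram_path, totals_path, word):
    # Per-year total 1-gram volume (simplified: 'year<TAB>total' lines).
    totals = {}
    with open(totals_path) as f:
        for year, total in csv.reader(f, delimiter="\t"):
            totals[int(year)] = int(total)
    # Sum match counts for the word; matching is case-sensitive,
    # exactly as in the corpus itself ("Figure" != "figure").
    counts = defaultdict(int)
    with open(ngram_path) as f:
        for gram, year, match_count, _volumes in csv.reader(f, delimiter="\t"):
            if gram == word:
                counts[int(year)] += int(match_count)
    return {y: counts[y] / totals[y] for y in sorted(totals)}

# e.g. frequency_series("googlebooks-eng-all-1gram.tsv", "totals.tsv", "Frodo")
```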
Since parsing in the data sets is case-sensitive, we can give a suggestive illustration of this sampling bias in Fig. [fig:ffig], which displays the relative (normalized) frequencies of "figure" versus "Figure" in both versions of the corpus and for both English and English fiction. In both versions of the English data set, the capitalized version, "Figure," surpasses its lowercase counterpart during the 1960s. Since the majority of books in the corpus originated in university libraries, a major effect of scientific texts on the dynamics of the data set is quite plausible. This trend is also apparent, albeit delayed, in the first version of the English fiction data set, which again suggests insufficient filtering during the compilation process for that version. Because of Google Books' library-like nature, authors are not represented equally or by any measure of popularity in any given data set, but instead roughly according to their own prolificacy. This leaves room for individual authors to have noteworthy effects on the dynamics of the data sets, as we will demonstrate in Section [sec:discussion]. Lastly, due to copyright laws, the public data sets do not include metadata (see Supporting Online Material), and the data are truncated to avoid inference of authorship, which severely limits any analysis of censorship in the corpus. Under these conditions, we will show that much caution must be used when employing these data sets (with a possible exception of the second version of English fiction) to draw cultural conclusions from the frequencies of words or phrases in the corpus.

We structure the remainder of the paper as follows. In Sec. [sec:methods], we describe how to use Jensen-Shannon divergence to highlight the dynamics over time of both versions of the English and English fiction data sets, paying particular attention to key contributing words. In Sec. [sec:discussion], we display and discuss examples of these highlights, exploring the extent of the scientific literature bias and issues with individual authors; we also provide a detailed inspection of some example decade-decade comparisons. We offer concluding remarks in Section [sec:conc].

We examine the dynamics of the Google Books corpus by calculating the statistical divergence between the distributions of 1-grams in two given years. A commonly used measure of statistical divergence is Kullback-Leibler (KL) divergence, based on which we use a bounded, symmetric measure. Given a language with W unique words and 1-gram distributions P in the first year and Q in the second, the KL divergence between P and Q can be expressed as

D^{KL}(P ‖ Q) = Σ_{i=1}^{W} p_i log_2 (p_i / q_i),

where p_i is the probability of observing the i-th 1-gram randomly chosen from the 1-gram distribution for the first year, and q_i is the probability of observing the same word in the second year. The unit of KL divergence is bits, and it may be interpreted as the average number of bits wasted if a text from the first year is encoded efficiently, but according to the distribution from the latter, incorrect year.
To demonstrate this, we may rewrite the previous equation as

D^{KL}(P ‖ Q) = −H(P) + Σ_i p_i log_2 (1/q_i),

where H(P) = −Σ_i p_i log_2 p_i is the Shannon entropy, also the average number of bits required per word in an efficient encoding for the original distribution; the remaining term is the average number of bits required per word in an efficient, but mistaken, encoding of a given text. However, if a single (say, the i-th) 1-gram in the language exists in the first year but not in the second, then q_i = 0, and the divergence diverges. Since this scenario is not extraordinary for the data sets in question, we instead use the Jensen-Shannon divergence (JSD), given by

D^{JS}(P ‖ Q) = ½ D^{KL}(P ‖ M) + ½ D^{KL}(Q ‖ M), (eq:jsd)

where M = ½(P + Q) is a mixed distribution of the two years. This measure of divergence is bounded between 0, when the distributions are the same, and 1 bit, in the extreme case when there is no overlap between the 1-grams in the two distributions. If we begin with a uniform distribution of species and replace a fraction f of those species with entirely new ones, the JSD between the original and new distribution is f, the proportion of species replaced. The JSD is also symmetric, which is an added convenience. The JSD may be expressed as

D^{JS}(P ‖ Q) = H(M) − ½ H(P) − ½ H(Q),

from which it is apparent that a similar waste analogy holds as with KL divergence, with the mixed distribution M taking the place of the approximation regardless of the year a text was written. The form for Jensen-Shannon divergence given in Eq. (eq:jsd) can be broken down into contributions from individual words, where the contribution from the i-th word to the divergence between two years is given by

D_i = ½ [ p_i log_2 (p_i / m_i) + q_i log_2 (q_i / m_i) ]. (eq:contribution-messy)

Some rearrangement gives, with m_i = ½(p_i + q_i) and r_i the ratio between the smaller probability (without loss of generality, q_i) and the average,

D_i = (m_i / 2) [ (2 − r_i) log_2 (2 − r_i) + r_i log_2 r_i ], where r_i = q_i / m_i, (eq:contribution)

so that the contribution from an individual word is proportional to the average probability of the word, with a proportion that depends on the ratio r_i. Words with larger average probability yield greater contributions, as do those with smaller ratios r_i between the smaller and the average probability. So while a common 1-gram such as "the," "if," or a period changing subtly can have a large effect on the divergence, so can an uncommon (or entirely new) word, given a sufficient shift from one year to the next. The size of the contribution relative to the average probability is displayed in Fig. [fig:contribution_curve] for ratios ranging from 0 to 1. D_i is symmetric under exchange of p_i and q_i, so no novel behavior is lost by omitting the case where q_i is the larger probability. The maximum possible contribution (in bits) is precisely the average probability of the word in question, which occurs if and only if the smaller probability is 0. No contribution is made if and only if the probability remains unchanged.

[Figure fig:contribution_curve caption: as a function of the ratio r_i between the smaller relative probability of an element and the average, the curve shows the proportion of the average probability contributed to the Jensen-Shannon divergence (see Eqs. (eq:contribution-messy) and (eq:contribution)). In particular, if r_i = 1 (no change), the contribution is zero; if r_i = 0, the contribution is half the word's probability in the distribution in which it occurs with nonzero probability.]
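The following sketch implements the JSD of Eq. (eq:jsd) together with its per-word decomposition, and checks the species-replacement property stated above (replacing a fraction f of a uniform population gives a divergence of exactly f bits). The arrays are assumed to be aligned over a common vocabulary; this is our own minimal rendition, not the authors' code.

```python
import numpy as np

def jsd_with_contributions(p, q):
    """Jensen-Shannon divergence (bits) between two aligned 1-gram
    distributions, plus the per-word contributions of Eq. (eq:contribution)."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    m = 0.5 * (p + q)
    def half_kl(a):                      # 0*log(0/m) -> 0 by convention
        out = np.zeros_like(a)
        nz = a > 0
        out[nz] = 0.5 * a[nz] * np.log2(a[nz] / m[nz])
        return out
    contrib = half_kl(p) + half_kl(q)
    return contrib.sum(), contrib

# Uniform distribution over n species; replace a fraction f with new ones.
n, f = 1000, 0.3
shared = int((1 - f) * n)
size = 2 * n - shared
p = np.zeros(size); p[:n] = 1.0 / n
q = np.zeros(size); q[:shared] = 1.0 / n; q[n:] = 1.0 / n
total, contrib = jsd_with_contributions(p, q)
print(round(total, 6))                  # 0.3 bits = f, as claimed
print(np.argsort(-contrib)[:5])         # indices of the top contributors
```

Sorting the contributions, as in the last line, is exactly how the "word shift" rankings discussed below are produced.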
We coarse-grain the data at the level of decades (e.g., between 1800-to-1809 and 1990-to-1999) by averaging the relative (normalized) frequency of each unique word in a given decade over all years in that decade. (Each year is weighted equally.) This allows convenient calculation and sorting of contributions to the divergence of individual 1-grams between any two time periods.

[Figure fig:1g_heatmaps caption: JSD between the 1-gram distributions of every pair of years. The dashed lines highlight the divergences to and from the year 1880, which are featured in Fig. [fig:1g_divcuts]. The off-diagonal elements represent divergences between consecutive years, as in Fig. [fig:1g_divcuts2]. The color represents the percentage of the maximum divergence observed in the given time range for each data set. The divergence between a year and itself is zero. For any given year, the divergence increases with the distance (number of years) from the diagonal, sharply at first, then gradually. Interesting features of the maps are the presence of two cross-hairs in the first half of the 20th century, which strongly suggests a wartime shift in the language, as well as an asymmetry that suggests a particularly high divergence between the first half-century and the last quarter-century observed.]

Fig. [fig:1g_heatmaps] shows the JSD between the 1-gram distributions for every pair of years between 1800 and 2000, contributed by 1-grams present above a threshold normalized frequency, for both versions of the English and English fiction data sets. A major qualitative aspect apparent from the heatmaps is a gradual increase in divergence with differences in time (the lexicon underlying Google Books steadily evolves), though this is strongly curtailed for the second English fiction corpus. We see the heatmaps are "pinched" toward the diagonal in the vicinities of the two World Wars. Also visible is an asymmetry that suggests a particularly high divergence between the first half-century and the last quarter-century observed. We examine these effects more closely in Figs. [fig:1g_divcuts] and [fig:1g_divcuts2] by taking two slices of the heatmaps. We specifically consider the divergences of each year compared with 1880 (dashed lines), and the divergences between consecutive years (off-diagonal). To verify qualitative consistency, we also include analogous contribution curves using a more restrictive threshold.

[Figure fig:1g_divcuts caption: divergences to and from 1880. Contributions are counted for all words appearing above a threshold in a given year; the dashed curves use a more restrictive threshold. Typical behavior in each case consists of a relatively large jump between one year and the next, with a more gradual rise afterward (in both directions). Exceptions include wartime, particularly the two World Wars, during which the divergence is greater than usual; after the conclusion of these periods, the cumulative divergence settles back to the previous trend. Initial spikiness in (d) is likely due to low volume.]
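A minimal version of the decade-level pipeline just described (and of the pairwise comparisons behind the heatmaps) can be written as follows, reusing `jsd_with_contributions` from the previous sketch; the data layout (a dict mapping years to word-frequency dicts) is an assumption of ours.

```python
import numpy as np
from collections import defaultdict

def decade_distributions(yearly):
    """Average each word's normalized frequency over the years of its
    decade, weighting years equally; yearly maps year -> {word: freq}."""
    sums, counts = defaultdict(lambda: defaultdict(float)), defaultdict(int)
    for year, dist in yearly.items():
        d = (year // 10) * 10
        counts[d] += 1
        for w, fr in dist.items():
            sums[d][w] += fr
    return {d: {w: s / counts[d] for w, s in ws.items()}
            for d, ws in sums.items()}

def decade_jsd(dist_a, dist_b):
    # Align the two decade vocabularies, then apply the JSD sketch above.
    vocab = sorted(set(dist_a) | set(dist_b))
    p = np.array([dist_a.get(w, 0.0) for w in vocab])
    q = np.array([dist_b.get(w, 0.0) for w in vocab])
    total, contrib = jsd_with_contributions(p / p.sum(), q / q.sum())
    top = [vocab[i] for i in np.argsort(-contrib)[:60]]
    return total, top      # divergence plus its top-60 contributing words
```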
[Figure fig:1g_divcuts2 caption: divergences between consecutive years. For the solid curves, contributions are counted for all words appearing above a threshold in a given year; the dashed curves use a more restrictive threshold. Divergences between consecutive years typically decline through the mid-19th century, remain relatively steady until the mid-20th century, then continue to decline gradually over time.]

While the initial divergence between any two consecutive years is noticeable, the divergence increases (for the most part) steadily with the time difference. The cross-hairs from the heatmap resolve into wartime bumps in divergence, which quickly settle in peacetime. The larger boost to the divergence in recent decades, however, is more persistent, suggesting a more fundamental change in the data set, which we will examine in more depth later in this section. Divergences between consecutive years typically decline through the mid-19th century. Divergences then remain relatively steady until the mid-20th century, then continue to decline gradually over time, which may be consistent with previous findings of decreased rates of word introduction and increased rates of word obsolescence in many Google Books data sets over time, and a slowing down of linguistic evolution as the vocabulary of a language expands. The initial spikes in divergence in the second version of the fiction data set are likely due to the lower initial volume observed in Fig. [fig:volume]. We present "word shifts" for a few examples of inter-decade divergences in Figs. [fig:barsalolnew30]-[fig:barsficnew50], specifically comparing the 1940s to the 1930s and the 1980s to the 1950s for the first unfiltered English data set (Figs. [fig:barsalolnew30]-[fig:barsallnew50]) and both English fiction data sets (Figs. [fig:barsficold30]-[fig:barsficnew50]). We provide a full set of such comparisons in the Supporting Information files S1, S2, S3, and S4. For each of the four data sets, the largest contributions to all divergences generally appear to come from increased relative frequencies of use of words between decades. For the unfiltered data sets, these are in turn heavily influenced by increased mention of years, which is less pronounced for English fiction. The 1940s literature, unsurprisingly, features more references to Hitler and war than the 1930s, along with other World War II-related military and political terms. This is seen regardless of the specific data set used and is fairly encouraging. Curiously, regardless of the specific data set, a noticeable contribution is given by an increase in relative use of the words "Lanny" and "Budd," in reference to one character (Lanny Budd) frequently written about by Upton Sinclair during that decade. In the fiction data sets, this character dominates the charts. A comparison of the 1930s and 1940s for the second version of the unfiltered English data set (Fig. [fig:barsalolnew30]) shows dynamics dominated by references to years. (The first version is similar; for analogous figures see the SI files S1, S2, S3, and S4.)
Eight of the top ten contributions to the divergence between those decades are due to increased relative frequencies of use of individual years between 1940 and 1949, their contributions decreasing chronologically, and the other two top-ten words are the last two years of the previous decade ("1948" and "1949" appear at ranks 15 and 34, respectively). The last three years of the 1920s also appear, by way of decreased relative frequency of use, in the top 60 contributions. Other notable differences include:

* The 11th highest contribution is from "war," which increased in relative frequency.
* "Hitler" and "Nazi" (increased relative frequencies) are ranked 18th and 26th, respectively.
* Parentheses (13th and 14th) show increased relative frequencies of use.
* Personal pronouns show decreased relative frequencies of use.
* The word "king" (41st) also shows a decreased relative frequency, possibly due to the British line of succession.

The top two contributions between the 1950s and the 1980s in the English data set (see Fig. [fig:barsallnew50]) are both parentheses, which show dramatically increased relative frequencies of use. Further:

* Combined with increased relative frequencies for the colon (4th), the solidus/virgule (or forward slash) (14th), "computer" (32nd), and square brackets (58th and 59th), this suggests that the primary changes between the 1950s and the 1980s are due specifically to computational sources.
* Other technical words showing noticeable increases include "model" (34th), "data" (35th), "percent" and the percentage sign (37th and 39th), "figure" (40th), "technology" (51st), and "information" (56th).
* Similarly to the divergence between the 1930s and 1940s, 19 out of the top 30 places are accounted for by increased relative frequencies of use of years between 1968 and 1980.
* The words "the" (3rd), "of" (8th), and "which" (16th) all decrease noticeably in relative frequency and are the highest-ranked alphabetical 1-grams.
* Unlike the divergence between the 1930s and 1940s, only masculine pronouns show decreases in the top 60, while "women" (55th) increases.

The first version of English fiction shows dynamics similar to the second version of the unfiltered data set between the 1930s and the 1940s (see Fig. [fig:barsficold30]), with yearly mentions dominating the ranks. Some exceptions include:

* "Lanny" rising in rank from 49th to 8th.
* Parentheses falling from 13th and 14th to 36th and 37th.
* "ml" (increased relative frequency of use in the 1940s) falling from 31st to 55th.
* "radio" (with increased relative frequency) rising from 51st to 30th.
* "king" no longer appearing in the top 60 contributions.
* "patient" entering the top 60 (ranked 51st).

This similarity between the original English fiction data set and the unfiltered data set also appears in the divergence between the 1950s and the 1980s (see Fig. [fig:barsficold50]), with parentheses and years dominating. Moreover, "patients" ranks 13th (with increased relative frequency of use) despite not appearing in the top 60 for the unfiltered data set.
These observations, combined with increases in "levels" (47th), "drug" (51st), "response" (55th), and "therapy" (56th), demonstrate that the original fiction data set is strongly influenced by medical journals. Therefore, this data set cannot be considered as primarily fiction, despite the label. Fortunately, the same is not true for the second version of the English fiction data set. This is quickly apparent upon inspection of the two greatest contributions to the divergence between the 1930s and the 1940s (see Fig. [fig:barsficnew30]). The first of these is due to a dramatic increase in the relative frequencies of use of quotation marks, which implies increased dialogue. The second is the name "Lanny," in reference to the recurring character Lanny Budd from 11 Upton Sinclair novels published between 1940 and 1953. "Budd" ranks 11th in the chart, ahead of "Hitler" (13th). The normalized frequency series for "Lanny" and "Hitler" provided in Fig. [fig:lannyhitlerfig] demonstrate that Lanny received more mention than Hitler during this time period. The chart is littered with the names of fictional characters:

* Studs Lonigan, the 1930s protagonist of a James T. Farrell trilogy, secures the 12th spot. (Naturally, he is mentioned fewer times during the 1940s.)
* Dinny Cherrel from the 1930s _The Forsyte Saga_ by John Galsworthy secures rank 15.
* Wang Yuan from the 1930s _The House of Earth_ trilogy by Pearl S. Buck ranks 17th and 37th.
* Detective Bill Weigand, a recurring character created by Richard Lockridge in the 1940s, secures rank 33.
* The eponymous, original Asimov robot from the 1940 short story "Robbie" ranks 19th.
* "Mama" (ranked 48th) is none other than the subject of _Mama's Bank Account_, published in 1943 by Kathryn Forbes.
* "Saburov" (ranked 22nd) from _Days and Nights_ by Konstantin Simonov and "Diederich" (ranked 45th) from _Der Untertan_ by Heinrich Mann are subjects of works translated into English in the 1940s.

We note that Marcel Proust (56th and 33rd), who died in 1922, may be present in the 1940s due to letters translated by Mina Curtiss in 1949 or other references not technically fiction. Similarly, "B.M." (18th) may refer to the author B. M. Bower. Thus, the vast majority of prominent words in the word shift may be traced not only to authors of fiction, but to the content of their work. Moreover, the greatest contributions to divergence appear to correspond to the most prolific authors, particularly Upton Sinclair. While there are no names of characters in the top divergences between the 1950s and the 1980s, the updated fiction data set (Fig. [fig:barsficnew50]) displays far more variety than the original version, including:

* Decreases in relative frequencies of masculine pronouns (e.g., "he," rank 19, and "himself," rank 48) and corresponding increases for feminine pronouns (e.g., "her," 3rd; "she," 5th; and "She," 6th). We present time series for "he" and "she" in Fig. [fig:heshefig].
* An increase in relative frequencies of contractions (see ranks 9, 15, and 21).
* A decrease in "shall" (16th) and "must" (49th), and a variety of increased profanity (particularly ranks 33 and 51).
* Decreases in "Mr." (10th) and "Mrs." (17th).
* Various shifts in punctuation, particularly fewer semicolons (1st) and more periods (2nd). Quotation (11th) and question (18th) marks both see increased relative frequencies of use in the 1980s, and the four-period ellipsis (20th) loses ground to the three-period version (22nd).

As our JSD analysis has shown above, the unfiltered English data sets feature more general scientific terms, and we compare "percent," "data," "figure," and "model" in Fig. [fig:moreseries]. The original fiction data set also features these, but in addition places "patients," "drug," "response," and "therapy" among the top 60 contributions. The primary difference between the unfiltered and original fiction data sets in the 1980s (compared to the 1950s) appears to consist of the nature of the journals sampled; the components observed for this particular data set seem to be dominated by medical journals. As well as having more mentions of time and technical terms (and parentheses) in the 1980s than in the 1950s, both unfiltered versions and the first fiction data set include both "et" and "al" with greater relative frequency in the 1980s. Perhaps more importantly, years do not have a large effect on the dynamics in the second English fiction data set. We see in Fig. [fig:yearsfig] that while peaks for years rise in the unfiltered data, they do not in fiction. The absence of rising peaks in fiction strongly suggests that the rise in peak relative frequencies of years in the larger data set is due to a citation bias arising from heavy sampling of scientific journals. This bias casts strong doubt on conclusions that we as a culture forget things more quickly than we once did, based on the observation that half-lives for mentions of a given year decline over time. The exponential rise in scientific literature is not a new phenomenon, and as de Solla Price stated in 1963 (p. 81) when discussing the half-lives of citations of scientific literature, "in fields embarrassed by an inundation of literature there will be a tendency to bury as much of the past as possible and to cite older papers less often than is their statistical due." It would seem that an explanation for declining half-lives in the mentions of years lies in the dynamics of the memory of scientific discoveries rather than in that of culture. For the second fiction data set, we observe in Fig. [fig:moreseries]b that "computer" gains popularity despite other technical words remaining relatively steady in usage, as we might expect. This should be encouraging for anyone attempting to analyze colloquial English, despite the prolificacy bias apparent from authors such as Upton Sinclair. In the SI files S1, S2, S3, and S4, we include the top 60 contributions to divergences between each pair of the 20 decades in each of the four data sets analyzed in this paper. In total, 760 figures are included (190 per data set), for a grand total of 45,600 contributions. We highlight some of these here.
* For divergences to and from the first decade of the 1800s, many of the contributions are due to a reduction of optical character recognition confusion between the letters 'f' and 's'. For example, in the second unfiltered data set between the 1800s and 1810s, the top two contributions are due to reductions in "fame" and "os," respectively. The word "same" (ranked 11th) is the first increasing contribution. Decreased relative frequencies of "os," "sirst," "thofe," "fo," "fay," "cafe," "fays," "fome," and "faid" (ranks 3 through 10, respectively) and "lise" (12th) all suggest digital misreadings of both 'f' and the long 's'. (The 13th contribution is "Napoleon," who is mentioned with greater relative frequency in the 1810s.)
* Contributions between the 1830s and the 1860s in the second unfiltered data set highlight the American Civil War and its aftermath. "state" (11th), "general" (19th), "states" (20th), "union" (37th), "confederate" (48th), "government" (52nd), "federal" (56th), and "constitution" (59th) all show increased relative frequency of use. Religious terms tend to decline during this period: "church" (14th), "god" (24th), and "religion" (58th).
* Between the 1940s and 1960s, the second unfiltered data set shows increases for "nuclear" (43rd), "Vietnam" (47th), and "communist" (50th). The relative frequency of "war" (25th) decreases substantially. Meanwhile, in fiction, "Lanny" (5th) declines, while "television" (38th) and the Hardy Boys ("Hardy" ranks 51st) appear with greater relative frequencies.
* Between the 1960s and 1970s, the second fiction data set is strongly affected by "Garp" (_The World According to Garp_ by John Irving, 1978) at rank 19, increased relative frequencies of profanity (ranks 27, 33, and 38), and increased mentions of "Nixon" (41st) and "Spock" (47th, likely due to "Star Trek" novels).
* Between the 1980s and 1990s, the second fiction set shows increased relative frequencies of use of the words "gay" (15th), "lesbian" (19th), "AIDS" (24th), and "gender" (27th). Female pronouns (2nd, 8th, and 9th) show increased relative frequencies of use, in continuance of Fig. [fig:barsficnew50].

Based on our introductory remarks and ensuing detailed analysis, it should now be clear that the contents of the Google Books corpus do not represent an unbiased sampling of publications.
Beyond being library-like, the evolution of the corpus throughout the 1900s is increasingly dominated by scientific publications rather than popular works. We have shown that even the first data set specifically labeled as fiction appears to be saturated with medical literature. When examining these data sets in the future, it will therefore be necessary to first identify and distinguish the popular and scientific components in order to form a picture of the corpus that is informative about cultural and linguistic evolution. For instance, one should ask how much of any observed gender shift in language reflects word choice in popular works and how much is due to changes in scientific norms, as well as which might precede the other if they are somewhat in balance. Even if we are able to restrict our focus to popular works by appropriately filtering scientific terms, the library-like nature of the Google Books corpus means the resultant normalized frequencies of words cannot be a direct measure of the "true" cultural popularity of those words as they are read (again, Frodo). Secondarily, not only will there be a delay between changes in the public popularity of words and their appearance in print; normalized frequencies will also be affected by the prolificacy of the authors. In the case of Upton Sinclair's Lanny Budd, a fictional character was vaulted to the upper echelons of words affecting divergence (even surpassing Hitler) by virtue of appearing as the protagonist in 11 novels between 1940 and 1953. Google Books is at best a limited proxy for social information after the fact. The Google Books corpus's beguiling power to immediately quantify a vast range of linguistic trends warrants a very cautious approach to any effort to extract scientifically meaningful results. Our analysis provides a possible framework for improvements to previous and future works, which, if performed on English data, ought to focus solely on the second version of the English fiction data set, or otherwise properly account for the biases of the unfiltered corpus.
It is tempting to treat frequency trends from the Google Books data sets as indicators of the "true" popularity of various words and phrases. Doing so allows us to draw quantitatively strong conclusions about the evolution of cultural perception of a given topic, such as time or gender. However, the Google Books corpus suffers from a number of limitations which make it an obscure mask of cultural popularity. A primary issue is that the corpus is in effect a library, containing one of each book. A single, prolific author is thereby able to noticeably insert new phrases into the Google Books lexicon, whether the author is widely read or not. With this understood, the Google Books corpus remains an important data set, to be considered more lexicon-like than text-like. Here, we show that a distinct problematic feature arises from the inclusion of scientific texts, which have become an increasingly substantive portion of the corpus throughout the 1900s. The result is a surge of phrases typical to academic articles but less common in general, such as references to time in the form of citations. We highlight these dynamics by examining and comparing major contributions to the statistical divergence of English data sets between decades in the period 1800-2000. We find that only the English fiction data set from the second version of the corpus is not heavily affected by professional texts, in clear contrast to the first version of the fiction data set and both unfiltered English data sets. Our findings emphasize the need to fully characterize the dynamics of the Google Books corpus before using these data sets to draw broad conclusions about cultural and linguistic evolution.
Autoassociative networks are useful models of one of the basic operations of cortical networks. 'Hebbian' plasticity on recurrent connections, e.g. in the higher-level areas of sensory cortex and in the hippocampus, is the crucial ingredient for autoassociation to work with real neurons. Neural network models, although very simplified and abstract, allow a comprehensive analysis, indicating whether associative memory retrieval can proceed safely, or whether it must face dynamical hurdles, such as 'spurious' local minima in a free-energy landscape. The dynamics of such networks, in the simplest models, is governed by a number of dynamical attractors, each of which corresponds to a distribution of neural activity, i.e. a pattern, which represents a long-term memory. Memory is stored by superimposed synaptic weight changes, and the basic operation proceeds by supplying the network with an external signal that acts as a cue, correlated, perhaps only weakly, with a pattern, and which leads through attractor dynamics to the retrieval of the full pattern.

How smoothly can such an operation proceed, and how wide are the basins of attraction of the memory states? Clearly, these issues depend critically on whether other attractors exist that could hinder or obstruct retrieval. As a crude example, if the cue is correlated with the image of a mule, the net may be able to retrieve either a horse or a donkey, if no "mixed" attractor exists. If instead the encoding procedure has, unintentionally, created a spurious attractor for the mule itself, the network will likely be stuck in such a mixed memory state. In a slightly more complicated model, endowed with some topographic mapping of visual space, a horse cue and a donkey cue might be presented simultaneously in neighbouring positions. If they are too close in visual space and spurious attractors exist, this topographic map might retrieve two mules next to each other. Returning to nets without spatial structure, and considering for simplicity only symmetric mixtures of patterns embedded with equal strengths, there are obviously 2-mixtures, 3-mixtures, and so on. Do they correspond to stable attractors, and as such do they influence the network dynamics?

In addition, connectionist modelers have proposed to describe certain psychiatric dysfunctions in terms of spurious states. Speech disorders in schizophrenic patients, for instance, might arise from the existence of a large number of spurious states that obstruct the retrieval of correct patterns. In their seminal investigation of the Hopfield model, Amit, Gutfreund and Sompolinsky found that while symmetric mixtures of an even number of patterns are unstable, odd mixtures and the spin glass phase can be stable in a certain region of phase space. In the Hopfield model, though, neurons are modelled as binary units, and correspondingly each distribution of activity, in particular each memory pattern, is a binary vector. Either or both of these aspects might be essential in producing the additional minima in the free-energy landscape. Real neurons behave very differently from binary units in many respects, a basic one being that their spiking activity, once filtered with a short time kernel, is better approximated by an analog variable. Threshold-linear units reproduce this graded nature of neural response, yet still allow for a simple and complete statistical mechanics analysis of autoassociative network models.
With threshold-linear units, the memory patterns encoded in the synaptic weights can still be taken to be binary vectors, but they can also be drawn from a distribution with several discrete activity values, or from a continuous distribution. Exponential distributions, in particular, can be argued to be not far from experimentally observed spike count distributions. The question of mixture states in analog nets was first addressed by arguing that the multiple local minima of the spin glass phase are fewer in number in an associative net of units with a more continuous (sigmoid) transfer function. Later it was found, considering threshold-linear units, which are both realistic and amenable to analytical treatment, that the region of stability of the spin glass phase is severely restricted with such units, again indicative of a general smoothing of the free-energy landscape with analog variables. Although these analyses provide a good starting point, they are not complete, in the sense that they did not show what happens to n-mixture states with small n (the ones relevant to models of schizophrenia), nor what is the effect of different coding schemes, that is, pattern distributions. Here, we consider instead symmetric n-mixtures, with small n, and we consider non-binary memory vectors. Also, from the biological point of view, it is important to study nets with diluted (incomplete) connectivity, which are much more realistic descriptions of cortical and hippocampal networks, where the probability of a recurrent connection between any two units may be of the order of a few percent. In this manuscript we show that symmetric mixture states give rise to dynamical attractors only in very restricted circumstances, in associative networks of threshold-linear units, both with full and diluted connectivity. We have analysed the validity of this statement in different coding schemes, and did not find any stable mixture state at all when memory patterns are not binary. Essentially, we conclude that this type of spurious state is a pathological feature of the simplified binary models considered in the initial studies.

We use a model very similar to those analysed previously. We consider a fully connected network of N units, taken to model excitatory neurons. The level of activity of unit i is a dynamical variable v_i, which corresponds to the short-time averaged firing rate of the neuron. Units are connected to each other through symmetric weights. The specific covariance Hebbian learning rule we consider prescribes that the synaptic weight between units i and j be given as

J_{ij} = (1/N) Σ_{μ=1}^{p} (η_i^μ − a)(η_j^μ − a), up to an overall normalization,

where η_i^μ represents the activity of unit i in pattern μ. Each η_i^μ is taken to be a quenched variable, drawn independently from a distribution P(η), with the constraints η ≥ 0 and ⟨η⟩ = a. As in one of the first extensions of the Hopfield model, we thus allow the mean activity a of the patterns to differ from the value of the original model. The model further assumes that the input to unit i takes the form

h_i = Σ_{j≠i} J_{ij} v_j + s_i^{ext} + b(t),

where the first term enables the memories encoded in the weights to determine the dynamics; the second term allows for external signals to cue the retrieval of one or several patterns; and the third term is unrelated to the memory patterns, but is designed to regulate the mean activity of the network at any moment in time. The activity of each unit is determined by its input through a threshold-linear function

v_i = g (h_i − θ) Θ(h_i − θ),

where θ is a threshold below which the input elicits no output, g is a gain parameter, and Θ(·) is the Heaviside step function. Units are updated, for example, sequentially in random order, possibly subject to fast noise. The exact details of the updating rule and of the noise are not specified further here, because they do not affect the steady states of the dynamics, and we take the noise level to be vanishingly small, T → 0. Discussions about the biological plausibility of this model for networks of pyramidal cells can be found elsewhere, and will not be repeated here.
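A toy simulation of this model helps fix ideas; a minimal sketch is given below. The pattern statistics, the normalization of the weights, the synchronous update, and the crude activity rescaling that stands in for the regulating term b(t) are all simplifying assumptions of ours, not the paper's exact prescriptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N, P, a, g, theta = 500, 5, 0.2, 1.0, 0.0

# Binary patterns with mean activity a (ternary or exponential coding
# would simply change this sampling step).
eta = (rng.random((P, N)) < a).astype(float)

# Covariance ('Hebbian') weights; one common normalization choice.
J = (eta - a).T @ (eta - a) / (N * a)
np.fill_diagonal(J, 0.0)

def run(v, steps=100):
    # Threshold-linear dynamics v = g[h - theta]_+, updated synchronously
    # for brevity (the paper updates units sequentially); rescaling the
    # mean activity back to a mimics the activity-regulating input b(t).
    for _ in range(steps):
        h = J @ v
        v = g * np.maximum(h - theta, 0.0)
        if v.mean() > 0:
            v *= a / v.mean()
    return v

def overlaps(v):
    # Specific (subtracted) overlaps of the state with each stored pattern.
    return (eta - a) @ v / (N * a)

cue = eta[0] * (rng.random(N) < 0.7)    # degraded version of pattern 0
print(np.round(overlaps(run(cue.astype(float))), 2))  # pattern 0 should dominate
```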
Subject to the above dynamics, the network evolves towards one of a set of attractor states. In a given attractor the network may still wander among a variety of configurations, but it reaches a stationary probability distribution of being in any particular configuration. The average of any quantity over this 'annealed' probability distribution is denoted by angular brackets (whereas double brackets denote the average over the quenched distribution of the patterns). To analyse such a model one can introduce the order parameters

x = (1/N) Σ_i v_i, x^μ = (1/(N a)) Σ_i (η_i^μ − a) v_i,

where x is simply the mean activity of the network, and x^μ the subtracted, or specific, overlap of the current state of the network with each of the stored patterns. Two further parameters can be defined as functions of these, and play a particularly useful role in the analysis in the limit we consider, T → 0, when one configuration dominates the annealed average. The characteristic noise scale of the system is denoted ρ, and we define the storage load α = p/N. In the T → 0 limit, the system is thus characterized by the parameters a (the mean pattern activity, which also parametrizes the coding sparseness, in the sense that decreasing a makes the code sparser), α (storage load), g (gain) and θ (threshold). We calculate the free energy using the replica trick, for symmetric n-mixture states (in which n overlaps take the same non-zero value, and the rest are zero) elicited by external signals. These signals can be purely transient, so that s^{ext} = 0 at steady state, but we consider a non-zero steady value for the sake of generality. We look for symmetric states, characterized by n non-zero overlaps. The saddle-point equations reduce to a closed system for the order parameters; defining a specific signal-to-noise ratio and a sort of uniform field-to-noise ratio, the mean-field equations can be reduced to two coupled equations, Eqs. [e1] and [e2], in these two variables, in which the averages have to be carried out only over the range where units are above threshold. In the following we take s^{ext} = 0. Thus symmetric n-mixture attractors exist if we can find stable solutions of Eqs. [e1], [e2]. To analyze the stability of the extrema of the free energy, one has to study the Hessian matrix around the saddle point.
In general, for n-mixture states, there are three types of eigenvalues:

1. A non-degenerate eigenvalue, which decides the stability against a uniform increase in the amplitude of the patterns that contribute to the thermodynamic state (i.e. the condensed patterns), while the other overlaps remain zero.
2. An eigenvalue of degeneracy n − 1, associated with any direction which tends to change the relative amplitude of the non-zero overlaps.
3. A third eigenvalue, with degeneracy p − n, which measures the stability against the appearance of additional overlaps.

In order to proceed further, we restrict the analysis to a number of specific coding schemes, i.e., to different choices for the distribution P(η): binary, ternary and exponential coding. For small values of the load α (and hence of the quenched noise), Eq. [e2] describes a hyperbola, whose center depends on the value of n. Eq. [e1] instead, for small values of α, is a closed curve in the positive quadrant, so that with an appropriate choice of the parameters the two curves intersect at two points. As α grows, this region shrinks in size, until at a certain value of the load, which depends only on the sparsity and the coding scheme, it reduces to a point and then disappears. We have investigated not just the existence but also the stability of solutions for symmetric 2- and 3-mixture states. The solutions behave in exactly the same manner in these two cases: for small values of the sparsity parameter a, both intersections discussed above are unstable, in the sense that the first two eigenvalues are negative. This finding is confirmed by computer simulation, in which one of the overlaps tends to grow, reaching the corresponding attractor, whereas the other one (or the other two in the case of 3-mixtures) tends to zero. Increasing the value of the sparsity parameter, one finds different results with binary coding and with other types of coding. Let us consider binary coding first. After a range of a-values with only one unstable eigenvalue, one finds a range where genuinely stable solutions can be found. Thus the retrieval of mixture patterns is possible for binary coding, as can be seen in the simulations shown in Fig. [fig1]. The exact stability region in the (a, α) plane differs for 2-mixtures and 3-mixtures. In both cases, it is delimited to the right by the 'critical load', i.e. the value at which the island of solutions shrinks to zero, and to the left by the load below which no intersection with the required stability properties can be found. Fig. [fig2] illustrates these stability regions, compared with the critical load for the pure attractor states. For ternary and exponential coding, the solutions of the saddle-point equations remain unstable even for very high values of the sparsity parameter. Again, this was verified by computer simulations. Fig. [fig3] illustrates the different situations occurring with ternary and binary coding, by considering a very low load and a sparsity value for which stable solutions for 3-mixtures are easily found in the binary case. Note that the 'critical load' for 3-mixtures would be considerably higher with ternary patterns (not shown); the fact is that at each position of the intersection, either the first or the second eigenvalue, or both, turn out to be negative. This complex behaviour of the eigenvalues will be discussed elsewhere in more detail.
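The kind of numerical check referred to in the text can be reproduced with the sketch given earlier: start the network on a symmetric mixture and watch which overlaps survive. This is again a toy diagnostic, not the paper's phase diagram; with the settings used here one overlap typically outgrows the others, which is the signature of an unstable mixture.

```python
# Reusing eta, run() and overlaps() from the earlier sketch: initialize
# the network on a symmetric 3-mixture of the first three patterns.
mix = (eta[0] + eta[1] + eta[2]) / 3.0
v = run(mix.copy(), steps=300)
print(np.round(overlaps(v), 2))   # if the mixture is unstable, one of the
                                  # three overlaps grows and the others decay
```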
the essential difference introduced by the sparse ( i.e. diluted ) connectivity is that noise has less of an opportunity to reverberate along closed loops . in fact the signal , which during retrieval is simply contributed by the ` condensed ' patterns , propagates coherently and proportionally to , independently of the density of feedback loops in the network . the fluctuations in the overlaps with the uncondensed patterns , which represent the sole source of noise , propagate coherently along feedback loops , giving rise to the amplifying factor of the fully connected case . for a given load ( fixed ) , diluted connectivity therefore reduces the influence of this ` static ' noise , and performance is better than in the fully connected case with . in particular with the extreme dilution , that is , if the condition is satisfied , one can neglect correlations among the inputs to a given unit , and the mean field equations become correspondingly simpler . examining again the stability matrix , we find that the mixture solutions that were present with binary coding and large values of still survive . by the same token , the results for ternary and exponential coding are not affected , in the sense that no stable solutions can be found even in the highly diluted case . the conclusion is that the existence of stable mixture states in a restricted region of the parameter space should be regarded as almost a pathological feature , resulting from binary coding . if one considers mixture states as spurious states , to be avoided , then one notes that the introduction of analog variables , a more realistic description of neural activity , goes a long way towards disposing of spurious states , just as it almost eliminated the spin glass phase . the remaining region of stability of spurious states is definitely eliminated by non - binary coding schemes , which further contribute to smooth the free - energy landscape . this result casts doubt upon e.g. models of schizophrenia that are based on the existence of spurious attractors . these results may well have implications in domains outside computational neuroscience . the smoothness of the free - energy landscape is a crucial feature of many interacting systems used to map optimization problems , such as the travelling salesman or the graph matching problem . optimization generally fails if the dynamics gets stuck in local minima . our result indicates that undesired local minima may be eliminated by a combination of analog variables and coding schemes , which may in some cases be manipulated while mapping the problem at hand onto a dynamical system .
we show that symmetric -mixture states , when they exist , are almost never stable in autoassociative networks with threshold - linear units . only with a binary coding scheme could we find a limited region of the parameter space in which either 2-mixtures or 3-mixtures are stable attractors of the dynamics .
financial market modelling lies in the field of interest of both theoreticians and practitioners . among those who develop these models are also econophysicists . a view on a financial market as a complex system of investors , similar to complex physical systems , has proved very fruitful , since it made it possible to reproduce many characteristic features of such a market . a special group is constituted by approaches based on the ising model or its generalization to a three - state model . they identify agents with spin variables which can take specific values depending on the agents ' decisions . in much the same way as ising spins , agents interact with each other , which leads to a herding behavior and , as a consequence , to bubbles or crashes . an important target for such models is a reproduction of real market stylized facts such as a fat - tailed returns distribution , clustered volatility , or long range correlations of absolute returns . in this paper we propose a simple model of financial markets , based on the granovetter threshold model of collective behavior , which is in good agreement with the above facts . an inspiration for our work was the bornholdt model , corresponding to the ising spin model with an additional _ minority term _ , where agents act under the influence of their neighbors ( ising part ) and of a global magnetization . the local field for the i - th spin in the bornholdt model is defined as : with a global constant . the model interprets the price of an asset in terms of the magnetization , which enables the authors to reproduce some stylized facts of financial markets . here we develop a generalization of the ising spin model that also uses the absolute value of the magnetization as a factor controlling the dynamics . let us consider a model of n interacting market agents where each agent takes one of three actions : `` buy , '' `` sell , '' or `` stay inactive . '' such an agent can be represented by a three - state spin variable taking values when the agent is buying , when the agent is selling , and in the remaining case . the agents interact with each other according to an interaction matrix . in our model we assumed an interaction matrix corresponding to a 2-dimensional square lattice in which each agent interacts only with its four nearest neighbors with an equal strength . the interaction strength can be _ ferromagnetic _ ( ) when investors try to act like their neighbors or _ antiferromagnetic _ ( ) when they try to play against their neighbors . we chose the ferromagnetic case to introduce the herding behavior .
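the display equation for the bornholdt local field was lost in extraction ; as a hedged reconstruction ( the notation , and the use of the simplified single - spin form of the minority coupling , are our assumptions ) , it can be written as :

```latex
% hedged reconstruction of the lost display equation (notation assumed):
\[
  h_i(t) \;=\; \sum_{j} J_{ij}\, S_j(t) \;-\; \alpha\, S_i(t)\,\bigl|M(t)\bigr| ,
  \qquad
  M(t) \;=\; \frac{1}{N}\sum_{j} S_j(t) ,
\]
% \alpha > 0 is the global constant mentioned in the text: it couples each
% spin to the absolute magnetization and implements the minority term.
```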
at each time step the agent takes its value according to the following formula , where is a threshold signum function , and the function has random values from the gaussian distribution with mean and variance equal to . the term simulates an individual erratic opinion of the i - th investor and the parameter defines the strength of individual opinions . we define the magnetization of the network , which together with a constant forms a threshold parameter of the sign function . let us notice that for our model is identical with the 2-valued ising spin model . we simplified the procedure of price calculation presented in by omitting the influence of _ chartists _ and _ fundamentalists _ in the population of investors and redefining the price of an asset as : where is , in agreement with the efficient market hypothesis , a geometric brownian walk corresponding to fundamental price changes . let us put constant for simplicity . we therefore obtain a logarithmic rate of return : according to ( 2 ) , each agent is under the influence of three factors . the first is an imitation of their neighbors , associated with the matrix which is responsible for the herding behavior . the second factor is an individual opinion of the agent provided by the term . so far , it is the standard ising model . yet , there is one more factor , , which plays the role of a threshold parameter . only those agents that are able to exceed the threshold are allowed to trade . the value of the threshold depends on the absolute magnetization . according to ( 5 ) , the magnetization measures a deviation from the fundamental value . so , when it is large , the agents are afraid of trading , unless they have strong support from the neighbors or from their private opinions . using a square lattice of agents with periodic boundary conditions we computed the history of the magnetization , and thus the evolution of the stock price . the agents were allowed to interact only with their nearest neighbors . the simulation was started with a random configuration of spins . at each time step a randomly chosen spin was updated according to the evolution equation ( 2 ) and this was repeated times . the first time steps were ignored as a period of system thermalization . we observed that for proper parameters price returns show volatility clustering ( fig . 1 ) and fat tails ( fig . 2 ) . for real prices , fat tails of the returns distribution ( ) get slimmer with rising time lag . finally , the distribution becomes normal for sufficiently large . in figure 3 we present the distribution for different . the tails slim down with rising . the distribution has gaussian tails although its shape is slightly different . volatility clustering can be quantitatively shown using the autocorrelation function of a time series : where is the variance of , and means the average over . we computed the autocorrelation function of returns ( fig . 4 ) and of the absolute value of returns ( fig . 5 ) . the autocorrelation of returns decays very fast , which is consistent with observations of real markets . it has also the same shape as the distribution presented in [ 10 ] . the autocorrelation function of the absolute returns is a slowly decaying exponential similar to the results of [ 13 ] and [ 10 ] . [ figure captions lost in extraction : returns in time for given parameters and different values of ; autocorrelations of returns and of absolute returns for the stated parameters , in log - log scale and in semi - log scale ( inset ) . ]
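as a minimal , hedged sketch of the simulation loop described above ( all parameter values , and the exact threshold form as the constant times the absolute magnetization , are assumptions for illustration ; the paper 's own values appear in its figure captions ) :

```python
import numpy as np

rng = np.random.default_rng(1)
L, sweeps = 32, 2000                  # lattice side and number of sweeps (assumed)
J, alpha, sigma = 1.0, 4.0, 1.0       # coupling, threshold strength, noise scale (assumed)

s = rng.integers(-1, 2, size=(L, L)).astype(float)  # agent states in {-1, 0, +1}
M_hist = []

for _ in range(sweeps):
    M = s.mean()                      # magnetization at sweep start sets the threshold
    for _ in range(L * L):            # one sweep of random sequential updates
        i, j = rng.integers(0, L, size=2)
        nb = (s[(i + 1) % L, j] + s[(i - 1) % L, j]
              + s[i, (j + 1) % L] + s[i, (j - 1) % L])
        field = J * nb + sigma * rng.normal()   # neighbors plus private opinion
        thr = alpha * abs(M)          # only sufficiently confident agents trade
        s[i, j] = 1.0 if field > thr else (-1.0 if field < -thr else 0.0)
    M_hist.append(s.mean())

returns = np.diff(np.array(M_hist))   # log-returns as magnetization changes
excess = returns - returns.mean()
print("kurtosis:", (excess**4).mean() / max(returns.var()**2, 1e-300))
```

inspecting the kurtosis of the generated returns gives a quick check for the fat tails reported in fig . 2 ; values well above 3 indicate non - gaussian tails .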
we proposed a simple model of financial markets based on the granovetter model . we introduced a threshold controlled by the magnetization of the system . this makes an agent take an action only if its confidence is strong enough to overcome the threshold . we defined a logarithmic rate of return as a change of the magnetization . the model with the price defined in this way reproduces the main stylized facts of financial markets , namely a fat - tailed distribution of returns , volatility clustering , very fast decaying autocorrelation of returns and much slower decay of the autocorrelation of absolute returns . it has been observed for real markets that the distribution of returns becomes gaussian for large . the distribution of returns generated by our model approaches the gaussian distribution for rising , however its shape differs from the normal distribution . the autocorrelation function of the absolute returns is a slowly decaying exponential function , while the empirical study of the markets reveals a power - law behavior . both these issues can be considered as weaker points of our model . we would like to dedicate this paper to prof . marcel ausloos and to prof . dietrich stauffer on the occasion of their 65th birthdays and to thank them for their inspiring contributions to econophysics and sociophysics research . the work has been supported by the project stochdyn of the european science foundation and by the polish ministry of science and higher education ( grant esf/275/2006 ) .
we proposed a model of interacting market agents based on the generalized ising spin model . the agents can take three actions : `` buy , '' `` sell , '' or `` stay inactive . '' we defined a price evolution in terms of the system magnetization . the model reproduces main stylized facts of real markets such as : fat - tailed distribution of returns and volatility clustering . _ faculty of physics , center of excellence for complex systems research , warsaw university of technology , koszykowa 75 , + pl-00 - 662 warsaw , poland _ * pacs . * 89.65.gh
quantum measurement is one of the simplest , yet most important , physical processes . without measurement , we see no event and obtain no information .a quantum measurement is a process that maps the quantum state of a quantum system to the classical state ( probability distribution ) of an external classical system that belongs to the observer side .it is known that the interface ( border ) between quantum and classical systems can be shifted .in fact , an observer does not interact with the system itself .instead , they extract information from another system , a measurement apparatus , that has direct contact with the system . both the system and the measurement apparatus can be treated quantum mechanically .the main purpose of this paper is to investigate the energy and interaction required for measuring an observable .more precisely , we investigate the energy of a quantum apparatus and the strength of interaction between a system and the apparatus so that the process is fully described by quantum theory . to put the question from a more pragmatic viewpoint ,our interest is in the amount of resource " required to perform a measurement ( or information transfer ) task .thus , to study this problem , the interface between quantum and classical systems must be located such that the apparatus is treated quantum mechanically .for instance , although we have an equivalent minimum description ( called an instrument ) of the dynamics and measurement results that puts the interface between the system and the apparatus , our problem does not allow us to employ it because our interest is in the limitations of the quantum apparatus . in section [ section2 ] , we discuss how large " the quantum side ( or quantum apparatus ) must be to describe a measurement process in a fully quantum manner .we examine a so - called standard measurement model to find that the model requires an external system switching on the measurement interaction between a system and an apparatus . 
in this sense ,the measurement process is not closed - it is not described in a fully quantum manner in this model .this model does not obey any non - trivial energy - time uncertainty relation .this conclusion agrees with previous results .then , in section [ section3 ] , we rigorously formulate a quantum apparatus as a timing device that autonomously switches on a perturbation .an argument on the analyticity shows that the total hamiltonian must be two - side unbounded to fulfill the conditions for the timing device .our main results are presented in section [ section4 ] .we consider a measurement apparatus that autonomously switches on the interaction at a certain time .we show that there is an energy - time uncertainty type relation between the fluctuation of the apparatus hamiltonian and time duration of the measurement .the proof of this trade - off relation is given by combining two kinds of uncertainty relations .we first observe that a perfect measurement of a given observable implies a perfect distortion of the conjugate states .this property is called an information - disturbance relation and is related to an uncertainty relation for a joint measurement .we then employ the mandelstam - tamm uncertainty relation to obtain a restriction on time and energy required for the distortion .in addition , because the measurement process involves an information transfer from the system to the apparatus , the interaction between them should not be too weak .in fact , in section [ section5 ] we show a trade - off relation between the strength of interaction and the measurement time .the robertson uncertainty relation plays an essential role in the proof . in section [ section6 ]we illustrate an argument of the spacetime uncertainty relation as an application of our result .let us consider a quantum system described by a hilbert space .a quantum state is represented as a density operator on .we first present a description when we put an interface between the quantum and external classical systems just outside this system . by measuring an observable, we obtain an outcome . in general, the outcome is not definite and only a probability distribution on the outcome set is determined .thus the measurement of an observable maps the quantum state of a quantum system to the classical state ( probability distribution ) of an external classical system which belongs to the observer side . for a map to be consistent with an interpretation based on probability , it must be an affine map .each affine map is completely described by a positive - operator - valued measure ( povm ) in the quantum system ( see e.g. , ) .a povm ( with a discrete outcome set ) is defined as a family of positive operators satisfying .a povm gives a probability distribution ] holds for every .the above physical measurement model is consistent with the abstract measurement theory by ozawa s extension theorem and has been shown to be useful in analyzing real measurement processes .this model , however , is not sufficient for obtaining general results on the minimum energy fluctuation required for a measurement . in order to illustrate this point , we consider the following example , a so - called standard measurement model developed essentially by von neumann .suppose that a system is a qubit ( thus ) and an apparatus is a one - particle system on a real line ( see fig .[ figure_switch11 ] ) ..,width=340 ] set , and , where represents the momentum operator . as the meter observablewe employ . 
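several of the stripped formulas above can be reconstructed with reasonable confidence ; the following latex block is a hedged reading of the povm definition and of the standard ( von neumann ) model just introduced , where the specific choice of hamiltonians and sign conventions are our assumptions :

```latex
% povm with discrete outcome set \Omega, and the induced outcome statistics:
\[
  E_a \ge 0 \ (a \in \Omega), \qquad \sum_{a\in\Omega} E_a = \mathbb{1},
  \qquad p_\rho(a) = \operatorname{tr}\!\left[\rho\, E_a\right].
\]
% one common reading of the standard model for a qubit coupled to a pointer
% particle on a line (hamiltonians assumed):
\[
  H_S = 0, \qquad H_A = 0, \qquad H_{\mathrm{int}} = \lambda\,\sigma_z \otimes \hat{p} .
\]
% over a duration \Delta t, the unitary e^{-i\lambda\Delta t\,\sigma_z\otimes\hat{p}/\hbar}
% rigidly shifts the pointer wave function by \pm\lambda\Delta t according to the
% \sigma_z eigenvalue; the meter observable is then the pointer position \hat{x}.
```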
for an arbitrary time duration , if the initial state at time of the apparatus is prepared in a narrowly - localized state with respect to a position , the accuracy of the measurement of can be made arbitrarily high. therefore , there is no limitation on the time and energy required for this measurement process . by looking carefully at this model ,one may notice that the time duration is given by hand . in this model , the interaction between a system and an apparatus is switched on at and switched off at .the dynamics outside this time interval is not discussed .an external observer must put these systems together at , which until then must have been somehow independent .thus the mechanism initiating this switching - on process is supposed to be outside the quantum model . in this sense , this model is not closed and we need to shift the interface further to include the on - and - off switching process . in the discussion below , we develop a general measurement process that also treats this switching - on ( and off ) mechanism quantum mechanically . in some situations , roughly speaking , we shift the quantum - classical interface so that the quantum side includes the experimenter who controls the apparatus . in the next sectionwe give a mathematically rigorous formulation of the timing device that switches on a perturbation .as stated above , we investigate an apparatus that works not only as an information extractor but also as a switching device for an interaction ( perturbation ) .to do so , in this section , we study a formulation of the latter condition and derive some results .we consider an apparatus that works as a timing device to switch on an interaction at a certain time after .thus , we assume that at time the state of the total system is a product state .the apparatus is specified by the hilbert space , hamiltonian and a state at time .let us first assume that the apparatus is perfectly isolated and has no interaction with the system. then the time evolution is described by a von neumann equation determined by an apparatus hamiltonian which acts only on . 
for an arbitrary time , the state at time is written as .next , we consider an interaction between the system and apparatus .we denote the hilbert space of the system by .the individual dynamics of the system is governed by the hamiltonian defined on .thus the total hamiltonian has three parts , , where is the interaction hamiltonian that acts on the tensor product .the time evolution of a state ( a density operator ) over the composite system is described by the corresponding von neumann equation , .\end{aligned}\ ] ] let us consider conditions for , and a state of the apparatus to describe a timing device .they must satisfy the following general conditions that represent the capability to switch on the interaction at a certain time .the apparatus evolves freely up to time and then switches on an interaction with the system at some time after .thus , the state of the apparatus should be decoupled from that of the system until time .the time evolution of the state is determined entirely once the state at a certain time is given .we impose the following condition on the dynamics and the state at time : [ cond1 ] ( no interaction up to time ) an apparatus ( specified by , and a state at time ) satisfies _ the no interaction condition up to time _ if for any and for any state of the system , = 0 \label{cond1eq}\end{aligned}\ ] ] holds , where .we emphasize that in condition [ cond1 ] , the left - hand side of ( [ cond1eq ] ) must be vanishing for any .this condition implies that only after the apparatus reaches , does induce the apparatus and the system to interact .in fact the following lemma shows that these conditions are equivalent . if part " : we differentiate by , to obtain = [ h_s + h_a , \rho(t ) \otimes \sigma(t)].\end{aligned}\ ] ] as the left - hand side is written as ] . as is arbitrary , we obtain condition [ cond1 ] .+ only if " part : it holds that for , \\ & = & [ h , \rho(t)\otimes \sigma(t ) ] , \end{aligned}\ ] ] where we used ( [ cond1eq ] ) for .thus we conclude that satisfies the von neumann equation whose integration is ( [ cond1int ] ) .[ lemma1 ] if condition [ cond1 ] is satisfied and at a certain time the state has a product form ; then the state at any time becomes the product state ; where .note that this condition is rather weak .it still allows the existence of such that acts trivially on for any .in addition , this condition does not specify exactly the switching time of the perturbation as .it only requires the existence of the moment at which the perturbation does not vanish .[ cond3](switching - off time condition ) an apparatus satisfies _ the switching - off time condition _ if there exists a time called a switching off time such that for any and for any state of the system , = 0 \label{cond3eq}\end{aligned}\ ] ] holds .[ ex1 ] suppose that an apparatus is described by and .it is coupled with a system described by with orthonormal basis and a trivial hamiltonian .the interaction term is introduced as , where and is a nonvanishing real function whose support is included in for some . in the position representation is supposed to be strictly localized in the negative real line .that is , the support of is included in for some .the freely evolved state of the apparatus can be written as , which has support in .it is easy to see that this system - plus - apparatus satisfies condition [ cond1 ] .let us consider the time evolution of the state denoted by . 
for , while the state of the system remains , the state of the apparatus is affected by the interaction .we denote the whole state at time by .it can be shown that at time , the state of the apparatus becomes in the position representation . on the other hand , for , while the state of the system remains , the state of the apparatus at time becomes in the position representation .thus , for a general initial state , the state of the total system evolves as which coincides with an unperturbed ( freely evolved ) state up to time .thus it satisfies condition [ cond1 ] .the state of the system evolves as for .thus condition [ cond2 ] is also satisfied .in addition , we put .taking into account the support of , we obtain for , as their supports do not intersect with the support of , condition [ cond3 ] is satisfied .suppose that is lower bounded .( an upper bounded case can be treated similarly . ) by purifying the state of the apparatus , we can assume the state to be pure .we denote the purified state by . on this enlarged system, also works as a hamiltonian and is lower bounded .( more precisely we need , where acts only on an auxiliary hilbert space employed for purification . )we denote by again the enlarged hilbert space of the apparatus . for an arbitrary state , we have for , =0 .\end{aligned}\ ] ] this implies that for there exists a function such that as is normalized we have we now define latexmath:[\ ] ] thus we can conclude we assume .combining it with theorem [ ettheorem ] by putting , we obtain this ends the proof .while a measurement process consists of the information transfer " described by the time evolution of a composite system and the state reduction conditional to each measurement outcome , we treat only the former .see for the relevant discussions . for a given total hamiltonian is an ambiguity in dividing it into three parts as . in this paper , by imposing conditions which are explained later , this arbitrariness is reduced to some extent .although there remains arbitrariness even with these conditions , our results do not depend on the choice of the division .
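for reference , the two uncertainty relations invoked in the proofs read , in their standard forms ( how exactly they enter the stripped inequalities above is inferred , not quoted ) :

```latex
% robertson relation for observables A, B in state \rho:
\[
  \Delta A \,\Delta B \;\ge\; \tfrac{1}{2}\,\bigl|\operatorname{tr}\!\left(\rho\,[A,B]\right)\bigr| .
\]
% mandelstam-tamm relation: for any observable A evolving under H,
\[
  \Delta H \,\Delta A \;\ge\; \frac{\hbar}{2}\left|\frac{d\langle A\rangle}{dt}\right| ,
\]
% which implies the orthogonalization-time bound
\[
  \tau_{\perp} \;\ge\; \frac{\pi\hbar}{2\,\Delta H} .
\]
```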
quantum measurement is a physical process . a system and an apparatus interact for a certain time period ( measurement time ) , and during this interaction , information about an observable is transferred from the system to the apparatus . in this study , we quantify the energy fluctuation of the quantum apparatus required for this physical process to occur autonomously . we first examine the so - called standard model of measurement , which is free from any non - trivial energy - time uncertainty relation , to find that it needs an external system that switches on the interaction between the system and the apparatus . in such a sense this model is not closed . therefore to treat a measurement process in a fully quantum manner we need to consider a larger " quantum apparatus which works also as a timing device switching on the interaction . in this setting we prove that a trade - off relation ( energy - time uncertainty relation ) , , holds between the energy fluctuation of the quantum apparatus and the measurement time . we use this trade - off relation to discuss the spacetime uncertainty relation concerning the operational meaning of the microscopic structure of spacetime . in addition , we derive another trade - off inequality between the measurement time and the strength of interaction between the system and the apparatus .
decentralized optimization has recently been receiving significant attention due to the emergence of large - scale distributed algorithms in machine learning , signal processing , and control applications for wireless communication networks , power networks , and sensor networks ; see , for example , . a central generic problem in such applicationsis decentralized resource allocation for a multiagent system , where the agents collectively solve an optimization problem in the absence of full knowledge about the overall problem structure . in such settings ,the agents are allowed to communicate to each other some relevant estimates so as to learn the information needed for an efficient global resource allocation .the decentralized structure of the problem is reflected in the agents local view of the underlying communication network , where each agent exchanges messages only with its neighbors . in recent literature on control and optimization ,an extensively studied decentralized resource allocation problem is one where the system objective function is given as a sum of local objective functions , i.e. , where is known only to agent ; see , for example . in this case, the objective function is separable across the agents , but the agents are coupled through the resource allocation vector . each agent maintains and updates its own copy of the allocation / decision vector , while trying to estimate an optimal decision for the system problem .another decentralized resource allocation problem is the one where the system objective function may not admit a natural decomposition of the form , and the resource allocation vector is distributed among the agents , where each agent is responsible for maintaining and updating only a coordinate ( or a part ) of the whole vector .such decentralized problems have been considered in ( see also the textbook ) . in the preceding work ,decentralized approaches converge when the agents are using weighted averaging , or when certain contraction conditions are satisfied .recently , li and marden have proposed a different algorithm with local updates , where each agent keeps estimates for the variables , , that are controlled by all the other agents in the network .the convergence of this algorithm relies on some contraction properties of the iterates .note that all the aforementioned algorithms were developed for offline optimization problems .our proposed algorithm * oda - ps * is closest to recent papers .the papers proposed a decentralized algorithm for online convex optimization which is very similar to * oda - ps * in a sense that they also introduce online subgradient estimations in primal or dual space into information aggregation using push - sum . in these papers ,the agents share a common decision set in , the objective functions are separable across the agents at each time ( i.e. , for all ) , and the regret is analyzed in terms of each agent s own copy of the whole decision vector .moreover , an additional assumption is made in that the objective functions are strongly convex .the paper is organized as follows . in section [sec : problem ] , we formalize the problem and describe how the agents interact . in section [ sec : basicregret ] , we provide an online decentralized dual - averaging algorithm in a generic form and establish a basic regret bound which can be used later for particular instantiations , namely , for the two algorithms * oda - c * and * oda - ps*. 
these algorithms are analyzed in sections [ sec : algo - regret ] , where we establish regret bounds under mild assumptions . in section[ sec : sim ] , we demonstrate our analysis by simulations on a sensor network .we conclude the paper with some comments in section [ sec : conclusion ] .* notation : *consider a multiagent system ( network ) consisting of agents , indexed by elements of the set ] . then {ij}\right\ } \\ & = \bz^k(t ) + u_k(t ) + \tr [ \tilde{m}v^k(t)],\end{aligned}\ ] ] where is an matrix with entries .since is a symmetric matrix , by , and is skew - symmetric , = 0 ] , the iterates in ( [ eqn : algo2 ] ) evolve according to the following dynamics {ik } u_k(s).\ ] ] * from relation ( [ eqn : algo2 ] ) , we have for all ] by unrolling this equation from time 0 to , we obtain where the equalities follows from and the initial condition for all .we get the desired result by taking the -th component of this vector . we now particularize the bound in theorem [ thm : main ] in this scenario under the additional assumption on the lipschitz continuous gradients ( assumption [ assume : lipgrad ] in section [ sec : algo - regret ] ) .[ thm:4 ] under assumptions [ assume : lipschitz]-[assume : lipgrad ] , the regret of the algorithm ( [ eqn : algo1])-([eqn : algo3 ] ) with the local update of agent computed according to ( [ eqn : policy1 ] ) can be upper - bounded as follows : for all , since the definition of in * oda - ps * ( cf .eq . ) coincides with that in * oda - c * ( cf ., we can reuse all the derivations in the proof of theorem [ thm : local_grad_signals ] except for the network - wide disagreement term : where the last inequality follows from the -lipschitzian property of the map ( * ? ? ?* lemma 1 ) . we now show that the network - wide disagreement term in theorem [ thm:4 ] is indeed upper - bounded by some constant . for doing this , we first restate a lemma from .[ lem : no ] let the graph sequence be -strongly connected. then the following statements are valid .* there is a sequence of stochastic vectors such that the matrix difference for decays geometrically , i.e. , for all ] . for * oda - c * , we experiment with a node cycle graph whose communication topology is given as : we set , for all , and if . for * oda - ps * , we experiment with a time - varying sequence of digraphs with nodes whose communication topology is changing periodically with period .the graph sequence is , therefore , -strongly connected . in figure[ fig : graph ] , we depict the repetition of the 3 corresponding graphs . the averaging matrices ( cf .can be determined accordingly .we ran our algorithms once for each ] which is reversible with respect to a strictly positive probability vector . for * oda - ps * , the bound is valid for a uniformly strongly connected sequence of digraphs and column - stochastic matrices of weights whose components are based on the out - degrees of neighbors .simulation results on a sensor network exhibit the desired theoretical properties of the two algorithms .a. martinoli , f. mondada , g. mermoud , n. correll , m. egerstedt , a. hsieh , l. parker , and k. stoy , _ distributed autonomous robotic systems_. 1em plus 0.5em minus 0.4emspringer tracts in advanced robotics , springer - verlag , 2013 .chang , a. nedi , and a. scaglione , `` distributed constrained optimization by consensus - based primal - dual perturbation method , '' _ ieee transactions on automatic control _59 , pp . 15241538 , 2014 .s. s. ram , a. nedi , and v. v. 
veeravalli , `` distributed stochastic subgradient projection algorithms for convex optimization , '' _ journal of optimization theory and applications _ ,147 , pp . 516545 , 2010 .k. tsianos , s. lawlor , and m. rabbat , `` consensus - based distributed optimization : practical issues and applications in large - scale machine learning , '' in _50th allerton conference on communication , control , and computing _ , 2012 , pp .15431550 .d. jakovetic , j. xavier , and j. moura , `` cooperative convex optimization in networked systems : augmented lagrangian algorithms with directed gossip communication , '' _ ieee transactions on signal processing _ , vol .59 , pp . 38893902 , 2011 .j. tsitsiklis , d. bertsekas , and m. athans , `` distributed asynchronous deterministic and stochastic gradient optimization algorithms , '' _ ieee transactions on automatic control _ ,31 , pp . 803812 , 1986 .f. benezit , v. blondel , p. thiran , j. tsitsiklis , and m. vetterli , `` weighted gossip : distributed averaging using non - doubly stochastic matrices , '' in _ ieee international symposium on information theory proceedings ( isit ) _ , 2010 , pp .1753 1757 .m. akbari , b. gharesifard , and t. linder , `` distributed subgradient - push online convex optimization on time - varying directed graphs , '' in _52nd allerton conference on communication , control , and computing _ , 2014 , pp .264269 .j. c. duchi , a. agarwal , and m. j. wainwright , `` dual averaging for distributed optimization : convergence analysis and network scaling , '' _ ieee transactions on automatic control _ , vol .592606 , 2012 .
we consider a decentralized online convex optimization problem in a network of agents , where each agent controls only a coordinate ( or a part ) of the global decision vector . for such a problem , we propose two decentralized variants ( * oda - c * and * oda - ps * ) of nesterov s primal - dual algorithm with dual averaging . in * oda - c * , to mitigate the disagreements on the primal - vector updates , the agents implement a generalization of the local information - exchange dynamics recently proposed by li and marden over a static undirected graph . in * oda - ps * , the agents implement the broadcast - based push - sum dynamics over a time - varying sequence of uniformly connected digraphs . we show that the regret bounds in both cases have sublinear growth of , with the time horizon , when the stepsize is of the form and the objective functions are lipschitz - continuous convex functions with lipschitz gradients . we also implement the proposed algorithms on a sensor network to complement our theoretical analysis .
edholm s law states that data rates offered by wire - less communication systems will converge with wired ones , where forward rate extrapolation indicates convergence around the year 2030 . as a result , the only cables that would require removal are the cables supporting power . in turn ,wireless power ( transfer ) ( wpt ) is rapidly gaining momentum , see fig .[ fig : wireless_power_popularity ] , and more companies are trying to capitalize on the wireless energy promise , refer for example to witricity , ubeam , ossia , artemis , energous , or proxi . a natural next step is the deployment of networks of wpt nodes ( denoted throughout this work as wptns ) , i.e. deployed and dedicated wpt devices providing power to nearby energy receivers ( see also an example of a fully energy autonomous wptn in ( * ? ? ?1 ) ) . wptns are expected to find numerous applications , e.g. in sensing systems ( vide rechargeable sensor network ) , biology research ( vide insect fly monitoring or animal inspection ) , or implantable networks ( vide brain - machine interface ) . in all the above applications ,the use of batteries is prohibitive ( in biology - related applications due to induced weight or prohibitive cabling , in implantable applications due to necessity of surgical battery replacement ) , thus wpt is the only long - term viable option . finally , we speculate that due to continuous decrease of energy consumption of embedded platforms ( * ? ? ? * fig .1 ) , , within a decade energy provision through wptns will exceed the required energy cost of the above applications .the necessity of wpt becomes immanent while observing that powering battery - based platforms using energy harvesting alone is not enough ( * ? ? ?* section i ) .simply put , ambient energy harvesting does not guarantee sufficient quality of energy provision ( * ? ? ?3.3 ) ( * ? ? ?* section i ) .for example , large - scale london , uk - based rf far field energy harvesting measurements at 270 london underground stations demonstrate that in the best case only 45% of such locations can sustain the required minimum rectifying operation ( * ? ? ?* table vi ) ( for a single input source , considering digital tv , gsm 900/1800 and 3 g transmitters ) .however , wpt(n ) has its own inherent deficiency . while there is a huge focus on making wpt(n ) more efficient considering its hardware and physics , its energy conversion coefficient is still low ( * ? ? ?* section i ) and absolutely not considered to be `` green '' .this leads to a large carbon footprint of wptn , which will be amplified as the technology spreads to the mass market ..energy consumption of various wptn devices , see section [ sec : problem_statement ] [ cols=">,>,>,>,>",options="header " , ] atmega328 dc characteristics follow from ( * ? ? 
?* table 29 - 7 ) digi xbee dc characteristics follow from although following fig .[ fig : wptn_hardware_model ] we assume that the passive wakeup radio is used to wake up erx from the off state to communication with etx state , in the calculation we nevertheless include the cost of the idle state of the microcontroller and the radio of the erx .therefore , we calculate the erx power consumption as , where denoting total energy consumption of digi xbee board for total experiment time , composed of transmission energy ( ) , receive energy ( ) , and idle state energy ( ) , respectively , for and and are the number of packets that the erx transmits and receives , and is the energy consumed by the arduino uno board composed of active state energy ( ) , and idle state energy ( ) , respectively ) and ( [ eq : communication_energy_a ] ) are worst case approximations , we assume for simplicity during transmission and reception arduino was simultaneously in sleep state this is due to a small overhead of energy consumption by transmission and reception compared to the total time when the node was idle . ] .finally , we measure and analytically evaluate protocol - specific parameter , i.e. time to charge a time between transmission of charge request by the erx to the beginning of charge provision by the first - responding etx .we consider one erx and etxs , as in the experiment .erx is in charging range of etxs ( ) and in communication range of all etxs . at a given moment of time ( given erx position ) and is fixed .our goal is to derive formulas for expected time to charge an erx in wptn . [[ beaconing ] ] beaconing + + + + + + + + + in beaconing implementation we assumed only one round of charging , after which all etxs within the communication range of erx will be turned on .the duration of this round is , where denotes an uniform distribution from to .if erx randomly starts to send charge requests in wptn , then time to charge is . a cumulative distribution function ( cdf ) of under the assumption that is not random but constant and equal to its mean value , , is where ] .length of a successful round , , is different from the unsuccessful round , . considering average values of those variables , and , respectively , as a consequence the cdf of assumes no randomness of and , giving where . [[ reference - measurement ] ] reference measurement + + + + + + + + + + + + + + + + + + + + + to calculate accuracy for both protocols we need to measure the reference case first .the reference will denote whether etx should switch on during a particular time to charge erx .we measure the reference scenario as follows . 1 .we mark the appearance time , , and disappearance time , , of the erx at each position depicted in fig .[ fig : experiment_setup ] .the erx stays at one location for 20s and is allowed to move to a new position within 15s from switch off , respectively . ] ; 2 . 
during each round of movement of the erx ,one etx is charging at a time .after position 10 was reached by the erx as given in fig .[ fig : experiment_setup ] , a new etx is turned on and a currently charging etx is switched off ; 3 .we consider the following situation to be correct : if a resistor r4 of erx , then etx should switch on to charge the erx at this position , otherwise it should switch off .each event of voltage crossing threshold is added to a vector $ ] , where and , denote the start ( ) and stop ( ) of the reference charge and denote its successive number .we note that the voltage sampling period at resistor r4 of erx is 0.1s , just like in experiments in section [ sec : experiment_scenario ] .[ [ charge - accuracy - metric ] ] charge accuracy metric + + + + + + + + + + + + + + + + + + + + + + having the reference case we can compare the actual working time sequence of each etx ( for each protocol beaconing and probing ) with the reference vector and calculate charge accuracy as where is the corresponding vector of for protocol and denotes xnor operation . to obtain the metrics of interest from the measurements for the benchmark ( freerun protocol ) , see section [ sec : charger_energy_saving_protocol ] , we use the measured values in , described in section [ sec : charge_accuracy_metric ] , to calculate the harvested power at each position depicted in fig . [fig : experiment_setup ] .we then sum up the harvested power of four etx , as the theoretical harvested power in the testing scenario where four etxs are switched on all the time .then we measure the same performance parameters as for the other two protocols .results are presented in fig .[ fig : experiment_results_1 ] and fig .[ fig : experiment_results_2 ] and discussed in the subsequent sections .note that all experimental results were plotted using matlab s function .we have performed the experiment for five different communication threshold values , , to measure wptn performance simulating various etx / erx link qualities .the result is presented in fig .[ fig : experiment_results_1 ] . refer to fig .[ fig : exp1_1 ] . for every value of ,the energy harvested by beaconing is higher than for probing .this is due to restriction of probing , where at most one etx can charge the erx during a beacon period .the beaconing protocol allows multiple etx to be turned on at the same time . as the , the harvested energy decreases for both protocols beaconing and probing .naturally , the higher the threshold is , the less probability that the etx would be triggered on by neighboring erx .as expected , the freerun mode has the highest harvested energy in almost every testing point , because all etxs are switched on all the time .we are now ready to present the fundamental result of this paper , _ proving the `` green '' aspect of the designed wptn_. in addition to harvested energy we show total energy used ( in kj ) by all etx during the whole experiment , refer to fig .[ fig : exp1_0 ] .we clearly see the power saved by the beaconing and probing protocol , compared with the freerun mode ( for probing by almost five times ) . since freerun mode switches etx all the time , the energy consumption by etx is highest and constant over .we discuss the reason behind this gain in detail in subsequent sections . 
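to make the xnor - based accuracy metric above concrete , here is a minimal sketch ( function and variable names are ours ; only the 0.1 s sampling period and the on/off interval bookkeeping are taken from the text ) :

```python
import numpy as np

DT = 0.1  # voltage sampling period in seconds (from the text)

def state_vector(intervals, t_end):
    """Sample an on/off state on a DT grid from (t_on, t_off) interval pairs."""
    t = np.arange(0.0, t_end, DT)
    on = np.zeros_like(t, dtype=bool)
    for t_on, t_off in intervals:
        on |= (t >= t_on) & (t < t_off)
    return on

def charge_accuracy(ref_intervals, etx_intervals, t_end):
    """Fraction of samples where the ETX state matches the reference (XNOR)."""
    ref = state_vector(ref_intervals, t_end)
    act = state_vector(etx_intervals, t_end)
    return np.mean(~(ref ^ act))  # XNOR: agreement on both the on and off states

# hypothetical usage: reference says charge during [5, 25) s and [40, 60) s,
# while the ETX actually charged during slightly shifted windows
print(charge_accuracy([(5, 25), (40, 60)], [(6, 24), (41, 58)], 70.0))
```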
the power consumption of the probing protocol is higher than for beacon in our measurements .the main reason is that probing needs erx to receive the probing command from the etx , measure the signal strength and send feedback packets to the charger .the etxs request erx to measure harvested power in every beacon round .note that probing protocol uses three message types to trigger the charging phase , see protocol [ alg : erx_states ] , while beaconing protocol uses only one message to trigger charging , see protocol [ alg : beaconing_implementation ] . as the , the power consumption by the erx with probing decreases .the reason is that the larger the , the less probability that etx will accept charge request messages from the erx .then larger less number of etx to associate with the erx on probing , which further causes less number of communication messages at erx .compared with the beaconing protocol , probing stays on a stable level for each . at 70dbm ,the efficiency of probing is around three times larger than beaconing . at 50dbm , the efficiency of beaconing increases .the main reason is that represents the range that etx evaluates whether the erx can be successfully charged or not .the larger the , the smaller the threshold range is , and the higher energy erx can harvest in the range .therefore , smaller results in higher efficiency .the benefit of using smaller threshold that the power transmission efficiency increases .the drawback is that decreasing range causes decreasing harvested energy , see fig [ fig : exp1_1 ] .the freerun mode always takes the lowest charging efficiency because it can not estimate whether erx is inside or outside the wptn and what is the possible harvesting power .if the receiver is outside the wptn or in the area with very low power radio , switching on etxs will waste a lot of power . in the experiment ,the disappearing time of erx is 15s .we conjecture that if the disappearing time increases the efficiency of freerun mode will be even lower .probing protocol keeps relatively high and stable accuracy from 70dbm to 50dbm .high accuracy causes the high efficiency in probing based protocol as shown in fig [ fig : exp1_2 ] . in probing ,only one etx is allowed to charge the erx which potentially decreases the accuracy .we hypothesize that if multiple etxs could exploit probing - like protocol at the same time the accuracy and efficiency could further increase. the accuracy of beaconing increases as the threshold increases from 70dbm to 50dbm the higher is , the closer the erx must be to an etx for triggering the charging . andthe closer the erx is to the charger , the higher the probability that the etx can charge the erx .freerun mode naturally has the worst charging accuracy the erx can hardly harvest sufficient energy at certain positions while etxs are continuously switched on . to verify theoretical analysis of time to charge in both protocols, we have conducted an experiment , where we have placed erx less than 50 cm to each of etx devices ( to ensure erx is within charging range of all etxs ) . 
to emulate the erx being within charging range of a given etx , we would connect or disconnect the powercast device from the arduino microcontroller . for each value of from ( one etx connected ) to ( four etxs connected ) we have performed an experiment where the erx appears randomly in the network 50 times . afterwards , we measured the time it takes from appearing in the network to being charged . cdf values of those experiments are presented in fig . [ fig : time_to_charge_results ] . in this figure experimental results ( solid lines ) are compared against theoretical results ( dashed lines ) . for fig . [ fig : beaconing_time_n4k1 ] fig . [ fig : beaconing_time_n4k4 ] additionally a cdf of is added . we see that beaconing is faster in reaching the etx than probing , however with increasing the time to charge for probing becomes very low as well ( almost instant connection after approximately two seconds ) . for beaconing , irrespective of the number of etxs , the time to charge stays constant . the discrepancy between experimental and numerical results is due to the approximation of not taking propagation and processing time into account . nevertheless the analysis follows the trends of the experimental results in all cases reasonably well . in this experiment , we have tested the performance of the wptn in which the erx is inside the communication range while outside the charging range of the etx . we change the experiment setup in fig . [ fig : wptn_setup ] by turning etx 1 and 3 by 180 degrees around their axes . results in this testing scenario and in the previous section are depicted as _ back _ and _ normal _ , with the etx set to 70dbm . in both the normal and back conditions , the harvested energy of the beaconing protocol is larger than for the probing protocol . with the beaconing protocol the harvested energy in the back condition decreases by 45% from the normal condition , which fits with the experiment setup of turning two chargers 180 degrees back . the decrease in probing from the normal to the back condition is 32% , which is smaller than for the beaconing protocol . this is because probing selects the etx that can deliver energy over the threshold . as expected , the power consumption of etxs in the beaconing protocol and freerun mode stays at the same level in both the normal and back conditions . the beaconing protocol does not give etxs the means to know whether the charging power is efficiently harvested or not . therefore , in the back condition , even if the erx is outside the charging range , the charger still switches on as it hears the request message from the erx . lots of energy is wasted in the back condition by the beaconing protocol . in the probing protocol , the ` back ' etxs ( etx 1 and etx 3 ) will switch off after they evaluate that the harvested energy at the erx is low . for both protocols , beaconing and probing , the scheduling of the communication in the erx is not influenced by the topology of the wptn , so the power consumption in communication in both the normal and back conditions is almost the same . with the beaconing protocol , the efficiency in the back condition decreases by 45% from the normal condition . the decrease in the probing protocol from the normal to the back condition is 25% , which is much smaller than for beaconing . the smaller decrease is because the probing protocol makes an etx work only when the harvested energy is above the threshold . the 25% decrease mainly comes from the probing period when the etx is turned on and asks the erx to measure the harvested power .
in our hardware implementation the probing period is very short ( 4s ) and we speculate that increasing the probing period can further increase the efficiency of the probing - based protocol in the back condition . the accuracy trends in the non - line - of - sight scenario are the same as in the line - of - sight case , sec . [ sec : accuracy_los ] . also , as in the previous case , the probing protocol obtains the highest accuracy and confirms its supremacy over the other two considered approaches . with these results we believe we open up a new research direction within wptn , and we are aware of many points of improvement . we list the most important ones here . 1 . considering the beaconing protocol , the charge request rate should be optimized with respect to the power consumption and harvested energy of the erx . for example , the beacon period can adapt to the number of erxs in the wptn , since as long as one erx has called the chargers to switch on there is no need to trigger etxs for every erx . 2 . considering the probing protocol , the probing frequency should be optimized with respect to the power consumption of probing as well . erxs should optimize the probing scheduling considering static and dynamic conditions . for example , when the erx is static , there is no need to make the receiver measure harvested power with high frequency , since the harvested power is not expected to fluctuate much in such a case . 3 . the wptn should optimize the combinations of the subset of switched - on etxs to take advantage of the constructive signal combined at the erx . measuring all possible combinations of the subset of the neighboring etxs to switch on consumes too much time and power at the erx . thus novel charge control algorithms are required to enable green wptn with multiple etxs operating at a time . in this paper we have introduced a new class of charging control protocols for wireless power transfer networks ( wptns ) , denoted as ` green ' , that conserve the energy of the chargers . the purpose of such a protocol is to maximize three metrics of interest to wptns ( that we introduced here for the first time ) : ( i ) etx charge accuracy , ( ii ) etx charge efficiency and ( iii ) erx harvested power , which in turn minimizes unnecessary uptime of wptn energy transmitters . we prove that this problem is np - hard . to solve it we propose two heuristics , denoted as ` beaconing ' ( where energy receivers simply request power from transmitters ) and ` probing ' ( based on the principle of charge feedback from the energy receivers to the energy transmitters ) . the strength of our protocols lies in making few assumptions about the wptn environment . we conclude that each protocol performs its task best in two special cases . experimentally we show that for large distances between chargers and receivers , probing is more efficient and accurate but harvests less energy than beaconing and has a higher communication cost . as the charger - to - receiver distance increases , the efficiency of the beaconing - based protocol increases ( since communication range is well correlated with charging range ) . we would like to thank dr . roberto riggio from create - net for providing the energino platform , prof . joshua smith 's group at the university of washington for providing the impinj r1000 rfid reader , and mark stoopman from tu delft for help at the initial stage of the project .
vullers , `` rf energy harvesting and transport for wireless sensor network applications : principles and requirements , '' _ proc .ieee _ , vol. 101 , no . 6 , pp . 14101423 , jun .2013 .x. lu , d. niyato , p. wang , d. i. kim , and z. han , `` wireless charger networking for mobile devices : fundamentals , standards , and applications , '' oct .31 , 2014 .[ online ] .available : http://arxiv.org/abs/1410.6635 m. zhao , j. li , and y. yang , `` a framework of joint mobile energy replenishment and data gathering in wireless rechargeable sensor networks , '' _ ieee trans .mobile comput ._ , vol . 3 , no . 12 ,26892705 , dec . 2014 .e. casilari , j. m. cano - garcia , and g. campos - garrido , `` modeling of current consumption in 802.15.4/zigbee sensor motes , '' _ sensors _ , vol .10 , no .54435468 , jun . 2010 .[ online ] .available : http://www.mdpi.com/1424-8220/10/6/5443 \(2015 ) atmel 8-bit microcontroller with 4/8/16/32kbytes in - system programmable flash .[ online ] .available : http://www.atmel.com/images/atmel-8271-8-bit-avr-microcontroller-atmega48a-48pa-88a-88pa-168a-168pa-328-328p_datasheet_complete.pdf , `` xbee 802.15.4 module specification , '' 2015 .[ online ] .available : http://www.digi.com/products/wireless-wired-embedded-solutions/zigbee-rf-modules/point-multipoint-rfmodules/xbee-series1-module#specs
a wireless power transfer network (wptn) aims to support devices with cable-less, on-demand energy. unfortunately, wireless power transfer itself, especially through radio frequency radiation rectification, is fairly inefficient, due to power decaying with distance, antenna polarization, etc. consequently, idle charging needs to be minimized to reduce the already large cost of providing energy to the receivers and, at the same time, to reduce the carbon footprint of wptns. in turn, energy saving in a wptn can be boosted simply by switching off the energy transmitter when the received energy is too weak for rectification. therefore, in this paper we propose, and experimentally evaluate, two ``green'' protocols for the control plane of a static-charger/mobile-receiver wptn, aimed at optimizing the charger workflow to make the wptn green. those protocols are: `beaconing', where receivers advertise their presence to the wptn, and `probing', which exploits receiver feedback to the wptn on the level of received energy. we demonstrate that both protocols reduce unnecessary wptn uptime, trading it for reduced energy provision, compared to the base case of `wptn charger always on'. for example, our system (in our experiments) saves at most % of energy and increases the efficiency 5.5 times with only % less energy possibly harvested.
understanding, modeling and predicting the volatility of financial time series has been extensively researched for more than 30 years, and the interest in the subject is far from decreasing. volatility prediction has a very wide range of applications in finance, for example in portfolio optimization, risk management, asset allocation and asset pricing. the two most popular approaches to modeling volatility are based on the autoregressive conditional heteroscedasticity (arch) type and stochastic volatility (sv) type models. the seminal paper of proposed the original arch model, while generalized the purely autoregressive arch into an arma-type model, called the generalized autoregressive conditional heteroscedasticity (garch) model. since then, there has been a very large amount of research on the topic, stretching to various model extensions and generalizations. meanwhile, researchers have been addressing two important topics: looking for the best specification for the errors, and selecting the most efficient approach for inference and prediction.

besides selecting the best model for the data, distributional assumptions for the returns are equally important. it is well known that every prediction, in order to be useful, has to come with a certain precision measurement; in this way the agent can know the risk she is facing, i.e. the uncertainty. distributional assumptions permit quantifying this uncertainty about the future. traditionally, the errors have been assumed to be gaussian; however, it has been widely acknowledged that financial returns display fat tails and are not conditionally gaussian. therefore, it is common to assume a student-t distribution, see, and, among others. however, the assumption of gaussian or student-t distributions is rather restrictive. an alternative approach is to use a mixture of distributions, which can approximate any distribution arbitrarily well given a sufficient number of mixture components. a mixture of two normals was used by, and, among others. these authors have shown that models with a mixture distribution for the errors outperform the gaussian one and do not require additional restrictions on a degrees-of-freedom parameter, as the student-t one does.

as for inference and prediction, the bayesian approach is especially well suited for garch models and provides some advantages compared to classical estimation techniques, as outlined by. firstly, the positivity constraints on the parameters needed to ensure a positive variance may encumber some optimization procedures; in the bayesian setting, constraints on the model parameters can be incorporated via priors. secondly, in most cases we are interested not in the model parameters directly, but in some non-linear functions of them. in the maximum likelihood (ml) setting it is quite troublesome to perform inference on such quantities, while in the bayesian setting it is usually straightforward to obtain the posterior distribution of any non-linear function of the model parameters. furthermore, in the classical approach, models are usually compared by means other than the likelihood; in the bayesian setting, marginal likelihoods and bayes factors allow for a consistent comparison of non-nested models while incorporating occam's razor for parsimony. also, bayesian estimation provides reliable results even for finite samples.
finally, we add that the ml approach presents some limitations when the errors are heavy-tailed: the convergence rate is slow and the estimators may not be asymptotically gaussian. this survey reviews the existing bayesian inference methods for univariate and multivariate garch models while keeping their error specifications in mind. the main emphasis of the paper is on the recent development of an alternative inference approach for these models using bayesian non-parametrics. classical parametric modeling, relying on a finite number of parameters, although widely used, has certain drawbacks: since the number of parameters of any model is fixed, one can encounter underfitting or overfitting, which arises from the misfit between the available data and the parameters needed to estimate. then, in order to avoid assuming wrong parametric distributions, which may lead to inconsistent estimators, it is better to consider a semi- or non-parametric approach. bayesian non-parametrics may lead to less constrained models than classical parametric bayesian statistics and provide an adequate description of the data, especially when the conditional return distribution is far from gaussian. to the best of our knowledge, there have been very few papers using bayesian non-parametrics for garch models. these are for univariate garch, and for mgarch. all of them have considered infinite mixtures of gaussian distributions with a dirichlet process (dp) prior over the mixing distribution, which results in dp mixture (dpm) models; this approach has so far proven to be the most popular bayesian non-parametric modeling procedure. the results across these papers have been consistent: the bayesian non-parametric approach leads to more flexible models and is better at explaining heavy-tailed return distributions, which parametric models cannot fully capture.

the outline of this survey is as follows. section [section:univ_garch] briefly introduces univariate garch models and different inference and prediction methods. section [section:mult_garch] overviews the existing models for multivariate garch and different inference and prediction approaches. section [section:non_param] introduces the bayesian non-parametric modeling approach and reviews the limited literature of this area in time-varying volatility models. section [section:illustration] presents a real data application. finally, section [section:conclusions] concludes.

as mentioned before, the two most popular approaches to modeling volatility are garch-type and sv-type models. in this survey we focus on garch models; therefore, sv models will not be included hereafter. also, we are not going to enter into the technical details of the bayesian algorithms and refer to for a more detailed description of bayesian techniques. the general structure of an asset return series modeled by a garch-type model can be written as r_t = μ_t + a_t, with a_t = √h_t ε_t, where μ_t = E[r_t | I_{t-1}] is the conditional mean given the information set I_{t-1}, h_t = Var[r_t | I_{t-1}] is the conditional variance given I_{t-1}, and ε_t is a standard white noise shock. there are several ways to model the conditional mean μ_t. the usual assumptions are that the mean is either zero, equal to a constant, or follows an arma(p,q) process. however, sometimes the mean is also modeled as a function of the conditional variance, which leads to the garch-in-mean models. on the other hand, the conditional variance h_t is usually modeled using the garch-family models.
in the basic garch model the conditional variance of the returns depends on the sum of three parts: a constant variance as the long-run average, a linear combination of the past conditional variances, and a linear combination of the past squared mean-corrected returns. for instance, in the garch(1,1) model, the conditional variance at time t is given by h_t = ω + α a_{t-1}^2 + β h_{t-1}. some restrictions have to be imposed on the parameters: ω > 0 and α, β ≥ 0 for a positive variance, and α + β < 1 for covariance stationarity. proposed the exponential garch (egarch) model, which acknowledges the existence of asymmetry in the volatility response to changes in the returns, sometimes also called the ``leverage effect'', introduced by: negative shocks to the returns have a stronger effect on volatility than positive ones. other arch extensions that try to incorporate the leverage effect are the gjr model by and the tgarch of, among many others. as one author puts it, ``there is now an alphabet soup'' of arch-family models, such as aarch, aparch, figarch, starch, etc., which try to incorporate such return features as fat tails, volatility clustering and volatility asymmetry. papers by, , , provide extensive reviews of the existing arch-type models. review arch-type models and discuss their extensions, estimation and testing, as well as numerous applications. also, one can find an explicit review with examples and applications concerning garch-family models in and in chapter 1 of.
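to make the garch(1,1) recursion above concrete, the following is a minimal simulation sketch in python; the parameter values (ω = 0.05, α = 0.10, β = 0.85) are illustrative choices satisfying the positivity and stationarity constraints, not estimates from any data set.

```python
import numpy as np

def simulate_garch11(T, omega=0.05, alpha=0.10, beta=0.85, seed=0):
    """Simulate T returns from a GARCH(1,1) with Gaussian shocks.

    h_t = omega + alpha * a_{t-1}^2 + beta * h_{t-1},  a_t = sqrt(h_t) * eps_t.
    Requires omega > 0, alpha, beta >= 0 and alpha + beta < 1 (stationarity).
    """
    rng = np.random.default_rng(seed)
    h = np.empty(T)
    a = np.empty(T)
    h[0] = omega / (1.0 - alpha - beta)      # start at the unconditional variance
    a[0] = np.sqrt(h[0]) * rng.standard_normal()
    for t in range(1, T):
        h[t] = omega + alpha * a[t - 1] ** 2 + beta * h[t - 1]
        a[t] = np.sqrt(h[t]) * rng.standard_normal()
    return a, h

returns, cond_var = simulate_garch11(2000)
print(returns.var(), cond_var.mean())  # both close to omega / (1 - alpha - beta)
```

simulated paths of this kind are also what the sampling algorithms discussed next are typically validated on.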
the main estimation approach for garch-family models is the classical maximum likelihood method. however, there has recently been a rapid development of bayesian estimation techniques, which offer some advantages compared to the frequentist approach, as already discussed in the introduction. in addition, in the empirical finance setting, the frequentist approach presents an uncertainty problem. for instance, optimal allocation is greatly affected by parameter uncertainty, which has been recognized in a number of papers, see and, among others. these authors conclude that in the frequentist setting the estimated parameter values are considered to be the true ones; therefore, the optimal portfolio weights tend to inherit this estimation error. however, instead of solving the optimization problem on the basis of a unique choice of parameter values, the investor can adopt the bayesian approach, because it accounts for parameter uncertainty, as seen in and, for example. a number of papers in this field have explored different bayesian procedures for inference and prediction and different approaches to modeling the fat-tailed errors and/or asymmetric volatility. the recent development of modern bayesian computational methods, based on monte carlo approximations and mcmc methods, has facilitated the usage of bayesian techniques, see e.g. . the standard gibbs sampling procedure does not make the list, because it cannot be used due to the recursive nature of the conditional variance: the conditional posterior distributions of the model parameters are not of a simple form. one of the alternatives is the _griddy-gibbs_ sampler, as in. they discuss that the previously used importance sampling and metropolis algorithms have certain drawbacks, such as requiring a careful choice of a good approximation of the posterior density, and propose a griddy-gibbs sampler which exploits analytical properties of the posterior density as much as possible. in this paper the garch model has student-t errors, which allow for fat tails. the authors choose flat (uniform) priors on the parameters, over whatever region is needed to ensure the positivity of the variance; however, a flat prior for the degrees of freedom cannot be used, because then the posterior density is not integrable. instead, they choose the right half of a cauchy distribution. the posteriors of the parameters were found to be skewed, which is a disadvantage for the commonly used gaussian approximation. on the other hand, modeled the errors of a garch model with a mixture of two gaussian distributions. the advantage of this approach compared to student-t errors is that, for the latter, if the number of degrees of freedom is very small (less than 5), some moments may not exist. the authors have chosen flat priors for all the parameters, and discovered that there is little sensitivity to a change in the prior distributions (from uniform to beta), unlike in, where the sensitivity to the prior choice for the degrees of freedom is high. more articles using a griddy-gibbs sampling approach are by, who have modeled asymmetric volatility with gaussian innovations and have used uniform priors for all the parameters, and by, who explored an asymmetric garch model with student-t errors. another mcmc algorithm used in estimating garch model parameters is the _metropolis-hastings_ (mh) method, which samples from a candidate density and then accepts or rejects the draws depending on a certain acceptance probability. modeled the errors as gaussian with zero mean and unit variance, chose gaussian priors, and used an mh algorithm to draw samples from the joint posterior distribution. the author carried out a comparative analysis between the ml and bayesian approaches, finding, as in other papers, that some posterior distributions of the parameters were skewed, thus warning against the indiscriminate use of the gaussian approximation. also, performed a sensitivity analysis of the prior means and scale parameters and concluded that the initial priors in this case are vague enough. this approach has also been used by, and, among others. a special case of the mh method is the random walk metropolis-hastings (rwmh), where the proposal draws are generated by randomly perturbing the current value using a spherically symmetric distribution. a usual choice is to generate candidate values from a gaussian distribution whose mean is the previous value of the parameter and whose variance is calibrated to achieve the desired acceptance probability; this procedure is repeated at each mcmc iteration. have also carried out a comparison of estimation approaches: griddy-gibbs, rwmh and ml. apparently, rwmh has difficulties in exploring the tails of the posterior distributions, and the ml estimates may be rather different for those parameters whose posterior distributions are skewed. in order to select one of the algorithms, one might consider criteria such as fast convergence, for example.
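as an illustration of the random walk metropolis-hastings scheme just described, here is a minimal sketch for a garch(1,1) posterior; the gaussian likelihood, the flat prior truncated to the admissible region, and the fixed proposal scale are all simplifying assumptions made for this example.

```python
import numpy as np

def garch11_loglik(theta, a):
    """Gaussian log-likelihood of returns a (1-D array) under GARCH(1,1);
    returns -inf outside the admissible parameter region."""
    omega, alpha, beta = theta
    if omega <= 0 or alpha < 0 or beta < 0 or alpha + beta >= 1:
        return -np.inf  # flat prior truncated to the admissible region
    h = np.empty_like(a)
    h[0] = a.var()
    for t in range(1, len(a)):
        h[t] = omega + alpha * a[t - 1] ** 2 + beta * h[t - 1]
    return -0.5 * np.sum(np.log(2 * np.pi * h) + a ** 2 / h)

def rwmh(a, theta0, n_iter=5000, step=0.02, seed=0):
    """Random walk MH with a spherical Gaussian proposal of scale `step`."""
    rng = np.random.default_rng(seed)
    draws = np.empty((n_iter, 3))
    theta, logp = np.asarray(theta0, float), garch11_loglik(theta0, a)
    for i in range(n_iter):
        prop = theta + step * rng.standard_normal(3)
        logp_prop = garch11_loglik(prop, a)
        if np.log(rng.uniform()) < logp_prop - logp:   # accept/reject
            theta, logp = prop, logp_prop
        draws[i] = theta
    return draws
```

in practice the proposal scale would be tuned to reach a reasonable acceptance rate, which is exactly the calibration step mentioned above.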
numerically compares some of these approaches in the context of garch. the griddy-gibbs method is capable of capturing the shape of the posterior using smaller mcmc outputs than other methods, and it is also flexible regarding the parametric specification of the model; however, it can require a lot of computational time. this author also investigates mh, adaptive rejection metropolis sampling (arms), proposed by, and the acceptance-rejection mh algorithm (armh), proposed by. for more details about each method in garch models see and, among others. using simulated data, calculated geometric averages of inefficiency factors for each method; the inefficiency factor is simply the inverse of the efficiency factor. according to this criterion, the armh algorithm performed best. computational time was also taken into consideration, where armh clearly outperformed mh and arms, while griddy-gibbs stayed just a bit behind. the author observes that even though the armh method showed the best results, the posterior densities for each parameter did not quite explore the tails of the distributions, as desired; in this case griddy-gibbs performs better, and it also requires fewer draws than armh. investigate one more convergence criterion, proposed by, which is based on cumulative sum (cumsum) statistics: it basically says that if the mcmc is converging, the graph of a certain cumsum statistic against time should approach zero. their griddy-gibbs algorithm converged quite fast in all four parameters. the authors then explored the advantages and disadvantages of alternative approaches: importance sampling and the mh algorithm. considering importance sampling, one of the main disadvantages, as mentioned before, is finding a good approximation of the posterior density (the importance function); also, compared with the griddy-gibbs algorithm, importance sampling requires many more draws to obtain smooth graphs of the marginal densities. for the mh algorithm, as in importance sampling, a good approximation needs to be found; also, compared to griddy-gibbs, the mh algorithm did not fully explore the tails of the distribution unless a very large number of draws was used.
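a minimal sketch of the cumsum-type diagnostic mentioned above: for a chain of draws of some scalar function of the parameters, the running mean is compared against the overall mean, and the standardized difference should approach zero as the chain converges. the exact standardization varies across papers; the version below is one simple, commonly used variant and is an assumption of this example.

```python
import numpy as np

def cumsum_diagnostic(chain):
    """Standardized running-mean CUSUM statistic for an MCMC chain.

    Returns C_t = (running mean up to t - overall mean) / overall std.
    For a converging, well-mixing chain, the plot of C_t against t
    should settle close to zero.
    """
    chain = np.asarray(chain, dtype=float)
    t = np.arange(1, len(chain) + 1)
    running_mean = np.cumsum(chain) / t
    return (running_mean - chain.mean()) / chain.std()

# usage: stats = cumsum_diagnostic(draws[:, 0]); then plot stats against iteration
```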
another important aspect of the bayesian approach, as commented before, is its advantages in model selection compared to classical methods. reviews some bayesian model selection methods using mcmc for garch-type models, which allow for the estimation of marginal model likelihoods, bayes factors or posterior model probabilities. these are compared to the classical model selection criteria, showing that the bayesian approach accounts for model complexity in a more unbiased way. also, includes a review of bayesian selection methods for asymmetric garch models, such as the gjr-garch and threshold garch. they show how, using the bayesian approach, it is possible to compare complex and non-nested models, for example to choose between garch and stochastic volatility models or between symmetric and asymmetric garch models, or to determine the number of regimes in threshold processes, among others. an alternative to the previous parametric specifications is the use of bayesian non-parametric methods, which allow modeling the errors as an infinite mixture of normals, as seen in the paper by. the bayesian non-parametric approach for time-varying volatility models will be discussed in detail in section [section:non_param]. to sum up, the number of articles published quite recently on estimating univariate garch models using mcmc methods indicates a still growing interest in the area. although numerous garch-family models have been investigated using different mcmc algorithms, there are still many areas that need further research and development.

returns and volatilities depend on each other, so multivariate analysis is a more natural and useful approach. the starting point for multivariate volatility models is univariate garch; thus, the simplest mgarch models can be viewed as direct generalizations of their univariate counterparts. consider a k-dimensional return series r_t, t = 1, ..., T. then r_t = μ_t + a_t, with a_t = H_t^{1/2} ε_t, where μ_t = E[r_t | I_{t-1}] is the conditional mean vector given the information set I_{t-1}, H_t is the conditional covariance matrix of r_t given I_{t-1}, and ε_t is a k-dimensional white noise vector with E[ε_t] = 0 and Cov[ε_t] = I_k. there is a wide range of mgarch models, most of which differ in how they specify H_t. in the rest of this section we review the most popular and widely used specifications and the different bayesian approaches to inference and prediction. for general reviews on mgarch models, see, and (chapter 10), among others. regarding inference, one can also consider the same arguments provided in the univariate garch case above. maximum likelihood estimation for mgarch models can be obtained by using numerical optimization algorithms, such as fisher scoring and newton-raphson. estimated several bivariate arch and garch models and found that some classical estimates of the parameters were quite different from their bayesian counterparts; this was due to the non-normality of the posterior distributions of the parameters. thus, the authors suggest a careful interpretation of the classical estimation approach.
also, found it difficult to evaluate the classical estimates under the stationarity conditions; consequently, the parameters estimated ignoring the stationarity constraints produced non-stationary estimates. these difficulties can be overcome using the bayesian approach. the vec model was proposed by, in which every conditional variance and covariance (the elements of the matrix H_t) is a function of all lagged conditional variances and covariances, as well as of the lagged squared mean-corrected returns and cross-products of returns. using this unrestricted vec formulation, the number of parameters increases dramatically: for example, if k = 3 the number of parameters to estimate is 78, and if k = 4 it increases to 210; see for the explicit formula for the number of parameters in vec models. to overcome this difficulty, simplified the vec model by proposing the diagonal vec (dvec) model: H_t = C + A ⊙ (a_{t-1} a_{t-1}') + B ⊙ H_{t-1}, where ⊙ indicates the hadamard (element-wise) product and C, A and B are symmetric matrices. as noted in, H_t is positive definite provided that C, A, B and the initial matrix H_0 are positive definite. however, these are quite strong restrictions on the parameters; also, the dvec model does not allow for dynamic dependence between the volatility series. in order to avoid such strong restrictions on the parameter matrices, propose the bekk model, which is just a special case of the vec and, consequently, less general. it has the attractive property that the conditional covariance matrices are positive definite by construction. the model looks as follows: H_t = C C' + A' a_{t-1} a_{t-1}' A + B' H_{t-1} B, where C is a lower triangular matrix and A and B are k × k parameter matrices. in the bekk model it is easy to impose positive definiteness of H_t; however, the parameter matrices A and B do not have direct interpretations, since they do not directly represent the size of the impact of the lagged volatilities and squared returns. present a paper that compares the performance of various bivariate arch and garch models, such as vec, bekk, etc., estimated using bayesian techniques. as the authors observe, they are the first to perform model comparison using bayes factors and posterior odds in the mgarch setting. the algorithm used for parameter estimation and inference is metropolis-hastings, and to check for convergence they rely on the cumsum statistics introduced by and used by in the univariate garch setting. using real data, the authors found that the t-bekk models performed best, with t-vec not far behind; the t-vec model, sometimes also called t-vech, is a more general form of the dvec seen above, in which the mean-corrected returns follow a student-t distribution. the name comes from the vech function, which reshapes the lower triangular portion of a symmetric variance-covariance matrix into a column vector. to sum up, the authors choose the t-bekk model as clearly better than the t-vec, because it is relatively simple and has fewer parameters to estimate.
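as a concrete illustration of the bekk recursion above, the following sketch computes one step of the conditional covariance update; the matrices used are arbitrary illustrative values, not estimates, and positive definiteness of H_t follows by construction whenever C has full rank.

```python
import numpy as np

def bekk_step(H_prev, a_prev, C, A, B):
    """One BEKK(1,1) update: H_t = C C' + A' a a' A + B' H_{t-1} B."""
    outer = np.outer(a_prev, a_prev)
    return C @ C.T + A.T @ outer @ A + B.T @ H_prev @ B

k = 2
C = np.array([[0.3, 0.0], [0.1, 0.2]])   # lower triangular
A = 0.3 * np.eye(k)
B = 0.9 * np.eye(k)
H = np.eye(k)
a = np.array([0.5, -0.2])
H_next = bekk_step(H, a, C, A, B)
print(np.linalg.eigvalsh(H_next))        # all positive: H_next is positive definite
```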
on the other hand, developed a prior distribution for a vech specification that directly satisfies both necessary and sufficient conditions for positive definiteness and covariance stationarity, while remaining diffuse and non-informative over the allowable parameter space. these authors employed mcmc methods, including metropolis-hastings, to help enforce the conditions of this prior. more recently, use the bekk-garch model to show the usefulness of a new posterior sampler called adaptive hamiltonian monte carlo (ahmc). hamiltonian monte carlo (hmc) is a procedure for sampling from complex distributions; ahmc is an alternative inferential method based on hmc that is both fast and locally adaptive, and it appears to work very well when the dimension of the parameter space is very high. model selection based on marginal likelihoods is used to show that full bekk models are preferred to restricted diagonal specifications. additionally, suggests an approach called constrained hamiltonian monte carlo (chmc) to deal with high-dimensional bekk models with targeting, which allow for a reduction of the parameter dimension without compromising the model fit, unlike the diagonal bekk. a model comparison of the full bekk and the bekk with targeting is performed, indicating that the latter dominates the former in terms of marginal likelihood.
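for readers unfamiliar with hamiltonian monte carlo, the following is a minimal sketch of one hmc transition using the standard leapfrog integrator; the step size, the number of leapfrog steps and the identity mass matrix are illustrative choices, and `grad_logp` is a user-supplied gradient of the log-posterior (assumed available, e.g. via automatic differentiation). this is a generic sketch, not the ahmc or chmc samplers discussed above, which add adaptivity and constraint handling on top of this basic transition.

```python
import numpy as np

def hmc_step(theta, logp, grad_logp, step=0.01, n_leapfrog=20, rng=None):
    """One Hamiltonian Monte Carlo transition with a leapfrog integrator."""
    rng = rng or np.random.default_rng()
    p = rng.standard_normal(theta.shape)            # momentum ~ N(0, I)
    theta_new, p_new = theta.copy(), p.copy()
    p_new += 0.5 * step * grad_logp(theta_new)      # half step for momentum
    for _ in range(n_leapfrog - 1):
        theta_new += step * p_new                   # full step for position
        p_new += step * grad_logp(theta_new)        # full step for momentum
    theta_new += step * p_new
    p_new += 0.5 * step * grad_logp(theta_new)      # final half step
    # Metropolis correction based on the Hamiltonian (negated log joint)
    h_old = -logp(theta) + 0.5 * p @ p
    h_new = -logp(theta_new) + 0.5 * p_new @ p_new
    if np.log(rng.uniform()) < h_old - h_new:
        return theta_new
    return theta
```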
factor-garch was first proposed by to reduce the dimension of the multivariate model of interest using an accurate approximation of the multivariate volatility. according to the definition of the factor-garch model proposed by, the bekk model above is a factor-garch if A and B have rank one and share the same left and right eigenvectors, that is, A = α λ w' and B = β λ w', where α and β are scalars and λ and w are the common eigenvectors. several variants of the factor model have been proposed. one of them is the full-factor multivariate garch of: r_t = μ + W x_t, where μ is a time-invariant vector of constants, W is a k × k parameter matrix, x_t is a vector of k factors, and Σ_t = diag(σ_{1t}^2, ..., σ_{kt}^2) is the diagonal conditional variance matrix of the factors, with σ_{it}^2, the conditional variance of the i-th factor at time t, following a garch(1,1) process. the factors in the vector x_t are thus garch(1,1) processes, and the return vector is a linear combination of these factors. it can easily be shown that the implied conditional covariance matrix H_t = W Σ_t W' is positive definite by construction. however, the structure of H_t depends on the ordering of the time series in r_t; have considered this problem to find the best ordering under the proposed model. furthermore, investigate a full-factor mgarch model using both the ml and bayesian approaches. the authors compute maximum likelihood estimates using the fisher scoring algorithm. for the bayesian analysis, they adopted a metropolis-hastings algorithm and found it very time consuming, especially for high-dimensional data. to speed up convergence, proposed a reparametrization of the positive parameters and a blocking sampling scheme, where the parameter vector is divided into three blocks: the mean, the variance and the matrix of constants. as mentioned before, the ordering of the univariate time series in full-factor models is important; thus, to select ``the best'' model one has to consider k! possible orderings for a multivariate dataset of dimension k. instead of choosing one model and making inference (as if the selected model were the true one), the authors employ a bayesian approach, calculating the posterior probabilities of all competing models and using model averaging to provide ``combined'' predictions. the main contribution of this paper is that the authors were able to carry out an extensive bayesian analysis of a full-factor mgarch model considering not only parameter uncertainty, but model uncertainty as well. as already discussed above, a very common stylized feature of financial time series is asymmetric volatility. proposed a new class of tree-structured mgarch models that explore the asymmetric volatility effect. as in the paper by, the authors consider not only parameter-related uncertainty, but also the uncertainty corresponding to model selection. in this case the bayesian approach becomes particularly useful, because the alternative method, based on maximizing the pseudo-likelihood, can only work after a single model has been selected. the authors develop an mcmc stochastic search algorithm that generates candidate tree structures and their posterior probabilities; the proposed algorithm converged fast. such a modeling and inference approach leads to more reliable and more informative results concerning model selection and individual parameter inference. there are more models nested in bekk, such as the orthogonal garch, see and, among others. all of them fall into the class of direct generalizations of univariate garch or linear combinations of univariate garch models. another class of models consists of nonlinear combinations of univariate garch models, such as the constant conditional correlation (ccc), dynamic conditional correlation (dcc), general dynamic covariance (gdc) and copula-garch models. a very recent alternative approach that also considers bayesian estimation can be found in, who proposes new dynamic component models of returns and realized covariance (rcov) matrices based on time-varying wishart distributions; in particular, bayesian estimation and model comparison are conducted against an existing range of multivariate garch models and rcov models. the ccc model, proposed by and the simplest in its class, is based on the decomposition of the conditional covariance matrix into conditional standard deviations and correlations. the conditional covariance matrix is H_t = D_t R D_t, where D_t = diag(h_{1t}^{1/2}, ..., h_{kt}^{1/2}) is the diagonal matrix of conditional standard deviations and R is a time-invariant conditional correlation matrix with unit diagonal elements. the ccc approach can be applied to a wide range of univariate garch-family models, such as the exponential garch or the gjr-garch. have estimated some real data using a variety of bivariate arch and garch models in order to select the best model specification and to compare the bayesian parameter estimates with those of ml. these authors considered three arch and three garch models, all with constant conditional correlations (ccc). they used a metropolis-hastings algorithm, which allows simulating from the joint posterior distribution of the parameters. for model comparison and selection, they obtained predictive distributions and assessed the comparative validity of the analyzed models, according to which the ccc model with diagonal covariance matrix performed best for one-step-ahead predictions.
a natural extension of the simple ccc model are the dynamic conditional correlation (dcc) models, first proposed by and. the dcc approach is more realistic, because the dependence between returns is likely to be time-varying. the models proposed by and consider a conditional covariance matrix of the form H_t = D_t R_t D_t, where R_t is now a time-varying correlation matrix; the two proposals differ in the specification of R_t. in the paper by, the conditional correlation matrix is R_t = (1 − θ_1 − θ_2) R + θ_1 Ψ_{t-1} + θ_2 R_{t-1}, where θ_1 and θ_2 are non-negative scalar parameters such that θ_1 + θ_2 < 1, R is a positive definite matrix with unit diagonal, and Ψ_{t-1} is a sample correlation matrix of the past standardized mean-corrected returns. on the other hand, in the paper by, the specification is R_t = diag(Q_t)^{-1/2} Q_t diag(Q_t)^{-1/2} with Q_t = (1 − a − b) Q̄ + a z_{t-1} z_{t-1}' + b Q_{t-1}, where z_t denotes the mean-corrected standardized returns, a and b are non-negative scalar parameters such that a + b < 1, and Q̄ is the unconditional covariance matrix of z_t. as noted in, the latter model does not formulate the conditional correlation as a weighted sum of past correlations, unlike the first dcc model seen above. the drawback of both models is that θ_1, θ_2, a and b are scalar parameters, so all conditional correlations have the same dynamics; however, as notes, the models are parsimonious. moreover, as financial returns display not only asymmetric volatility but also excess kurtosis, previous research has, as in the univariate case, mostly considered a multivariate student-t distribution for the errors. however, as already discussed above, this approach has several limitations. propose an mgarch-dcc model where the standardized innovations follow a mixture of gaussian distributions. this allows capturing heavy tails without being limited by the degrees-of-freedom constraint that must be imposed on the student-t distribution so that higher moments exist. the authors estimate the proposed model using both classical ml and bayesian approaches. in order to estimate the model parameters, the dynamics of single assets and of the dynamic correlations, and the parameters of the gaussian mixture, relied on a rwmh algorithm. the bic criterion was used for selecting the number of mixture components, and it performed well on simulated data. using real data, the authors provide an application to calculating the value at risk (var) and to solving a portfolio selection problem. the mle and bayesian approaches performed similarly in point estimation; however, the bayesian approach, beyond giving point estimates, allows the derivation of predictive distributions for the portfolio var.
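a minimal sketch of the dcc correlation recursion described above (the Q_t update and its rescaling into a correlation matrix); the scalar parameters a and b are illustrative values satisfying a + b < 1, not estimates.

```python
import numpy as np

def dcc_correlations(z, a=0.05, b=0.90):
    """DCC(1,1) recursion: Q_t = (1-a-b)*Qbar + a*z_{t-1}z_{t-1}' + b*Q_{t-1},
    R_t = diag(Q_t)^{-1/2} Q_t diag(Q_t)^{-1/2}.

    z: (T, k) array of standardized (devolatilized) returns.
    Returns the (T, k, k) array of conditional correlation matrices.
    """
    T, k = z.shape
    Qbar = np.cov(z, rowvar=False)        # unconditional covariance of z
    Q = Qbar.copy()
    R = np.empty((T, k, k))
    for t in range(T):
        d = 1.0 / np.sqrt(np.diag(Q))
        R[t] = Q * np.outer(d, d)         # rescale Q_t into a correlation matrix
        Q = (1 - a - b) * Qbar + a * np.outer(z[t], z[t]) + b * Q
    return R
```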
an extension of the dcc model of is the asymmetric dcc (adcc), also proposed by, which incorporates an asymmetric correlation effect: correlations between asset returns decrease more in a bear market than they increase when the market performs well. generalizes the adcc model into the agdcc model, where the parameters of the correlation equation are vectors rather than scalars, which allows for asset-specific correlation dynamics. in the agdcc model, the matrix Q_t of the dcc model is replaced with Q_t = (Q̄ − diag(a) Q̄ diag(a) − diag(b) Q̄ diag(b) − diag(g) N̄ diag(g)) + diag(a) z_{t-1} z_{t-1}' diag(a) + diag(b) Q_{t-1} diag(b) + diag(g) n_{t-1} n_{t-1}' diag(g), where z_t are the mean-corrected standardized returns, n_t = I[z_t < 0] ∘ z_t selects just the negative returns, ``diag'' stands for either taking the diagonal elements of a matrix or making a diagonal matrix from a vector, N̄ is a sample correlation matrix of the n_t, and a, b and g are k × 1 parameter vectors. to ensure positivity and stationarity of Q_t, restrictions analogous to the scalar condition a + b < 1 above must be imposed element-wise on a, b and g. the adcc is just the special case in which all the elements of a, of b and of g are equal, so that the parameters reduce to scalars. up to our knowledge, the only paper that considers the agdcc model in a bayesian setting is, which proposes to model the distribution of the standardized returns as an infinite scale mixture of gaussian distributions by relying on bayesian non-parametrics; this approach is presented in more detail in section [section:non_param]. the use of copulas is an alternative approach to studying return time series and their volatilities. the main convenience of using copulas is that the individual marginal densities of the returns can be defined separately from their dependence structure: each marginal time series can be modeled using a univariate specification, and the dependence between the returns can be modeled by selecting an appropriate copula function. a k-dimensional copula C is a multivariate distribution function on the unit hypercube [0,1]^k with uniform marginal distributions. under certain conditions, sklar's theorem affirms (see,) that every joint distribution F whose marginals are given by F_1, ..., F_k can be written as F(x_1, ..., x_k) = C(F_1(x_1), ..., F_k(x_k)), where C is a copula function, which is unique if the marginal distributions are continuous. the most popular approach to volatility modeling through copulas is the copula-garch model, where univariate garch models are specified for each marginal series and the dependence structure between them is described using a copula function. a very useful feature of copulas, as noted by, is that the marginal distributions of the random variables do not need to resemble each other; this is very important in modeling time series, because each of them might follow a different distribution. the choice of copula can vary from a simple gaussian copula to more flexible ones, such as the clayton, the gumbel, the mixed gaussian, etc. in the existing literature, different parametric and non-parametric specifications can be used for the marginals and the copula function; also, the copula function can be assumed to be constant or time-varying, as seen in, among others. the estimation of copula-garch models can be performed in a variety of ways. maximum likelihood is the obvious choice for fully parametric models. estimation is generally based on a multi-stage method, where the parameters of the marginal univariate distributions are estimated first and then taken as given when estimating the parameters of the copula. another approach is a non- or semi-parametric estimation of the univariate marginal distributions followed by a parametric estimation of the copula parameters. as has shown, the two-stage maximum likelihood approach leads to consistent, but not efficient, estimators. an alternative is to employ a bayesian approach, as done by. the authors developed a one-step bayesian procedure where all parameters are estimated at the same time using the entire likelihood function, and provided the methodology for obtaining the optimal portfolio and calculating var and cvar. have used a gibbs sampler to sample from the joint posterior, where each parameter is updated using a rwmh step. in order to reduce the computational cost, the model and copula parameters are updated not one by one, but rather in blocks consisting of highly correlated vectors of model parameters.
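to illustrate the copula-garch construction just discussed, here is a minimal two-stage sketch with a gaussian copula: garch-filtered marginals are mapped to uniforms through their empirical cdfs, transformed to normal scores, and the copula correlation is then estimated from those scores. the empirical-cdf marginal step and the gaussian copula choice are simplifying assumptions of this example, not the specifications used in the cited papers.

```python
import numpy as np
from scipy import stats

def to_uniform(x):
    """Probability integral transform via the empirical CDF (rank transform)."""
    ranks = stats.rankdata(x)
    return ranks / (len(x) + 1.0)          # keep values strictly inside (0, 1)

def gaussian_copula_corr(std_resid):
    """Stage 2 of a two-stage copula-GARCH fit.

    std_resid: (T, k) standardized residuals from k univariate GARCH fits
    (stage 1, not shown). Returns the estimated Gaussian copula correlation.
    """
    u = np.column_stack([to_uniform(std_resid[:, j])
                         for j in range(std_resid.shape[1])])
    z = stats.norm.ppf(u)                   # normal scores
    return np.corrcoef(z, rowvar=False)
```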
have also used bayesian inference for copula-garch models. these authors proposed a methodology for modeling a dynamic dependence structure by allowing the copula functions or the copula parameters to change across time. the idea is to use a threshold approach, so that these changes, which are assumed to be unknown, do not evolve continuously in time but occur at distinct points. these authors also employed a rwmh algorithm for parameter estimation, together with a laplace approximation. the adoption of an mcmc algorithm allows the choice of different copula functions and/or different parameter values between two time thresholds. bayesian model averaging is considered for predicting dependence measures such as kendall's correlation. they conclude that the new model performs well and offers good insight into the time-varying dependencies between financial returns. developed bayesian inference for a multivariate garch model where the dependence is introduced by a d-vine copula on the innovations. a d-vine copula is a special case of vine copulas, which are very flexible constructions for multivariate copulas because they allow the dependence between pairs of margins to be modeled individually. inference is carried out using a two-step mcmc method closely related to the usual two-step maximum likelihood procedure for estimating copula-garch models. the authors then focus on estimating the var of a portfolio that shows asymmetric dependencies between some pairs of assets and symmetric dependencies between others.

all the previously introduced methods rely on parametric assumptions for the distribution of the errors. however, imposing a certain distribution can be rather restrictive and lead to underestimated uncertainty about future volatilities, as seen in. therefore, bayesian non-parametric methods become especially useful, since they do not impose any specific distribution on the standardized returns. bayesian non-parametrics is an alternative to classical parametric bayesian statistics, where one usually places a prior, drawn from a family of parametric distributions, on the parameters of interest and then observes the data and calculates the posterior. bayesian non-parametrics instead uses a prior over distributions, with the support being the space of all distributions; it can thus be viewed as a distribution over distributions. one of the most popular bayesian non-parametric modeling approaches is based on dirichlet processes (dp) and mixtures of dirichlet processes (dpm). the dp was first introduced by. suppose that we have a sequence of exchangeably distributed random variables y_1, y_2, ... from an unknown distribution G, whose support is a space Θ.
in order to perform bayesian inference, we need to define a prior for G. this can be done by considering partitions of Θ, such as B_1, ..., B_m, and defining priors over all possible partitions. we say that G has a dirichlet process prior, denoted G ~ DP(α, G_0), if for any partition the vector of associated probabilities follows a dirichlet distribution, (G(B_1), ..., G(B_m)) ~ Dir(α G_0(B_1), ..., α G_0(B_m)), where α is a precision parameter that represents our prior certainty about how concentrated the distribution is around G_0, a known base distribution on Θ. the dirichlet process is a conjugate prior: given independent and identically distributed samples y_1, ..., y_n from G, the posterior distribution of G is also a dirichlet process, G | y_1, ..., y_n ~ DP(α + n, (α G_0 + n F_n) / (α + n)), where F_n is the empirical distribution function. there are two main ways of generating a sample from the marginal distribution of y_1, ..., y_n, where y_i | G ~ G and G ~ DP(α, G_0): the polya urn and stick-breaking procedures. on the one hand, the polya urn scheme can be illustrated in terms of an urn with black balls: when a non-black ball is drawn, it is placed back in the urn together with another ball of the same color; if the drawn ball is black, a new color is generated from G_0 and a ball of this new color is added to the urn together with the black ball we drew. this process gives a discrete marginal distribution for the y_i, since there is always a positive probability that a previously seen value is repeated. on the other hand, the stick-breaking procedure is based on the representation of the random distribution G as a countably infinite mixture: G = Σ_{j=1}^∞ w_j δ_{θ_j}, where δ_{θ_j} is a dirac measure at θ_j ~ G_0, and the weights are w_j = v_j Π_{l<j} (1 − v_l) with v_j ~ Beta(1, α). this implies that the weights w_j decrease towards zero as j grows. the discreteness of the dirichlet process is clearly a disadvantage in practice. a solution was proposed by, using dpm models, where a dp prior is imposed on the distribution of the model parameters θ as follows: y_i | θ_i ~ f(y | θ_i), θ_i | G ~ G, G ~ DP(α, G_0). observe that G is a random distribution drawn from the dp and, because it is discrete, multiple θ_i can take the same value simultaneously, making this a mixture model. in fact, using the stick-breaking representation, the hierarchical model above can be seen as an infinite mixture of distributions: f(y) = Σ_{j=1}^∞ w_j f(y | θ_j), where the weights are obtained as before, w_j = v_j Π_{l<j} (1 − v_l) with v_j ~ Beta(1, α) and θ_j ~ G_0. regarding inference algorithms, there are two main types of approaches. on the one hand, the marginal methods, such as those proposed by, and, rely on the polya urn representation; all these algorithms are based on integrating out the infinite-dimensional part of the model. more recently, another class of algorithms, called conditional methods, has been proposed. these approaches, based on the stick-breaking scheme, leave the infinite part in the model and sample a finite number of variables. they include the procedure by, who introduces slice sampling schemes to deal with the infiniteness in dpm models, and the retrospective mcmc method of, later combined by with the slice sampling method of to obtain a new composite algorithm that is faster and easier to implement. generally, the stick-breaking procedures produce better mixing and simpler algorithms than the polya urn procedures.
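the stick-breaking construction above translates directly into code; the sketch below draws an approximate, truncated sample path of G ~ DP(α, G_0) with a standard normal base measure. the truncation level and the choice of base measure are assumptions of this example; in the conditional samplers cited above, the infiniteness is handled exactly, e.g. by slice sampling.

```python
import numpy as np

def stick_breaking_dp(alpha=1.0, n_atoms=500, seed=0):
    """Truncated stick-breaking draw from DP(alpha, G0) with G0 = N(0, 1).

    Returns atoms theta_j ~ G0 and weights w_j = v_j * prod_{l<j}(1 - v_l),
    v_j ~ Beta(1, alpha). The truncation at n_atoms is an approximation.
    """
    rng = np.random.default_rng(seed)
    v = rng.beta(1.0, alpha, size=n_atoms)
    w = v * np.concatenate(([1.0], np.cumprod(1.0 - v[:-1])))
    theta = rng.standard_normal(n_atoms)    # atoms drawn from the base measure
    return theta, w / w.sum()               # renormalize the truncated weights

theta, w = stick_breaking_dp(alpha=2.0)
samples = np.random.default_rng(1).choice(theta, size=1000, p=w)  # draws from G
```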
as mentioned above, so far there has been little research on modeling volatilities with garch models using dpm models; up to our knowledge, these only include for univariate garch, and and for mgarch. have applied semi-parametric bayesian techniques to estimate univariate garch-type models. these authors have used the class of scale mixtures of gaussian distributions, which allow the variances to change over components, with a dirichlet process prior on the mixing distribution, to model the innovations of the garch process. the resulting class of models is called dpm-garch-type models. in order to perform bayesian inference in the new model, the authors employ a stick-breaking sampling scheme and make use of the ideas proposed in, and. the new scale mixture model was compared to a simpler mixture of two gaussians, to student-t and to the usual gaussian models. the estimation results in all three cases were quite similar; however, the scale mixture model is able to capture skewness as well as kurtosis and, based on the approximated log marginal likelihood (lml), provided the best performance on simulated and real data. finally, have applied the resulting model to one-step-ahead predictions of volatilities and var. in general, the non-parametric approach leads to wider bayesian credible intervals and can better describe long tails. propose a bayesian non-parametric modeling approach for the innovations in mgarch models. they use an mgarch specification, proposed by, which is a different representation of the well-known dvec model introduced above. the innovations are modeled as an infinite mixture of multivariate normals with a dp prior. the authors have employed polya urn and stick-breaking schemes and, using two data sets, compared three model specifications: a parametric mgarch with student-t innovations (mgarch-t), a garch-dpm variant that allows for different covariances (scale mixture), and an mgarch-dpm allowing for different means and covariances of each component (location-scale mixture). in general, both semi-parametric models produced wider density intervals. however, in the mgarch-t model a single degrees-of-freedom parameter determines the tail thickness in all directions of the density, while the non-parametric models are able to capture various deviations from normality by using a suitable number of components. these results are consistent with those in. as for predictions, both semi-parametric models performed equally well and outperformed the parametric mgarch-t specification. finally, the paper by can be seen as a direct generalization of the paper by to the multivariate framework. here, as in, the authors propose using an infinite scale mixture of normals for the standardized returns. for the mgarch model a gjr-adcc specification was chosen, allowing for asymmetric volatilities and asymmetric time-varying correlations. moreover, the authors carried out a simulation study that illustrated the adaptability of the dpm model. finally, they provided a real data application to a portfolio decision problem, concluding that dpm models are less restrictive and more adaptive to whatever distribution the data come from and can therefore better capture the uncertainty surrounding financial decisions. to sum up, the findings of the above papers are consistent: the bayesian semi-parametric approach leads to more flexible models and is better at explaining heavy-tailed return distributions, which parametric models cannot fully capture. the parameter estimates are less precise, i.e.
wider bayesian credible intervals are observed, because the semi-parametric models are less restricted. this provides a more adequate measure of uncertainty. if, in the gaussian setting, the credible intervals are very narrow while the real data are not gaussian, the agent becomes overconfident about her decisions and takes more risk than she would like to assume. observes that the combination of bayesian methods and mcmc computational algorithms provides new modeling possibilities, and calls for more research on non-parametric bayesian time series modeling.

this illustration using real data has two main goals: firstly, to show the advantages of the bayesian approach, such as the ability to obtain posterior densities of quantities of interest and the ease of incorporating various constraints on the parameters; secondly, to illustrate the flexibility of the bayesian non-parametric approach for garch modeling. the data used for estimation are the log-returns (in percentages), obtained from close prices adjusted for dividends and splits, of two market indices, the ftse100 and the s&p500, from november 10th, 2004 till december 10th, 2012, resulting in a sample of 2000 observations. the ftse100 is a share index of the 100 companies listed on the london stock exchange with the highest market capitalization. the s&p500 is a stock market index based on the common stock prices of 500 top publicly traded american companies. the data were obtained from yahoo finance. figure [f:returns] and table [t:descriptive_table] present the basic plots and descriptive statistics of the two log-return series.

table [t:descriptive_table], descriptive statistics:

                 ftse100    s&p500
  mean            0.0112    0.0099
  median          0.0344    0.0779
  variance        1.7164    1.9617
  skewness       -0.0974   -0.3001
  kurtosis       10.5464   12.5674
  correlation          0.6060

as seen from the plots and the descriptive statistics, the data are slightly skewed and have high kurtosis; therefore, assuming a gaussian distribution for the standardized returns would be inappropriate. we therefore estimate this bivariate time series using an adcc model by, presented in section 3.4, which incorporates an asymmetric correlation effect. the univariate series are assumed to follow gjr-garch models in order to incorporate the leverage effect in volatilities. as for the errors, we use an infinite scale mixture of gaussian distributions. we therefore call the final model gjr-adcc-dpm.
inference and prediction are carried out using bayesian non-parametric techniques, as seen in. the choice of the mgarch specification is somewhat arbitrary and other models might work equally well. for the sake of comparison, we also estimate a restricted gjr-adcc-gaussian model using both maximum likelihood and bayesian approaches. the estimation results are presented in table [table:estimation].

table [table:estimation], parameter estimates (rows correspond to the model parameters in the order of the original table):

        ml gaussian            bayesian gaussian              bayesian dpm
        estimate  st.dev.      mean     ci                    mean     ci
        0.0166    0.0020       0.0192   (0.0130, 0.0258)      0.0181   (0.0104, 0.0264)
        0.0190    0.0016       0.0249   (0.0174, 0.0316)      0.0219   (0.0153, 0.0293)
        --        --           0.0058   (0.0002, 0.0177)      0.0046   (0.0003, 0.0112)
        --        --           0.0053   (0.0002, 0.0173)      0.0059   (0.0002, 0.0151)
        0.9087    0.0045       0.9010   (0.8841, 0.9152)      0.8956   (0.8762, 0.9139)
        0.9079    0.0050       0.8888   (0.8705, 0.9088)      0.8851   (0.8675, 0.9041)
        0.1535    0.0085       0.1587   (0.1351, 0.1871)      0.1586   (0.1057, 0.2089)
        0.1483    0.0092       0.1737   (0.1398, 0.2020)      0.1758   (0.1134, 0.2142)
        0.0075    0.0020       0.0071   (0.0014, 0.0145)      0.0095   (0.0040, 0.0156)
        0.9898    0.0029       0.9818   (0.9665, 0.9898)      0.9806   (0.9693, 0.9901)
        --        --           0.0076   (0.0002, 0.0153)      0.0039   (0.0001, 0.0114)

the estimated parameters are very similar across the three approaches, except for the asymmetric correlation coefficient. since the asymmetry parameters are so close to zero, ml has some trouble estimating them (their ml estimates are missing from the table). overall, the asymmetric correlation coefficient is small, indicating little evidence of asymmetric behavior in the correlations. figure [fig:tails] shows the estimated marginal predictive densities of the one-step-ahead returns, on a log scale, for the bayesian gaussian and dpm models. we can observe the differences in the tails arising from the different specifications of the errors: the dpm model allows for a more flexible distribution and, therefore, for more extreme returns, i.e. fatter tails. the estimated densities were obtained using the procedure described in. table [table:volatilities] presents the estimated mean, median and credible intervals of the one-step-ahead volatility matrices in the bayesian context. the matrix element (1,1) represents the volatility of the ftse100 series, (2,2) that of the s&p500, and the off-diagonal elements (1,2) and (2,1) represent the covariance of the two financial returns. figure [fig:vols_returns] draws the posterior distributions of the volatilities and the correlation. the estimated mean volatilities for both the dpm and gaussian approaches are very similar; the main differences arise from the shape of the posterior distribution.
credible intervals for the dpm-model correlation are wider, providing a more realistic measure of the uncertainty about future correlations between the two assets. this is a very important implication in a financial setting, because an investor who wrongly assumes gaussianity would be overconfident about her decision and unable to adequately measure the risk she is facing. see for a more detailed comparison of dpm and alternative parametric approaches in portfolio decision problems.

table [table:volatilities], one-step-ahead volatilities and covariance (for each element, the first line gives the mean and ci, the second line the median and ci length):

  element   constant   ml gaussian   bayesian gaussian              bayesian dpm
  (1,1)     1.7164     0.4007        0.4098  (0.3681, 0.4538)       0.3996  (0.3550, 0.4512)
                                     0.4099  length 0.0857          0.3983  length 0.0962
  (1,2)     1.1120     0.2911        0.2800  (0.2571, 0.3077)       0.2751  (0.2421, 0.3123)
                                     0.2790  length 0.0506          0.2742  length 0.0702
  (2,2)     1.9617     0.4939        0.4635  (0.4159, 0.5193)       0.4431  (0.3912, 0.5059)
                                     0.4606  length 0.1034          0.4408  length 0.1146

to sum up, this illustration has shown the main differences between the standard estimation procedures and the non-parametric approach. even though the point estimates of the parameters and of the one-step-ahead volatilities are very similar, the main differences arise in the thickness of the tails of the predictive distributions of the one-step-ahead returns and in the shape of the posterior distributions of the one-step-ahead volatilities.

in this paper we reviewed univariate and multivariate garch models and inference methods, putting emphasis on the bayesian approach. we surveyed the existing literature on bayesian inference methods for mgarch models, outlining the advantages of the bayesian approach over classical procedures. we also discussed in more detail the recent bayesian non-parametric methods for garch models, which avoid imposing arbitrary parametric distributional assumptions. this new approach is more flexible and can better describe the uncertainty about future volatilities and returns, as has been illustrated using real data. we are grateful to an anonymous referee for helpful comments. the first and second authors are grateful for financial support from mec grant eco2011-25706. the third author acknowledges financial support from mec grant eco2012-38442.
., & ( ) . . in , , , & ( eds . ) , _ _ ( pp . ) . : .( ) . . , _( ) . . in , & ( eds . ) , _ _ . : .( ) . . in , , , & ( eds . ) , _ _ ( pp . ) . : . ( ) . . , _( ed . ) . : , & ( ) . . , _ _ , . .( ) . . , _, , & ( ) . . , . , , & ( ) . . , _, , & ( ) . . , _, , & ( ) . . , _( ) . . , _ _ , . .( ) . . , __ , . . , & ( )( ) . . , _
this survey reviews the existing literature on the most relevant bayesian inference methods for univariate and multivariate garch models. the advantages and drawbacks of each procedure are outlined, as well as the advantages of the bayesian approach over classical procedures. the paper places emphasis on recent bayesian non-parametric approaches for garch models that avoid imposing arbitrary parametric distributional assumptions. these novel approaches implicitly assume an infinite mixture of gaussian distributions on the standardized returns, which has been shown to be more flexible and to describe better the uncertainty about future volatilities. finally, the survey presents an illustration using real data to show the flexibility and usefulness of the non-parametric approach. * keywords: * bayesian inference; dirichlet process mixture; financial returns; garch models; multivariate garch models; volatility.
we consider collective decision making processes, such as a market that acts as a central mechanism for coordinating the actions of autonomous participants. we address the questions: how does one measure the quality of the collective decision making process, and how weak can the central market mechanism be? in many applications, there is significant interest in decentralizing computation while still being able to arrive at results that cannot be computed entirely locally. we use a simple model to capture the informational complexity of computing global functions by aggregating results from participants who are endowed with arbitrary allotments of local information. this allows us to draw conclusions about the requirements on allotments and protocols for efficient collective information processing. a key aspect of our model is the specification of meta-information, based on distinguishing perfect information, single-blind arrangements, and double-blind arrangements. our technical framework is built on the notions of communication complexity. we assume that participants possess information which is not available to other participants; we call this the _private_ information.

this work is motivated by several applications. a rather timely application is found in the domain of participants in electronic markets. often, such as in financial markets, participating agents would benefit from an understanding of the global system dynamics. for instance, agents might like to have signals that indicate the presence of herding, bubbles and other aggregate phenomena. typically, the local view of a single agent does not provide sufficient information to reliably detect this. moreover, in such a domain, one is tightly constrained by what information can be revealed, incentives to reveal this information, and other aspects related to privacy in computation. if we seek efficient decentralized information processing mechanisms under these constraints, then we would like to be able to determine what is or is not possible, employing only a coarse characterization of resources and endowment of information. recent studies, such as those of anonymized financial chat rooms, provide interesting insights into the behaviour of such collectives, such as the characterization of equilibria in which a subset of traders profit from the information of others. this is but one example of a larger body of economic literature related to phenomena in networked markets. however, in that literature, it is not typical to investigate our question of how the allotment structure and communication protocols relate to the efficiency with which specific types of computation are achieved. for instance, change detection is of fundamental importance in financial markets: how weak a protocol is sufficient to decide that a change has occurred? recent work on the complexity of financial computations by arora et al. indicates that this is a fertile direction to pursue. similar issues arise in many other application domains, such as mobile sensor networks and distributed robotics. leonard et al. describe a mobile sensor network for optimal data gathering, using a combination of underwater and surface-level sensing robots to optimally gather information such as chemical and thermal dynamics in a large volume of water (typically measured in square miles). similar systems have been utilized for tracking oil spills and algal blooms. a key computational method utilized by such distributed robotic networks involves distributed optimization.
the deployment of modules in such a network needs to satisfy a spatial coverage requirement, described as a cost function, so that each module plans trajectories to optimize this criterion. the sensor fusion problem, to determine a combined estimate of an uncertain parameter, may also be posed as an optimization problem, in the sense of maximizing information gain. despite this rigorous approach, relatively little is known about how to compare different formulations of these optimization problems: given that we are interested in a certain type of global function (say, the number of peaks in a chemical concentration profile, or some distributional aspect of the overall field) using weak local sensing and the ability to move sensing nodes, how does one compare or otherwise characterize protocols and other aspects of the problem formulation? a line of work that begins to touch upon some of these questions is that of ghrist and collaborators, who use tools from algebraic topology to solve the problem of counting targets using very weak local information (such as unlabelled counts in a local neighbourhood), and who present an approach to the detection of holes in coverage through decentralized computation of homology. here again, the focus being on aspects of the specific function being computed, the authors do not address the relationship between the protocols, the problem formulation, and the efficiency of computation.

extending the idea of decentralized computation in social systems, consider the problem faced by a program committee, such as one that might review this paper. we seek a decentralized computation of a ranking problem. similar ranking problems also occur in executive decision making, such as the hiring decision in academic departments. the key issue here is that of parsimonious information sharing, coupled implicitly or explicitly with the meta-information problem. these challenges arise due to limitations on the capacities of the decision makers to exchange information with each other. the common theme underlying all of these applications is the computation of a function based on an allotment of portions of the information to parties who have reasonable amounts of computational resource but would like to keep the exchange of information limited. we wish to understand how weak the corresponding protocols can be, for various types of functions. major categories of functions of interest include change detection and ranking. we model change detection by an abstract version of the key underlying problem: determining whether the data forms a constant or a step function. the main statistical property we consider here is the computation of the mean. we are interested in understanding just how much communication must occur to answer various questions of interest. therefore, instead of working with detailed models for the questions about market behaviour or sensor networks discussed previously, we have deliberately kept the models we study as _simple_ as possible. _this makes our lower bound results stronger._ determining an answer to any more realistic question will require even more information to be exchanged than in these simple models, as long as the more realistic model includes the simpler problem at its core. it therefore makes sense in our setting to study the simplest possible embodiment of each of the core problems.
for the upper bounds, our results are a first step and will need to be extended to more realistic models. our model is based on the notion of _communication complexity_, which has been highly influential in computer science. a _boolean_ function models yes/no decisions, by requiring that the function take either the value 0 or the value 1. a quantity with value either 0 or 1 is known as a _bit_, and quantities that are drawn from a larger range of values can be expressed by using multiple bits; a function that is defined over a domain containing 2^n different values is said to have an n-bit _input_. say two players, alice and bob, wish to compute a boolean function f on a 2n-bit input, but alice only has access to the first n bits and bob to the other n bits. alice and bob are not _computationally_ constrained, but they are _informationally_ constrained. the question now is: how many bits of information do alice and bob need to exchange to compute f on a given input? a _protocol_ for this problem specifies, given the inputs to the players and the communication so far, which player is the next to send information, as well as what information is actually sent. there is a trivial protocol where alice merely sends her part of the input to bob. bob now has all the information he needs to compute f, and he sends back the 1-bit answer to alice. the _cost_ of a protocol is the total number of bits that are exchanged; this simple protocol has a cost of n+1 for _any_ function. the field of two-party communication complexity studies, for various functions of interest, whether more efficient protocols exist. as an example, for the equality function, which tests whether alice's and bob's inputs are exactly the same, it is known that the communication upper bound of n+1 is tight for _deterministic_ protocols, but there is an improved protocol with cost o(log n) when the players' messages are allowed to be randomized and it is sufficient for the final answer to be correct with high probability.

the notion of communication complexity can be generalized from the two-party setting to the _multi-party_ setting. here the number of players is not limited to two, each player has some information about the global input, and they wish to compute some boolean function of the global input. there are two standard models for how the input is distributed among the players: the number-in-hand (nih) model and the number-on-forehead (nof) model. suppose there are k players and the global input is n bits long. in the nih model, there is some fixed partition of the global input into k parts, and each player gets one of these parts. in the nof model, again there is a fixed partition into k parts, but the i-th player gets all the parts _except_ the i-th part. the main motivation for our model is that in many situations, such as financial markets or sensor networks, information is distributed among players in a more complex fashion than in the nih or nof models. moreover, the players might not have control over which pieces of information they have access to: the _allotment_ of inputs to players might be arbitrary, perhaps even adversarial.
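returning to the two-party equality example above, the o(log n) randomized protocol is usually realized by fingerprinting. the following is a minimal sketch under textbook assumptions (shared randomness, inputs given as bit strings); the prime range n^2 and the helper names are our illustrative choices, not taken from the text:

```python
import random

def is_prime(p: int) -> bool:
    """trial division; adequate for the small primes used here."""
    if p < 2:
        return False
    return all(p % d for d in range(2, int(p ** 0.5) + 1))

def random_prime(bound: int) -> int:
    """a random prime below `bound` (bound is assumed to exceed 2)."""
    while True:
        p = random.randrange(2, bound)
        if is_prime(p):
            return p

def randomized_equality(x: str, y: str) -> bool:
    """alice holds bit string x, bob holds bit string y, both of length n.
    alice draws a random prime p < n^2 and sends (p, int(x) mod p);
    bob accepts iff his own fingerprint matches: o(log n) bits in all."""
    n = len(x)
    p = random_prime(max(n * n, 8))
    return int(y, 2) % p == int(x, 2) % p
```

if the inputs differ, their difference is a non-zero integer below 2^n and so has fewer than n prime divisors, while there are on the order of n^2 / log n primes below n^2; the test therefore errs with probability o(log n / n), and both the prime and the fingerprint take o(log n) bits.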
as an example, creators of financial instruments may decide which assets to bundle into pools that are then offered for sale. purchasers of such instruments might wish to check fairness of allocation without revealing to each other their precise holdings; or regulators might wish to check that sellers behaved impartially, but without relying on full disclosure. yet the players might still wish to compute some function of their global input in this less structured setting. now different kinds of question arise than in the standard communication complexity setting. for a given function, which kinds of allotment structures allow for efficient protocols? does the meta-knowledge of what the allotment structure actually is make a difference to whether there is an efficient protocol or not? these questions are interesting even for simple functions which have been thoroughly investigated in the standard setting. to be more formal, let f be a function which m players wish to compute on a global input of size n bits. an allotment structure is a sequence of m subsets of [n] = {1, 2, ..., n}, the i-th player receiving the input bits indexed by the i-th subset; together with the function f, it defines what we call a macroscope. the covering property requires that every index in [n] lie in at least one set of the allotment structure. if the covering property did not hold, consider an index which does not belong to the allotment, and an input x such that f is _sensitive_ to x at that index, meaning that f(x) is different from f(x'), where x' is x with the value of that bit flipped. by the non-triviality of f, such an input must exist. clearly any protocol outputs the same answer for x as for x', since the index does not belong to the allotment, and hence the protocol cannot be correct. henceforth, we automatically assume that a macroscope has the covering property. there are no general necessary conditions on the allotment structure beyond the covering property for computation of non-trivial functions. but intuitively, the more "even" the allotment is, in the sense of each bit being allotted to the same number of players, the easier it is to compute a symmetric function of the inputs. we define an _even_ allotment structure as an allotment structure for which there is a number c such that each index in [n] is allotted to exactly c players.

the first function we study is the constancy function on k-ary inputs, which is 1 if all the inputs are equal, and 0 otherwise. this requires a slight adaptation of our model to inputs which are k-ary rather than binary, but this adaptation can be done in a natural way. we are able to characterize the cost of protocols for single-blind constancy macroscopes _optimally_ in terms of the allotment structure. given an allotment structure, we define the intersection graph of the structure as follows. the graph has one vertex per player, and there is an edge between vertex i and vertex j, for i distinct from j, whenever the corresponding allotments share an index. the upper bound is achieved by a protocol in which, for each connected component of the intersection graph, one designated player sends the value in [k] that occurs in its portion of the input. in addition, each player sends a 1-bit message saying whether its portion of the input is constant or not. the players know that constancy holds if each 1-bit message encodes "yes" and, in addition, the values in [k] announced for the components all coincide: two players joined by an edge share at least one index, and if all other players in the connected component see a constant value, it follows that all players in the connected component see the same constant value. we argue that this bound is optimal up to a factor of 2. we will give separate arguments that the number of players and the number of connected components each force a corresponding amount of communication; from these separate bounds, the combined lower bound follows.
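before turning to the lower-bound arguments, here is a sketch of the upper-bound protocol just described, under our reading of it: every player broadcasts a 1-bit constancy flag, and one representative per connected component of the intersection graph also broadcasts the constant value it sees, for roughly m + c log k bits in total. the union-find helper and the choice of representative are implementation details of ours:

```python
from itertools import combinations

def intersection_components(allotment):
    """connected components of the intersection graph: vertices are players,
    with an edge whenever two allotments share an index (union-find)."""
    parent = list(range(len(allotment)))
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a
    for i, j in combinations(range(len(allotment)), 2):
        if allotment[i] & allotment[j]:
            parent[find(i)] = find(j)
    comps = {}
    for i in range(len(allotment)):
        comps.setdefault(find(i), []).append(i)
    return list(comps.values())

def single_blind_constancy(x, allotment):
    """single-blind: the allotment (a list of non-empty index sets covering
    range(len(x))) is known to everyone, so the components are too.
    each player sends one flag bit; each representative also sends a value."""
    flags = [len({x[j] for j in s}) == 1 for s in allotment]
    reps = [min(comp) for comp in intersection_components(allotment)]
    values = [x[min(allotment[r])] for r in reps]   # c messages of log k bits
    return all(flags) and len(set(values)) == 1

# example: players 0 and 1 overlap via index 1; player 2 is a component alone
assert single_blind_constancy([7, 7, 7, 7], [{0, 1}, {1, 2}, {3}])
assert not single_blind_constancy([7, 7, 7, 5], [{0, 1}, {1, 2}, {3}])
```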
to see that the lower bounds hold, consider any allotment structure such that the allotment is non-empty for each player; every player must then communicate at least one bit, since otherwise its portion of the input could be altered without affecting the transcript. for the component-dependent bound, suppose for contradiction that too few bits are exchanged. then there exist a connected component and two distinct values u and v in [k] such that the communication pattern of the players in the component is exactly the same when the players in it all receive the input u as when they all receive the input v. now consider two inputs: the input x in which all coordinates are the constant u, and the input y in which all coordinates outside the component have value u and coordinates in the component have value v. the communication pattern of the protocol is the same for x and y; however, the constancy function is true for x and false for y. this is a contradiction. thus, for single-blind constancy macroscopes, the critical property of the allotment structure is the number of connected components of the intersection graph. the fewer the number of connected components, the more efficiently the macroscope can be solved.

we next study the situation for double-blind macroscopes. [double-blind-constancy] every m-player double-blind constancy macroscope on k-ary inputs can be solved with cost of order m log k. moreover, there are m-player double-blind constancy macroscopes which require cost of this order. the protocol giving the upper bound is simple. each player sends a message encoding one of k+1 possibilities: either the player's portion of the input is non-constant, or, if it is constant, which of the k possible values it is. the protocol accepts if each message encodes the same constant value in [k].

the second function we consider is bsf, the (binary) step function problem: decide whether the global n-bit input consists of a run of 0s followed by a run of 1s, a constant input counting as a degenerate step. for double-blind bsf macroscopes there is a simple two-index protocol: player i sends two indices a_i and b_i, where a_i is the largest index in its allotment for which the bit is 0 and b_i is the smallest index in its allotment for which the bit is 1. given all these messages, each player can calculate the smallest index at which a 1 occurs, simply by taking the minimum of the b_i over all players, as well as the largest index at which a 0 occurs, simply by taking the maximum of the a_i over all players. note that the input is a step function iff the largest 0-index is smaller than the smallest 1-index. we conjecture that the bound of theorem [double-blind-bsf] is tight for double-blind bsf macroscopes for protocols that use only one round of communication. for single-blind macroscopes, however, we can do better. [single-blind-bsf] every m-player single-blind bsf macroscope on n bits can be solved with cost m (log n + 2). the protocol witnessing the upper bound is as follows. each player sends a message consisting of two parts. the first part is 2 bits long, and specifies which of the following is the case: (1) the player's portion of the input is constant, (2) there is a single transition from 0 to 1 in the player's input, (3) neither (1) nor (2) holds. the second part is log n bits long. the interpretation of the second part of the message is as follows: if case (1) holds for the first part, then the second part encodes which constant (either 0 or 1) the player is given. if case (2) holds, then the second part encodes the index at which the transition occurs, i.e., a number in [n]. given these messages, checking the step-function property is straightforward.
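a sketch of the two-index protocol just described, reading "bsf" as the step function question above; the sentinel values -1 and n for players who see no 0 or no 1 are our encoding choice:

```python
def bsf_two_index_protocol(x, allotment):
    """each player broadcasts (a_i, b_i): the largest index in its allotment
    holding a 0 and the smallest holding a 1, about 2 log n bits per player.
    with a covering allotment, max a_i is the globally largest 0-index and
    min b_i the globally smallest 1-index; x is a step function iff they
    do not cross."""
    n = len(x)
    msgs = [(max((j for j in s if x[j] == 0), default=-1),
             min((j for j in s if x[j] == 1), default=n))
            for s in allotment]
    return max(a for a, _ in msgs) < min(b for _, b in msgs)

assert bsf_two_index_protocol([0, 0, 1, 1], [{0, 3}, {1, 2}])
assert not bsf_two_index_protocol([0, 1, 0, 1], [{0, 3}, {1, 2}])
```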
the last class we consider are averaging macroscopes, where the input is a sequence of numbers x_1, ..., x_n in [0,1], one for each index in [n]. given a parameter eps, we study the cost of protocols for eps-averaging macroscopes, where each player needs to arrive at an eps-additive approximation to the average of the numbers by using the protocol to communicate. [single-blind-averaging] let eps > 0 be fixed. every m-player single-blind eps-averaging macroscope can be solved with cost of order m log(m/eps). notice again that there is no dependence of the cost on n, merely on the number of players and the approximation error. we again use critically the meta-information of players about the allotment structure. each player knows, for each index j, the number c_j of distinct players receiving the input x_j. the message sent by player i is an (eps/m)-additive approximation to the quantity

t_i = sum over j in S_i of x_j / (n c_j).

since this quantity is between 0 and 1, the approximation can be specified using on the order of log(m/eps) bits. given the messages of all players, each player can compute an eps-approximation to the average simply by summing all the individual approximations: each index j is counted by exactly c_j players, each with weight 1/(n c_j), so the exact quantities sum to the average, and since the individual approximations are (eps/m)-additive, the sum is an eps-approximation to the average.

in the case of averaging macroscopes, we _can_ show that the double-blind restriction is a significant one, in that it leads to a dependence of the cost on the number of players. the proof uses a dimensionality argument. [double-blind-averaging] there are double-blind eps-averaging macroscopes which require cost growing with the number of players. consider a two-player double-blind macroscope for eps-averaging, with player 1 receiving an allotment S_1, a subset of [n], and player 2 an allotment S_2. our assumption that the allotment structure is covering implies that S_1 and S_2 together cover [n]. suppose player 2's allotment is [n] with a single index j of S_1 removed. in a correct protocol, player 2 eventually knows an eps-approximation to the average, sum over j' in [n] of x_{j'} / n. since it also knows all values except x_j, this implies that it knows an (n eps)-approximation to x_j. thus an approximation to x_j should be extractable from player 1's message; and because the protocol is double-blind, player 1 does not know which index plays this role, so this should hold for every j, which means player 1's message must be long enough to encode an approximation of each of its inputs. by a symmetric argument, the same holds for player 2, which at least doubles the communication. theorem [double-blind-averaging] has the disadvantage that it does not say much about n, which may be large in comparison to m. however, we believe that a refinement of the proof, which argues about information revealed about subsets of inputs rather than individual inputs, can be used to establish an improved lower bound.
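returning to the single-blind upper bound, here is a sketch of the protocol described above; the rounding-to-a-grid encoding is our choice, and any (eps/m)-additive encoding of a number in [0,1] would do:

```python
from collections import Counter

def single_blind_averaging(x, allotment, eps):
    """inputs x[j] in [0,1]; each player knows c[j], the number of players
    holding index j (the meta-information used in the text), and broadcasts
    its share t_i = sum_{j in S_i} x[j] / (n c[j]) rounded to a multiple of
    eps/m, i.e. about log2(m/eps) bits. the m exact shares sum to the mean,
    so the rounded sum is within m * (eps / 2m) = eps/2 of it."""
    n, m = len(x), len(allotment)
    c = Counter(j for s in allotment for j in s)
    grid = eps / m
    shares = [round(sum(x[j] / (n * c[j]) for j in s) / grid) * grid
              for s in allotment]
    return sum(shares)

x = [0.2, 0.9, 0.5, 0.4]
est = single_blind_averaging(x, [{0, 1}, {1, 2, 3}], eps=0.01)
assert abs(est - sum(x) / len(x)) < 0.01
```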
the previous two sections have focused on the details of a model for collective information processing and on the properties of this model. we now take a step back to discuss why these results are of relevance to the motivating examples identified in the introduction. consider the problem of agents in financial markets. it is increasingly the case that, with the emergence of diverse communication methods and agents deciding at time scales ranging from microseconds to days, common knowledge isn't so commonly available in practice. this has significant implications for dynamics, and much of the modern discussion regarding markets is related to such issues. in this setting, there is a need for diagnostic tools that could provide useful signals by computing global properties, i.e. functions, based on local information, subject to limitations on protocols. for instance, has there been a change from a 'constant' level in a global sense? or, is there a significant difference, in the form of a step change, between segments of a networked market? we illustrate the use of concepts from complexity theory to address abstract versions of such questions. indeed, our techniques could be used to answer further such questions about statistical distributions, ranking queries, etc. a key novelty, even in comparison to the state of the art in communication complexity, is that we consider an arbitrary endowment of inputs and meta-information, such as single- and double-blind protocols. an important general direction for future work in this area would be to extend our analysis to more directly address the subtleties of the above-mentioned dynamics as they occur in applications of interest. also, we would like to better understand the relationships between our model of decentralized computation under informational constraints and previously established models, which employ different methods and focus more on temporally extended sequential protocols such as auctions.

in terms of more specific questions, our model could be extended in various ways. we have considered the case of a _simple_ structure of communication, with simultaneous messages sent in one round of communication, and a possibly complex allotment structure of the inputs. we could allow a more sophisticated communication structure. for example, double-blind protocols with two rounds of communication can emulate single-round single-blind protocols with some loss in efficiency, simply by using the first round to share publicly information about the input allotment, and then running the single-blind protocol in the second round. more generally, communication might be restricted to occur only between specific _pairs_ of players. in the context of sensor networks, for instance, it is natural to model both the location of information and the structure of communication as governed by the topology of the ambient space. even studying very simple functions such as constancy in such general models appears to be interesting. we emphasize that we are interested in understanding the communication requirements of even very simple functions in modelling frameworks that render them non-trivial. this distinguishes our work from the existing research on communication complexity, where functions such as constancy and bsf are trivial because the model of communication is so simple. we are especially interested in modelling some aspects of collective information processing, such as information overlap and meta-information, which have been neglected so far and which we believe can have a significant impact on the efficiency of communication. the way we capture these notions is quite flexible and can be used both to model computation of continuous quantities such as the average and computation of discrete quantities such as boolean functions, as we have illustrated with our results. another direction that we find compelling is modelling meta-information in a more sophisticated way. as of now, we have the single-blind model and the double-blind model, but there are various intermediate notions that are reasonable to study. for example, each player might know the number of players and the number of inputs but nothing about which inputs are given to which other players. or a player might know which other players also receive the inputs it receives, but nothing about inputs it does not receive.
or some global property of the allotment structure, such as the fact that the allotment structure is even, might be known. notice that in the case of averaging macroscopes, there _is_ an efficient protocol whose cost doesn't depend on the number of players if the allotment structure is even and the size of each allotment is known: the protocol simply involves the players summing all their inputs and dividing by a universal constant. in general, one can ask: assuming that efficient single-blind protocols are known, what is the minimal information about the allotment structure required to give efficient protocols? ranking is an important subject that we have not addressed in this paper. some interesting problems can be captured as ranking macroscopes, and we leave their study for further work. we have seen that in some cases more information hinders making a global decision, rather than helping. more generally, why is allotment structure important? we are striving to fully understand how the allotment structure of information affects our ability to efficiently answer questions that require global information.

s.r. would like to acknowledge the support of the uk engineering and physical sciences research council (grant number ep/h012338/1) and the european commission (tomsy grant agreement 270436, under fp7-ict-2009.2.1 call 6). a.s. acknowledges the support of epsrc via platform grant ep/f028288/1. r.s. acknowledges the support of epsrc via grant ep/h05068x/1.

arora, s., barak, b., brunnermeier, m. & ge, r. (2011), 'computational complexity and information asymmetry in financial products', communications of the acm 54(5), 101-107. doi:10.1145/1941487.1941511
chandra, a. k., furst, m. l. & lipton, r. j. (1983), 'multi-party protocols', in stoc 1983: proceedings of the fifteenth annual acm symposium on theory of computing, acm, pp. 94-99. doi:10.1145/800061.808737
darley, v. & outkin, a. v. (2007), a nasdaq market simulation: insights on a major market from the science of complex adaptive systems, vol. 1 of complex systems and interdisciplinary science, world scientific.
leonard, n., paley, d., lekien, f., sepulchre, r., fratantoni, d. & davis, r. (2007), 'collective motion, sensor networks, and ocean sampling', proceedings of the ieee 95(1), 48-74. doi:10.1109/jproc.2006.887295
yao, a. (1982), 'protocols for secure computations', in focs 1982: proceedings of the 23rd annual symposium on foundations of computer science, ieee computer society, pp. 160-164.
we introduce a new model of collective decision making, in which a global decision needs to be made but the parties only possess partial information and are unwilling (or unable) to first create a global composite of their local views. our macroscope model captures two key features of many real-world problems: allotment structure (how access to local information is apportioned between parties, including overlaps between the parties) and the possible presence of meta-information (what each party knows about the allotment structure of the overall problem). using the framework of communication complexity, we formalize the efficient solution of a macroscope. we present general results about the macroscope model, and also results that abstract the essential computational operations underpinning practical applications, including in financial markets and decentralized sensor networks. we illustrate the computational problems inherent in real-world collective decision making processes using results for specific functions, involving the detection of a change in state (constant and step functions) and the computation of statistical properties (the mean).
statistical aspects of the evolution of languages have attracted, in the last few years, a great deal of attention among physicists and mathematicians. one of the better established quantitative empirical facts about extant languages is their size distribution, namely, the frequency of languages with a given number of speakers. naturally, explaining the origin of this distribution is one of the main goals of mathematical modeling in this field. recent work has built upon variations of two basic models of language evolution, schulze's model and viviane's model, both of them mostly focused on the effects of mutation of linguistic features which give rise to new languages, and including the possibility of language extinction. these models, however, disregard the fact that, over periods which are short as compared with the typical time scales of language evolution, the speakers of a given language can substantially vary in number just by the effect of population dynamics. for instance, in the last five centuries, a period which includes the culturally devastating european invasion of the rest of the globe, perhaps 50% of the world's languages went extinct (among them, two thirds of the 2,000 preexisting native american languages) or changed drastically. in the same period, however, the world's population grew by a factor of twelve or more. demographic effects have been very recently incorporated into a model of language evolution by tuncay, in the form of a stochastic multiplicative model for population growth. with suitable tuning of its several parameters, tuncay's model is able to reasonably reproduce the observed distribution of language sizes, as a result of numerical simulations. the complexity of the model, which in its full form includes population growth along with language inheritance, branching, assimilation, and extinction, makes it however difficult to identify the specific mechanism that shapes the distribution of sizes. in this paper, i show that the empirical distribution of language sizes can be accurately explained taking into account just the effect of demographic processes. this possibility was already pointed out, for the specific case of the languages of new guinea, by novotny and drozd. i propose a two-parameter stochastic model where the populations speaking different languages evolve independently of each other.
during the evolution, which in the realization of the present model is assumed to span 1,000 years, language creation, assimilation, and extinction are disregarded. this assumption does not discard mutations inside a given language, which may lead to its internal evolution, but each language preserves its identity as a cultural and demographic unit along the whole period. the model is analytically tractable, and its two parameters can be fitted _a priori_ from empirical data. numerical simulations confirm the prediction that the distribution is essentially independent of details in the initial condition, the distribution of sizes ten centuries ago, so that, in a sense, the present distribution is the unavoidable consequence of just demographic growth. the model is further validated by showing that the distribution of sizes of languages belonging to a given family has the same shape as the overall distribution. these results strongly suggest that population dynamics is a necessary ingredient in models of linguistic evolution.

it is a well established empirical fact that the frequency of languages with a given number of speakers s is satisfactorily approximated by a log-normal distribution. accordingly, the distribution of language sizes as a function of the log-size, q = ln s, is approximately given by a gaussian,

F(q) = (2 pi sigma^2)^(-1/2) exp[ -(q - <q>)^2 / (2 sigma^2) ],    ([gauss])

where <q> and sigma^2 are, respectively, the mean value and the mean square dispersion of q. the quantity F(q) dq gives the fraction of languages with log-sizes in (q, q + dq). significant departures from the log-normal distribution are limited to small language sizes, up to the order of a few tens of speakers. ethnologue statistical summaries, whose data correspond to collections done mostly during the 1990s, list the number of languages with sizes within decade bins (1 to 9 speakers, 10 to 99 speakers, 100 to 999 speakers, and so on), and give the number of speakers within each bin. the list comprises several thousand languages, accounting for an overall population of a few billion speakers. the sizes of a small number of languages in the database are unknown. in terms of the distribution F, the number of languages N_k in the bin between 10^k and 10^(k+1) speakers is

N_k = N * (integral of F(q) dq from q = k ln 10 to q = (k+1) ln 10),

while the total population speaking the languages in the same bin is

P_k = N * (integral of e^q F(q) dq over the same interval).

the values of N_k and P_k provided by the ethnologue statistical summaries can thus be used to estimate the parameters <q> and sigma in the distribution of eq. ([gauss]). least-square fitting yields the values quoted in eq. ([fit]). figure [f1] shows, as a histogram over the variable q, ethnologue's data for N_k. the curve is a plot of the function F with the parameters of eq. ([fit]), whose integral over the histogram bins approximates the empirical values of N_k; for easier comparison with the histogram, F is further multiplied by the bin width. the aim in the following is to provide a model which explains the fitted distribution.

the log-normal shape of the distribution of language sizes suggests that a multiplicative stochastic process is at work in the evolution of the number of speakers of each language. this is in turn consistent with the hypothesis that, over sufficiently long time scales, the population speaking a given language evolves autonomously, driven just by demographic processes. let p_i(t) be the number of speakers of language i at time t, and assume that at time t+1, one year later say, the population has changed to

p_i(t+1) = alpha(t) p_i(t),    ([pt0])

where the growth rate alpha(t) is a positive stochastic variable drawn from some specified distribution. as shown below, the mean value and the mean square dispersion over this distribution can be estimated from empirical data.
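the least-square fit described above can be set up directly on the binned counts. a minimal sketch, with a crude grid search standing in for a proper optimizer and purely illustrative bin counts; only the decade binning and the gaussian bin fractions come from the text:

```python
import math

def gaussian_bin_fraction(mu, sigma, lo, hi):
    """fraction of a normal(mu, sigma) in the log-size interval (lo, hi)."""
    cdf = lambda v: 0.5 * (1 + math.erf((v - mu) / (sigma * math.sqrt(2))))
    return cdf(hi) - cdf(lo)

def fit_binned_lognormal(counts):
    """counts[k] = number of languages with 10^k .. 10^(k+1) - 1 speakers.
    least-squares fit of (mu, sigma) for q = ln s over a coarse grid."""
    total = sum(counts)
    dq = math.log(10)
    best = None
    for mu in (i * 0.1 for i in range(20, 140)):        # mu in [2, 14)
        for sigma in (i * 0.1 for i in range(5, 60)):   # sigma in [0.5, 6)
            err = sum((n_k - total * gaussian_bin_fraction(
                          mu, sigma, k * dq, (k + 1) * dq)) ** 2
                      for k, n_k in enumerate(counts))
            if best is None or err < best[0]:
                best = (err, mu, sigma)
    return best[1:]

# illustrative decade-bin counts, not ethnologue's actual figures:
print(fit_binned_lognormal([200, 450, 950, 1500, 1700, 1100, 400, 80, 5]))
```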
in terms of the initial population, the number of speakers at time t is

p_i(t) = p_i(0) alpha(t-1) alpha(t-2) ... alpha(0).    ([pt])

i suppose now that, during the whole t-step process, the distribution of the growth rate is (i) the same for all languages, and (ii) does not vary with time. moreover, (iii) no language is created or becomes extinct. admittedly, these are rather bold assumptions for the world's history during the last 1,000 years. however, in view of the lack of reliable data over such a period, they are at least justified by the sake of simplicity. i identify the evolution of the world's languages as realizations of the multiplicative stochastic process ([pt0]). by virtue of assumption (i), all the realizations are statistically equivalent. in this interpretation, the present distribution of log-sizes, given by eqs. ([gauss]) and ([fit]), is the probability distribution obtained from those realizations. thus, my aim is to quantitatively relate the distribution F(q) to the outcome of the stochastic process. the total population at time t is

P(t) = sum over i of p_i(t).

averaging this expression over realizations of the stochastic variable alpha, and assuming that the growth rate is not self-correlated in time, we find <P(t)> = <alpha>^t P(0), where <alpha> is the mean growth rate and P(0) is the initial total population. in order to apply this analysis to the world's population in the last ten centuries, let us take as initial condition the estimated population by the year 1000. ascribing the total population accounted for by ethnologue's data to the year 2000, and associating this number with the population averaged over realizations of the growth rate, we can evaluate the mean growth rate per year as <alpha> = [<P(t)>/P(0)]^(1/t) with t = 1000. to evaluate the dispersion of the growth rate, it is useful to introduce its relative deviation with respect to the average,

delta(t) = [alpha(t) - <alpha>] / <alpha>.

the average value and the mean square dispersion of the deviation are, respectively, <delta> = 0 and sigma_delta^2 = sigma_alpha^2 / <alpha>^2, where sigma_alpha^2 is the mean square dispersion of the growth rate. assuming that delta is always sufficiently small to approximate ln(1 + delta) by delta, eq. ([pt]) can be rewritten for the log-size as

q_i(t) = q_i(0) + t ln<alpha> + sum over t' < t of delta(t').    ([qt])

this equation shows explicitly that, besides the deterministic growth given by the term t ln<alpha>, the evolution of the logarithm of the population speaking a given language is driven by an additive stochastic process. thus, by virtue of the central limit theorem, the distribution of q must converge to a gaussian like that in eq. ([gauss]), starting from any distribution of initial log-sizes. for the time being, however, the question remains whether the times relevant to the process are enough to allow for the development of the gaussian shape and, in particular, to suppress any effect ascribable to a specific initial distribution. unfortunately, the initial sizes, the number of speakers of each language 1,000 years ago, are not known for most languages. however, their effect on the present size distribution can be readily assessed. averaging eq. ([qt]) over realizations of the stochastic process and over the distribution of initial log-sizes, and always assuming that the deviations are small, yields, for the mean square dispersion of q,

sigma^2(t) = sigma_0^2 + t sigma_delta^2,    ([sq2])

where sigma_0^2 is the mean square dispersion of the initial log-sizes. the empirical estimation for sigma^2(t) is the value of sigma^2 given in eq. ([fit]).
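to put numbers to these relations, here is a back-of-envelope evaluation with assumed round figures, roughly 3 x 10^8 people in year 1000 and 6 x 10^9 speakers around 2000 (our assumptions, close to standard estimates, not values quoted from the text), anticipating from the next paragraph that the initial dispersion turns out to be negligible:

```python
T = 1000                      # years of evolution
P0, PT = 3.0e8, 6.0e9         # assumed world populations, years 1000 and 2000
alpha_mean = (PT / P0) ** (1.0 / T)
print(alpha_mean)             # ~1.0030: about 0.3% mean yearly growth

sigma_sq = 1.0                # placeholder for the fitted dispersion of q
sigma_delta = (sigma_sq / T) ** 0.5
print(sigma_delta)            # ~0.03: dispersion of the yearly rate deviation
```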
in turn, an upper bound can be given for sigma_0^2, as the maximal mean square dispersion in the log-size distribution of N languages with a total population of P(0) speakers, and at least one speaker per language. this maximal dispersion is obtained with N-1 languages with exactly one speaker, and just one language with the remaining speakers. clearly, this is an unlikely distribution for the languages 1,000 years ago, but it represents the worst-case instance, with the largest contribution of the initial condition to the present dispersion of log-sizes. even in this extreme situation, the estimated initial mean square dispersion is much smaller than the present value of sigma^2 quoted in eq. ([fit]). in the right-hand side of eq. ([sq2]), therefore, the largest contribution by far is given by the second term, and that of the initial distribution is essentially negligible. this makes it possible to calculate the mean square dispersion of the deviations as sigma_delta^2, approximately sigma^2 / t for t = 1000, thus completing the statistical characterization of the growth rate per year, alpha. note that this value of sigma_delta is in agreement with the assumption of small relative deviations in the growth rate. summarizing the results of this section, i have argued that the present log-normal distribution of language sizes can be seen as the natural consequence of population dynamics driven by a stochastic multiplicative process, eq. ([pt0]), where the evolution of each language is interpreted as a realization of the process. using data on the total population growth during the last 1,000 years, a period over which i neglect language birth and death, and fitting only one parameter (the dispersion sigma_delta) from the distribution itself, i was able to statistically characterize the growth rate per year which explains the present distribution, giving its mean value and mean square dispersion. also, i have shown that the dispersion of language sizes ten centuries ago has essentially no effect on its present value. it is now useful to validate these conclusions with numerical realizations of the model, and with applications within language families.

in this section, i present results for series of numerical realizations of the multiplicative stochastic process ([pt0]). the mean value and the mean square dispersion of the growth rate are fixed according to the values estimated in section [sect3], eqs. ([alfaav]) and ([sigmad]). in order to speed up the computation, individual values of alpha are drawn from a square distribution centered at <alpha>, with a width which ensures the correct mean square dispersion. in agreement with my main assumptions, i avoid the possibility that languages die out by replacing the absorbing boundary at one speaker, below which a language should become extinct, by a reflecting boundary. in view of the discussion in the previous section, the convergence of the distribution of log-sizes to a gaussian is guaranteed by the central limit theorem. the emphasis in the simulations is thus put on the possible effects of the distribution of initial sizes.
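a minimal sketch of the numerical experiment just described. the uniform ("square") growth-rate distribution and the boundary at one speaker follow the text; holding a language at one speaker is our simple stand-in for the reflecting boundary:

```python
import math, random

def evolve_language_sizes(p0, alpha_mean, sigma_delta, steps=1000):
    """realizations of p(t+1) = alpha(t) * p(t) for each language, with
    alpha uniform around alpha_mean; a uniform law of half-width h has
    standard deviation h / sqrt(3), so h is chosen to give the dispersion
    alpha_mean * sigma_delta. languages are not allowed to die out."""
    h = alpha_mean * sigma_delta * math.sqrt(3)
    sizes = list(p0)
    for _ in range(steps):
        sizes = [max(1.0, random.uniform(alpha_mean - h, alpha_mean + h) * p)
                 for p in sizes]
    return sizes

# initial condition (a): all languages start with the same size
final = evolve_language_sizes([1000.0] * 5000, 1.003, 0.032)
log_sizes = [math.log(p) for p in final]   # histogram approaches a gaussian
```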
figure [f2] shows, as normalized histograms, numerical results for single series of realizations of the stochastic process after t = 1000 steps, from four different initial conditions. in (a), all the languages have exactly the same initial size. in (b), the initial sizes are uniformly distributed between a minimal and a maximal size. in (c), the distribution of initial sizes is also uniform, but spans a wider interval. finally, in (d) the initial distribution is more heterogeneous, with half the languages having a small size and the other half a large size. the parameters which characterize these initial distributions are fixed by the condition that the total population equals P(0). the curve in all plots is the gaussian of eq. ([gauss]) with the parameters of eq. ([fit]). the agreement in cases (a) to (c) is excellent. only in case (d), where the initial distribution is specially heterogeneous and, certainly, not a likely representation of the distribution of language sizes ten centuries ago, are the deviations larger, though the agreement is still very reasonable. the difference between the numerical results of case (d) and the expected gaussian function resides not only in the width of the distribution but also in its mean value. to a much lesser extent, this discrepancy is also visible in case (b). this shift between the distribution peaks can be understood in terms of the average of eq. ([qt]) over both the realizations of the growth rate and the initial log-sizes,

<q(t)> = <q(0)> + t ln<alpha>.    ([mq])

besides the contribution of the multiplicative stochastic process, given by the term proportional to the time, the mean value in eq. ([mq]) depends on the average initial log-size <q(0)>. in spite of the fact that the total initial population and the number of languages are the same for all simulations, which always gives the same average size per language, the average log-size depends on the specific initial distribution. thus, the final mean values for different initial conditions are generally shifted with respect to each other. as a consistency test for the assumption that languages do not die out along the evolution period considered here, i have also run simulations taking into account the absorbing boundary at one speaker. namely, when the size of a language decreases below one speaker during its evolution, it is considered to become extinct and that particular realization of the stochastic process is interrupted. among the four initial conditions considered above, those which undergo larger extinctions are, not unexpectedly, (b) and (d), as they produce the largest shifts to the left in the log-size distributions. in both cases, however, the fraction of extinct languages is around 1%, which validates the above assumption quite satisfactorily.
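the origin of the peak shift discussed above can be checked with a two-line computation: two initial conditions with the same total population (hence the same mean size) but different mean log-sizes. the numbers below are illustrative, not those of the simulations:

```python
import math

N, P = 1000, 1_000_000
cond_a = [P / N] * N                                    # all equal sizes
cond_d = [10] * (N // 2) + [2 * P / N - 10] * (N // 2)  # half small, half large
mean_log = lambda sizes: sum(map(math.log, sizes)) / len(sizes)
assert abs(sum(cond_a) - sum(cond_d)) < 1e-6            # same total population
print(mean_log(cond_a) - mean_log(cond_d))  # ~1.96 > 0: (d) starts shifted left
```

by jensen's inequality, the equal-size condition (a) maximizes the mean log-size among all initial conditions with the same total population, so heterogeneous conditions necessarily start, and hence end, with a peak shifted to the left.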
a crucial consequence of the hypotheses on which the present model is based, in particular the mutual independence of the size evolution of different languages, is that its predictions should hold not only for the ensemble of all the world's languages, but also for any sub-ensemble to which the homogeneity assumptions (i) and (ii) reasonably apply. in other words, the final log-normal shape of the size distribution should also result from the evolution of, say, the languages of a given geographical region, or of those belonging to a given language family. this can be readily assessed from empirical data on the number of speakers of individual languages and, in fact, has already been pointed out for a set of some 1,000 new guinean languages. here, i analyze the size distribution of languages belonging to each one of the four largest families, according to ethnologue's classification. population data for individual languages were obtained from ethnologue's online databases. figure [f3] shows histograms of log-sizes for those families. to ease the comparison, the column width and the horizontal scale are the same as in fig. [f1]. curves stand for least-square fittings with gaussian functions as in eq. ([gauss]), yielding values of <q> and sigma for each of the four families: niger-congo, austronesian, trans-new guinea, and indo-european. the quality of the agreement between the data and the gaussian fitting is clearly comparable to that of the whole language ensemble in fig. [f1]. note the interesting fact that the mean square dispersion, which according to the present model results from the dispersion in the population growth rate, is appreciably larger for the indo-european family than for the other three. surely, this is a direct consequence of the highly diverse fate of european languages in the last few centuries. in any case, the four mean square dispersions are not far from the overall value given in eq. ([fit]).

in this paper, i have argued that the present log-normal distribution of language sizes is essentially a consequence of demographic dynamics in the population of speakers of each language. in fact, an isolated population can largely vary in number within time scales which are short as compared with those involved in substantial language evolution. to support this suggestion, i have proposed a stochastic multiplicative process for the population dynamics of individual languages, where language birth and death are disregarded. within some bold assumptions on the geographical and temporal homogeneity of the process, the model is completely specified by two parameters, which give the average growth rate of the population and its mean square dispersion. the average growth rate is completely defined by the initial and the final world population. i have chosen to apply the model to the period spanning the last 1,000 years, for which reliable data on the world's population growth are available. the mean square dispersion of the growth rate is the only parameter which i fitted, in an admittedly _ad-hoc_ manner, using the present distribution of language sizes. it seems unlikely that estimates of this parameter can be found in independent historical sources, since that would require reliable records of the population change year by year. note that the dispersion in the growth rate is determined by a variety of factors, including fluctuations in birth and mortality frequencies and migration events. once the two parameters are fitted, the model is able to produce, as the result of evolution along ten centuries, excellent predictions of the present distribution of language sizes. numerical simulations show that the final distribution is largely independent of the initial condition. this emphasizes the point that, irrespective of the long-range historical processes that may have determined the distribution of language sizes 1,000 years ago, including language birth and death, branching, mutation, competition, assimilation, and/or replacement, population dynamics is by itself able to explain the present distribution. this conclusion had already been advanced for the case of new guinean languages in the reference cited above. in fact, realizing that the same log-normal profile is found in the size distribution inside language families is a further validation of the present model.
in view of the present arguments, one can moreover safely assert that the distribution of language sizes was already a log-normal function, with different parameters, in year 1000. it is clear that in the last few years, with the advent of a host of new mechanisms of globalization which endanger cultural diversity, many, or most, of the world's languages are threatened by the risk of extinction. this risk is particularly acute for those languages whose number of speakers is below a few hundred, including the range where the distribution of sizes differs from the log-normal profile (cf. fig. [f1]). it should be a program of obviously urgent interest to study in detail the relevant processes at work in that range, even if they escape the domain of the statistical physicist's approaches.

i am grateful to susanna c. manrubia, who brought to my attention the significance of the distribution of language sizes to the problem of language evolution in 1996. also, i acknowledge critical reading of the manuscript by a. a. budini.

l. campbell and m. mithun, eds., the languages of native america: a historical and comparative assessment (university of texas press, austin, 1979). see also http://www.nativeamericans.com/natives.htm.
it is argued that the present log-normal distribution of language sizes is, to a large extent, a consequence of demographic dynamics within the population of speakers of each language. a two-parameter stochastic multiplicative process is proposed as a model for the population dynamics of individual languages, and applied over a period spanning the last ten centuries. the model disregards language birth and death. a straightforward fitting of the two parameters, which statistically characterize the population growth rate, predicts a distribution of language sizes in excellent agreement with empirical data. numerical simulations, and the study of the size distribution within language families, validate the assumptions at the basis of the model.
in recent years, quantum statistical mechanics and quantum information theory have played an increasingly central role in quantum gravity. such an interplay has proved particularly insightful both in the context of the holographic duality in ads/cft and for the current background independent approaches to quantum gravity, including loop quantum gravity (lqg), the related spin-foam formulation, and group field theory. interestingly, the different background independent approaches today share a microscopic description of space-time geometry given in terms of discrete, pre-geometric degrees of freedom of combinatorial and algebraic nature, based on spin-network hilbert spaces. in this context, entanglement has provided a new tool to characterise the quantum texture of space-time in terms of the structure of correlations of spin network states. along this line, several recent works have considered the possibility of using specific features of the short-range entanglement in quantum spin networks (area law, thermal behaviour) to select quantum geometry states which may eventually lead to a smooth spacetime geometry classically. this analysis usually focuses on states with few degrees of freedom, leaving open the question of whether a _statistical_ characterisation may reveal new structural properties, independent of the interpretation of the spin network states. in this work we propose the use of the information theoretic notion of quantum _canonical typicality_ as a tool to investigate and characterise universal local features of quantum geometry, going beyond the physics of states with few degrees of freedom. in quantum statistical mechanics, _canonical typicality_ states that almost every pure state of a large quantum mechanical system, subject to a fixed energy constraint, is such that its reduced density matrix over a sufficiently small subsystem is approximately in the _canonical_ state described by a thermal distribution a la gibbs. such a statement goes beyond thermal behaviour: for a generic closed system in a quantum pure state, subject to some _global constraint_, the resulting canonical description will not be thermal, but will generally be defined in relation to the constraint considered. again, in this case, some specific properties of the system emerge at the local level, regardless of the nature of the global state. these properties depend on the physics encoded in the choice of the global constraints. within this generalised framework, we exploit the notion of _typicality_ to study whether and how "universal" statistical features of the local correlation structure of a spin-network state emerge in connection with the choice of the global constraint. we focus our analysis on the space of n-valent invariant intertwiners, which are the building blocks of spin network states. in lqg, such intertwiners can be thought of, dually, as a region of space with a closed boundary. we reproduce the typicality statement in the full space of n-valent intertwiners with fixed total area, and we investigate the statistical behaviour of the canonical reduced state, dual to a small patch of the boundary, in the large-n limit.
eventually, we study the entropy of such a reduced state and its area scaling behaviour in different thermodynamic regimes. the content of the manuscript is organised as follows. section [tysn] introduces the statement of canonical typicality in a formulation particularly suitable for the spin network hilbert space description. section [snst] shortly reviews the notion of a state of quantum geometry in terms of the spin network basis. in section [cano] we reformulate the statement of quantum typicality in this context. we derive the notion of the canonical reduced state of the n-valent intertwiner spin network system in section [redu], and we prove the existence of a regime of typicality for such a system in section [typ]. the entropy of the typical state and its thermodynamical interpretation are investigated in section [thermo]. we conclude in section [fine] with a short discussion of our results. technical details of all the computations are given in the supplementary material.

we start with a brief summary of the canonical typicality result. suppose we have a generic _closed_ system, which we call the "universe", and a bipartition into a "small system" and a "large environment". the universe is assumed to be in a pure state. we also assume that it is subject to a completely arbitrary _global constraint_: for example, in the standard context of statistical mechanics it can be the fixed energy constraint. such a constraint is concretely imposed by restricting the allowed states to the subspace H_R of the states of the total hilbert space which satisfy the constraint, where H_S and H_E are the hilbert spaces of the system and environment, with dimensions d_S and d_E, respectively. we also need the definition of the canonical state of the system, obtained by tracing out the environment from the microcanonical (maximally mixed) state,

Omega_S = Tr_E [ I_R ],    I_R = 1_R / d_R,

where 1_R is the projector on H_R, and d_R is the dimension of H_R. this corresponds to assigning _a priori_ equal probabilities to all states of the universe consistent with the constraints. in this setting, given an arbitrary pure state of the universe satisfying the constraint, i.e. lying in H_R, the reduced state of the system is obtained by tracing out the environment, and is compared with the canonical state Omega_S. studying the ratio which controls the typicality bound, we can see that the two relevant dimensions play a rather symmetric role in making this quantity small. the region of interest is certainly where one of them, or both, is large. as we will argue in the next section, this is precisely the regime of interest for the thermodynamical limit. therefore we focus on this region, where there are two different regimes. in both cases there are wide regions of the parameter space where the relevant inequality holds, and the quantity appearing in it defines some kind of _effective_ dimension of the system, suggesting that the following two things happen in such a regime: first, the canonical state has approximately a tensor product structure; second, the total spin is equally distributed among all spins in the universe, therefore the accessible hilbert space of each spin is roughly limited by representations of the order of the mean spin per link. the validity of this interpretation can be checked by assuming a tensor product structure of links, with the single-link hilbert space limited to a cutoff representation, and computing the entropy as the logarithm of the dimension of this space. if we can find a cutoff such that the difference between the two entropies is proportional only to small corrections, we can say that our argument is not too far from what is happening in such a regime.
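the typicality statement can be checked numerically on a toy system. in the sketch below the global constraint subspace is taken, for simplicity, to be the whole tensor product, so that the canonical state is just the maximally mixed one; the point is only to see the trace distance between the reduced state of a random pure state and the canonical state shrink as the environment grows:

```python
import numpy as np

def mean_distance_to_canonical(d_s, d_e, trials=50, seed=1):
    """haar-random pure states via normalised complex gaussians; returns the
    average trace distance between the reduced state and omega_s = 1/d_s."""
    rng = np.random.default_rng(seed)
    omega = np.eye(d_s) / d_s
    dists = []
    for _ in range(trials):
        psi = rng.normal(size=(d_s, d_e)) + 1j * rng.normal(size=(d_s, d_e))
        psi /= np.linalg.norm(psi)
        rho = psi @ psi.conj().T          # partial trace over the environment
        dists.append(0.5 * np.abs(np.linalg.eigvalsh(rho - omega)).sum())
    return float(np.mean(dists))

for d_e in (10, 100, 1000):
    print(d_e, mean_distance_to_canonical(d_s=4, d_e=d_e))  # shrinks with d_e
```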
with these assumptions, the effective dimension of the hilbert space of the system can be computed explicitly; in the regime considered, the difference between the two entropies is indeed given only by small corrections. this simple computation provides evidence that the result in eq. ([eq:ent2]) follows from the two aforementioned assumptions. eventually, we compute the behaviour of the entropy in the intermediate regime. with respect to the previous cases, this regime does not add anything new to the analysis: the observed behaviour is extensive in the number of links of the system, with a coefficient which is slightly different from the previous one. the relevant computation can be found in the supplementary material.

in this manuscript we extend the so-called typicality approach, originally formulated in statistical mechanics contexts, to a specific class of tensor network states given by invariant spin networks. in particular, following the canonical typicality approach summarised above, we investigate the notion of canonical typicality for a simple class of spin network states given by n-valent intertwiner graphs with fixed total area. our results do not depend on the physical interpretation of the spin network; however, they are mainly motivated by the fact that spin networks provide a gauge-invariant basis for the kinematical hilbert space of several background independent quantum gravity approaches, including loop quantum gravity, spin-foam gravity and group field theories. the first result is the very existence of a regime in which we show the emergence of a canonical typical state, of which we give the explicit form. geometrically, such a reduced state describes a patch of the surface comprising the volume dual to the intertwiner. the structure of correlations described by the state should tell us how local patches glue together to form a closed connected surface in the quantum regime. we find that, within the typicality regime, the canonical state tends to an exponential of the total spin of the subsystem, with an interesting departure from the gibbs state: the exponential decay a la gibbs of the reduced state is perturbed by a parametric dependence on the norm of the total angular momentum vector of the subsystem (closure defect). such a feature provides a signature of the non-local correlations enforced by the global gauge symmetry constraint. this is our second result. we study some interesting properties of the typical state within two complementary regimes. in both cases, we find that the area law for the entropy of a surface patch is satisfied at the local level, up to sub-leading logarithmic corrections due to the unavoidable dependence of the state on the closure defect. however, the area scaling interpretation of the entropy in the two regimes is quite different. in the first regime, the result is related to the definition of a generalised gibbs equilibrium state: the area plays the role of the energy, as imposed by the specific choice of the global constraint, requiring total area conservation.
on the other hand , in the regime , the area scaling is given by the extensivity of the entropy in the number of links comprising the reduced state , as for the case of the generalised ( non - gauge invariant ) spin networks . in this regime , each link contributes independently to the result , indicating that the global constraints affect the local structure of correlations of the spin network state very little . still , interestingly , a reminder of the presence of the constraints can be read in the definition of what looks like an effective dimension for the single - link hilbert space . we interpret these results as proof that , within the typicality regime , there are certain ( local ) properties of quantum geometry which are `` universal '' , namely independent of the specific form of the global pure spin network state and descending directly from the physical definition of the system encoded in the choice of the global constraints . we would like to stress that our result is purely kinematic , being a statistical analysis on the hilbert space of spin network states . for the case of a simple intertwiner state , such a study necessarily requires considering a system with a _ large number _ of edges , beyond the very large dimensionality of the hilbert space of the single constituents . in this sense , the presented statistical analysis and thermal interpretation is very different from what was recently done in , considering quantum geometry states characterised by few constituents with a high - dimensional hilbert space . in fact , we expect large - number statistical analysis to play a prominent role in facing the problem of the continuum in quantum gravity . therefore we think it is important to propose and develop new technical tools which are able to deal with a large number of elementary constituents and extract physically interesting behaviours .

the kinematic nature of the statement of typicality , together with its general formulation in terms of constrained hilbert spaces given in , provides an important tool to study the possibility of a thermal characterisation of reduced states of quantum geometry , regardless of any hamiltonian evolution in time . beyond the simple case considered in the paper , and in a more general perspective , we expect typicality to be useful to understand how large the effective hilbert space of the theory can be , given the complete set of constraints defining it . it will also help in understanding which typical features we should expect to characterise a state in such a space . if we think of dynamics as a flow on the constrained hilbert space , we generally expect that , even if the initial state is highly un - typical , after a certain `` time''-scale we will find the system in a state which is extremely close to the typical state . this happens because , as shown in the original paper on typicality , the states close to the typical state are the overwhelming majority .

finally , it is interesting to look at the proposed `` generalised '' thermal characterisation of a local surface patch within the standard lqg description of the horizon , as a closed surface made of patches of quantized area . differently from the _ isolated horizon _ analysis ( see e.g. ) , in our description the thermal character of the local patch is not ( semi)classically induced by the thermal properties of a black hole horizon geometry , but emerges from a purely quantum description .
in this sense , our picture goes along with the information - theoretic characterisation of the horizon proposed in . in fact , we think that typicality could be used to define an information - theoretic notion of quantum horizon , as the boundary of a generic region of the quantum space with an emergent thermal behaviour . we leave this for future work . the authors are grateful to daniele oriti , aldo riello and thibaut josset for interesting discussions and careful readings of the draft of the paper .

in order to better understand the result it is useful to look at its most important step , which is the so - called lévy lemma . take a hypersphere in dimensions , with surface area . any function of the point which does not vary too much will have the property that its value on a randomly chosen point will approximately be close to the mean value : $$\frac{\mathrm{vol}\left[\,\phi \in S^d : \left|f(\phi) - \langle f \rangle\right| \ge \epsilon\,\right]}{\mathrm{vol}\left[ S^d \right]} \;\leq\; 4\,\mathrm{exp}\left[-\frac{d+1}{9\pi^3}\,\epsilon^2\right]$$ where $\mathrm{vol}\left[ S^d \right]$ is the total volume of the hypersphere , identified with the space of pure states of the hilbert space . integrals over the hilbert space are performed using the unique unitarily invariant haar measure .

the lévy lemma is essentially needed to conclude that all but an exponentially small fraction of all states are quite close to the canonical state . this is a very specific manifestation of a general phenomenon called `` concentration of measure '' , which occurs in high - dimensional statistical spaces . the effect of this result is that we can re - think the `` a priori equal probability '' principle as an `` apparently equal probability '' principle , stating that : as far as a small system is concerned , almost every state of the universe seems similar to its average state , which is the maximally mixed state .

j. c. baez , _ spin networks in gauge theory _ , advances in mathematics 117 , 253 - 272 ( 1996 ) ; j. c. baez , _ spin networks in nonperturbative quantum gravity _ , in the interface of knots and physics , louis kauffman , ed . ( american mathematical society , providence , rhode island , 1996 ) .
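as a quick numerical footnote to the lévy lemma recalled above , the sketch below samples haar - random points on the hypersphere and measures how the fluctuations of a fixed lipschitz function shrink with the dimension ; the function ( the first coordinate , lipschitz constant one ) and the sample sizes are our own illustrative choices :

```python
import numpy as np

# Concentration of measure on S^d: for a Lipschitz function f, the
# fraction of points with |f - <f>| >= eps decays exponentially in d.
# Here f(phi) = phi[0], whose mean is 0 by symmetry.
rng = np.random.default_rng(0)
eps, n_samples = 0.1, 20000

for d in [10, 100, 1000, 10000]:
    # Uniform (Haar) points on S^d: normalized Gaussian vectors.
    x = rng.normal(size=(n_samples, d + 1))
    x /= np.linalg.norm(x, axis=1, keepdims=True)
    frac = np.mean(np.abs(x[:, 0]) >= eps)
    print(f"d = {d:6d}: fraction with |f| >= {eps} is {frac:.4f}")
```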
in this work we extend the so - called typicality approach , originally formulated in statistical mechanics contexts , to invariant spin network states . our results do not depend on the physical interpretation of the spin network ; however , they are mainly motivated by the fact that spin - network states can describe states of quantum geometry , providing a gauge - invariant basis for the kinematical hilbert space of several background independent approaches to quantum gravity . the first result is , by itself , the existence of a regime in which we show the emergence of a typical state . we interpret this as the proof that , in that regime , there are certain ( local ) properties of quantum geometry which are `` universal '' . such a set of properties is heralded by the typical state , of which we give the explicit form . this is our second result . finally , we study some interesting properties of the typical state , proving that the area law for the entropy of a surface must be satisfied at the local level , up to logarithmic corrections which we are able to bound .
this paper provides the mathematical foundation for stochastically continuous affine processes on the cone of positive semidefinite symmetric -matrices .these matrix - valued affine processes have arisen from a large and growing range of useful applications in finance , including multi - asset option pricing with stochastic volatility and correlation structures , and fixed - income models with stochastically correlated risk factors and default intensities . for illustration ,let us consider a multi - variate stochastic volatility model consisting of a -dimensional logarithmic price process with risk - neutral dynamics and stochastic covariation process , which is a proxy for the instantaneous covariance of the price returns . here denotes a standard -dimensional brownian motion , the constant interest rate , the vector whose entries are all equal to one and the vector containing the diagonal entries of . the necessity to specify as a process in such that it qualifies as covariation process is one of the mathematically interesting and demanding aspects of such models .beyond that , the modeling of must allow for enough flexibility in order to reflect the stylized facts of financial data and to adequately capture the dependence structure of the different assets .if these requirements are met , the model can be used as a basis for financial decision - making in the area of portfolio optimization , pricing of multi - asset options and hedging of correlation risk .the tractability of such a model crucially depends on the dynamics of .a large part of the literature in the area of multivariate stochastic volatility modeling has proposed the following affine dynamics for : \\[-8pt ] x_0&=&x\in s_d^+ , \nonumber\end{aligned}\ ] ] where is some suitably chosen matrix in , some invertible matrices , a standard -matrix of brownian motions possibly correlated with , and a pure jump process whose compensator is an affine function of .the main reason for the analytic tractability of this model is that , under some technical conditions , the following affine transform formula holds : = e^{\phi(t , z , v)+{\operatorname{tr}}(\psi(t , z , v ) x)+ v^\top y}\ ] ] for appropriate arguments and . the functions and solve a system of nonlinear ordinary differential equations ( odes ) , which are determined by the model parameters . setting , and and taking , we arrive at = e^{-\phi(t , u)-{\operatorname{tr}}(\psi(t , u ) x)},\qquad u\in s_d^+.\ ] ] in this paper , we characterize the class of all stochastically continuous time - homogeneous markov processes with the key property ( [ introaffine])henceforth called affine processes on .our main result shows that an affine process is necessarily a feller process whose generator has affine coefficients in the state variables .the parameters of the generator satisfy some well - determined admissibility conditions , and are in a one - to - one relation with those of the corresponding odes for and .conversely , and more importantly for applications , we show that for any admissible parameter set there exists a unique well - behaved affine process on .furthermore , we prove that any stochastically continuous infinitely decomposable markov process on is affine with zero diffusion , and vice versa . on the one hand ,our findings extend the model class ( [ eq : cov process ] ) , since a more general drift and jumps are possible .indeed , we allow for full generality in , as long as , for a general linear drift part and for an inclusion of ( infinite activity ) jumps. 
this of course enables more flexibility in financial modeling . for example , due to the general linear drift part , the volatility of one asset can generally depend on the other ones , which is not possible for . on the other hand , we now know the exact assumptions under which affine processes on actually exist .our characterization of affine processes on is thus exhaustive . beyond that ,the equivalence of infinitely decomposable markov processes with state space and affine processes without diffusion is interesting in its own right .this paper complements duffie , filipovi and schachermayer , who analyzed time - homogeneous affine processes on the state space .have been explored in . ]matrix - valued affine processes seem to have been studied systematically for the first time in the literature by bru , who introduced the so called wishart processes .these are generalizations of squares of matrix ornstein uhlenbeck processes , that is , of the form ( [ eq : cov process ] ) for and , for some real parameter .note that is a stronger assumption than what we require on and .bru then establishes existence and uniqueness of a local -valued solution to ( [ eq : cov process ] ) under the additional assumptions that has distinct eigenvalues , , and that and commute ( see , theorem 2 ) . in the more special casewhere and , bru shows global existence and uniqueness for ( [ eq : cov process ] ) for any with distinct eigenvalues ( see , theorem 2 and last part of section 3 ) .. but these are degenerate solutions , as they are only defined on lower - dimensional subsets of the boundary of ( see , corollary 1 ) . ]bru s results concerning strong solutions have recently been extended to the case of matrix valued jump - diffusions ; see .wishart processes have subsequently been introduced in the financial literature by gourieroux and sufana and gourieroux et al .financial applications thereof have then been taken up and carried further by various authors , including da fonseca et al . and buraschi , cieslak and trojani .grasselli and tebaldi give some general results on the solvability of the corresponding riccati odes .barndorff - nielsen and stelzer provide a theory for a certain class of matrix - valued lvy driven ornstein uhlenbeck processes of finite variation .leippold and trojani introduce -valued affine jump diffusions and provide financial examples , including multi - variate option pricing , fixed - income models and dynamic portfolio choice .all of these models are contained in our framework .we want to point out that the full characterization of positive semidefinite matrix - valued affine processes needs a multitude of methods . in order to prove the fundamental property of regularity of affine processesanother adaption of the famous analysis of montgommery and zippin is necessary , which has been worked out in and for the state space . 
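as an aside , the wishart - type dynamics just discussed are easy to explore numerically . below is a minimal euler - type sketch of ( [ eq : cov process ] ) without jumps ; the parameter values , the drift weight chosen above the threshold appearing in bru s condition , and the eigenvalue clipping that pushes discretisation error back into the cone are all our own illustrative choices , not the exact constructions developed in this paper :

```python
import numpy as np

rng = np.random.default_rng(3)
d, dt, n_steps = 2, 1e-3, 2000

# Illustrative parameters for
#   dX = (alpha*Sigma^T Sigma + H X + X H^T) dt
#        + sqrt(X) dW Sigma + Sigma^T dW^T sqrt(X).
alpha = d + 1.0                  # drift weight, comfortably above d - 1
Sigma = 0.3 * np.eye(d)
H = -0.5 * np.eye(d)             # mean reversion
X = np.eye(d)

def sqrtm_psd(M):
    """Matrix square root of a symmetric PSD matrix via eigendecomposition."""
    w, U = np.linalg.eigh(M)
    return (U * np.sqrt(np.clip(w, 0.0, None))) @ U.T

for _ in range(n_steps):
    dW = np.sqrt(dt) * rng.normal(size=(d, d))
    R = sqrtm_psd(X)
    drift = alpha * Sigma.T @ Sigma + H @ X + X @ H.T
    noise = R @ dW @ Sigma + Sigma.T @ dW.T @ R   # symmetric by construction
    X = X + drift * dt + noise
    # Crude projection back onto the cone: clip negative eigenvalues
    # introduced by the discretisation.
    w, U = np.linalg.eigh(X)
    X = (U * np.clip(w, 0.0, None)) @ U.T

print("eigenvalues of X_T:", np.linalg.eigvalsh(X))
```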
for the necessary conditions on drift , diffusion and jump parameters we need the theory of infinitely divisible distributions on .most interestingly , the constant drift part must satisfy a condition depending on the magnitude of the diffusion component ( see proposition [ th : constant drift ] ) , which is in accordance with the choice of the drift in bru s work on wishart processes , as explained above .this enigmatic additional condition on the drift is derived by studying the process with respect to well chosen test functions , including in our case the determinant of the process .it is worth noting , as already visible in dimension one , that a naive application of classical geometric invariance conditions does not bring the correct necessary result on the drift but a stronger one .indeed , take a one - dimensional affine diffusion process solving then a back - of - the - envelope calculation would yield the stratonovich drift at the boundary point of value , leading to the necessary parameter restriction , which is indeed too strong .it is well known that the correct parameter restriction is .we see two reasons why geometric conditions on the drift can not be applied : first , precisely at the boundary of our state spaces the diffusion coefficients are not lipschitz continuous anymore , and , second , the boundary of the cone of positive semi - definite matrices is not a smooth submanifold but a more complicated object . for the sufficient directionrefined methods from stochastic invariance theory are applied .having established viability of a particular class of jump - diffusions on , existence of affine processes on the necessary parameter conditions is shown through the solution of a martingale problem .uniqueness follows by semigroup methods which need the theory of multi - dimensional riccati equations . summing up, we face two major problems in the analysis of positive matrix valued affine processes .first , the candidate stochastic differential equations necessarily lead to volatility terms which are not lipschitz continuous at the boundary of the state space .this makes every existence , uniqueness and invariance question delicate .second , the jump behavior transversal to the boundary is of finite total variation . for affine processes on ,results and proofs deviate in essential points from the theory on state spaces of the form given in , which is a consequence of the more involved geometry of this nonpolyhedral cone .the program of the paper as outlined below therefore includes a comparison with the approach in .section [ section : definition of affine property ] contains the main definition and a summary of the results of this article . in section [ subsec : regular and feller ] , we then derive two main properties , namely the regularity of the process and the feller property of the associated semigroup .the feller property , in turn , is a simple consequence of an important positivity result of the characteristic exponents , which is proved in lemma [ prop : feller prop ] .this lemma is further employed as a tool for the treatment of the generalized riccati differential equations in section [ section : riccati ] ( see proof of proposition [ prop_ricc_sol ] ) . the global existence and uniqueness of these equationsis then used to show uniqueness of the martingale problem for affine processes ( see proof of proposition [ th : existence markov ] ) . 
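returning to the one - dimensional example above , the sketch below checks both claims numerically : with $0 \le 2b < \sigma^2$ the boundary of $\mathbb{R}_+$ is attained while the process remains nonnegative ( so $b \ge 0$ , not $b \ge \sigma^2/2$ , is the right restriction ) , and the laplace transform matches the solution of the scalar generalized riccati odes $\psi' = \beta\psi - \tfrac{\sigma^2}{2}\psi^2$ , $\phi' = b\,\psi$ . the full - truncation euler scheme and the parameter values are our own choices :

```python
import numpy as np
from scipy.integrate import solve_ivp

# Scalar affine diffusion dX = (b + beta*X) dt + sigma*sqrt(X) dW with
# 2b < sigma^2, so the boundary 0 is attained yet R_+ is preserved.
b, beta, sigma = 0.05, -0.8, 0.5
x0, u, T = 0.5, 1.2, 1.0

# Riccati ODEs for E[exp(-u X_T)] = exp(-phi(T) - psi(T) x0).
def riccati(t, y):
    psi = y[0]
    return [beta * psi - 0.5 * sigma**2 * psi**2, b * psi]

sol = solve_ivp(riccati, (0.0, T), [u, 0.0], rtol=1e-10, atol=1e-12)
psi_T, phi_T = sol.y[0, -1], sol.y[1, -1]

# Full-truncation Euler Monte Carlo of the same quantity; the scheme
# can overshoot below zero, which the truncation handles.
rng = np.random.default_rng(5)
n_paths, n_steps = 200_000, 1000
dt = T / n_steps
X = np.full(n_paths, x0)
touched = np.zeros(n_paths, dtype=bool)
for _ in range(n_steps):
    Xp = np.maximum(X, 0.0)
    X = X + (b + beta * Xp) * dt + sigma * np.sqrt(Xp * dt) * rng.normal(size=n_paths)
    touched |= X <= 0.0

print("Riccati:     ", np.exp(-phi_T - psi_T * x0))
print("Monte Carlo: ", np.exp(-u * np.maximum(X, 0.0)).mean())
print("paths touching 0:", touched.mean())   # positive since 2b < sigma^2
```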
in section[ section : nec paramter restrictions ] , we define a set of admissible parameters specifying the infinitesimal generator of affine semigroups and prove the necessity of the parameter restrictions ( see proposition [ th : necessary admissibility ] ) . the sufficient direction is then treated in section [ sec : existence ] .it is known that , for , there exist continuous affine processes on which are in contrast to those on the state space _ infinitely divisible ( see example [ counterex : bru ] ) . the analysis of this paper reveals the failure of infinite divisibility as a consequence of the drift condition ( see proof of theorem [ th : char infdec infdiv ] ) .this has substantial influence on the approach chosen here to prove existence of affine processes associated with a given parameter set : being in general hindered to recognize the solutions of the generalized riccati differential equations as cumulant generating functions of sub - stochastic measures , as done in , section 7 , we solve the martingale problem for the associated lvy type generator on , as exposed in section [ sec : existence ] and appendix [ app : viability ] . in section [ subsec : alternative existence ] , however , we deliver a variant of the existence proof of for pure jump processes , which is possible in this case due to the absence of a diffusion component . finally , section [ secproofs ] contains the proofs of the main results which build on the propositions of the previous sections . for the stochastic background and notation , we refer to standard text books such as and .we write and . moreover : * denotes the space of symmetric -matrices equipped with the scalar product .note that is isomorphic , but not isometric , to the standard euclidean space .we denote by the standard basis of , that is , the component of is given by , where denotes the kronecker delta .additionally , we sometimes consider the following basis elements which are positive semidefinite and form a basis of : * stands for the cone of symmetric -positive semidefinite matrices , for its interior in , the cone of strictly positive definite matrices .the boundary is denoted by , the complement is denoted by , and denotes the one - point compactification .recall that is self - dual [ w.r.t .the scalar product , that is , both cones , and , induce a partial and strict order relation on , respectively : we write if , and if .* is the space of -matrices and the orthogonal group of dimension over .* denotes the -identity matrix . throughout this paper , a function understood as the restriction of a function which satisfies for all . without loss of generality .we avoid using the operator , that is , to identify with a vector in by stringing the columns of together , while only taking the entries with . throughout this article, we shall consider the following function spaces for measurable .we write for the borel -algebra on . corresponds to the banach space of bounded real - valued borel measurable functions on with norm .we write for the space of real - valued continuous functions on , for , for the space of functions with compact support and for the banach space of functions with and norm .furthermore , is the space of times differentiable functions on , the interior of , such that all partial derivatives of up to order belong to . 
as usual, we set , and we write and , for .we consider a time - homogeneous markov process with state space and semigroup acting on functions , we note that may not be conservative .then there is a standard extension of the transition probabilities to the one - point compactification of by defining for all and , with the convention that for any function on .thus becomes conservative on .[ def : affine process sd+ ] the markov process is called _ affine _ if : it is stochastically continuous , that is , weakly on for every and , and its laplace transform has exponential - affine dependence on the initial state for all and , for some functions and .note that stochastic continuity of implies that and are jointly continuous in ; see lemma [ lem : order preserving affine](iii ) below .moreover , due to the markov property , this also means that for all and .in contrast to , we take stochastic continuity as part of the definition of affine processes , and consider the laplace transform instead of the characteristic function .the latter is justified by the nonnegativity of , the former is by convenience since , as we will see in proposition [ th : regularity ] below , it automatically implies regularity in the following sense . the affine process is called _ regular _ if the derivatives exist and are continuous at . we remark that there are simple examples of markov processes which satisfy definition [ def : affine process sd+](ii ) but are not stochastically continuous ; see , remark 2.11 .however , such processes are of limited interest for applications and will not be considered . in the following, we shall provide an equivalent characterization of the affine property in terms of the generator of . as we shall see in ( [ eq : generator ] ) , the diffusion , drift , jump and killing characteristics of depend in an affine way on the underlying state .we denote by some bounded continuous truncation function with in a neighborhood of . then the involved parameters are admissible in the following sense .[ def : necessary admissibility ] an _ admissible parameter set _ associated with consists of : * a linear diffusion coefficient * a constant drift term * a constant killing rate term * a linear killing rate coefficient * a constant jump term : a borel measure on satisfying * a linear jump coefficient : a -matrix of finite signed measures on such that for all and the kernel satisfies * a linear drift coefficient : a family such that the linear map of the form satisfies \\[-8pt ] & & \eqntext{\mbox{for all with .}}\end{aligned}\ ] ] we shall comment more on the admissibility conditions in section [ subsecdis ] below .the following three theorems contain the main results of this article .their proofs are given in section [ secproofs ] .first , we provide a characterization of affine processes on in terms of the admissible parameter set introduced in definition [ def : necessary admissibility ] . as for the domain of the generator ,we consider the space of rapidly decreasing -functions on , defined in ( [ defs+ ] ) below .it is shown in appendix [ secstoneweier ] that , for , as well as . [th : main theorem ] suppose is an affine process on . 
then is regular and has the feller propertylet be its infinitesimal generator on .then and there exists an admissible parameter set such that , for , \\[-8pt ] & & { } + \int_{s_d^+\setminus\{0\ } } \bigl(f(x+\xi)-f(x)\bigr)m(d\xi)\nonumber\\ & & { } + \int_{s_d^+\setminus\{0\}}\bigl(f(x+\xi)-f(x)-\langle \chi(\xi ) , \nabla f(x)\rangle\bigr)m(x , d\xi),\nonumber\end{aligned}\ ] ] where is defined by ( [ eq : betaijdef ] ) , by ( [ eq : mudef ] ) and moreover , and in ( [ def : affine process ] ) solve the generalized riccati differential equations , for , with \\[-8pt ] & & { } -\int_{s_d^+\setminus\{0\ } } \biggl(\frac{e^{-\langle u , \xi\rangle}-1+\langle \chi(\xi ) , u \rangle}{\|\xi\|^2\wedge 1}\biggr)\mu(d \xi),\nonumber\end{aligned}\ ] ] where .conversely , let be an admissible parameter set. then there exists a unique affine process on with infinitesimal generator ( [ eq : generator ] ) and ( [ def : affine process ] ) holds for all , where and are given by ( [ eq : f - riccati ] ) and ( [ eq : r - riccati ] ) .[ remcons ] it can be proved as in that is conservative if and only if and is the only -valued local solution of ( [ eq : r - riccati ] ) for .the latter condition clearly requires that .hence , a sufficient condition for to be conservative is and and where denotes the jordan decomposition of .indeed , it can be shown similarly as in , section 9 , that the latter property implies lipschitz continuity of on . due to the feller property ,as established in theorem [ th : main theorem ] , any affine process on admits a cdlg modification , still denoted by ( see , e.g. , , chapter iii.2 ) .it can and will thus be realized on the space of cdlg paths with for whenever or . for every ,we denote by the law of given and by the natural filtration generated by .we also consider the usual augmentation of , where is the augmentation of with respect to .then is right continuous and is still a markov process under . we shall now relate conservative affine processes to semimartingales , where semimartingales are understood with respect to the stochastic basis for every .[ thmsemim ] let be a conservative affine process on and let be the related admissible parameter set associated with the truncation function .then is a semimartingale whose characteristics with respect to are given by where is given by ( [ eq : betaijdef ] ) , by ( [ eq : cijkl ] ) and by ( [ eq : mudef ] ) .furthermore , there exists , possibly on an enlargement of the probability space , a -matrix of standard brownian motions such that admits the following representation : where satisfies and denotes the random measure associated with the jumps of .hence , is continuous if and only if and vanish .let be the set of all families of probability measures on the canonical probability space such that is a stochastically continuous markov processes on with =1 ] , where denotes the smallest eigenvalue of .then for all ] .consequently , lemma [ lem : boundary ] yields that .hence , for all ] which yields again and for all , ] and , we have {dd}=\sum_{j\neq d}\frac{\varphi_n(x)(\sqrt{\lambda_j\varphi_n(x ) + \varepsilon}-\sqrt{\varepsilon})}{\sqrt{\lambda_j\varphi _ n(x)+\varepsilon}+\sqrt{\varepsilon}}[o^{\top}\sigma^{\top } \sigma o]_{dd}.\ ] ] since , we obtain by condition ( [ cond : drift eps ] ) {dd}\geq \bigl[o^{\top}\bigl(b-(d-1)\sigma^{\top}\sigma\bigr)o\bigr]_{dd } \geq0,\ ] ] which proves ( [ eq : inward cond ] ) for with . in the general case, we can proceed similarly . 
for with ,the elements of are given by , where with .this follows again from lemma [ lem : zero divisor ] and ( [ eq : normal cone sd+ ] ) .now , ( [ eq : inward cond ] ) can be written as which proves the assertion . combining lemmas [ lemaconva ] and [ lemprop : martingale problem approx ] , we obtain the announced existence result for the martingale problem for .[ lemth : stoch exist ] for every , there exists an -valued cdlg solution to the martingale problem for with .that is , is a martingale , for all .by lemma [ lemprop : martingale problem approx ] , there exists a solution to the martingale problem for with sample paths in ( the space of -valued cdlg paths ) , and hence also in .we now claim that is relatively compact considered as a sequence of processes with sample paths in .is relatively compact , that is , the closure of in is compact . here, denotes the family of probability distributions on and the distribution of . ] for the proof of this assertion , we shall make use of theorems 9.1 and 9.4 in chapter 3 of . in order to meet the assumption of , chapter 3 , theorem 9.4, we take as subalgebra of . then , for every and , we have }}|\mathcal{a}^{\varepsilon , \delta , n}f(x_{t}^{\varepsilon , \delta , n})|\bigr ] < \infty,\ ] ] since there exists a constant such that for all , where are the semi - norms as defined in ( [ eq : seminorm ] ) ( see also the proof of proposition [ th : generator ] ) .thus , the requirements of , chapter 3 , theorem 9.4 , are satisfied .note that in the notation of , chapter 3 , theorem 9.4 , corresponds in our case to such that , chapter 3 , condition ( 9.17 ) , is automatically fulfilled .it then follows by the conclusion of , chapter 3 , theorem 9.4 , that is relatively compact [ as family of processes with sample paths in for each .furthermore , since we consider , the compact containment condition is always satisfied , that is , for every and , there exists a compact set for which ] and and are as above . recall that is a solution of ( [ sde ] ) if is continuous and ( [ sde ] ) holds for all a.s .in particular , note that this null set depends on . fix andlet .furthermore , let be stopping times and for , , -measurable random variables . 
consider the following equations : then there exists a constant depending only on , , , the lipschitz constants of and and such that for , \nonumber\\ & & \qquad\leq c\mathbb{e}\biggl[\|u_1-u_2\|^p+|\theta_1 \wedge t-\theta_2 \wedge t|^{{p/2}}\\ & & \qquad\quad\hspace*{70.5pt}{}+{\int_0^t\sup_{u\leq s}}\|x_u - y_u\|^p \,ds\biggr].\nonumber\end{aligned}\ ] ] by the same arguments as in the proof of , lemma 11.5 , we first obtain the following estimate : moreover , for the stochastic integral part, we apply the burkholder davis gundy inequality \\ & & \qquad\leq k\mathbb{e}\biggl[\biggl(\int_0^t\bigl\|\sigma(x_u)1_{\{\theta_1\le u\ } } -\sigma(y_u)1_{\{\theta_2\le u\}}\bigr\|^2 \,du\biggr)^{{p/2}}\biggr]\\ & & \qquad\leq k\mathbb{e}\biggl[\biggl(\int_{\theta_1 \wedge t}^{(\theta_1 \vee\theta _2)\wedge t}\|\sigma(x_u)\|^2 \,du\biggr)^{p/2}+\biggl(\int_{\theta_2 \wedge t}^{(\theta_1 \vee\theta_2)\wedge t}\|\sigma(y_u)\|^2 \,du\biggr)^{p/2}\\ & & \qquad\quad\hspace*{125.4pt } { } + \biggl(\int_{(\theta_1 \vee\theta_2)\wedge t}^t\|\sigma(x_u)-\sigma ( y_u)\|^2 \,du\biggr)^{{p/2}}\biggr]\\ & & \qquad \leq k\mathbb{e}\biggl[|\theta_1\wedge t-\theta_2 \wedge t|^ { { p/2}}+\int_{0}^t\|\sigma(x_s)-\sigma(y_s)\|^p \,ds\biggr]\\ & & \qquad \leq k\mathbb{e}\biggl[|\theta_1 \wedge t-\theta_2 \wedge t|^{{p/2}}+\int_{0}^t\sup_{u\leq s}\|x_u - y_u\|^p \,ds\biggr],\end{aligned}\ ] ] where always denotes a constant which varies from line to line .the last estimate in both inequalities follows from the the lipschitz continuity of and . by assembling these pieces ,the proof is complete . hereis a fundamental existence result , which is not stated in this general form in the standard literature .therefore , we provide a full proof .[ th : regulardiff ] there exists a function \times{{\mathbb r}}^n \times \omega\times{{\mathbb r}}_+ \rightarrow{{\mathbb r}}^n]-measurable .denotes the predictable -field . ] solves ( [ sde ] ) for all .let be a stopping time and an measurable random variable , then solves for every \times{{\mathbb r}}^n ] , and , \\ & & \qquad\leq k\biggl(\|x - y\|^{p/2}+|\theta_1-\theta_2|^{p/2}\\ & & \qquad\quad\hspace*{17.3pt}+\int_0^t \mathbb{e}\bigl[\sup_{u\leq s}\|\widetilde{z}(\theta_1,x , u)-\widetilde{z}(\theta_2,y , u)\|^p\bigr ] \,ds\biggr)\end{aligned}\ ] ] for some constant . hence , by gronwall s lemma , &\leq & ke^{kt}(\|x - y\|^{p/2}+|\theta _ 1-\theta_2|^{p/2})\\ & \leq & c\|(\theta_1,x)-(\theta_2,y)\|^{p/2}.\end{aligned}\ ] ] let now be the set of dyadic rational numbers in and the set of dyadic rational numbers in .furthermore , we define by \times[-t , t]^n) ] and , respectively .denote then does the job .indeed , as are disjoint and for all , we have ( see , e.g. , , page 39 ) for all a.s ._ step _ 2 . for general , , approximate by the simple stopping times let be a sequence of -measurable random variables , each taking finitely many values , and in ( such obviously exists ) .moreover , for all ( see , chapter 1 , problem 2.24 ) . 
by step 1 , each satisfies the respective sde .moreover , from estimate ( [ eq : estimate ] ) and grownwall s lemma we deduce that for any , there exists a constant such that \\ & & \qquad\le c e^{ct}\mathbb{e}\bigl[\bigl\|u^{(k)}-u^{(k')}\bigr\|^2+\bigl|\theta^{(k)}\wedge t-\theta^{(k')}\wedge t\bigr|\bigr].\end{aligned}\ ] ] hence , is a cauchy sequence and thus converging with respect to ]-measurability of , as stated in theorem [ th : regulardiff](ii ) , and the fact that and are -measurable on .here is our existence result for ( [ eq : jump diffusion ] ) .there exists a cdlg -adapted process and a probability measure on with , such that is a solution of ( [ eq : jump diffusion ] ) on .we follow the arguments in the proof of , theorem 5.1 , which is based on , theorem 3.6 , and proceed in three steps ._ step _ 1 .we start by solving ( [ eq : jump diffusion ] ) along every path . to this end , let us define recursively : , , and for : where is any fixed point in and satisfies the properties of theorem [ th : regulardiff ] . by lemma [ lemma : 1 ] ,every is continuous in for all and -adapted on since is -measurable .thus , the process is cdlg -adapted and solves ( [ eq : jump diffusion ] ) on for and any fixed path ._ step _ 2 .it remains to show that there exists a probability measure such that is the compensator of and holds true .for this purpose , we shall make use of , theorem 3.6 .let us define the following random measure by observe that is predictable , since is cdlg and -adapted .theorem 3.6 in now implies that there exists a unique probability kernel from to , such that is the compensator of the random measure associated to the jumps of .on we then define the probability measure by whose restriction to is equal to . _ step _ 3 .we finally show that defined by ( [ xdef ] ) solves ( [ eq : jump diffusion ] ) on for all .note that is an -brownian motion .this implies that is a solution of ( [ sde ] ) on , satisfying the properties of theorem [ th : regulardiff ] .it thus remains to show that -a.s .let be the random measure associated with the jumps .as is bounded , we have for all , \times { { \mathbb r}}^n)\bigr]=\mathbb{e}_{\mathbb{p}}\bigl[\nu([0,t]\times { { \mathbb r}}^n)\bigr]=\mathbb{e}_{\mathbb{p}}\biggl[\int_0^t k(x_t,{{\mathbb r}}^n)\,dt\biggr]\leq c t\ ] ] for some constant .this implies that \times{{\mathbb r}}^n )< \infty ] or equivalently a.s . consider a nonempty closed convex set .we now provide sufficient conditions for the solution in ( [ xdef ] ) to be -valued .this result is based on , theorem 4.1 .we recall the notion of the _ normal cone _ of at , consisting of inward pointing vectors .see , for example , , definition iii.5.2.3 , except for a change of the sign .[ th : martingale problem convex ] assume that also has a lipschitz continuous derivative .suppose furthermore that for all and , where denotes the column of .then , for every initial point , the process defined in ( [ xdef ] ) is a -valued solution of ( [ eq : jump diffusion ] ) .we have to show that a.s . for all .we proceed by induction on . for , ,is simply given by due to , theorem 4.1 , conditions ( [ eq : invariance cond1 ] ) and ( [ eq : invariance cond2 ] ) imply that for all , a.s . let us now assume that for all , a.s ., thus in particular a.s . if , then we immediately obtain otherwise , let satisfy . 
then , &=&\mathbb { e}[f(x_{t_{n-1}-}+\delta j_{t_{n-1}})]\\ & = & \mathbb{e}\biggl[\int_{{{\mathbb r}}^n\setminus\{0\}}f(x_{t_{n-1}-}+\xi ) k(x_{t_{n-1}-},d\xi)\biggr]=0,\end{aligned}\ ] ] since by ( [ eq : cond jumps ] ) , a.s . and .hence , a.s . , implying that a.s .as this holds true for all with , it follows that a.s .thus , again by , theorem 4.1 , and conditions ( [ eq : invariance cond1 ] ) and ( [ eq : invariance cond2 ] ) a.s . takes values in , which proves the induction hypothesis .the definition of then yields the assertion .in this section , we deliver a differentiable variant of the stone weierstrass theorem for -functions on .this approximation statement is essential for the description of the generator of an affine semigroup , as is elaborated in section [ section : inf generator ] .we employ multi - index notation in the sequel . for ,a multi - index is an element having length .the factorial is defined by .the partial order is understood componentwise , and so are the elementary operations .that is , if and only if for .moreover , for , the multinomial coefficient is defined by we define the monomial , and the differential operator . corresponding to a polynomial , we introduce the differential operator .let denote the locally convex space of rapidly decreasing -functions on ( see , chapter 7 ) , and define the space of rapidly decreasing -functions on via the restriction equipped with the increasing family of semi - norms becomes a locally convex vector space ( see , theorem 1.37 ) . for technical reasons , we also introduce for the semi - norms on , where denotes the closed ball with radius and center .note that .we first give an alternative description of .[ lemc1 ] we have the inclusion is trivial .hence , we prove .so let for some with for all and some .we choose a standard mollifier supported in and satisfying , . for introduce the neighborhoods of .the convolution of the indicator function for with satisfies on and it vanishes outside .furthermore , all derivatives of are bounded , since where the last estimate holds because . now we set . by construction , and vanishes outside , because does .what is left to show is that . since vanishes outside , it is sufficient to deliver all estimates of its derivatives on .let , then we have by the leibniz rule by assumption is bounded on , and is bounded on all of .hence , by the last equation , we have for all , which by definition means . [ lemc2 ]let . then for each , and for all we have .in particular , we have that is , for some . since , there exists a positive constant such that , for all .hence , we obtain by a straightforward calculation , , for all .next , let , and write , where and and pick multi - indices .then we have by the binomial formula now since ranges in a compact set , and since we see that must be bounded uniformly in .hence , , for all . together with lemma [ lemc1 ], this implies .we are now prepared to deliver the following density result for the -linear hull of in .[ th : density ] is dense in .denote by and the topological dual of and , respectively .the former , , is known as the space of tempered distributions .the distributional action is denoted by and for and , respectively .now suppose by contradiction , that is not dense in .then by , theorem 3.5 , there exists some such that on .hence , , for all .the restriction yields a continuous linear embedding .hence , the restriction of to , given by yields an element of with . pick an according to lemma [ lemc2 ] . 
by the definition of , we have , for all . by the bros glaser theorem ( see , theorem ix.15 ) , there exists a function with , polynomially bounded [ i.e. , for suitable constants we have , for all ] and a real polynomial such that in . hence , we obtain for any but the last factor is just the laplace transform of .this implies , hence , which in turn implies that vanishes on all of , a contradiction .we thank martin keller - ressel and alexander smirnov for discussions and helpful comments .
this article provides the mathematical foundation for stochastically continuous affine processes on the cone of positive semidefinite symmetric matrices . this analysis has been motivated by a large and growing use of matrix - valued affine processes in finance , including multi - asset option pricing with stochastic volatility and correlation structures , and fixed - income models with stochastically correlated risk factors and default intensities .
removal of the smallest eigenvalues of a matrix can significantly improve the performance of iterative methods . this is important for lattice qcd , where stochastic methods for disconnected diagrams and new matter formulations such as overlap fermions are computationally intensive . in addition , there is often a need for these systems to be shifted to take account of multiple quark masses or other parameters . we think it is important for any iterative method we develop to be applicable to general non - hermitian systems , for this is often the case for lattice systems . gmres is an iterative method that can be used in lattice studies with stochastic noise methods . convergence can be improved by deflating eigenvalues . two years ago it was shown that deflating eigenvalues can be especially helpful for lattice problems with multiple right - hand sides . here we report on generalizations of these techniques . first we show how deflated restarted gmres can accommodate multiple shifts . we give numerical results for a wilson matrix with very small eigenvalues . we then consider the case of combining multiple shifts with multiple right - hand sides . finally , we also discuss and give numerical results on the possibility of combining deflated gmres for the first system with a bicgstab algorithm which uses those deflated eigenvalues for the subsequent solutions . the shifted systems are expressed by , where the s(i ) are the numerical shifts . the method proposed in is based upon keeping the residual vectors for the shifted systems parallel to one another . that is , one uses only a single krylov subspace to solve all the systems . this is possible for gmres only by forcing the residuals to be parallel to one another after a restart ; thus the minimum residual condition is applied only to the unshifted ( most difficult ) system . we find that this also results in an almost optimal solution for the shifted systems .

our method for deflating gmres ( called gmres - dr ) simultaneously solves linear equations and computes eigenvalues . this is possible because it has a krylov subspace for solving the linear equations as well as krylov subspaces designed for solving the eigenvalue equation . these approximate eigenvectors are refined and improved as the gmres - dr cycles proceed . we have determined that gmres - dr can be efficiently applied to multiply shifted systems because of its krylov properties . fig.1 shows the result of a shifted run of gmres - dr(50,30 ) , i.e. , the subspace is 50-dimensional and includes 30 approximate eigenvectors . it is compared to the residual norm for regular gmres(50 ) . one can see the deflation start to take place around 200 matrix - vector products ( the approximate eigenvectors are accurate enough by then ) . typically , many right - hand sides are used to do stochastic estimates of matrix elements . it would be beneficial if the deflated eigenvalues and eigenvectors calculated for the first right - hand side could be reused . then deflation could begin from the start instead of having to wait for the required eigenvectors to be recalculated , with more dramatic results than in fig.1 . a method to accomplish this , called gmres - proj , was given in . gmres - proj alternates cycles of gmres with projections over the approximate eigenvectors . next we consider using this for the case of multiple shifts for each of the multiple right - hand sides .
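the key fact behind using a single subspace is the shift invariance of krylov spaces , $\mathcal{K}_m(A , b ) = \mathcal{K}_m(A + sI , b)$ : one arnoldi factorization $A V_m = V_{m+1}\bar H_m$ gives $(A+sI)V_m = V_{m+1}(\bar H_m + s\bar I_m)$ , and hence a minimum - residual solution for every shift . the sketch below illustrates this for a single ( non - restarted , non - deflated ) cycle ; the test matrix and shifts are our own toy choices , not the wilson matrix of fig.1 :

```python
import numpy as np

def arnoldi(A, b, m):
    """Arnoldi factorization A V[:, :m] = V[:, :m+1] @ H, with H of
    shape (m+1, m). No breakdown handling in this sketch."""
    n = b.size
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    V[:, 0] = b / np.linalg.norm(b)
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):               # modified Gram-Schmidt
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        V[:, j + 1] = w / H[j + 1, j]
    return V, H

rng = np.random.default_rng(1)
n, m = 500, 80
A = np.diag(np.linspace(0.05, 2.0, n)) + 0.001 * rng.normal(size=(n, n))
b = rng.normal(size=n)
V, H = arnoldi(A, b, m)

# One Krylov space serves all shifted systems (A + s I) x = b.
beta_e1 = np.zeros(m + 1)
beta_e1[0] = np.linalg.norm(b)
Ibar = np.vstack([np.eye(m), np.zeros((1, m))])  # identity padded by a zero row
for s in [0.0, 0.1, 0.5]:
    y, *_ = np.linalg.lstsq(H + s * Ibar, beta_e1, rcond=None)
    x = V[:, :m] @ y
    res = np.linalg.norm(b - (A + s * np.eye(n)) @ x) / np.linalg.norm(b)
    print(f"shift {s}: relative residual {res:.2e}")
```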
gmres - dr can deflate eigenvalues and handle multiple shifts at the same time for the first right - hand side ( the residuals for each shifted system can be kept parallel ) . however , for the second and subsequent right - hand sides there is no way to deflate eigenvalues by projecting over the previously computed approximate eigenvectors and still keep the residuals for the different shifted systems parallel . if the eigenvectors were exact , this would be possible . so one approach is to use only fairly accurate approximate eigenvectors and do a correcting iteration at the end for each shifted system separately . we now propose a better fix for this problem which does not require accurate eigenvectors . gmres - dr produces approximate eigenvectors that are represented in the form of an arnoldi - like recurrence relation $A V_k = V_{k+1} \bar H_k$ , where the columns of $V_k$ span the space of approximate eigenvectors , $V_{k+1}$ has one extra column added to $V_k$ , and $\bar H_k$ is a $(k+1)$ by $k$ matrix . using this , we can perform the projection over the approximate eigenvectors so that the residuals for the different shifted systems are all multiples of each other , except for an error in the direction of the last column of $V_{k+1}$ . this error can be easily corrected at the end , if we are willing to solve one additional right - hand side , namely that last column . there is also potential to develop a deflated version of bicgstab for multiply shifted systems using this idea . this will be reported later , but for now we will look at deflating bicgstab for multiple right - hand sides in the non - shifted case .

the deflation on the additional right - hand sides can employ other methods after the initial eigenvalue generation by gmres - dr . we have looked at bicgstab , a non - restarted method popular in the lattice community , in this context . we again combine a projection over approximate eigenvectors with the iterative method . however , the projection can only be performed once , before the bicgstab iteration begins . therefore this projection must reduce the critical eigencomponents to a low enough level that no further reduction is needed during the bicgstab iteration . the minimal residual projection is the best possible one in the sense that it minimizes the norm of the residual vector . however , it can be shown that even if one knows the exact eigenvectors , the corresponding component in the residual vector will not generally be zeroed out ( unless the matrix is hermitian ) . to deal with this problem , we will give a projection that is better for this task . it uses both left and right approximate eigenvectors .

left - right projection for deflating eigenvectors :

1 . let the current approximate solution be $x_0$ and the current system of equations be $A x = b$ , with residual $r_0 = b - A x_0$ .

2 . let $V$ be orthonormal with columns spanning the subspace of approximate right eigenvectors , and let $W$ similarly be orthonormal with columns from approximate left eigenvectors .

3 . solve the small system $(W^\dagger A V)\,d = W^\dagger r_0$ . the new approximate solution is $x_0 + V d$ .

this projection can reduce the small eigenvector components better than minimum residual , and in the case of exact left and right eigenvectors , the components can be zeroed out .
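a minimal sketch of the two projections just discussed , using exact eigenpairs of a toy non - hermitian matrix as stand - ins for the approximate eigenvectors that gmres - dr would supply ( the matrix , its size and the number of deflated directions are our own choices ) :

```python
import numpy as np

rng = np.random.default_rng(6)
n, k = 100, 8
# Toy non-Hermitian (upper triangular) matrix with small eigenvalues.
A = np.diag(np.linspace(0.01, 2.0, n)) + 0.002 * np.triu(rng.normal(size=(n, n)), 1)
b = rng.normal(size=n)
x = np.zeros(n)

w, VR = np.linalg.eig(A)
idx = np.argsort(np.abs(w))[:k]
V = VR[:, idx]                           # right eigenvectors
W = np.linalg.inv(VR).conj().T[:, idx]   # matching left eigenvectors
# (V and W are not orthonormalized here; the projections depend only
#  on the spans, so this does not change the outcome.)

def minres_proj(A, x, b, Z):
    """Minimal residual projection: minimize ||b - A (x + Z y)||_2."""
    y, *_ = np.linalg.lstsq(A @ Z, b - A @ x, rcond=None)
    return x + Z @ y

def left_right_proj(A, x, b, V, W):
    """Left-right projection: solve (W^H A V) d = W^H r, x <- x + V d."""
    r = b - A @ x
    d = np.linalg.solve(W.conj().T @ A @ V, W.conj().T @ r)
    return x + V @ d

for name, xp in [("minres", minres_proj(A, x, b, V)),
                 ("left-right", left_right_proj(A, x, b, V, W))]:
    r = b - A @ xp
    print(f"{name:10s}  ||W^H r|| = {np.linalg.norm(W.conj().T @ r):.2e}"
          f"   ||r|| = {np.linalg.norm(r):.2e}")
```

with exact pairs , the left - right projection annihilates the residual components along the deflated directions ( up to roundoff ) , while the minimal residual projection generally leaves them nonzero for a non - hermitian matrix , matching the discussion above .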
of course , it is necessary to compute left eigenvectors . however , for many lattice problems , including the wilson - dirac matrix , there is a simple relationship between the left and right eigenvectors .

fig.2 shows a run on a smaller lattice ( using matlab ) of the deflated bicgstab algorithm , using eigenvectors calculated from an initial gmres - dr solution of the first right - hand side . the deflated solutions take a smaller number of iterations than the nondeflated implementation . notice that deflation does not care about the nature of the right - hand sides used , so these may be noise vectors or unit vectors , as one desires . this applies to all the techniques used .

this material is based upon work supported by the national science foundation under grants no . 0070836 ( theoretical physics ) and 0310573 ( computational mathematics ) and ncsa , and utilized the sgi origin 2000 system at the university of illinois . rm acknowledges the baylor university summer sabbatical program .

y. saad and m. h. schultz , `` gmres : a generalized minimum residual algorithm for solving nonsymmetric linear systems '' , siam j. sci . comput . 7 , 856 ( 1986 ) . r. b. morgan , `` gmres with deflated restarting '' , siam j. sci . comput . 24 , 20 ( 2002 ) . r. b. morgan and w. wilcox , nuclear physics b ( proc . suppl . ) 106 , 1067 ( 2002 ) . a. frommer , `` restarted gmres for shifted linear systems '' , siam j. sci . comput . 53 , 15 ( 1998 ) .
work on generalizing the deflated , restarted gmres algorithm , useful in lattice studies using stochastic noise methods , is reported . we first show how the multi - mass extension of deflated gmres can be implemented . we then give a deflated gmres method that can be used on multiple right - hand sides in an efficient manner . we also discuss and give numerical results on the possibility of combining deflated gmres for the first right - hand side with a deflated bicgstab algorithm for the subsequent right - hand sides .
+ this contribution deals with the implementation of procedures and methods , worked out in our group during the last few years , in order to provide algorithmic solutions to the problem of determining the first passage time ( fpt ) probability densities ( pdf ) and its relevant statistics for continuous state - space and continuous parameter gaussian processes describing the stochastic modeling of a single neuron s activity . in most modeling approaches , it is customary to assume that a neuron is subject to input pulses occurring randomly in time , ( see , for instance , and references therein ) . as a consequence of the received stimulations , it reacts by producing a response that consists of a spike train .the reproduction of the statistical features of such spike trains has been the goal of many researches who have focused the attention on the analysis of the interspike intervals .indeed , the importance of the interspike intervals is due to the generally accepted hypothesis that the information transferred within the nervous system is usually encoded by the timing of occurrence of neuronal spikes . to describe the dynamics of the neuronal firing we consider a stochastic process representing the change in the neuron membrane potential between two consecutive spikes ( cf . , for instance , ) . in this context, the threshold voltage is viewed as a deterministic function and the instant when the membrane potential crosses as a fpt random variable .the modeling of a single neuron s activity by means of a stochastic process has been the object of numerous investigations during the last four decades . a milestone contribution in this directionis the much celebrated paper by gerstein and mandelbrot in which a random walk and its continuous diffusion limit ( the wiener process ) was proposed with the aim of describing a possible , highly schematized , spike generation mechanism .however , despite the excellent fitting of a variety of data , this model has been the target of severe criticism on the base of its extreme idealization in contrast with some electrophysiological evidence : for example , this model does not take into account the spontaneous exponential decay of the neuron membrane potential .an improved model is the so called ornstein - uhlenbeck ( ou ) model , that embodies the presence of such exponential decay .however , the ou model does not allow to obtain any closed form expression for the firing pdf , except for some very particular cases of no interest within the neuronal modeling context .rather cumbersome computations are thus required to obtain evaluations of the statistics of the firing time .successively , alternative neuronal models have been proposed , that include more physiologically features .the literature on this subject is too vast to be recalled here .we limit ourselves to mentioning that a review of most significant neuronal models can be found in , and in the references therein .in particular , in it is presented an outline of appropriate mathematical techniques by which to approach the fpt problem in the neuronal context .we shall now formally define the firing pdf for a model based on a stochastic process with continuous sample paths .first , assume =1 ] will be identified with the firing pdf of a neuron whose membrane potential is modeled by and whose firing threshold is . 
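as a concrete instance of these definitions , here is a minimal monte carlo sketch of the firing pdf for an ou - type model $dX_t = (-X_t/\theta + \mu)\,dt + \sigma\,dW_t$ with a constant firing threshold ; the parameter values , the euler discretisation , and the crossing test at grid points ( which slightly biases the estimate ) are our own illustrative choices :

```python
import numpy as np

rng = np.random.default_rng(7)
theta, mu, sigma = 1.0, 0.8, 0.5     # decay time, input drift, noise
x0, S, dt, t_max = 0.0, 1.0, 1e-3, 20.0

n_paths = 20000
fpt = np.full(n_paths, np.nan)       # first passage (firing) times
x = np.full(n_paths, x0)
alive = np.ones(n_paths, dtype=bool)

for k in range(int(t_max / dt)):
    x[alive] += (-x[alive] / theta + mu) * dt \
                + sigma * np.sqrt(dt) * rng.normal(size=alive.sum())
    crossed = alive & (x >= S)       # first crossing of the threshold
    fpt[crossed] = (k + 1) * dt
    alive &= ~crossed

print("fraction fired by t_max:", np.isfinite(fpt).mean())
print("mean interspike interval:", np.nanmean(fpt))
# A histogram of the collected `fpt` values estimates the firing pdf g(t).
```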
throughout this paper, we shall focus our attention on neuronal models rooted on diffusion and gaussian processes , partially motivated by the generally accepted hypothesis that in numerous instances the neuronal firing is caused by the superposition of a very large number of synaptic input pulses which is suggestive of the generation of gaussian distributions by virtue of some sort of central limit theorems .it must be explicitly pointed out that models based on diffusion processes are characterized by the lack of memoryas a consequence of the underlying markov property . however , in the realistic presence of correlated input stimulations , the markov assumption breaks down and one faces the problem of considering more general stochastic models , for which the literature on fpt problem appears scarce and fragmentary .simulation procedures then provide possible alternative investigation tools especially if they can be implemented on parallel computers , ( see ) .the goal of a typical simulation procedure is to sample values of the fpt by a suitable construction of time - discrete sample paths of the process and then to record the instants when such sample paths first cross the boundary .in such a way , one is led to obtain estimates of the firing pdf and of its statistics , that may be implemented for data fitting purposes .the aim of this paper is to outline numerical and theoretical methods to characterize the fpt pdf for gaussian processes .attention will be focused on markov models in section [ section2 ] , and on non - markov models in section [ section3 ] .finally , section [ section4 ] will be devoted to the description of some computational results .+ we start briefly reviewing some essential properties of gauss - markov processes .let , where is a continuous parameter set , be a real continuous gauss - markov process with the following properties ( cf . ) : _ ( i ) _ : : ] is continuous in ; _ ( iii ) _ : : is non - singular , except possibly at the end points of where it could be equal to with probability one .a gaussian process is markov if and only if its covariance satisfies it is well known , that well - behaved solutions of ( [ eq:(2.1 ) ] ) are of the form where is a monotonically increasing function by virtue of the cauchy - schwarz inequality , and because of the assumed nonsingularity of the process on .the conditional pdf of is a normal density characterized respectively by conditional mean and variance \\ v(t|t_0 ) & = & h_2(t)\,\biggl[h_1(t)-{h_2(t)\over h_2(t_0)}\;h_1(t_0)\biggr],\end{aligned}\ ] ] with .it satisfies the fokker - planck equation and the associated initial condition +{1\over 2}\;{\partial^2 \over \partial x^2}\ ; [ a_2(t)\,f(x , t|y,\tau)],\\ \\ & & \lim_{\tau \uparrow t}\,f(x , t|y,\tau)=\delta(x - y),\end{aligned}\ ] ] with and given by \;{h_2^{\prime}(t)\over h_2(t)}\,,\qquad a_2(t ) = h_2 ^ 2(t)\;r^{\prime}(t),\ ] ] the prime denoting derivative with respect to the argument . the class of the gauss - markov processes , such that is characterized by means and covariances of the following two forms : + or ] is the fpt pdf of from at time to the continuous boundary , with \over h_2[r^{-1}(\vartheta_0)]},\qquad s^*(\vartheta)={s[r^{-1}(\vartheta)]-m[r^{-1}(\vartheta ) ] \over h_2[r^{-1}(\vartheta)]}\,.\ ] ] results on the fpt pdf for the standard wiener process can thus in principle be used via ( [ eq:(2.12 ) ] ) to obtain the fpt pdf of any continuous gauss - markov process .for instance , if is linear in , ] can be obtained via ( [ eq:(2.12 ) ] ) . 
instead, if ] can be obtained via the indicated transformation .the above procedure often exhibits the serious drawback of ensuing unacceptable time dilations ( see ) .for instance , exponentially large times are involved when transforming the ornstein - uhlenbeck process to the wiener process , which makes such a method hardly viable .hence , it is desirable to dispose of a direct and efficient computational method to obtain evaluation of the firing pdf .along such a direction , in it has been proved that the conditioned fpt density of a gauss - markov process can be obtained by solving the non - singular volterra second kind integral equation \!=\ ! - 2 \psi[s(t),t|x_0,t_0 ] + 2 \int_{t_0}^t\ !g[s(\tau),\tau|x_0,t_0]\ , \psi[s(t),t|s(\tau),\tau]\;d \tau\nonumber\\ & & \hspace*{10cm}\bigl(x_0<s(t_0)\bigr ) \label{eq:(3.1)}\end{aligned}\ ] ] with and = \biggl\{{s^{\prime}(t ) - m^{\prime}(t)\over 2 } \ ; -{s(t)-m(t)\over 2}\;{h^{\prime}_1(t)h_2(\tau ) -h^{\prime}_2(t)h_1(\tau)\overh_1(t)h_2(\tau)-h_2(t)h_1(\tau ) } \nonumber \\ & & \hspace*{3.3 cm } - \ ; { y - m(\tau)\over 2}\;{h^{\prime}_2(t)h_1(t)-h_2(t ) h^{\prime}_1(t)\over h_1(t)h_2(\tau)-h_2(t)h_1(\tau)}\biggr\ } \ ; f[s(t),t\,|\,y,\tau ] \label{psi}\end{aligned}\ ] ] where ] and =1 ] will be assumed to be such that and ( this last assumptions being equivalent to the mean square differentiable property ) .the fpt pdf of through is then given by the following expression =w_1(t|x_0)+\sum_{i=1}^\infty ( -1)^i\int_0^t \!\ ! dt_1 \!\ !\int_{t_1}^t \!\ !dt_2 \cdots \!\ !\int_{t_{i-1}}^t \!\ ! \!\ !dt_i w_{i+1}(t_1,\ldots , t_i , t|x_0 ) , \label{(0)}\ ] ] with \ , p_{2n } [ s(t_1),\ldots , s(t_n ) ; z_1 , \ldots , z_n|x_0],\nonumber\end{aligned}\ ] ] where is the joint pdf of conditional upon . due to the great complexity of the involved multiple integrals , expression ( [ ( 0 )] ) does not appear to be manageable for practical uses , even though it has recently been shown that it allows to obtain some interesting asymptotic results .since ( [ ( 0 ) ] ) is a leibnitz series for each fixed estimates of the fpt pdf can in principle be obtained since its partial sum of order provides a lower or an upper bound to depending on whether is even or odd .however , also the evaluation of such partial sums is extremely cumbersome . in conclusion , at the present time for this class of gaussian processes , no effective analytical methods , nor viable numerical algorithmsare available to evaluate the fpt pdf .a simulation procedure seems to be the only residual way of approach . to this aim, we have restored and updated an algorithm due to franklin in order to construct sample paths of a stationary gaussian process with spectral density of a rational type and deterministic starting point .the idea is the following .let us consider the linear filter where is the impulse response function and is the input signal . by fourier transformation , ( [ ( 3 ) ] ) yields where and are the spectral densities of input and output , respectively , and where denotes the fourier transform of equation ( [ ( 4 ) ] ) is suggestive of a method to construct a gaussian process having a preassigned spectral density .it is indeed sufficient to consider a white noise having spectral density as the input signal and then select in such a way that . 
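before specialising to rational spectral densities , the filtering idea can be sketched in the simplest first - order case , with target density $W(\omega) = 2\beta/(\omega^2+\beta^2)$ and factorization $H(s) = \sqrt{2\beta}/(s+\beta)$ , giving a unit - variance stationary output with correlation $e^{-\beta|\tau|}$ . the discrete white - noise scaling and the use of scipy s lsim are our own implementation choices and only approximate the exact scheme described next :

```python
import numpy as np
from scipy import signal

# Target spectral density W(omega) = 2*beta / (omega**2 + beta**2),
# factorized as |H(i*omega)|**2 with H(s) = sqrt(2*beta) / (s + beta).
beta, dt, T = 1.0, 1e-3, 50.0
t = np.arange(0.0, T, dt)
rng = np.random.default_rng(2)

# Discrete surrogates of continuous white noise: variance 1/dt.
xi = rng.normal(scale=1.0 / np.sqrt(dt), size=t.size)

H = signal.lti([np.sqrt(2.0 * beta)], [1.0, beta])
_, y, _ = signal.lsim(H, U=xi, T=t)   # linear interpolation of the input
                                      # slightly biases the variance

burn = int(10.0 / dt)                 # discard the transient
y0 = y[burn:] - y[burn:].mean()
print("sample variance (target 1):", y0.var())
lag = int(1.0 / dt)
print("correlation at lag 1.0 (target e^-1):",
      np.mean(y0[:-lag] * y0[lag:]))
```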
If the spectral density of the process is of a rational type, i.e. the ratio of two polynomials (with the degree of the numerator smaller than that of the denominator), the filter can be realized explicitly: to calculate the output signal it is necessary first to solve the associated linear differential equation driven by the white-noise input, obtaining the stationary solution, and then to evaluate the resulting output at the desired instants. The simulation procedure is designed to construct sample paths of the process at equally spaced instants separated by a positive constant time increment. The underlying idea can be applied to any Gaussian process having a spectral density of rational type and, since the sample paths of the simulated process are generated independently of one another, this simulation procedure is particularly suited for implementation on supercomputers. Extensive computations have been performed on parallel computers to explore the different shapes of the FPT pdf as induced by the oscillatory behaviors of covariances and thresholds (cf., for instance, and ).

[Figure 1: plot of the boundary given in (soglia) for the two parameter choices considered (bottom to top).]

[Figure 2: FPT pdf through the boundary (soglia) for a zero-mean Gaussian process with correlation function (funcorr); panel (a) shows the closed form (fptclosed), while panels (b)-(d) show the estimated FPT pdf for three values of the correlation parameter.]

[Figure 3: same as Figure 2, for the other boundary parameter choice.]

The aim of this section is to compare the behavior of the FPT pdf's of Gauss-Markov processes and of Gaussian non-Markov processes, in order to analyze how the lack of memory affects the shape of the density, also with reference to the specified type of correlation function. For simplicity, we normalize the process so that its variance is unity. To be specific, we consider a stationary Gaussian process with zero mean and a damped oscillatory correlation function (funcorr), the simplest type of correlation having a concrete engineering significance. When the correlation function is of type (funcorr), the process is not mean-square differentiable, so the series expansion (0) does not hold. However, specific assumptions on the parameters help us characterize the shape of the FPT pdf. We start by assuming a parameter choice for which the correlation function (funcorr) factorizes; hence, choosing h1 and h2 accordingly in (eq:(2.2)), the process becomes Gauss-Markov. Therefore, for any boundary the FPT pdf can be numerically evaluated by solving the integral equation (eq:(3.1)). In the following, we consider decaying boundaries of the form (soglia): the boundary starts from a finite value, tends to 0 as time increases, and becomes flatter as its decay parameter decreases. In Figure [fig:fig1], the boundary (soglia) is plotted for two choices of this parameter. As proved in for boundaries of the form (soglia), the FPT pdf of a Gauss-Markov process admits a closed form, (fptclosed), expressed through the transition pdf of the process. For a zero-mean Gauss-Markov process characterized by the correlation function (funcorr), the FPT pdf given by (fptclosed) through the boundary (soglia) is plotted in Figure 2(a) and in Figure 3(a) for the two boundary choices. Note that as the boundary parameter increases, the mode increases, whereas the corresponding ordinate decreases.
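The two computational routes just described can be made concrete with short sketches. Both are minimal illustrations of ours, not the authors' codes; both assume the stationary Ornstein-Uhlenbeck parametrization h1(t) = (sigma^2/(2 beta)) e^{beta t}, h2(t) = e^{-beta t} with zero mean, plus boundaries of our own choosing. First, the Gauss-Markov route: for a constant boundary S, the function psi of (psi) reduces after some algebra to psi[S,t|y,tau] = (beta/2) [ y / sinh(beta(t-tau)) - S coth(beta(t-tau)) ] f[S,t|y,tau], and the integral equation (eq:(3.1)) can be solved by forward substitution on a uniform grid (the kernel is non-singular and, in this particular case, even vanishes on the diagonal):

```python
import numpy as np

beta, sigma = 1.0, 1.0        # assumed OU parameters
S, x0, t0 = 1.0, 0.0, 0.0     # constant boundary and starting point, x0 < S

def f(x, t, y, tau):
    """OU transition pdf, from the conditional moments of Section 2:
    m(t|y,tau) = y*exp(-beta*(t-tau)),
    v(t|tau)   = sigma^2/(2*beta) * (1 - exp(-2*beta*(t-tau)))."""
    d = t - tau
    mean = y * np.exp(-beta * d)
    var = sigma**2 / (2.0 * beta) * (1.0 - np.exp(-2.0 * beta * d))
    return np.exp(-(x - mean) ** 2 / (2.0 * var)) / np.sqrt(2.0 * np.pi * var)

def psi(t, y, tau):
    """psi[S,t|y,tau] specialized to zero mean and a constant boundary."""
    d = t - tau
    return 0.5 * beta * (y / np.sinh(beta * d) - S / np.tanh(beta * d)) * f(S, t, y, tau)

def fpt_density(t_max=10.0, n=2000):
    """Forward substitution for
    g(t) = -2*psi(t|x0,t0) + 2 * int_{t0}^{t} psi(t|S,tau) g(tau) dtau."""
    h = (t_max - t0) / n
    t = t0 + h * np.arange(1, n + 1)
    g = np.zeros(n)
    for k in range(n):
        val = -2.0 * psi(t[k], x0, t0)
        if k > 0:
            val += 2.0 * h * np.sum(psi(t[k], S, t[:k]) * g[:k])
        g[k] = val
    return t, g

t, g = fpt_density()
print("nonnegative:", bool(np.all(g >= -1e-9)),
      " mass on [0,10]:", round(float(g.sum() * (t[1] - t[0])), 4))
```

The same scheme works for any time-dependent boundary and any Gauss-Markov process once h1, h2 and the mean function are supplied; only psi changes. Second, the simulation route used for the non-Markov case taken up next: drive a linear filter with Gaussian white noise so that the output has the desired rational spectral density, and histogram the first crossing times. Here the filter is an illustrative damped oscillator (hence a damped-oscillatory covariance), integrated with crude Euler-Maruyama steps from a deterministic start:

```python
import numpy as np

rng = np.random.default_rng(1)

omega, zeta, amp = 2.0, 0.2, 1.0           # assumed filter parameters
dt, n_steps, n_paths = 0.01, 4000, 20000   # step, horizon, number of paths

def boundary(t):
    # an illustrative decaying threshold; the paper's boundary (soglia) differs
    return 1.5 * np.exp(-0.05 * t) + 0.1

# output of the filter  x'' + 2*zeta*omega*x' + omega^2 * x = amp * white noise
x = np.zeros(n_paths)                       # deterministic start x(0) = 0
v = np.zeros(n_paths)
fpt = np.full(n_paths, np.nan)
alive = np.ones(n_paths, dtype=bool)

for k in range(1, n_steps + 1):
    w = rng.normal(size=n_paths)
    x, v = (x + v * dt,
            v + (-2.0 * zeta * omega * v - omega**2 * x) * dt + amp * np.sqrt(dt) * w)
    crossed = alive & (x >= boundary(k * dt))
    fpt[crossed] = k * dt
    alive &= ~crossed

hit = fpt[~np.isnan(fpt)]
print(f"fraction of paths crossed by t={n_steps*dt:.0f}: {hit.size / n_paths:.3f}")
pdf, edges = np.histogram(hit, bins=60, density=True)   # estimated FPT pdf
```

Since the sample paths are generated independently of one another, the path loop (vectorized above) distributes trivially over processors, which is exactly the property exploited on the parallel machines mentioned in the text.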
setting in ( [ ( funcorr ) ] ) , the gaussian process is no longer markov and its spectral density is given by thus being of a rational type .since in ( [ ( spectral ) ] ) the degree of the numerator is less than the degree of the denominator , it is possible to apply the simulation algorithm described in section 3 in order to estimate the fpt pdf of the process .the simulation procedure has been implemented by a parallel fortran 90 code on a 128-processor ibm sp4 supercomputer , based on mpi language for parallel processing .the number of simulated sample paths has been set equal to .the estimated fpt pdf through the boundary ( [ soglia ] ) with and are plotted in figures 2(b)(d ) for gaussian processes with correlation function ( [ ( funcorr ) ] ) having , respectively . for the same processes , figures 3(b)(d )show the estimated fpt pdf through the boundary ( [ soglia ] ) with and .note that as increases , the shape of the fpt pdf becomes flatter and the related mode increases .furthermore , as figures 2(a)-2(b ) and figures 3(a)-3(b ) show , is very similar to for small values of .work performed within a joint cooperation agreement between japan science and technology corporation ( jst ) and universit di napoli federico ii , under partial support by indam ( g.n.c.s ) .99 first passage time densities evaluation for simulated gaussian process . _ cybernetics and system _ * 1 * , ( 2000 ) , 301306 .simulation of gaussian processes and first passage time densities evaluation . _ lecture notes in computer science _ * 1798 * , ( 2000 ) , 319333 .parallel simulations in fpt problems for gaussian processes . science and supercomputing at cineca .report 2001 , ( 2001 ) , 405412 . a computational approach to first - passage - time problem for gauss - markov processes .prob . _ * 33 * , ( 2001 ) , 453482 . on the asymptotic behavior of first passage time densities for stationary gaussian processes and varying boundaries . _ methodology and computing in applied probability _( to appear ) .numerical simulation of stationary and non stationary gaussian random processes ._ siam review _ * 7 * , ( 1965 ) , 6880 . random walk models for the spike activity of a single neuron ._ biophysical journal _ * 4 * , ( 1964 ) , 4168 .certain properties of gaussian processes and their first - passage times . _ j. r. statist .soc . _ _ ( b ) _ * 27 * , ( 1965 ) , 505522 .diffusion processes and related topics in biology .springer - verlag , new york , ( 1977 ) .an outline of theoretical and algorithmic approaches to first passage time problems with applications to biological modeling . _ math .japonica _ * 50 * , ( 1999 ) , 247322 . a note on first passage time for gaussian processes and varying boundaries ._ ieee trans .inf . theory _ * 29 * , ( 1983 ) , 454457 . on the evaluation of first passage time densities for gaussian processes ._ signal processing _ * 11 * , ( 1986 ) , 339357 .diffusion models of neuron activity . in _ the handbook of brain theory and neural networks _arbib , ed.),the mit press , cambridge , ( 2002 ) , 343 - 348 . _e. di nardo : _ dipartimento di matematica , universit della basilicata , contrada macchia romana , potenza , italy + _ a.g .nobile : _ dipartimento di matematica e informatica , universit di salerno , via s. allende , i-84081 baronissi ( sa ) , italy + _ e. pirozzi and l.m .ricciardi : _ dipartimento di matematica e applicazioni , universit di napoli federico ii , via cintia , napoli i-80126 , italy
This paper outlines computational methods for the approximate solution of the integral equations satisfied by the neuronal firing probability density, together with an algorithm for generating sample paths from which histograms estimating the firing densities can be constructed. Our results originate from the study of non-Markov stationary Gaussian neuronal models, with the aim of determining the neuron's firing probability density function. A parallel algorithm has been implemented to simulate large numbers of sample paths of Gaussian processes characterized by damped oscillatory covariances in the presence of time-dependent boundaries. The analysis based on this simulation procedure provides an alternative research tool whenever closed-form results or analytic evaluations of the neuronal firing densities are not available.
recently modern information - communication - technology ( ict ) has opened us access to large amounts of stored digital data on human communication , which in turn has enabled us to have unprecedented insights into the patterns of human behavior and social interaction .for example we can now study the structure and dynamics of large - scale human communication networks and the laws of mobility , as well as the motifs of individual behavior .one of the robust findings of these studies is that human activity over a variety of communication channels is inhomogeneous , such that high activity bursts of rapidly occurring events are separated by long periods of inactivity .this feature is usually characterized by the distribution of inter - event times , defined as time intervals between , e.g. , consecutive e - mails sent by a single user .this distribution has been found to have a heavy tail and show a power - law decay as . in human behaviorobvious causes of inhomogeneity are the circadian and other longer cycles of our lives as results of natural and societal factors .malmgren _ et al . _ suggested that an approximate power - law scaling found in the inter - event time distribution of human correspondence activity is a consequence of circadian and weekly cycles affecting us all , such that the large inter - event times are attributed to nighttime and weekend inactivity .as an explanation they proposed a cascading inhomogeneous poisson process , which is a combination of two poisson processes with different time scales .one of them is characterized by the time - dependent event rate representing the circadian and weekly activity patterns , while the other corresponds to the cascading bursty behavior with a shorter time scale .their model was able to reproduce an apparent power - law behavior in the inter - event time distribution of email and postal mail correspondence .in addition they calculated the fano and allan factors to indicate the existence of some correlations for the email data as well as for their model of inhomogeneous poisson process , with quite good comparison .however , the question remains whether in addition to the circadian and weekly cycle driven inhomogeneities there are also other correlations due to human task execution that contribute to the inhomogeneities observed in communication patterns , as suggested , e.g. , by the queuing models .there is evidence for this by goh and barabsi , who introduced a measure that indicates the communication patterns to have correlations .recently , wu _ et al ._ have studied the modified version of the queuing process proposed in by introducing a poisson process as the initiator of localized bursty activity .this was aimed at explaining the observation that the inter - event time distributions in short message ( sm ) correspondence follow a bimodal combination of power - law and poisson distributions .the power - law ( poisson ) behavior was found dominant for ( ) .since the event rates extracted from the empirical data have the time scales larger than ( also measured empirically ) , a bimodal distribution was successfully obtained . however , in their work the effects of circadian and weekly activity patterns were not considered , thus needing to be investigated in detail . 
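The basic observable just introduced is easy to compute from raw time stamps. Below is a minimal sketch of ours (the logarithmic binning is our choice; it is the standard way to resolve a suspected heavy tail), together with a toy comparison of a bursty and a Poissonian event sequence:

```python
import numpy as np

def interevent_pdf(timestamps, n_bins=30):
    """Log-binned estimate of the inter-event time distribution P(tau)
    from one user's time-stamped events (in seconds)."""
    tau = np.diff(np.sort(np.asarray(timestamps, float)))
    tau = tau[tau > 0]
    edges = np.logspace(np.log10(tau.min()), np.log10(tau.max()), n_bins + 1)
    counts, _ = np.histogram(tau, bins=edges)
    centers = np.sqrt(edges[:-1] * edges[1:])          # geometric bin centers
    pdf = counts / (np.diff(edges) * counts.sum())     # normalized density
    return centers, pdf

# toy usage: heavy-tailed gaps vs. homogeneous Poisson gaps
rng = np.random.default_rng(0)
bursty = np.cumsum(rng.pareto(1.2, size=5000) + 1.0)   # power-law-like gaps
poisson = np.cumsum(rng.exponential(10.0, size=5000))
for name, events in [("bursty", bursty), ("poisson", poisson)]:
    x, p = interevent_pdf(events)
    slope = np.polyfit(np.log(x[p > 0]), np.log(p[p > 0]), 1)[0]
    print(f"{name}: rough log-log slope of P(tau) ~ {slope:.2f}")
```

A straight line over several decades in the log-log plot (a roughly constant slope) is the signature of the power-law decay discussed above, while the Poisson sequence falls off much faster.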
as the circadian and weekly cycles affect human communication patterns in quite obvious ways , taking place mostly during the daytime and differently during the weekends , our aim in this paper is to remove or de - season from the data the temporal inhomogeneities driven by these cycles .then the study of the remaining de - seasoned data would enable us to get insight to the existence of other human activity driven correlations .this is important for two reasons .first , communication patterns tell about the nature of human behavior .second , in devising models of human communication behavior the different origins of inhomogeneities should be properly taken into account ; is it enough to describe the communication pattern by an inhomogeneous poissonian process or do we need a model to reflect correlations in other human activities , such as those due to task execution ? in this paper , we provide a systematic method to de - season the circadian and weekly patterns from the mobile phone communication data .firstly , we extract the circadian and weekly patterns from the time - stamped communication records and secondly , these patterns are removed by rescaling the timings of the communication events , i.e. phone calls and sms .the rescaling is performed such that the time is dilated or contracted at times of high or low event activity , respectively .finally , we obtain the inter - event time distributions by using the rescaled timings and comparing them with the original distributions to check how the heavy tail and burstiness behavior are affected . as the main results we find that the de - seasoned data still shows heavy tail inter - event time distributions with power - law scalings thus indicating that human task execution is a possible cause of remaining burstiness in mobile phone communication .this paper is organized as follows . in section [sect : analysis ] , we introduce the methods for de - seasoning the circadian and weekly patterns systematically in various ways . by applying these methods the values of burstiness of inter - event time distributions are obtained and subsequently discussed .finally , we summarize the results in section [ sect : summary ] .we investigate the effect of circadian and weekly cycles on the heavy - tailed inter - event time distribution and burstiness in human activity by using the mobile phone call ( mpc ) dataset from a european operator ( national market share ) with time - stamped records over a period of days starting from january 2 , 2007 .the data of january 1 , 2007 are not considered due to its rather unusual pattern of human communication .we have only retained links with bidirectional interaction , yielding users , links , and events ( calls ) .for the analysis of short message ( sm ) dataset , see appendix .we perform the de - seasoning analysis by defining first the observable . for an individual service user , denotes the number of events at time , where ranges from seconds , i.e. the start of january 2 , 2007 at midnight , to seconds ( 119 days ) .the total number of events is called the strength of user . in general , for a set of users , the number of events at time is denoted by . can represent one user , a set of users , or the whole population .when the period of cycle is given , the event rate with is defined as for convenience , we redefine the periodic event rate with period as with any non - negative integer for . 
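Before the formal definitions, a compact sketch of the full pipeline may be useful. It is ours, not the authors' code, and it hard-codes choices left open above: hourly bins for the event rate, and the normalization dt*/dt = rho(t mod P)/mean(rho) for the rescaling introduced next, which keeps the average flow of rescaled time equal to that of real time:

```python
import numpy as np

def periodic_event_rate(timestamps, period, bin_width):
    """Event rate over one cycle (e.g. period = 86400 s for the circadian,
    604800 s for the weekly pattern), estimated by folding the time stamps
    modulo the period and binning; returned in events per second."""
    t = np.asarray(timestamps, float)
    n_cycles = max((t.max() - t.min()) / period, 1.0)   # crude normalization
    n_bins = int(round(period / bin_width))
    counts, _ = np.histogram(np.mod(t, period), bins=n_bins, range=(0.0, period))
    return counts / (n_cycles * bin_width)

def deseason(timestamps, rho, period):
    """Rescale event times t -> t* with dt*/dt = rho(t mod P)/mean(rho), so
    time is dilated at moments of high activity.  Normalizing by the mean
    rate (our assumption) makes one cycle of t* exactly equal to P."""
    rho = np.asarray(rho, float)
    bin_w = period / rho.size
    rate = rho / rho.mean()
    cum = np.concatenate([[0.0], np.cumsum(rate * bin_w)])  # cum[-1] == period
    k, phase = np.divmod(np.asarray(timestamps, float), period)
    idx = np.minimum((phase / bin_w).astype(int), rho.size - 1)
    return k * period + cum[idx] + rate[idx] * (phase - idx * bin_w)

def burstiness(tau):
    """B = (sigma - m)/(sigma + m); 1 = maximally bursty, 0 = Poisson-like,
    -1 = perfectly regular."""
    tau = np.asarray(tau, float)
    return (tau.std() - tau.mean()) / (tau.std() + tau.mean())

# toy usage: daily-modulated events over 119 days, as in the dataset span
rng = np.random.default_rng(1)
day = 86400.0
events = np.sort(np.concatenate(
    [k * day + rng.normal(14 * 3600, 3 * 3600, 60) for k in range(119)]))
rho = periodic_event_rate(events, day, 3600.0)
B_orig = burstiness(np.diff(events))
B_star = burstiness(np.diff(deseason(events, rho, day)))
print(f"B = {B_orig:.3f} -> B* = {B_star:.3f} after de-seasoning")
```

The piecewise-linear map in deseason is monotone, so the ordering of events, and hence the notion of consecutive events, is preserved under the rescaling.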
By means of the event rate, we define the rescaled time $t^*$ through the transformation $dt^*/dt = \rho^*_P(t)/\bar{\rho}$ of the time variable, i.e. $t^*(t) = \frac{1}{\bar{\rho}} \int_0^t \rho^*_P(t')\,dt'$, with $\bar{\rho}$ the mean event rate. Here $\rho^*_P(t) = \bar{\rho}$ means that there exists no cyclic pattern in the frame of the rescaled time. The time is dilated (contracted) at moments of high (low) activity. In order to check whether the rescaling affects the inter-event time distributions and whether some burstiness still remains, we reformulate the inter-event time distributions using the rescaled event times and compare them with the original distributions. The definition of the rescaled inter-event time follows directly from the rescaled time: considering two consecutive events of a user occurring at times $t_i$ and $t_{i+1}$, the original inter-event time is $\tau = t_{i+1} - t_i$, and the corresponding rescaled inter-event time is $\tau^* = t^*(t_{i+1}) - t^*(t_i)$. To find out how much the de-seasoning affects burstiness, we measure the burstiness of events as proposed in , where the burstiness parameter is defined as $B = (\sigma - m)/(\sigma + m)$, with $\sigma$ and $m$ the standard deviation and the mean of the inter-event time distribution, respectively. The value of $B$ is bounded within the range $[-1, 1]$, such that $B \to 1$ for the most bursty behavior, $B = 0$ for neutral or homogeneous Poisson behavior, and $B \to -1$ for completely regular behavior. The burstiness of the original inter-event time distribution is to be compared with that of the rescaled inter-event time distribution for a given period $P$. With the de-seasoning, the burstiness is expected to decrease, and here we are most interested in by what amount it decreases when using $P = 1$ day or $P = 7$ days, i.e. when removing the circadian or weekly patterns.

We apply the same de-seasoning analysis described in the main text to the SM dataset. Figure [fig:sms_individual] shows the various circadian activity patterns of individual users. The main difference from the MPC dataset is that the activity peak is around 11 pm. This feature becomes evident if the averaged event rates are obtained from the same strength groups or from groups with broad ranges of strength, shown in the left columns of Figs. [fig:sms_strength_merge] and [fig:sms_group]; for details of the strength-dependent grouping, see Table [table:sms_group]. The inter-event time distributions and their values of burstiness are also compared. First, the distributions cannot be described by a simple power-law form but rather by a bimodal combination of power-law and Poisson distributions, as suggested by . As shown in Fig. [fig:sms_burstiness], the values of burstiness decrease only slowly as the period increases up to 7 days, which implies that de-seasoning the circadian and weekly patterns does not considerably affect the bursty behavior. Finally, we perform the power spectrum analysis for the SM dataset in Fig.
[fig : sms_powerspec ] and find again that the cyclic patterns longer than can not be removed by the de - seasoning with period .all these results confirm our conclusion that the heavy tail and burstiness are not only the consequence of circadian and other longer cycle patterns but also due to other correlations , such as human task execution .financial support by aalto university postdoctoral program ( hj ) , from eu s 7th framework program s fet - open to ictecollective project no .238597 and by the academy of finland , the finnish center of excellence program 2006 - 2011 , project no .129670 ( mk , kk , jk ) , as well as by tekes ( fidipro ) ( jk ) are gratefully acknowledged .the authors thank a .-barabsi for the mobile phone communication dataset used in this research .10 j .- p .onnela , j. saramki , j. hyvnen , g. szab , m. argollo de menezes , k. kaski , a .-barabsi , and j. kertsz .analysis of a large - scale weighted network of one - to - one human communication ., 9:179 , 2007 .r. d. malmgren , j. m. hofman , l. a. n. amaral , and d. j. watts . characterizing individual communication patterns . in _kdd 09 : proceedings of the 15th acm sigkdd international conference on knowledge discovery and data mining _ , pages 607616 , new york , ny , usa , 2009 .acm press .
The temporal communication patterns of human individuals are known to be inhomogeneous or bursty, which is reflected in the heavy-tailed behavior of the inter-event time distribution. Two main mechanisms have been suggested as the cause of such bursty behavior: (a) inhomogeneities due to the circadian and weekly activity patterns, and (b) inhomogeneities rooted in human task-execution behavior. Here we investigate the roles of these mechanisms by developing, and then applying, systematic de-seasoning methods to remove the circadian and weekly patterns from the time series of mobile phone communication events of individuals. We find that the heavy tails in the inter-event time distributions remain robust under this procedure, which clearly indicates that the task-execution-based mechanism is a possible cause of the remaining burstiness in temporal mobile phone communication patterns.
the author thanks to anonymous referee for helpful and constructive comments .99 j. e. hirsch , proc .nat . acad .sci . * 46 * , 16569 ( 2005 ) [ arxiv : physics/0508025 ] .p. ball , nature * 436 * , 900 ( 2005 ) .d. f. taber , science * 309 * , 4 ( 2005 ) .l. bornmann and h .- d .daniel , embo reports * 10(1 ) * , 2 ( 2009 ) .w. glnzel , scientometrics * 67 * , 315 ( 2006 ) .l. bornmann and h .- d .daniel , scientometrics * 65 * , 391 ( 2005 ) .s. lehmann , a. d. jackson and b. e. lautrup , arxiv : physics/0512238 ; nature * 444 * , 1003 ( 2006 ) .j. e. hirsch , proc .sci . * 104 * , 19193 ( 2007 ) . [ arxiv:0708.0646 ] .l. egghe , issi newsletter * 2(1 ) * , 8 ( 2006 ) .s. b. popov , arxiv : physics/0508113 .d. katsaros , l. akritidis and p. bozanis , j. am . soc . inf .* 60n5 * , 1051 ( 2009 ) .m. schreiber , europhys .lett . * 78 * , 30002 ( 2007 ) [ arxiv : physics/0701231 ] . c. e. shannon , bell syst. tech .j. * 27 * , 379 ( 1948 ) ; * 27 * , 623 ( 1948 ) .r. p. feynman , _feynman lectures on computation , _ edited by anthony j.g .hey and robin w. allen ( addison - weslay , reading , 1996 ) , p. 123. s. kullback and r. a. leibler , annals math .statist .* 22 * , 79 ( 1951 ) .t. m. cover and j. a. thomas , _ elements of information theory _ ( wiley - interscience , new york , 1991 ) .r. gray , _ entropy and information theory _ ( springer - verlag , new york , 1990 ) .s. guiasu , _ information theory with applications _ ( mcgraw - hill , new york , 1977 ) .p. g. hall , statist .* 24 * , 25 ( 2009 ) [ arxiv:0910.3546 ] .r. adler , j. ewing , p. taylor , statist .* 24 * , 1 ( 2009 ) [ arxiv:0910.3529 ] .b. w. silverman , statist .* 24 * , 15 ( 2009 ) [ arxiv:0910.3532 ] .m. schreiber , arxiv:1005.5227v1 [ physics.soc-ph ] .s. redner , j. stat .l03005 ( 2010 ) [ arxiv:1002.0878 ] . m. mitzenmacher , internet mathematics * 1n2 * , 226 ( 2004 ). g. k. zipf , _ human behavior and the principle of least effort , _( addison - wesley , cambridge , 1949 ) .w. li , glottometrics * 5 * , 14 ( 2003 ) . m. v. simkin and v. p. roychowdhury , arxiv : physics/0601192v2 [ physics.soc-ph ] .m. e. j. newman , contemp .phys . * 46 * , 323 ( 2005 ) [ arxiv : cond - mat/0412004 ] .d. j. de s. price , science * 149 * , 510 ( 1965 ) .s. redner , eur .j. b * 4 * , 131 ( 1998 ) [ arxiv : cond - mat/9804163 ] .z. k. silagadze , complex syst .* 11 * , 487 ( 1997 ) [ arxiv : physics/9901035 ]. this paper demonstrates a violation of causality : written in 1999 , it was published in 1997 !l. egghe and r. rousseau , scientometrics * 69 * , 121 ( 2006 ) .k. barcza and a. telcs , scientometrics * 81 * , 513 ( 2009 ) .j. e. iglesias and c. pecharromn , scientometrics * 73 * , 303 ( 2007 ) . m. v. simkin and v. p. roychowdhury , complex syst .* 14 * , 269 ( 2003 ) [ arxiv : cond - mat/0212043 ] .m. v. simkin and v. p. roychowdhury , annals improb .11 * , 24 ( 2005 ) [ arxiv : cond - mat/0305150 ] .albert einstein , quoted in .f. radicchi , s. fortunato , b. markines and a. vespignani , phys .e * 80 * , 056103 ( 2009 ) [ arxiv:0907.1050v2 ] .
a new indicator , a real valued -index , is suggested to characterize a quality and impact of the scientific research output . it is expected to be at least as useful as the notorious -index , at the same time avoiding some its obvious drawbacks . however , surprisingly , the -index is found to be quite a good indicator for majority of real - life citation data with their alleged zipfian behaviour for which these drawbacks do not show up . the style of the paper was chosen deliberately somewhat frivolous to indicate that any attempt to characterize the scientific output of a researcher by just one number always has an element of a grotesque game in it and should not be taken too seriously . i hope this frivolous style will be perceived as a funny decoration only . [ [ section ] ] sound , sound your trumpets and beat your drums ! here it is , an impossible thing performed : a single integer number characterizes both productivity and quality of a scientific research output . suggested by jorge hirsch , this simple and intuitively appealing -index has shaken academia like a storm , generating a huge public interest and a number of discussions and generalizations . a russian physicist with whom i was acquainted long ago used to say that the academia is not a christian environment . it is a pagan one , with its hero - worship tradition . but hero - worshiping requires ranking . and a simple indicator , as simple as to be understandable even by dummies , is an ideal instrument for such a ranking . -index is defined as given by the highest number of papers , , which has received or more citations . empirically , with ranging between three and five . here stands for the total number of citations . and now , with this simple and adorable instrument of ranking on the pedestal , i m going into a risky business to suggest an alternative to it . am i reckless ? not quite . i know a magic word which should impress pagans with an irresistible witchery . claude shannon introduced the quantity which is a measure of information uncertainty and plays a central role in information theory . on the advice of john von neumann , shannon called it entropy . according to feynman , von neumann declared to shannon that this magic word would give him `` a great edge in debates because nobody really knows what entropy is anyway '' . armed with this magic word , entropy , we have some chance to overthrow the present idol . so , let us try it ! citation entropy is naturally defined by ( [ sentropy ] ) , with where is the number of citations on the -th paper of the citation record . now , in analogy with ( [ hind ] ) , we can define the citation record strength index , or -index , as follows where is the maximum possible entropy for a citation record with papers in total , corresponding to the uniform citation record with . note that ( [ sind ] ) can be rewritten as follows where is the so called kullback - leibler relative entropy , widely used concept in information theory . for our case , it measures the difference between the probability distribution and the uniform distribution . the kullback - leibler relative entropy is always a non - negative number and vanishes only if and probability distributions coincide . that s all . here it is , a new index afore of you . concept is clear and the definition simple . but can it compete with the -index which already has gained impetus ? i do not know . 
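Since the displayed formulas did not survive in this copy, the following sketch fixes the computable pieces supported by the text: the h-index, the Shannon entropy S = -sum_i p_i ln p_i of the normalized citation record, S_max = ln N for the uniform record, and the Kullback-Leibler divergence from the uniform record, KL = ln N - S. The closed form of the s-index itself is lost above; the normalization s = (sqrt(N)/4) exp(S/S_max) used below is our reconstruction, chosen to reproduce the sqrt(N) scaling discussed later, and should be treated as an assumption:

```python
import numpy as np

def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    c = np.sort(np.asarray(citations))[::-1]
    return int(np.sum(c >= np.arange(1, c.size + 1)))

def citation_entropy(citations):
    """Shannon entropy S = -sum_i p_i ln p_i with p_i = c_i / sum_j c_j."""
    c = np.asarray(citations, float)
    p = c[c > 0] / c.sum()
    return float(-(p * np.log(p)).sum())

def s_index(citations):
    """Reconstructed s-index (assumption: s = sqrt(N)/4 * exp(S/S_max), with
    S_max = ln N).  Equivalently sqrt(N)/4 * exp(1 - KL/ln N), where KL is
    the Kullback-Leibler divergence from the uniform record.  Needs N >= 2."""
    c = np.asarray(citations, float)
    return float(np.sqrt(c.size) / 4.0 *
                 np.exp(citation_entropy(c) / np.log(c.size)))

record = [150, 80, 40, 20, 10, 5, 5, 2, 1, 0]   # a hypothetical record of ours
print("h =", h_index(record),
      " S =", round(citation_entropy(record), 3),
      " s =", round(s_index(record), 2))
```

With such helpers the drawbacks listed next are easy to reproduce: for instance, multiplying the citation counts of the most-cited entries of a record by ten leaves h unchanged while s responds.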
in fact , it does not matter much whether the new index will be embraced with delight or will be coldly rejected with eyes wide shut . i sound my lonely trumpet in the dark trying to relax at the edge of precipice which once again faces me . nevertheless , i feel -index gives more fair ranking than -index , at least in the situations considered below . some obvious drawbacks of the -index which are absent in the suggested index are the following : * -index does not depend on the extra citation numbers of papers which already have or more citations . increasing the citation numbers of most cited papers by an order of magnitude does not change -index . compare , for example , the citation records and which have and respectively . * -index will not change if the scientist losses impact ( ceases to be a member of highly cited collaboration ) . for example , citation records and both have , while -index drops from 4.8 to 3.0 . * -index will not change if the citation numbers of not most cited papers increase considerably . for example , and + both have , while -index increases from 3.0 to 6.9 . of course , -index itself also has its obvious drawbacks . for example , it is a common case that an author publishes a new article which gains at the beginning no citations . in this case the entropy will typically decrease and so will the -index . i admit such a feature is somewhat counter - intuitive for a quantity assumed to measure the impact of scientific research output . however , the effect is only sizable for very short citation records and in this case we can say that there really exists some amount of objective uncertainty in the estimation of the impact . anyway , you can hardly expect that a simple number can substitute for complex judgments implied by traditional peer review . of course , the latter is subjective . nothing is perfect under the moon . it is tempting , therefore , to try to overcome this possible subjectivity of peer review by using `` simple and objective '' numerical measures . especially because some believe academic managers `` do nt have many skills , but they can count '' . in reality , however , it is overwhelmingly evident that simple numerical indicators , like -index proposed here , neither can eliminate subjectivity in management science nor prevent a dull academic management . but they can add some fun to the process , if carefully used : `` citation statistics , impact factors , the whole paraphernalia of bibliometrics may in some circumstances be a useful servant to us in our research . but they are a very poor master indeed '' . it will be useful to compare two `` servants '' on some representative set of real - life citation records and fig.[sh ] gives a possibility . one hundred citation records were selected more or less randomly from the _ citebase _ citation search engine . fig.[sh ] shows -index plotted against -index for these records . what a surprise ! and indexes are strongly correlated and almost equal for a wide range of their values . of course , the coefficient , , in ( [ sind ] ) was chosen to make these two indexes relatively equal for _ some _ citation records , but i have not expected them to remain close for _ all _ citation records . there is some mystery here . let us try to dig up what it is . recent studies indicate that from the one hand the normalized entropy does not change much from record to record with , and on the other hand , the scaling law ( [ hind ] ) for the -index is well satisfied with . 
these two facts and a simple calculation imply that and indexes are expected to be approximately equal . therefore , the real question is why these regularities are observed in the citation records ? common sense and some experience on the citation habits tell us that these habits are the subject of preferential attachment the papers that already are popular tend to attract more new citations than less popular papers . it is well known that the preferential attachment can lead to power laws . namely , with regard to citations , if the citations are ranked in the decreasing order then the zipf s law says that empirical studies reveal that the zipf s law is a ubiquitous and embarrassingly general phenomenon and in many cases . the citation statistics also reveals it . therefore let us assume the simplest case , distribution ( [ zipf ] ) with . then condition determines the hirsch index and we see that , if the individual citation records really follow the zipf distribution , the hirsch index is a really good indicator as it determines the only relevant parameter of the distribution . in fact , the number of papers are finite and we have a second parameter , the total number of papers . for sufficiently large , therefore , from ( [ hindc ] ) we get the following scaling law instead of ( [ hind ] ) . however , varies from to then varies from to and this explains the observations of . note that the relation ( [ hindc ] ) was already suggested by egghe and rousseau , on the basis of zipf s law , in . for other references where the connections between zipf s law and -index are discussed see , for example , . as for -index , zipfian distribution implies the probabilities ( assuming is large ) and , hence , the following entropy but therefore , this expression gives for and for which are quite close to what was found in ( although for a small data set ) . because is small , we have the following scaling behaviour for the -index from ( [ sind1 ] ) : fig.[ssn ] and fig.[hhn ] demonstrate an empirical evidence for these scaling rules from the _ citebase _ citation records mentioned above . as we see , the scalings ( [ sindn ] ) and ( [ hindn ] ) are quite pronounced in the data . however , there are small number of exceptions . therefore , citation patterns are not always zipfian . inspection shows that in such cases the zipfian behavoir is spoiled by the presence of several renowned papers with very high number of citations . if they are removed from the citation records , the zipfian scalings for and indexes are restored for these records too . to conclude , we wanted to overthrow the king but the king turned out to be quite healthy . the secret magic that makes -index healthy is zipfian behaviour of citation records . under such behaviour , a citation record really has only one relevant parameter and the hirsch index just gives it . however , not all citation records exhibit zipfian behaviour and for such exceptions the new index related to the entropy of the citation record probably makes better justice . but , i m afraid , this is not sufficient to sound our trumpets and cry `` le roi est mort . vive le roi ! '' the magic has an another side however . the zipfian character of citation records probably indicate the prevalence of preferential attachment and some randomness in the citation process . 
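The scaling argument can also be checked numerically. Reconstructing the stripped relations from the derivation above: with c_i = C/(i ln N), the condition c_h = h gives h = sqrt(C/ln N) (the Egghe-Rousseau-type relation (hindc)), and the entropy estimate reads S/S_max = 1/2 + ln ln N / ln N. A quick check on synthetic Zipfian records (generated by us, with the harmonic number approximated by ln N):

```python
import numpy as np

def zipf_record(n_papers, total_citations):
    """Zipfian citation record c_i ~ C/(i ln N), rounded to integers."""
    i = np.arange(1, n_papers + 1)
    return np.rint(total_citations / (i * np.log(n_papers)))

for n, c_tot in [(100, 2000), (300, 10000), (1000, 50000)]:
    c = np.sort(zipf_record(n, c_tot))[::-1]
    h = int(np.sum(c >= np.arange(1, n + 1)))          # empirical h-index
    p = c[c > 0] / c.sum()
    s_norm = float(-(p * np.log(p)).sum()) / np.log(n)  # normalized entropy
    print(f"N={n:4d}  h={h:3d}  sqrt(C/lnN)={np.sqrt(c_tot/np.log(n)):5.1f}  "
          f"S/S_max={s_norm:.3f}  vs  1/2+lnlnN/lnN="
          f"{0.5 + np.log(np.log(n))/np.log(n):.3f}")
```

The empirical h tracks sqrt(C/ln N) and the normalized entropy stays close to its predicted, slowly varying value, which is the regularity behind the near-equality of the two indexes seen in the data.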
One can use the propagation of misprints in scientific citations to estimate how many citers really read the original papers, and the striking answer is that probably only about 20% do, the others simply copying the citations. Of course, it would go too far to assume that "copied citations create renowned papers", but these observations clearly give a caveat against taking any index based on citation counts too seriously. "Not everything that can be counted counts, and not everything that counts can be counted." Such local measures of research impact should therefore be taken with a grain of salt and at least supplemented by different instruments for analyzing the whole citation network, like the one given in .
as the increase of connectivity in and between different complex systems , spread of many diseases or viruses is becoming more and more prevalent in our society .for instance , outbreaks of many infectious diseases , including severe acute respiratory syndromes ( sars ) , swine flu ( h1n1 ) , and the recent ebola virus , caused great damage and loss of life .the spread of computer and mobile phone viruses brought about a great deal trouble to human life and serious damage to economy .understanding the intrinsic mechanisms of those spreading processes and designing efficient control strategies become very important and urgent tasks , which bring together a lot of researchers from areas of biology , sociology , mathematics , physics , engineering , etc .mathematical modeling of epidemic spreading has a long history of more than two hundred years .generally , the population is divided into several classes : susceptible , infected and recovered individuals .susceptible individuals represent those who can contract the infection .infected individuals were previously susceptible individuals and got infected by the disease .recovered individuals are those who have recovered from the infection . in the susceptible - infected - susceptible ( sis ) model , infected individuals can recover from the disease and become susceptible individuals again . while in the susceptible - infected - recovered ( sir ) model , infected individuals no longer get infected after recovery from the disease , which are assumed to get the permanent immunity . in classical epidemiology, a common assumption is that individuals in a class is treated similarly , and have equal probability to contact with everyone else .however , the recent abundance of data demonstrates that both the connectivity pattern and the contact rate are heterogeneous among real - world complex networks , which means the traditional deterministic differential equations and many other related results of epidemic processes are inadequate in real - world situations .this great stimulates the research of epidemics on real - world complex networks . due to the complexity of real - world networks ,the mean - field approach and the generating function approach are used to drive the analytic results of epidemics spreading .one of the remarkable results obtained by pastor - satorras and vespignani shows that in the limit of a network of infinite size , the epidemic threshold of the sis model tends to zero asymptotically in scale - free networks with power - law parameter in ( 2 , 3 ] . for sir model , it was found that in the thermodynamic limit , not only the threshold tends to vanish , but also the time for the stabilization of the infection becomes very small . by using the message - passing approach ,karrer and newman calculated the probabilities for any node and any time to be in state s , i , and r on tree structure .many other explicit results of sir model are obtained by mapping the sir model to the percolation process . also , effects of degree correlations , clustering , weights and directions of edges on epidemic spreading are broadly discussed . on the other hand ,various efficient immunization protocols have been designed for controlling the spread of epidemics on networks .recently much attention has been transferred to epidemic spreading in temporal and multiplex networks .addition to diseases or viruses , there are usually many other substances spreading in networks like information packets , goods , ideas , etc . 
, which depend on the specific types of the networks .epidemic spreading is often coupled with the delivery of these substances .for example , hiv spreads through the exchange of body fluids among individuals in contact networks .computer viruses spread with the delivery of information packets in computer networks .flu often spreads by air traffics among different spatial areas .therefore , understanding the mechanisms of these coupled spreading processes and how these processes affect each other is significant for designing efficient epidemic immunization strategies .meloni et al first studied the effects of traffic flow on epidemic spreading .they found that the epidemic threshold in the sis model decreases as flow increases , and emergence of traffic congestion slows down the spread of epidemics .then , yang et al further studied the relation between traffic dynamics and the sis epidemic model , and found that the epidemic can be controlled by fine tuning the local or global routing schemes .furthermore , they obtained that the epidemic threshold can be enhanced by cutting some specific edges in the network .the impacts of traffic dynamics on sir epidemic model havent been reported in literature . in this paper , we study the traffic - driven sir spreading dynamics in complex networks .we focus on the instantaneous size of infected population , and the final size of ever infected population . based on these two properties ,we study how the packets transmission process governed by given routing protocols affects the epidemic spreading .our model includes two coupled processes : packet delivery process and the epidemic spreading process .we will introduce our model in the context of computer networks .we assume that nodes in the network are identical which can generate , receive and deliver information packets .each node has a queue obeying the first - in - first - out ( fifo ) rule for storing packets .the length of the queue is set infinite .load of a node is the number of packets in its queue .every node generates packets at a rate .for example if , each time an arbitrary node generates one packet definitely and another one with probability 0.5 .the destination nodes of the packets are chosen randomly , and the packets will be removed from the network after arriving at the destination nodes .the transmission of packets is governed by the efficient routing protocol proposed by yan et al . for an arbitrary path of length between node and , denoted as , its routing cost is defined as follows : where is the degree of node , and is a control parameter .the sum runs over all the intermediate nodes of path .the efficient paths for delivering packets are defined to be those which have the minimum routing costs .if there are many efficient paths between two nodes , we randomly chose one for delivering packets . according to eq . 1, determines the routing cost of a path .when , paths with large - degree nodes usually have large routing costs .thus , efficient paths tend to be those paths composed of small - degree nodes when , and vice versa . 
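Equation (1) sums a power of the degree over the intermediate nodes of a path; the symbol of the control parameter is lost in this copy, so it is called beta in the sketch below. This sketch is ours, based on networkx rather than the authors' implementation, and it uses the usual trick of pushing the node cost of each hop's head onto the edge; since the destination's own contribution is common to all candidate paths, the minimizer is unchanged:

```python
import networkx as nx

def efficient_path(G, source, target, beta):
    """Minimum-cost path under C(P) = sum over intermediate nodes of k^beta
    (the efficient routing protocol).  Each edge (u, v) carries the head
    node's cost k(v)^beta; the target's constant term drops out of the argmin."""
    return nx.dijkstra_path(G, source, target,
                            weight=lambda u, v, d: G.degree(v) ** beta)

G = nx.erdos_renyi_graph(500, 0.02, seed=42)
comp = max(nx.connected_components(G), key=len)
src, dst = sorted(comp)[0], sorted(comp)[-1]
# beta > 0 biases paths away from hubs, beta < 0 toward them,
# beta = 0 recovers plain shortest-hop routing
for beta in (-1.0, 0.0, 1.0):
    p = efficient_path(G, src, dst, beta)
    print(f"beta={beta:+.1f}: {len(p) - 1} hops, "
          f"intermediate degrees {[G.degree(v) for v in p[1:-1]]}")
```

Running this makes the trade-off visible: negative beta funnels traffic through a few hubs in few hops, while positive beta produces longer detours through low-degree nodes.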
each time a node can deliver at most packets .when , all the packets can be delivered without delay , there is no traffic congestion .the overall load of the network is constant after a short transient time .when is a constant value , there is a critical packet generation rate .when , there is no traffic congestion .when , the network generates more packets than it can deliver .as a result , the overall load of the network increases with time , which is the traffic congestion phenomenon .we use the order parameter to characterize the traffic congestion , which is as follows : where is the total number of packets in the network at time .when , the generated and delivered packets are balanced , and the network is under the free flow state . when , packets accumulate continuously in the network , which indicates that there exists traffic congestion . as in the traditional sir model ,nodes in the network are divided into three classes : susceptible nodes , infected nodes and recovered nodes .initially , all the nodes in the network are susceptible nodes , which perform the normal function of generating , delivering , and receiving packets , and the network flow is stable. then we randomly select a node to be an infected one which is the original source of the infection .infected nodes generate infection packets instead of normal packets at each time step . with the delivery of these infection packets ,more and more susceptible nodes get infected after receiving the infection packets . infected nodes get recovered and become recovered nodes with probability at each time step . recovered nodes generate normal packets , and they can also make the infection packets into normal packets . thus , recovered nodes block the epidemic spreading .the epidemic spreading ends when all the infected nodes become recovered nodes .the transitions between the susceptible , infected , and recovered nodes for our model are shown in figure 1 .the underlying networks are random networks generated by the erds - rnyi ( er ) model and scale - free networks generated by the static model .also , some real - world networks are used in the simulations .we assume that , at time , the numbers of susceptible nodes , infected nodes and recovered nodes are , and respectively .first , we study the time evolution of our model on the er model , the static model , and the email network .we add the infection to the network at when the network is under free flow state , by randomly selecting a node to become infected . in figure 2 ( a ) , ( b ) and ( c ), we see that decreases with greatly , and then tends to be stable . on the contrary, increases with abruptly , and then converges at , which is the maximum number of recovered ( or ever infected ) node . reflects the range of the infection . differently from and , increases with first , then decreases with .the peak of , denoted by , represents the maximum instantaneous number of infected nodes during the epidemic spreading process . reflects how intense the epidemic spreading is . at any time ,the sum of , , and equals the size of the network .routing parameter is one of the important factors in our model . determines the efficient paths for the delivery of normal and infection packets . in figure 2(d ) , ( e ) , and ( f ) , we see that , and all vary with , and for any of the three , the trends of the curves of different are similar .we focus on the maximum instantaneous size of infected population , and the final size of ever infected population . 
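Both observables, the instantaneous infected population whose peak gives the first quantity and the final recovered population which counts everyone ever infected, can be read off a direct simulation. The sketch below is a much-simplified rendition of the model described above; our simplifications are hop-count routing in place of the efficient protocol, one packet generated per node per step, and infection on any receipt of an infection packet. It also records queue lengths for the load-variance diagnostic introduced next, evaluated here as the variance across nodes of their time-averaged loads (our reading of the garbled definition):

```python
import random
from collections import deque
import networkx as nx
import numpy as np

random.seed(7)
G = nx.erdos_renyi_graph(200, 0.05, seed=7)
G = nx.convert_node_labels_to_integers(
    G.subgraph(max(nx.connected_components(G), key=len)))
N = G.number_of_nodes()
paths = dict(nx.all_pairs_shortest_path(G))   # static routing table
C, mu, steps = 5, 0.05, 500                   # delivery capacity, recovery prob.

state = [0] * N                               # 0 = S, 1 = I, 2 = R
state[random.randrange(N)] = 1                # seed the infection
queues = [deque() for _ in range(N)]          # FIFO queue per node
I_t, loads = [], []

for t in range(steps):
    for v in range(N):                        # generation: (destination, infectious?)
        dst = random.randrange(N)
        if dst != v:
            queues[v].append((dst, state[v] == 1))
    moved = []
    for v in range(N):                        # delivery: at most C packets per node
        for _ in range(min(C, len(queues[v]))):
            dst, bad = queues[v].popleft()
            moved.append((paths[v][dst][1], dst, bad))
    for nxt, dst, bad in moved:
        if bad and state[nxt] == 0:
            state[nxt] = 1                    # susceptible node gets infected
        if bad and state[nxt] == 2:
            bad = False                       # recovered node sanitizes the packet
        if nxt != dst:
            queues[nxt].append((dst, bad))    # keep travelling toward dst
    for v in range(N):                        # recovery with probability mu
        if state[v] == 1 and random.random() < mu:
            state[v] = 2
    I_t.append(sum(s == 1 for s in state))
    loads.append([len(q) for q in queues])
    if I_t[-1] == 0:
        break

mean_load = np.mean(np.array(loads, float), axis=0)
print(f"I_max = {max(I_t)}, R_final = {sum(s == 2 for s in state)}, "
      f"load variance = {np.var(mean_load):.2f}")
```

Sweeping the generation rate or swapping in the efficient-routing weights from the previous sketch reproduces, qualitatively, the dependences discussed below.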
in figure 3 ( a ) , ( b ) and ( c ), we see that both and increase with first , then decrease with .there are optimal which correspond to the maximum and respectively .note that the optimal for and are close , but not necessarily the same .there are jumps in both and when is near zero .to explain these results , we calculate the load variance of nodes , which is defined as follows : where is the load of node at time . is a constant value and is large enough to ensure accurate calculation of the average load of node .when is small , the load distribution is relatively even in the network , and vice versa . in figure 3 ( d ) , ( e ) and ( f ), decreases first and then increases with .there is optimal which leads to the minimum .interestingly , the values of the optimal for , and are close , which indicates that homogeneous load distribution facilitates the epidemic spreading .there is also abrupt decrease in when is near zero .this is because the efficient paths are very different for and , and the load is abruptly redistributed from large - degree nodes to small - degree nodes when increases from below zero to above zero .this also accounts for the jumps in and .the results are consistent for both the models networks and the email network as shown in figure 3 . , and vs. .the network models , the email network , as well as all the parameters are the same as in figure 2 .the results are the average over 100 independent runs ., width=384,height=288 ] packet generation rate also has great impacts on the epidemic spreading . in figure 4, we see that the peak of for is almost 4 times of the peak of for .the peak of for is almost 6 times of the peak of for .also the positions of the peaks for and 0.5 are different . in figure 5 ( a ), we clearly see both the maximum and the maximum increase with .when becomes large , infected nodes will generate more infected packets , which facilitates the epidemic spreading . in figure 5 ( b ) , we see that decreases with , which indicates an increase of dependency on large - degree nodes . to illustrate this , we focus on the top 1% largest degree nodes , andcalculate the average ratio of the number of infection packets a large degree node delivered when it was infected to the number of infection packets it cured after it recovered . in figure 6, increases with , which means that the role large - degree nodes play in facilitating the epidemic spreading becomes more and more remarkable compared to the role they play in inhibiting the epidemic spreading .this explains why the epidemic spreading becomes more dependent on large - degree nodes to spread widely when increases as shown in figure 5 ( b ) . and vs. for different .the results are the average over 100 independent runs . , width=480,height=192 ] , the maximum and the corresponding vs. . the network model andthe parameters are the same as in figure 4 .the results are the average over 100 independent runs ., width=384,height=192 ] vs. .the results are the average over 10 independent runs . , width=288,height=288 ]first , we show the impacts of network density on the epidemic spreading . in figure 7( a ) and ( c ) , we see that the maximum and the maximum increase with average degree for both random networks and scale - free networks .this indicates that more edges facilitate the epidemic spreading . 
In Figure 7(b) and (d), the optimal value of the routing parameter decreases with the average degree, but remains larger than zero. This means that, to realize an intense and wide epidemic spreading, the paths for the delivery of infection packets should generally be biased towards including small-degree nodes; but when the network becomes dense, the degree of dependence on large-degree nodes in the transmission of infection packets should be increased accordingly.

[Figure 7: the maximum infected populations and the corresponding optimal routing parameter vs. the average degree; results are averages over 100 independent runs.]

Then we show the impacts of the power-law parameter on the epidemic spreading in Figure 8. Both maxima first increase with it and then tend to be stable, and the optimal routing parameter increases slightly with it. We infer from Figure 8 that a homogeneous network structure facilitates the epidemic spreading.

[Figure 8: the maximum infected populations and the optimal routing parameter vs. the power-law parameter; results are averages over 100 independent runs.]

When the delivery capacity is a constant value and the packet generation rate is large enough, the packets cannot be delivered in time, and the number of packets accumulated in the network then increases with time; this is the traffic congestion phenomenon. In the simulation, the delivery capacity is set to 10 and the order parameter is calculated according to Eq. 2 to quantify the degree of traffic congestion. In Figure 9(a), we see that there is no traffic congestion (vanishing order parameter) only when the routing parameter lies in [0.6, 1]; otherwise traffic congestion exists and the order parameter is positive. In Figure 9(b), we see that both maxima first increase and then decrease with the routing parameter; the optimal values for the two observables are 0.7 and 0.9, respectively, where there is no traffic congestion. We also find that when the traffic congestion is not serious, e.g. for routing parameters in [1.5, 2] (Figure 9(a)), the epidemic can still spread intensely and widely, as inferred from Figure 9(b).

[Figure 9: the order parameter and the maximum infected populations vs. the routing parameter; results are averages over 100 independent runs.]

Then we fix the routing parameter to zero and study how the two maxima vary with the packet generation rate. In Figure 10, we find that both first increase and then decrease with the rate, consistently for both random networks and scale-free networks. The reason is that when the packet generation rate is small there is no traffic congestion, and the infection becomes intense and widespread as the rate increases; however, when the rate is large enough, traffic congestion appears, which inhibits the epidemic spreading.

[Figure 10: the maximum infected populations vs. the packet generation rate; results are averages over 100 independent runs.]

In addition to the efficient routing protocol, we also study the impacts of other static routing protocols on the epidemic spreading. If the degree factor in Eq. 1 is replaced by a suitable alternative node weight, we obtain the cost function of the optimal routing protocol. For the optimal routing protocol, we only present the results for the two maxima vs. the routing parameter on three real-world networks, in Figure 11. There are again optimal parameter values corresponding to the maximum instantaneous and the maximum ever-infected populations, respectively. When the parameter is near zero, there are also jumps in both quantities, owing to the significant change of the paths used for delivering packets. For the optimal routing and the efficient routing, the two maxima are very close; the difference lies in that the curves for the optimal routing vary much more slowly than those for the efficient routing.
[Figure 11: the maximum infected populations vs. the routing parameter, for the optimal routing and the efficient routing on three real-world networks; results are averages over 100 independent runs.]

In summary, we propose a traffic-driven SIR epidemic model and study the impacts of several factors on it. We find that the epidemic spreading is greatly affected by the load distribution, and that a homogeneous load distribution facilitates the epidemic spreading. Increasing the network density or the network homogeneity enhances the epidemic spreading. Large-degree nodes have dual effects on the epidemic spreading, since large-degree infected nodes facilitate it while large-degree recovered nodes greatly inhibit it. To realize an intense and wide epidemic spreading, the paths for the delivery of packets should generally be biased towards including small-degree nodes. Increasing the packet generation rate generally favors the epidemic spreading; however, when the amount of generated packets is larger than the delivery capacity of the network, there will exist traffic congestion, which blocks the epidemic spreading. Also, we find similar impacts of different static routing protocols on the traffic-driven SIR epidemic spreading. Our work helps in understanding the interplay between traffic dynamics and epidemic spreading, and provides some clues for network immunization.

This work was supported by the Natural Science Foundation of China (Grant No. 61304154), the Specialized Research Fund for the Doctoral Program of Higher Education of China (Grant No. 20133219120032), and the Postdoctoral Science Foundation of China (Grant No. 2013M541673).

s. n. dorogovtsev , a. v. goltsev , j. f. f. mendes , _ rev. mod. phys. _ * 80 * ( 2008 ) 1275 .
a. barrat , m. barthelemy , a. vespignani , dynamical processes on complex networks , cambridge university press , cambridge , 2008 .
m. e. j. newman , networks : an introduction , oxford university press , 2009 .
r. pastor - satorras , c. castellano , p. van mieghem , a. vespignani , arxiv:1408.2701 , 2014 .
a. l. barabsi , _ science _ * 325 * ( 2009 ) 412 .
r. pastor - satorras , a. vespignani , _ phys. _ * 86 * ( 2001 ) 3200 .
z. yang , t. zhou , _ phys. _ * 85 * ( 2012 ) 056106 .
f. d. sahneh , c. scoglio , p. van mieghem , _ ieee / acm transactions on networking ( ton ) _ * 21 * ( 2013 ) 1609 .
m. e. j. newman , _ phys. _ * 66 * ( 2002 ) 016128 .
m. barthlemy , a. barrat , r. pastor - satorras , a. vespignani , _ phys. _ * 92 * ( 2004 ) 178701 .
m. barthlemy , a. barrat , r. pastor - satorras , a. vespignani , _ journal of theoretical biology _ * 235 * ( 2005 ) 275 .
b. karrer , m. e. j. newman , _ phys. _ * 82 * ( 2010 ) 016101 .
e. kenah , j. m. robins , _ phys. _ * 76 * ( 2007 ) 036113 .
j. miller , _ phys. _ * 76 * ( 2007 ) 010101 .
a. v. goltsev , s. n. dorogovtsev , j. f. f. mendes , _ phys. rev. e _ * 78 * ( 2008 ) 051105 .
a. v. goltsev , s. n. dorogovtsev , j. g. oliveira , j. f. f. mendes , _ phys. _ * 109 * ( 2012 ) 128702 .
m. a. serrano , m. bogu , _ phys. _ * 97 * ( 2006 ) 088701 .
j. miller , _ phys. _ * 80 * ( 2009 ) 020901 .
y. gang , z. tao , w. jie , f. zhong - qian , w. bing - hong , _ chinese physics letters _ * 22 * ( 2005 ) 510 .
x. chu , z. zhang , j. guan , s. zhou , _ physica a _ * 390 * ( 2011 ) 471 .
r. pastor - satorras , a. vespignani , _ phys. _ * 65 * ( 2002 ) 036104 .
p. van mieghem , _ computer communications _ * 35 * ( 2012 ) 1494 .
m. starnini , a. machens , c. cattuto , a. barrat , r.
pastor - satorras , _ journal of theoretical biology _ * 337 * ( 2013 ) 89 .c. granell , s. gmez , a. arenas , _ phys .* 97 * ( 2013 ) 128701 .d. w. zhao , l. h. wang , s. d. li , z. wang , l. wang , b. gao , _ plos one _ * 9 * ( 2014 ) e112018 .s. meloni , a. arenas , y. moreno , _ pnas _ * 106 * ( 2009 ) 16897 .h. x. yang , w. x. wang , y. c. lai , y. b. xie , b. h. wang , _ phys . rev .* 84 * ( 2011 )045101 . h. x. yang , z. x. wu , _ j. stat . mech . _ * 3 * ( 2014 ) p03018 .h. x. yang , z. x. wu , b. h. wang , _ phys .* 87 * ( 2013 ) 064801 .g. yan , t. zhou , b. hu , z. q. fu , b. h. wang , _ phys .* 73 * ( 2006 ) 046108 .a. arenas , a. daz - guilera , r. guimer , _ phys .* 86 * ( 2001 ) 3196 .p. erds , a. rnyi , _ publ .inst . hung ._ * 5 * ( 1960 ) 17 .goh , b. kahng , d. kim , _ phys .* 87 * ( 2001 ) 278701 .r. guimer , l. danon , a. diaz - guilera , f. giralt , a. arenas , _ phys .* 68 * ( 2003 ) 065103(r ) .s. w. sun , l. j. ling , n. zhang , g. j. li , r. s. chen , _ nucleic acids research _ * 31 * ( 2003 ) 2443 . l. a. adamic , n. glance , in proceedings of the www-2005 workshop on the weblogging ecosystem , 2005 .k. wang , y. zhang , s. zhou , w. pei , s. wang , t. li , _ physica a _ * 390 * ( 2011 ) 2593 .
We propose a novel SIR epidemic model driven by the transmission of infection packets in networks. Specifically, infected nodes generate and deliver infection packets, causing the spread of the epidemic, while recovered nodes block the delivery of infection packets, which inhibits the epidemic spreading. The efficient routing protocol governed by a control parameter is used in the packet transmission. Through simulation we obtain the maximum instantaneous population of infected nodes and the maximum population of ever-infected nodes, as well as the corresponding optimal values of the control parameter. We find that a more balanced load distribution generally leads to a more intense and wider spread of an epidemic in networks. Increasing either the average node degree or the homogeneity of the degree distribution facilitates epidemic spreading. When the packet generation rate is small, increasing it favors epidemic spreading; however, when it is large enough, traffic congestion appears, which inhibits epidemic spreading.
particle identification in the belle experiment is based upon a composite system of subdetectors , as illustrated in fig .[ belle_pid ] .this hybrid system consists of ionization loss measurements ( de / dx ) in the central drift chamber ( cdc ) , cherenkov light emission measurement in the barrel and endcap aerogel chernkov counters ( acc ) , and flight time measurement in the time of flight ( tof ) system . as indicated in the lower section of this figure, the three systems work together to cover the momentum range of interest . of these recording systems ,the tof system makes the most severe demands on time resolution .indeed , given the 2ns spacing between rf buckets ( and possible collisions ) , it is not known at recording time to which collision a given particle interaction in the tof system corresponds .= 3.2 in precision time recording in a very high rate environment requires an encoding scheme capable of continuous recording , with a minimum of deadtime per logged event . at the time of the construction of the belle experiment at the kekb - factory , a decision was made to unify the entire detector readout ( except for the silicon vertex detector ) on the lecroy 1877 multi - hit tdc module .this fastbus module is based upon the mtd132a asic , which has a 0.5ns resolution encoding , comparable to a number of similar devices .given the limited manpower for daq system development and maintenance , this proved to be a wise choice .the intrinsic time resolution was quite adequate for recording the timing information from the cdc , as well as the amplitude information ( through use of a charge - to - time converter ) for the cdc and acc .the challenge then was to be able to record pmt hits with 20ps resolution , using a multi - hit tdc having 500ps least count , and for collisions potentially separated by only 2ns .this latter constraint meant that traditional techniques using a common start or stop could not be applied , since the bunch collision of interest was not known at the time at which the hits need to be recorded .moreover , in order to avoid incurring additional error due to comparing a separate fiducial time , it is desirable to directly reference all time measurements to the accelerator rf clock .the solution adopted was a so - called time stretcher circuit , developed by one of the authors in conjunction with the lecroy corporation .this work built upon valuable lessons learned in developing a similar recording system for the particle identification detector system of the cplear experiment .the principle of operation is seen in fig .hits are time - dilated with respect to the accelerator clock and recorded at coarser resolution , but in direct proportion to the stretch factor employed .statistically , by also logging the raw hits , this stretch factor can be determined from the data .= 3.2 in as seen in fig .[ ts ] , four timing edges are recorded for each hit signal .the leading edge corresponds to the actual output time of the discriminator .this rising edge is paired with a falling edge , corresponding to the 2nd accelerator reference clock ( rf clock divided by 16 ) occuring after the initial hit timing .the interval of interest is then bounded to be between about 16 - 32 ns . with a tdc least count of 0.5ns , a factor of twenty time expansionis needed the stretch factor . in the figurethe third edge corresponds to the time - expanded version of the interval between the rising and falling edges .a benefit of this technique is that it provides self - calibration . 
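The encoding arithmetic can be emulated in a few lines. The sketch below is ours, not the production firmware, and it uses the numbers quoted above: a reference-clock period of 16 ns so that the gated interval falls in the 16-32 ns window, a stretch factor of 20, and a 0.5 ns TDC least count. Discriminator jitter and the data-driven calibration of the stretch factor, which the next paragraph develops, are ignored:

```python
import numpy as np

T_CLK = 16.0   # ns: reference-clock period matching the quoted 16-32 ns window
LSB   = 0.5    # ns: multi-hit TDC least count
K     = 20.0   # time-stretch factor quoted above

def encode_decode(t_hit):
    """Toy model of one time-stretcher measurement.  Edge 1 is the hit; the
    stop is the 2nd clock edge after it, so the gated interval lies in
    (16, 32] ns.  The interval is replayed stretched by K and digitized with
    the 0.5 ns TDC.  The stop edge itself is clock-synchronous, hence exactly
    representable with this LSB."""
    stop = (np.floor(t_hit / T_CLK) + 2.0) * T_CLK
    delta = stop - t_hit
    delta_measured = np.floor(delta * K / LSB) * LSB / K   # quantized expansion
    return stop - delta_measured                           # reconstructed hit time

rng = np.random.default_rng(3)
hits = rng.uniform(0.0, 1.0e4, size=200_000)               # hit times in ns
err = np.array([encode_decode(t) for t in hits]) - hits
print(f"effective rms resolution: {1e3 * err.std():.1f} ps "
      f"(quantization limit LSB/K/sqrt(12) = {1e3 * LSB / K / np.sqrt(12):.1f} ps)")
```

The effective least count of 0.5 ns / 20 = 25 ps is what lets a coarse multi-hit TDC deliver TOF-grade timing, and since the raw edges are recorded alongside the expanded ones, the stretch factor K can be recovered statistically from the data, as discussed next.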
by recording a large number of events , the stretch factor can be extracted from the data itself since the raw and expanded signals are recorded . a 4th edge is provided , two clock rising edges after the 3rd edge , to provide a return to a known state before the next pulse . an obvious drawback in this scheme is that the deadtime for each hit will be something like 320 - 640ns , as will be discussed later . in more detail , the signal chain of the current belle tof electronics is sketched in fig . [ toffee ] . high and low level discriminators are used , with the high - level used to reject background photon hits in the tof and a low - level threshold used to provide the best possible leading edge timing . the charge of triggered events is also recorded with a charge to time ( q - to - t ) asic , which is recorded with the same common tdc module . charge recording is needed to correct for amplitude dependent timing effects in the discriminator itself . the tof readout system has worked well for almost a decade . increased luminosity ( already 60% over design ) has led to much higher single channel rates than had been specified in the design . from the beginning , the maximum design specification was 70khz of single particle interaction rate for each channel . at this rate the expected inefficiency would be a few percent , comparable to the geometric inefficiency ( due to cracks between scintillators ) . already the world s highest luminosity collider , the kekb accelerator can now produce in excess of one million b meson pairs per day . upgrade plans call for increasing this luminosity by a factor of 30 - 50 , providing huge data samples of 3rd generation quark and lepton decays . precise interrogation of standard model predictions will be possible , if a clean operating environment can be maintained . extrapolation of current occupancies to this higher luminosity mandates an upgrade of the readout electronics . the current system already suffers from significant loss of efficiency with higher background rates , as may be seen in fig . [ occ ] . in considering an upgrade to the tof readout electronics , it is worthwhile to consider the needs of an upgraded pid system for belle . a comparative study of the belle system , as depicted in fig . [ belle_pid ] , with that of the babar pid system is informative . it is clear in fig . [ pidbakeoff ] that the detector of internally reflected cherenkov light ( dirc ) of babar has a higher efficiency and lower fake rate than the hybrid tof / acc scheme used by belle . indeed , it was realized in the construction stage of belle that such a dirc - type detector would have merits , and prototypes were explored . while these results were very promising , the schedule risks led the collaboration to stick with technologies in which significant time and effort had already been invested . thinking about an upgrade , it is reasonable to revisit the choice of technology . in the intervening decade , significant progress has been made in the development of ring imaging cherenkov ( rich ) detectors , as well as detectors based upon the arrival time of the cherenkov photons , such as the correlated cherenkov timing ( cct ) and time of propagation ( top ) counters . because of the great cost incurred in the procurement and construction of the csi crystal calorimeter , it is planned not to upgrade the barrel section .
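a quick back - of - the - envelope check , under the assumption ( ours ) of poisson - distributed hits and a simple non - paralyzable per - hit deadtime , shows why the quoted 70khz specification maps onto a few - percent inefficiency :

```python
rate = 70e3                      # single-channel hit rate [hz], design spec
for dead in (320e-9, 640e-9):    # per-hit deadtime quoted in the text [s]
    # probability that a hit lands inside the deadtime of a previous one ,
    # for a non-paralyzable channel : rate*dead / (1 + rate*dead)
    loss = rate * dead / (1.0 + rate * dead)
    print(f"deadtime {dead*1e9:.0f} ns -> inefficiency {100*loss:.1f} %")
# -> roughly 2 - 4 % , i.e. 'a few percent' as stated in the text
```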
as a consequence , the volume available for the tof / acc replacement detector is rather limited . therefore a rich type detector has not been pursued . the most promising technologies to date are those illustrated in fig . [ pidconcept ] . the top concept uses timing in place of one of the projected spatial dimensions to reconstruct the cherenkov emission ring . a focusing dirc principally uses geometry to reconstruct the cherenkov ring segments . however , in this case precision timing is still very useful for two important reasons . first , it allows for the possibility of using timing to correct for chromatic dispersion in the emission angle of the cherenkov photon . and second , fine timing allows time of flight to be measured using the quartz radiator bar . therefore , in both of the viable detector options considered , a large number of fine timing resolution recording channels are required . in the case of a finely segmented focusing dirc option , the number of readout channels could be comparable to that of the current silicon vertex detector . clearly if such a detector is to be viable , significant integration of the readout electronics will be essential . not shown is a proposal for a multi - segmented tof detector consisting of short scintillator bars . while this option remains viable ( and the electronics presented would work well with such a system ) , the pid performance degradation of such a system is probably unacceptable . of the choices listed , the most attractive in terms of performance is a focusing dirc detector , if the issues of the photodetector and readout can be addressed . either as an upgrade of only the readout electronics or as a prototype for a higher channel count pid detector , it is worth considering improvements to the existing readout . the time stretcher technique has worked very well and belle has been able to maintain approximately 100ps resolution performance with the tof system . a slow degradation with time is consistent with loss of light output . detailed monte - carlo simulation has been able to reproduce much of the performance of the tof system and the degradation is consistent with light loss due to crazing of the scintillator surface . a larger concern is the significant degradation of tof system performance due to high hit rates . while the multi - hit tdc is capable of keeping up with high rates ( though the limited number of recorded edges ( 16 ) also leads to inefficiency ) , by its very nature the time stretcher output deadtime cannot be significantly reduced . recently , the clock speed was doubled , to help reduce this effect . nevertheless , at ever higher hit rates , the deadtime leads to ever increasing inefficiency . a logical solution to this problem is to introduce a device which has buffering . also , while taking the effort to reduce the deadtime , it makes sense to consider a much more compact form - factor . this was done with the thought toward moving to a larger number of readout channels in a future belle pid upgrade , as mentioned earlier . one proposed solution is the monolithic time stretcher ( mts ) chip , a prototype of which is shown in fig . [ mts1 ] . the fundamental logic of the device is identical to that currently in use with two major changes : 1 . high density ; 2 . multi - hit capability . high density is achieved by replacing discrete emitter - coupled logic components on daughter cards with a full custom integrated circuit . this higher integration permits having multiple time stretcher channels for each input .
by toggling to a secondary output channel , the deadtime can be significantly reduced . once a hit is processed in one output channel , the next is armed to process a subsequent hit . in fig . [ mts1 ] the 8 channel repeating structure of each time stretcher circuit is clearly seen in the die photograph . the basics of the time - stretcher circuit are visible in fig . [ ts_ckt ] . a one - shot circuit at the upper left leads to an immediate output signal , as well as starting the charging current . pipelining of the hit signal continues for two clock cycles , after which the charging current is switched off and the discharge current is switched on . a comparator monitors the voltage induced on the storage capacitor due to charging and discharging , providing an output signal to indicate the stretched time when the voltage is discharged . the stretch factor is given by the ratio of the charging and discharging currents , k = i_charge / i_discharge . each input channel of the mts1 has two time stretcher circuits , the second corresponding to the secondary output when the primary channel is active . each output is recorded by a separate tdc channel . with this configuration a 10% deadtime for a single channel of time stretcher can be reduced to 1% . as the incremental cost of additional tdc channels is rather low , it is possible to consider additional buffering depths , which would reduce the deadtime to d^n , where d is the single channel deadtime fraction and n is the buffer depth , though that was not explored beyond a depth of two in this device . reduction of cross - talk and electro - magnetic interference is enhanced by the use of low voltage differential signalling ( lvds ) . mts1 is fabricated in a taiwan semiconductor manufacturing corporation ( tsmc ) cmos process . when considering a photodetector with a large number of channels , the form factor of this device is very attractive ; as shown for comparison in fig . [ mts_comp ] , a substantial reduction in size has been achieved . on the left is a 16 - channel fastbus - sized time stretcher card used currently in the belle experiment . inset is a test board with one of the mts1 packaged devices for comparison , where a dime has been placed on the board for scale . with this level of integration it becomes feasible to consider integration of the time stretcher and tdc electronics on detector , as is being done for detector subsystems in the lhc experiments . in order to test the performance of the mts1 , a multi - hit tdc should be used . as a demonstration of the power of this time stretching technique , a field programmable gate array ( fpga ) can be used as this tdc , where the results from a simple gray - code counter implementation of the hit time recording may be seen in fig . [ timing_resol ] . the rms of the distribution is about 840ps for the xilinx spartan-3 device used . this resolution could be improved by use of a faster fpga , though is sufficient to obtain the test results shown below . indeed , it is worth noting that this combined time - stretcher + fpga technique is very powerful for two important reasons : 1 . low - cost , high - density tdc implementation ; 2 . deep and flexible hit buffering and trigger matching logic .
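to make the buffering argument concrete , here is a small monte - carlo sketch ( our own construction , with arbitrary rate and deadtime values chosen so that a single channel is of order 20% dead ) that simulates poisson hits arriving at a channel served by n parallel stretcher buffers and counts the lost fraction :

```python
import numpy as np

def lost_fraction(rate, deadtime, n_buffers, n_hits=200_000, seed=1):
    """fraction of hits lost when n_buffers stretcher circuits share a channel."""
    rng = np.random.default_rng(seed)
    t = np.cumsum(rng.exponential(1.0 / rate, n_hits))   # poisson arrivals
    busy_until = np.zeros(n_buffers)                     # per-buffer release time
    lost = 0
    for hit in t:
        free = busy_until <= hit
        if free.any():
            busy_until[np.argmax(free)] = hit + deadtime # first free buffer
        else:
            lost += 1                                    # all buffers busy
    return lost / n_hits

rate, dead = 500e3, 400e-9        # arbitrary values : rate*dead = 0.2
for n in (1, 2, 4):
    print(n, lost_fraction(rate, dead, n))
# the losses drop steeply with n , in qualitative agreement with the
# d**n estimate quoted in the text
```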
a test sweep of the mts1 input is shown in fig . [ mts1_linear ] , where it should be noted that due to the encoding scheme it is only meaningful to scan within a time expansion clock cycle period . a scan of expansion ratios was performed and the best results were obtained for stretch factors of 40 - 50 . as can be seen , there is some non - linearity in the expanded time . this is more clearly seen when a plot of the residual distribution is made by subtracting off the linear fit , as shown in fig . [ mts1_resid ] . a periodic structure is seen , roughly consistent with the expansion clock period , if the negative timing dips are correlated to transition edges . as with the hptdc device developed at cern for the alice detector , a fine calibration is needed to obtain a precision comparable to the current belle system . applying such a calibration , determined in a separate data set , significantly improved linearity and residuals are obtained . the subsequent results are histogrammed in fig . [ mts1_resol ] . as can be seen , the timing resolution fits well to a double gaussian , with a narrow sigma less than 20ps , which is comparable to ( and actually slightly better than ) the existing belle system . this result is consistent with the expectation from the fpga tdc used , where the 840ps fpga resolution divided by a stretch factor of 40 - 50 corresponds to roughly 17 - 21ps , and the measured sigma is about 15ps . it is possible that a finer resolution fpga tdc would allow for an even more precise timing determination . in practice the systematic effects of the upstream discriminator and its amplitude dependent threshold crossing ( and comparator overdrive ) dependence make any further improvements difficult . nevertheless it is an interesting question for future exploration . this timing resolution is comparable to that obtained with the hptdc after careful non - linearity calibration . the broader gaussian distribution and significant non - gaussian tails are correlated with expansion clock feedthrough to the ramping circuit . this could be improved in a future version with better layout isolation . the process used had only 3 metal routing layers available and migration to a finer feature size process would allow for dedicated shields and better power routing . in order to reduce deadtime a second time - stretcher circuit , with a separate output , is provided for each input channel . this second circuit becomes armed when the primary stretcher circuit is running . use of such a scheme can significantly reduce data loss due to the arrival of a subsequent hit during operation of the first stretcher circuit . the loss factor may be expressed as d^n , where d is the single channel deadtime fraction and n is the number of buffer stages . for n = 2 , the case prototyped here , a large existing deadtime of 20% could be reduced to 4% . moreover , this technique can be extended to an even larger number of buffer channels , a realistic possibility when using a low cost fpga - based tdc .
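the fpga tdc mentioned above is described only as a simple gray - code counter ; the sketch below ( our own minimal illustration , not the actual firmware ) shows the binary / gray conversion such a counter relies on , together with a lookup - table style non - linearity correction of the kind applied to both the mts1 and the hptdc :

```python
def bin_to_gray(b: int) -> int:
    # gray code : adjacent counter states differ by a single bit ,
    # so an asynchronous hit can latch the counter without large errors
    return b ^ (b >> 1)

def gray_to_bin(g: int) -> int:
    b = 0
    while g:
        b ^= g
        g >>= 1
    return b

def calibrated_time(raw_code: int, lsb_ns: float, inl_table: dict) -> float:
    """convert a latched gray code to a time , correcting integral
    non - linearity with a per - bin table measured from calibration data."""
    code = gray_to_bin(raw_code)
    return code * lsb_ns - inl_table.get(code, 0.0)
```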
in the case of 4 outputs , a 20% single time stretcher deadtime would become completely negligible ( of order 0.2% ) . apart from the arming circuitry , the second time stretcher channel is identical to the primary . testing was performed with double - pulse events and the result for the second channel is seen in fig . [ mts1_buff ] . note that these secondary channels have a time - stretch factor that is systematically smaller . as the same reference currents are mirrored in all channels , it is believed that this is due to ramp window reduction caused by latency in the arming logic . an important check of performance of the mts1 is the impact of time stretcher operation on one channel while another is operating . this check has been performed in fig . [ nts32_crosstalk ] , where the timing of the first channel is fixed and the timing relation of the signal in channel 2 is varied . the impact of operation of this second channel is clear during the ramping portion of the readout cycle , as well as at the threshold crossing at the end of the ramping interval . while this effect can be calibrated out to some extent , just like the effects of the clock feedthrough , this perturbation to the circuit would be better mitigated through better isolation in the ic layout . an improved layout paired with future , higher clock frequency fpgas could open the possibility of very dense channel count , sub-10ps resolution tdc recording . for many applications the hptdc is perfectly suitable and gives comparable time resolution to the mts1 + fpga tdc . in both cases a non - linearity correction is required to obtain this resolution . however the time encoding itself is only part of the issue for obtaining excellent timing resolution from a detector output . correction for time slew in the discriminator threshold crossing is critical . moreover the addition of many channels of high - speed discriminator inside a detector is a noise and power concern . compact , high - speed waveform recording may be a promising next evolutionary step in the readout of precision timing detectors . this work was supported by the us - japan foundation and the department of energy advanced detector research award number de - fg02 - 06er41424 .
low - voltage differential signaling ( lvds ) uses high - speed analog circuit techniques to provide multi - gigabit data transfers on copper interconnects . it is defined under the electronic industries alliance ( eia ) 644 - 1995 standard .
distinguishing light mesons which contain only up / down quarks ( pions ) from those containing a strange quark ( kaons ) over the typical meter length scales of a particle physics detector requires instrumentation capable of measuring flight times with a resolution on the order of 20ps . in the last few years a large number of inexpensive , multi - channel time - to - digital converter ( tdc ) chips have become available . these devices typically have timing resolution performance in the hundreds of ps regime . a technique is presented that is a monolithic version of the `` time stretcher '' solution adopted for the belle time - of - flight system to address this gap between the resolution needed and the intrinsic multi - hit tdc performance .
elastic hadron scattering constitutes a hard challenge for qcd . the problem concerns the large distances involved ( confinement ) , which renders difficult the development of a formal nonperturbative calculational scheme for scattering states , able to describe soft diffractive processes . at this stage analyticity , unitarity , crossing and their consequences still represent a fundamental framework for the development of theoretical ideas , aimed to reach efficient descriptions of the experimental data involved . in this context , dispersion relations , connecting real and imaginary parts of the scattering amplitude , play an important role as a useful mathematical tool in the simultaneous investigation of particle - particle and antiparticle - particle scattering . dispersion relations in integral form , for hadronic amplitudes , were introduced in the sixties , as consequences of the cauchy s theorem and the analytic properties of the scattering amplitude , dictated by unitarity . however , two kinds of limitations characterize this _ integral _ approach : ( 1 ) its nonlocal character ( in order to evaluate the real part , the imaginary part must be known in all the integration space ) ; ( 2 ) the restricted class of functions that allows analytical integration . later on it was shown that , for hadronic forward elastic scattering _ in the region of high and asymptotic energies _ , these integral relations can be replaced by derivative forms . since then , the formal replacement of integral by derivative relations and their practical use have been widely discussed in the literature , mainly in the seminal papers by kolář and fischer ; see for a recent critical review on the subject . despite the results that have been obtained with the _ derivative _ approach , the high - energy condition ( specifically , center - of - mass energies above 10 - 20 gev ) makes difficult any attempt to perform global fits to the experimental data connecting information from low and high energy regions . a first step in this direction appears in ref . , where new representations for the derivative relations , extended to low energies , have been introduced by cudell , martynov and selyugin , to which we shall refer in what follows . however , a rigorous formal extension of the derivative dispersion relations down to the physical threshold , providing a complete analytical equivalence between integral and differential approaches , is still missing and that is the point we are interested in .
in this work , we first demonstrate that , for a class of functions of physical interest as forward elastic scattering amplitudes , the integral relations can be analytically replaced by derivative forms without the high - energy approximation . therefore , in principle , for this class of functions , derivative relations hold for any energy above the physical threshold . we then check the consistency of the results obtained with the integral relations and the extended derivative dispersion relations by means of a simple analytical parametrization for the total cross sections from proton - proton ( pp ) and antiproton - proton ( \bar{p}p ) scattering ( the highest energy interval with available data ) . in addition , we compare the results with those obtained through the standard derivative relations ( high - energy condition ) and the derivative representation by cudell - martynov - selyugin . we shall show that , above the physical threshold , only the extended relations lead to exactly the same results as those obtained with integral forms . we proceed with a critical discussion on the limitations of our analysis from both formal and practical points of view . the manuscript is organized as follows . in sec . [ sec : dr ] we recall the main formulas and some conditions involving the _ integral dispersion relations _ ( idr ) , the _ standard derivative dispersion relations _ ( sddr ) and the _ cudell - martynov - selyugin representations _ ( cmsr ) ; we also present , in certain detail , the replacement of idr by the _ extended derivative dispersion relations _ ( eddr ) . in sec . [ sec : pra ] we check the consistency and exemplify the applicability of all these results in simultaneous fits to the total cross section and the ratio of the real to imaginary parts of the forward amplitude , from pp and \bar{p}p scattering . in sec . [ sec : critical ] we present a critical discussion on all the obtained results . the conclusions and some final remarks are the contents of sec . [ sec : conclusions ] . first , it is important to recall that analyticity , unitarity and crossing lead to idr for the scattering amplitudes in terms of a _ crossing symmetric variable _ . for an elastic process in the forward direction , this variable corresponds to the energy e of the incident particle in the laboratory system . in this context and taking into account polynomial boundedness , the one subtracted idr for crossing even ( + ) and odd ( - ) amplitudes , in the physical region ( e \geq m ) , read

\mathrm{re}\, f_{+}(e) = k + \frac{2e^{2}}{\pi}\, p\!\int_{m}^{\infty} de'\, \frac{\mathrm{im}\, f_{+}(e')}{e'(e'^{2}-e^{2})} , \qquad
\mathrm{re}\, f_{-}(e) = \frac{2e}{\pi}\, p\!\int_{m}^{\infty} de'\, \frac{\mathrm{im}\, f_{-}(e')}{e'^{2}-e^{2}} ,

where k is the subtraction constant . the connections with the hadronic amplitudes for crossed channels , such as pp and \bar{p}p elastic scattering , are given by the usual definitions : f_{pp} = f_{+} + f_{-} and f_{\bar{p}p} = f_{+} - f_{-} . the main practical use of the idr concerns simultaneous investigations on the total cross section ( optical theorem ) and the ratio \rho of the real to imaginary parts of the forward amplitude , which is also our interest here . in terms of the crossing symmetric variable these physical quantities are given , respectively , by

\sigma_{tot}(e) = \frac{4\pi}{\sqrt{e^{2}-m^{2}}}\,\mathrm{im}\, f(e , \theta_{lab} = 0) , \qquad
\rho(e) = \frac{\mathrm{re}\, f(e , \theta_{lab} = 0)}{\mathrm{im}\, f(e , \theta_{lab} = 0)} ,

where \theta_{lab} is the scattering angle in the laboratory system . basically , at high energies , the replacement of idr by sddr is analytically performed by considering the limit m \to 0 in eqs . ( [ eq : idre ] ) and ( [ eq : idro ] ) . it should be recalled that an additional high - energy approximation is considered in these integral equations when they are expressed in terms of the center - of - mass energy squared s and not e . however , based on a rigorous replacement ( discussed in sec . [ sec : eddr ] ) , we consider the derivative relations in terms of the crossing symmetric variable e .
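as a numerical illustration of how the even idr acts on a given input ( a sketch of ours ; the power - law imaginary part and all parameter values are arbitrary , not fitted ) , the principal - value integral can be evaluated with scipy's cauchy - weighted quadrature and compared with the tangent - operator result of the derivative relations discussed next :

```python
import numpy as np
from scipy.integrate import quad

m = 0.938                       # proton mass [gev] , the lower limit
lam = 0.08                      # arbitrary power : im f_+(e')/e' = e'**lam

def g(ep):                      # toy input for the even amplitude
    return ep ** lam

def re_over_e(e, cut=1.0e6):
    # 1/(e'**2 - e**2) = [1/(e'-e)] * [1/(e'+e)] : the pole goes into
    # quad's 'cauchy' weight , the regular factor stays in the integrand
    pv = quad(lambda ep: g(ep) / (ep + e), m, cut,
              weight='cauchy', wvar=e)[0]
    return (2.0 * e / np.pi) * pv          # re f_+(e)/e with k = 0

e = 100.0
print(re_over_e(e), np.tan(np.pi * lam / 2.0) * g(e))
# for e >> m the two numbers approach each other : for a pure power law
# the tangent operator of the sddr acts multiplicatively ( see below )
```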
in this case the sddr read

\frac{\mathrm{re}\, f_{+}(e)}{e} = \frac{k}{e} + \tan\left[\frac{\pi}{2}\frac{d}{d\ln e}\right]\frac{\mathrm{im}\, f_{+}(e)}{e} , \label{eq : sddre}

\mathrm{re}\, f_{-}(e) = \tan\left[\frac{\pi}{2}\left(1 + \frac{d}{d\ln e}\right)\right]\mathrm{im}\, f_{-}(e) . \label{eq : sddro}

necessary and sufficient conditions for the convergence of the above tangent series have been established by kolář and fischer , in particular through the following theorem [ theo:1 ] : let f be a real function of a real variable , infinitely differentiable at the point x . the series

\tan\left[\frac{\pi}{2}\frac{d}{dx}\right] f(x) = \sum_{p=0}^{\infty} c_{p}\left(\frac{\pi}{2}\right)^{2p+1} f^{(2p+1)}(x) ,

where the c_{p} are the coefficients of the taylor expansion of \tan z about z = 0 , converges at a point x if and only if an associated derivative series , given in , is convergent . for example , in the case of f(x) = e^{\gamma x} , \gamma a real constant , the ratio test demands |\gamma| < 1 for the series to be absolutely convergent ( which will be our interest in sec . [ sec : pra ] ) . recently , the following representations have been introduced for the derivative dispersion relations

\frac{\mathrm{re}\, f_{+}(e)}{e} = \frac{k}{e} + \tan\left[\frac{\pi}{2}\frac{d}{d\ln e}\right]\frac{\mathrm{im}\, f_{+}(e)}{e} - \frac{2}{\pi}\sum_{p=0}^{\infty}\frac{c_{+}(p)}{2p+1}\left(\frac{m}{e}\right)^{2p} , \label{eq : cmsre}

and

\mathrm{re}\, f_{-}(e) = \tan\left[\frac{\pi}{2}\left(1 + \frac{d}{d\ln e}\right)\right]\mathrm{im}\, f_{-}(e) - \frac{2}{\pi}\sum_{p=0}^{\infty}\frac{c_{-}(p)}{2p+1}\left(\frac{m}{e}\right)^{2p+1} , \label{eq : cmsro}

where the explicit forms of the coefficients c_{+}(p) and c_{-}(p) can be found in ref . . we note the presence of correction terms in the form of infinite series , which go to zero as the energy increases , leading to the sddr , eqs . ( [ eq : sddre]-[eq : sddro ] ) . we shall use this representation in sec . [ sec : pra ] , where their applicability is discussed in detail . in this section we present our analytical replacement of the idr by derivative forms without the high - energy approximation . we also specify the class of functions for which this replacement can be formally performed . let us consider the even amplitude , eq . ( [ eq : idre ] ) . integrating by parts we obtain

\mathrm{re}\, f_{+}(e) = k + \frac{e}{\pi}\,\ln\!\left(\frac{e+m}{e-m}\right)\frac{\mathrm{im}\, f_{+}(m)}{m} + \frac{e}{\pi}\int_{m}^{\infty} de'\,\ln\!\left|\frac{e'+e}{e'-e}\right|\,\frac{d}{de'}\!\left[\frac{\mathrm{im}\, f_{+}(e')}{e'}\right] . \label{eq:10}

following ref . , we define x = \ln(e/m) and x' = \ln(e'/m) , so that the integral term in the above formula is expressed by

\frac{e}{\pi}\int_{0}^{\infty} dx'\,\ln\coth\!\left(\frac{|x - x'|}{2}\right) g'(x') , \label{eq:11}

where g(x') = \mathrm{im}\, f_{+}(e')/e' . expanding the logarithm in the integrand in powers of e^{-|x - x'|} ,

\ln\coth\!\left(\frac{|x - x'|}{2}\right) = 2\sum_{p=0}^{\infty}\frac{e^{-(2p+1)|x - x'|}}{2p+1} ,

and assuming that g is an analytic function of its argument , we perform the expansion g'(x') = \sum_{k=0}^{\infty} g^{(k+1)}(x)\,(x' - x)^{k}/k! . substituting the above formulas in eq . ( [ eq:11 ] ) and integrating term by term , _ under the assumption of uniform convergence of the series _ , we obtain a double series whose coefficients , stemming from the finite lower limit x' = 0 , involve the incomplete gamma function . with this procedure , eq . ( [ eq:10 ] ) can be put in the final form

\frac{\mathrm{re}\, f_{+}(e)}{e} = \frac{k}{e} + \tan\left[\frac{\pi}{2}\frac{d}{d\ln e}\right]\frac{\mathrm{im}\, f_{+}(e)}{e} + \delta_{+}(e , m) , \label{eq : eddre}

where the correction term \delta_{+}(e , m) collects the logarithmic boundary term of eq . ( [ eq:10 ] ) and the incomplete gamma series just mentioned . with an analogous procedure for the odd relation we obtain

\mathrm{re}\, f_{-}(e) = \tan\left[\frac{\pi}{2}\left(1 + \frac{d}{d\ln e}\right)\right]\mathrm{im}\, f_{-}(e) + \delta_{-}(e , m) , \label{eq : eddro}

where \delta_{-}(e , m) is the corresponding odd correction . equations ( [ eq : eddre ] ) and ( [ eq : eddro ] ) are the novel eddr , which are valid , in principle , for any energy _ above _ the physical threshold , e > m . we note that the correction terms \delta_{\pm} \to 0 as m/e \to 0 , leading , in this case , to the sddr , eqs . ( [ eq : sddre ] ) and ( [ eq : sddro ] ) . we also note that the structure of the cmsr , eqs . ( [ eq : cmsre ] ) and ( [ eq : cmsro ] ) , is similar to the above results , but without the logarithm terms . these terms come from the evaluation of the primitive at the lower limit in the integration by parts .
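a quick numerical aside ( our own check , not from the paper ) : for f(x) = e^{\gamma x} each derivative simply multiplies by \gamma , so the tangent series resums term by term to \tan(\pi\gamma/2) , converging exactly when |\gamma| < 1 , which is the condition quoted above . this is easy to verify with arbitrary - precision taylor coefficients :

```python
import mpmath as mp

gamma_ = mp.mpf('0.45')             # must satisfy |gamma_| < 1 to converge
coeffs = mp.taylor(mp.tan, 0, 60)   # taylor coefficients of tan z at z = 0

# tan[(pi/2) d/dx] e^{gamma x} = [ sum_p c_p ((pi/2)*gamma)^p ] * e^{gamma x} ,
# since each derivative contributes one factor of gamma ; the bracket
# should resum to tan(pi*gamma/2)
partial = mp.fsum(c * (mp.pi / 2 * gamma_) ** p for p, c in enumerate(coeffs))
print(partial, mp.tan(mp.pi * gamma_ / 2))   # the two values agree
```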
since theorem [ theo:1 ] ensures the uniform convergence of the series expansion involved , the condition imposed by this theorem defines the class of functions for which the eddr hold . for example , that is the case for f(x) = e^{\gamma x} , |\gamma| < 1 , referred to in sec . [ sec : sddr ] . other conditions are discussed by kolář and fischer . in this section we verify and discuss the consistency between the analytical structures of the idr and the eddr in a specific example : the connections of the total cross section with the \rho parameter from pp and \bar{p}p scattering . firstly , it is important to note that the efficiency of both integral and derivative approaches in the description of the experimental data depends , of course , on the theory available , namely the input for the imaginary part of the amplitude . in the absence of a complete model , valid for any energy above the physical threshold , we shall consider , only as a _ framework _ , a pomeron - reggeon parametrization for the scattering amplitude . for pp and \bar{p}p scattering this analytical model assumes nondegenerate contributions from the even ( + ) and odd ( - ) secondary reggeons ( a_{2}/f_{2} and \rho/\omega , respectively ) , together with a simple pole pomeron contribution :

\mathrm{im}\, f(e) = x\, e^{\alpha_{p}(0)} + y_{+}\, e^{\alpha_{+}(0)} + \tau\, y_{-}\, e^{\alpha_{-}(0)} ,

where \tau = +1 for \bar{p}p and \tau = -1 for pp scattering . as usual , the pomeron and the even / odd reggeon intercepts are expressed by \alpha_{p}(0) = 1 + \epsilon and \alpha_{\pm}(0) = 1 - \eta_{\pm} . we stress that the pomeron - reggeon phenomenology is intended for the high - energy limit ( rigorously , e \to \infty or s \to \infty ) . its use here , including the region of low energies , has only a framework character . however , as we shall show , this model is sufficient for a comparative analysis of the consistency . we shall return to this aspect in sec . [ sec : critical ] . in what follows , the point is to treat simultaneous fits to the total cross section and the \rho parameter from pp and \bar{p}p scattering and compare the results obtained with both idr and eddr . schematically , with parametrization ( [ eq:15]-[eq:16 ] ) we determine \mathrm{im}\, f through eq . ( [ eq:3 ] ) and then \mathrm{re}\, f either by means of the idr , eqs . ( [ eq : idre]-[eq : idro ] ) , or the eddr , eqs . ( [ eq : eddre]-[eq : eddro ] ) . returning to eq . ( [ eq:3 ] ) we obtain \sigma_{tot} and \rho and , at last , eqs . ( [ eq : tcs ] ) and ( [ eq : rho ] ) lead to the analytical connections between \sigma_{tot} and \rho for both reactions . moreover , through the same procedure , we shall also compare the above results with those obtained by means of both the sddr , eqs . ( [ eq : sddre]-[eq : sddro ] ) , and the cmsr , eqs . ( [ eq : cmsre]-[eq : cmsro ] ) . we first present the fit procedure and then discuss all the obtained results . for the experimental data on \sigma_{tot} and \rho , we made use of the particle data group archives , to which we added the values of \sigma_{tot} and \rho from \bar{p}p scattering at 1.8 tev , obtained by the e811 collaboration . the statistical and systematic errors were added in quadrature . the fits were performed through the cern - minuit code , with the estimated errors in the free parameters corresponding to an increase of the \chi^{2} by one unit . to fit the data as function of the center - of - mass energy , we express the lab energy in the corresponding formulas in terms of s , namely e = (s - 2m^{2})/(2m) . we included all the data above the physical threshold , \sqrt{s} \approx 1.88 gev ; that is , we did not perform any kind of data selection . since the ensemble has a relatively large number of experimental points just above the threshold , the statistical quality of the fit is limited by the model used here as framework . in fact , with these choices and procedures we obtained reasonable statistical results ( in terms of the \chi^{2} per degree of freedom ) only for an energy cutoff of the fits at 4 gev .
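the framework just described is easy to transcribe numerically . the sketch below is our own construction with round , illustrative parameter values ( not the fit results of the tables ) ; it exploits the fact that each power law is an eigenfunction of d/d ln e , so the even and odd tangent operators of the sddr act multiplicatively , term by term :

```python
import numpy as np

# illustrative values only (not the fitted parameters of tables 1-2)
X, Yp, Ym = 18.0, 60.0, 33.0        # couplings : pomeron , even , odd reggeon
eps, etap, etam = 0.09, 0.45, 0.55  # intercept parameters

def sigma_rho(E, tau):
    """tau = +1 for pbar-p , -1 for pp ; E is the lab energy in gev.
    returns (sigma_tot up to normalization , rho)."""
    terms = [(X, eps, True), (Yp, -etap, True), (tau * Ym, -etam, False)]
    im = sum(a * E**lam for a, lam, _ in terms)
    # even terms pick up tan(pi*lam/2) ; odd terms tan[(pi/2)(1+lam)]
    # = -cot(pi*lam/2) , both acting multiplicatively on E**lam
    re = sum(a * E**lam * (np.tan(np.pi*lam/2) if even
                           else -1.0/np.tan(np.pi*lam/2))
             for a, lam, even in terms)
    return im, re / im
```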
however , we stress that our focus here is on tests of the consistency among the different relations and representations and not , strictly , on the statistical quality of the fits ( we shall return to this point in sec . [ sec : critical ] ) . in each of the four cases ( idr , sddr , cmsr and eddr ) , we consider two variants of the fits , one neglecting the subtraction constant ( that is , taking k = 0 ) and the other considering the subtraction constant as a free fit parameter . the numerical results and statistical information on the fits are displayed in table [ tab:1 ] ( k = 0 ) and table [ tab:2 ] ( k free ) . the corresponding curves together with the experimental data are shown in fig . [ fig:1 ] ( k = 0 ) and fig . [ fig:2 ] ( k free ) . fig . [ fig:1 ] : \sigma_{tot} and \rho from pp and \bar{p}p scattering , by means of integral dispersion relations ( idr ) , standard derivative dispersion relations ( sddr ) , the cudell - martynov - selyugin representation ( cmsr ) and the extended derivative dispersion relations ( eddr ) , considering the subtraction constant k = 0 ( table [ tab:1 ] ) ; the curves corresponding to idr ( solid ) and eddr ( dot - dashed ) coincide . fig . [ fig:2 ] : same as fig . [ fig:1 ] , but considering the subtraction constant as a free fit parameter ( table [ tab:2 ] ) . the main goal of this section is to discuss the consistency among the results obtained by means of distinct analytical connections between the real and imaginary parts of the amplitude . however , some phenomenological consequences can also be inferred from this study , as discussed in what follows .
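a schematic version of such a simultaneous fit is sketched below . it is illustrative only : synthetic pseudo - data generated from the model stand in for the pdg compilation , and scipy's least squares stands in for the cern - minuit code ( the k = 0 variant , pp data only , for brevity ) .

```python
import numpy as np
from scipy.optimize import least_squares

M = 0.93827                        # proton mass [gev]

def model(E, tau, X, Yp, Ym, eps, etap, etam):
    terms = [(X, eps, True), (Yp, -etap, True), (tau * Ym, -etam, False)]
    im = sum(a * E**l for a, l, _ in terms)
    re = sum(a * E**l * (np.tan(np.pi*l/2) if ev else -1.0/np.tan(np.pi*l/2))
             for a, l, ev in terms)
    return im, re / im             # (sigma_tot up to normalization , rho)

# synthetic stand-in for the pp data above the fit cutoff
rng = np.random.default_rng(0)
s_vals = np.geomspace(16.0, 1.0e6, 40)          # s [gev**2]
E_vals = (s_vals - 2*M*M) / (2*M)               # fixed-target kinematics
truth = (18.0, 60.0, 33.0, 0.09, 0.45, 0.55)
sig, rho = model(E_vals, -1, *truth)
y = np.concatenate([sig, rho]) * (1 + 0.01*rng.standard_normal(80))

def residuals(theta):
    sig, rho = model(E_vals, -1, *theta)
    return np.concatenate([sig, rho]) - y

fit = least_squares(residuals, x0=(20, 50, 30, 0.08, 0.5, 0.5))
print(fit.x)                        # recovered parameter values
```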
from tables [ tab:1 ] and [ tab:2 ] we see that , as expected , the best statistical results are obtained with the subtraction constant as a free fit parameter . however , as we shall show , taking k = 0 gives suitable information not only on the practical equivalence between the idr and the differential forms ( sddr , cmsr and eddr ) , but also on the important role played by the subtraction constant . for that reason we shall treat separately the cases k = 0 and k as a free fit parameter . from table [ tab:1 ] we see that , for k = 0 , the numerical results obtained with the idr and the eddr are exactly the same , up to four figures , and that this does not occur with either the sddr or the cmsr . that is an important result since it demonstrates the accuracy of our analytical results for the extended derivative relations . we note that the high values of the \chi^{2} per degree of freedom , in all the cases , are consequences of the specific analytical model considered ( intended for the high - energy region ) and the energy cutoff used . we add the fact that we did not perform any data selection , but used all the available data from the pdg archives . however , as already commented , this disadvantage has no influence on our main goal , namely tests of consistency . the effects of the equivalences ( idr and eddr ) and differences ( idr and sddr or cmsr ) in the description of the experimental data are shown in fig . [ fig:1 ] . the curves corresponding to idr ( solid ) and eddr ( dot - dashed ) coincide at all the energies above the threshold and we see that even with the fit cutoff at \sqrt{s} = 4 gev , the description of the experimental data below this point is reasonably good in both cases . on the other hand , the differences between the exact results ( idr and eddr ) and the sddr or cmsr are remarkable for \sigma_{tot} at the highest energies and for \rho in the region of low energies . in the case of the total cross section , the results with sddr and cmsr indicate a faster increase with the energy than those with the idr and eddr . we stress the importance of this point , since it gives different solutions for the well known puzzle between the cdf data and the e710/e811 data at 1.8 tev ; in this respect , we see that the exact results ( idr and eddr ) favor the e811/e710 results . in particular , the fitted values of the pomeron intercept ( table [ tab:1 ] ) reflect these differences among the idr / eddr , the sddr and the cmsr . with k as a free fit parameter our results demonstrate , once more , an effect that we have already noted before , namely that the high - energy approximation can be absorbed by the subtraction constant . in fact , from fig . [ fig:2 ] we see that in this case the differences between the sddr / cmsr and the exact results idr / eddr practically disappear . from table [ tab:2 ] we can identify the subtraction constant as responsible for this complementary effect : the numerical values of the fit parameters and errors are practically the same in all the four cases , except for the values of k ; that is , in practice , the differences are absorbed by this parameter . we conclude that the subtraction constant affects the fit results even in the region of the highest energies ; this effect is due to the correlations among the free parameters in the fit procedure , as previously observed .
of course , also in this case the numerical values obtained with the idr and eddr are exactly the same , including the value of the subtraction constant up to four figures ( table [ tab:2 ] ) . in particular , we note that all the four variants indicate the same result for the intercept of the pomeron . the corresponding result for the total cross section lies nearly between the cdf and e811/e710 results , slightly favoring the latter ( figure [ fig:2 ] ) . we have demonstrated that for the class of functions defined by theorem [ theo:1 ] , idr can be formally replaced by differential operators without any high - energy approximation ; we have also verified the equivalence between the integral and extended derivative results , in the particular case of a simple phenomenological parametrization for pp and \bar{p}p scattering . despite the encouraging results reached , the whole analysis has limitations from both formal and practical points of view . in what follows we summarize the main critical points , giving references where more details can be found and providing also suggestions for further investigations . since the eddr involve two contributions , the tangent operators ( sddr ) and the correction terms , we shall consider the two cases separately . we also present some critical remarks on the pomeron - reggeon model used in sect . [ sec : pra ] as a practical framework . first , let us discuss some aspects related to the dispersion approach as it has been treated and widely used in the literature till now , namely the sddr , eqs . ( [ eq : sddre ] ) and ( [ eq : sddro ] ) . as commented in sect . [ sec : sddr ] , these equations are obtained by considering the limit m \to 0 in the idr , eqs . ( [ eq : idre ] ) and ( [ eq : idro ] ) . this condition is a critical one , which puts serious practical and formal limitations on any use of the sddr , because it means going to lower energies by passing through different thresholds , resonances , poles , down to e = 0 ! in this sense , the expression of the tangent operator as an integral extending down to e' = 0 does not guarantee any local character for the differential approach ( or the corresponding integral ) , even in the case of convergence of the series . in other words , this representation of the non - local operator ( integral ) in terms of local operators ( tangent series ) does not guarantee the locality of the result . moreover , the representation does not apply near the resonances , and the convergence of the series has been discussed in several works , leading some authors to argue that , in a general sense , the mathematical condition for the convergence `` excludes all cases of physical interest '' . these and other aspects were extensively discussed in the seventies and eighties and some points have been recently reviewed in . however , there is a fundamental point developed by some authors that enlarges the practical applicability of the sddr under some special conditions . as stated by kolář and fischer , in discussing the replacement of idr by sddr we must distinguish two formulations : ( 1 ) to consider the case of asymptotic energies and a finite number of terms in the tangent series ; ( 2 ) to consider finite energies and an infinite number of terms in the series . the former case applies for smooth behaviors of the amplitude ( as is the case at sufficiently high energies , specifically above 10 - 20 gev ) .
that includes a wide class of functions of physical interest , mainly if only the first term can be considered ( see for a recent analysis even beyond the forward direction ) . the latter case , however , is critical for at least two reasons . first , because the condition of convergence of the series ( theorem [ theo:1 ] ) limits the class of functions of practical applicability . secondly , and more importantly , since the high - energy approximation is still involved , all the strong limitations referred to before apply equally well to this case . in conclusion , the class of functions for which the sddr have a practical applicability depends strongly on the formulation considered and is narrower in the case of finite energies , namely entire functions in the logarithm of the energy . let us now discuss the eddr , with focus on the role of the correction terms in eqs . ( [ eq : eddre ] ) and ( [ eq : eddro ] ) . first we note that these infinite series are analytically associated with the fixed lower limit in the integral representation and they correspond to the contributions that are neglected in the high - energy approximation ( m \to 0 ) . for example , for the even case we have the formal identity

\tan\left[\frac{\pi}{2}\frac{d}{d\ln e}\right]\frac{\mathrm{im}\, f_{+}(e)}{e} + \delta_{+}(e , m) = \frac{2e}{\pi}\, p\!\int_{m}^{\infty} de'\, \frac{\mathrm{im}\, f_{+}(e')}{e'(e'^{2}-e^{2})} ,

which means that all the physical situation concerns the region _ above _ the physical threshold . therefore , from a formal point of view , the critical points raised above on the sddr ( tangent operator only ) , concerning the region below the threshold , do not apply in this case , and the critical point here concerns only the convergence of the correction series and their practical applicability . on the one hand , from a formal point of view ( as already discussed at the end of sect . [ sec : eddr ] ) , the convergence of the correction series is ensured by theorem [ theo:1 ] and that means a narrower class of functions than that associated only with entire functions in the logarithm of the energy . this restriction is due to the infinite number of derivatives in the correction terms \delta_{\pm} . we shall give and discuss some examples in what follows . from a practical point of view , it is obvious that the efficiency and/or real applicability of not only the derivative approach ( eddr ) , but also the integral one ( idr ) , depends on the specific physical problem involved . in this scenario ( the physical problem ) we expect to find some specific limitations that are independent of the formal aspects referred to above , and these aspects demand also some comments . in principle and in a _ general sense _ , if we attempt to apply dispersion techniques directly to the experimental data ( related to the imaginary part of a function ) , we are faced with the problem of error propagation from the experimental uncertainties . even if we can `` reproduce '' the experimental behavior by means of suitable analytical parameterizations , with statistical errors inferred for the free parameters , these errors should , in principle , be propagated too . in this case , the infinite series in both sddr and eddr have certainly limited usefulness ( see for example for the sddr case ) . however , if error propagation from the fit results is not of interest or can be neglected , and , most importantly , one has a `` correct '' or acceptable model for the imaginary part of the amplitudes , then we are restricted only to the formal conditions discussed above and the derivative approach becomes reliable , including the correction series ( theorem [ theo:1 ] ) . let us now discuss the specific physical problem that motivated the present analysis .
as commented in our introduction and in sect . [ sec : dr ] , we have focused on dispersion techniques in the context of hadron scattering , especially the elastic case , for which a complete theory is still absent . the main goal concerns the connections between the total cross section and the \rho parameter for energies above \sqrt{s} = 5 - 10 gev . in terms of dispersion techniques the usual way to treat the subject is by means of idr , sddr and the analyticity prescriptions for even and odd amplitudes ( and recently the cmsr ) . in this specific case , besides the absence of a pure qcd treatment , the subject is characterized by three kinds of problems : ( 1 ) formal justification of the usual phenomenology ; ( 2 ) approximated descriptions of the experimental data by phenomenological models ; ( 3 ) experimental data available ( problems ( 2 ) and ( 3 ) are certainly connected ) . since these problems affect the practical applicability and efficiency of the dispersion techniques , let us shortly discuss some aspects involved . ( 1 ) as is well known , the usual phenomenology for the total cross sections is based on the reggeon concepts and involves distinct contributions from pomerons and secondary reggeons . in this context , analytical parameterizations for the total cross sections are characterized by power and logarithm functions of the energy ( reggeons , simple , double and triple pole pomerons ) and the fits are performed not below \sqrt{s} = 5 gev . we note that all these contributions belong to the class of functions defined by theorem [ theo:1 ] ( the tangent series can be summed , leading to closed analytical results ) and they have been used and investigated in several works . however , as we have already pointed out , the central problem here concerns the fact that these contributions are formally justified only for asymptotic energies ( e \to \infty or s \to \infty ) , which certainly is not the case for the energies considered . the applicability of these models seems to be justified only under the hypothesis that the accelerators have already reached the energies that can be considered asymptotic in the mathematical context , which seems to us a dangerous assumption . ( 2 - 3 ) a close look at the bulk of experimental data available shows that these ensembles present several discrepancies due to spurious data , normalization problems and other effects . in this respect , recent analyses have pointed out the necessity of some screening criterion in order to select the `` correct '' experimental information . we shall not discuss this question here because it seems to us an open problem . but the point is that this fact puts serious limitations on any interpretation of statistical tests of the fits , such as the popular \chi^{2} per degree of freedom , and , consequently , not only on the efficiency of the phenomenological descriptions , but also on the possible selection of the best phenomenological model . at last let us return to the applicability of the eddr , now in this context . despite all the above problems , the known and usual phenomenological approach is characterized by analytical parameterizations for the imaginary parts of the amplitude and statistical tests on the quality of the fit .
in this case , with specific analytical representations for the total cross sections , without error propagation from the fit parameters and in the regge context , we understand that the correction terms we have introduced can have a suitable applicability in the context of the dispersion techniques . the point is that the class of functions for which they hold includes all the usual regge parameterizations and , since the high - energy approximation is absent , the fits can be formally extended to lower energies . however , to reach a good statistical description of the data , specifically near the threshold , demands a `` correct '' model for the imaginary part of the amplitude , which , to our knowledge , is still lacking . we shall return to this point in what follows . based on all the above limitations in the phenomenological context , we have chosen one of the possible ( and popular ) models in order to check the equivalences ( and differences ) among the different dispersion representations analyzed in this work . although , as demonstrated in sect . [ sec : pra ] , this choice is sufficient for our aim , some drawbacks involved demand additional comments . in the mathematical context , as demonstrated by kolář and fischer , some formal results , theorems and representations for the derivative relations were obtained under the assumption of the froissart - martin bound , \sigma_{tot} < c \log^{2} s , but other forms of sddr do not require this bound . therefore , since the simple pole pomeron contribution , which dominates at asymptotic energies , violates this bound , the model assumed is not an example in full agreement with the totality of the formal results . however , as already exemplified ( sect . [ sec : sddr ] ) , the model belongs to the class of functions defined by theorem [ theo:1 ] and therefore , in this restrictive sense , it seems to us to be an acceptable choice . in the formal phenomenological context , when applied below asymptotic ( infinite ) energies , the model suffers from all the drawbacks already discussed . despite this , its use above , let us say , \sqrt{s} = 10 gev could be explained ( not justified ) by the fact that the regge approach is the only known formalism able to describe some global characteristics of the soft scattering . what is presently expected is the development of a microscopic theory able to justify its efficiency . now let us focus on the low energy region above the physical threshold , 1.88 gev \lesssim \sqrt{s} \lesssim 10 gev , and discuss the usefulness and practical applicability of the eddr . to our knowledge , there is no model proposed for this interval , and that could explain the fact that fit procedures , even through idr , make use of energy cutoffs at \sqrt{s} \approx 5 gev ( ref . is a typical example ) . in this sense , the usefulness of the correction terms could be questioned . however , we understand that the lack of a phenomenological approach for that region may also be a mirror of the present stage , characterized by a focus ( probably excessive ) on the highest and asymptotic energies ( the great expectations from the tevatron , rhic , lhc ) . in our opinion , independently of the fact that `` asymptopia '' might be resolved or not in a short term , the connection between the resonance region ( just above the physical threshold ) and the high - energy region ( above 10 gev ) still remains a fundamental problem demanding solution . in this respect we understand that the eddr can play an important role in further investigations .
concerning the practical applicability of the extended relations in this region , it is obviously limited , due to the lack of a `` correct '' or accepted analytical model for the imaginary part of the amplitude . one way to circumvent this problem could be the introduction of a different parametrization for this particular region . that was the procedure used in ref . ; although without justification or explicit reference to the analytical form used , the authors obtained reasonable fit results . however , beyond the lack of any physical meaning , this procedure puts limitations on the equivalence between integral and derivative representations . based on the above facts and aiming only to check and compare the results obtained through different dispersion representations , we considered the pomeron - reggeon parametrization extended up to the low energy region , with a fit cutoff at \sqrt{s} = 4 gev . certainly the statistical results displayed in tables [ tab:1 ] and [ tab:2 ] indicate that the confidence level is very low , and even a look at figs . [ fig:1 ] and [ fig:2 ] shows that the data near the resonances are not adequately described . as a consequence the numerical results in tables [ tab:1 ] and [ tab:2 ] may be questionable on physical grounds . however , we insist that all the figures in these tables are fundamental for a definite check of all the analytical representations investigated ( which is the only aim of sect . [ sec : pra ] ) . at last we note that one may think that it might be possible to find a suitable function , in agreement with the convergence condition and able to fit all the experimental data of interest on secure statistical grounds ; that would be enough for our tests of consistency . we are not sure about this possibility , but the point is that the use of a known and popular parametrization , even with limited efficiency , can bring new insights for further developments , mainly because it gives information on what should be improved . we have obtained novel analytical expressions for the derivative dispersion relations , without high - energy approximations . the mathematical results are valid for the class of functions specified by theorem [ theo:1 ] . in principle , their applicability can be extended to any area that makes use of dispersion techniques , with possible additional constraints dictated by the analytical and experimental conditions involved . in particular , under adequate circumstances , the local character of the derivative operators may be a great advantage . for scattering amplitudes belonging to the class of functions defined by theorem [ theo:1 ] , the eddr are valid for any energy above the physical threshold . since the experimental data on the total cross sections indicate a smooth variation with the energy ( without oscillations just above the physical threshold and with a smooth systematic increase at higher energies ) , this class includes the majority of functions of physical interest . using as framework a popular pomeron - reggeon parametrization for the total cross sections , we have checked the numerical equivalence between the results obtained with the idr ( finite lower limit ) and the eddr , as well as the differences associated with the sddr and the cmsr . we have also presented a critical discussion on the limitations of the whole analysis from both formal and practical points of view . we stress that , as in the case of idr , the practical efficiency of the eddr in the reproduction of the experimental data on \sigma_{tot} and \rho depends on the model considered .
here , in order only to check the consistency among the different analytical forms , we made use of a particular pomeron - reggeon parametrization , for which a cutoff at \sqrt{s} = 4 gev was necessary . for example , by considering the full nondegenerate case ( four contributions , one from each meson trajectory : a_{2} , f_{2} , \rho and \omega ) , this cutoff can be reduced , or the \chi^{2} per degree of freedom can be reduced for the same cutoff . despite the limitations of our practical example ( sec . [ sec : critical ] ) , some interesting phenomenological aspects could be inferred . in particular , although already noted , we have called attention to the role of the subtraction constant as a practical `` regulator '' in the replacement of idr by derivative forms , a fact that is clearly identified in table [ tab:2 ] : the high - energy approximation is absorbed by the constant . in this respect , we have demonstrated that this artifice , which lacks physical meaning , can be avoided by the direct use of the eddr . however , this observation does not depreciate the important role of the subtraction constant as a free fit parameter , since the best statistical results are obtained in this context ( tables [ tab:1 ] and [ tab:2 ] ) . in particular , we note that the effect of this parameter is to provide a slightly higher value for the pomeron intercept when k is left free . to our knowledge , a well established theoretical approach for total cross sections just above the physical threshold and in the region connecting low and high energies is still absent . in this sense , despite all the limitations discussed , we hope that the local analytical operators developed here for these regions can contribute , as a formal mathematical tool , to further developments on the subject .
we discuss some formal and practical aspects related to the replacement of integral dispersion relations ( idr ) by derivative forms , _ without high - energy approximations _ . we first demonstrate that , for a class of functions of physical interest as forward scattering amplitudes , this replacement can be analytically performed , leading to novel extended derivative dispersion relations ( eddr ) , which , in principle , are valid for any energy above the physical threshold . we then verify the equivalence between the idr and eddr by means of a popular parametrization for total cross sections from proton - proton and antiproton - proton scattering , and compare the results with those obtained through other representations for the derivative relations . critical aspects of the limitations of the whole analysis , from both formal and practical points of view , are also discussed in some detail . published in _ brazilian journal of physics _ * 37 * , 358 ( 2007 )
many situations in recent research in quantum games appear to be based on a general idea that is quite interesting as well . it is to take a classical game exhibiting certain features , generalize it to the quantum domain , and see how the situation changes in the course of this generalization . in this course noncooperative games have attracted earlier attention , with the ruling solution concept of a nash equilibrium ( ne ) . this development looks reasonable because in classical game theory as well the earlier research was focused on noncooperative games and interest in coalition formation was revived later . players in noncooperative games are not able to form binding agreements even if they may communicate . on the other hand , the distinguishing feature of cooperative games is a strong incentive to work together to receive the largest total payoff . these games allow players to form coalitions , binding agreements , pay compensations , make side payments etc . in fact , von neumann and morgenstern in their pioneering work in the theory of games offered models of coalition formation where the strategy of each player consists of choosing the coalition he wishes to join . in coalition games , which are part of cooperative game theory , the possibilities of the players are described by the available resources of different groups ( coalitions ) of players . joining a group or remaining outside is part of the strategy of a player , affecting his / her payoff . recent work in quantum games gives rise to a natural and interesting question : what is the possible quantum mechanical role in cooperative games , which are an important part of classical game theory ? in our opinion it may be quite interesting , and fruitful as well , to investigate coalitions in quantum versions of cooperative games . our motivation in the present paper is to investigate what might happen to the advantage of forming a coalition in a quantum game compared to its classical analogue . we rely on the concepts and ideas of von neumann s cooperative game theory and consider a three - player coalition game in a quantum form . we then compare it to the classical version of the game and see how the advantage of forming a coalition can be affected . in the usual classical analysis of coalition games the notion of a strategy disappears ; the main features are those of a coalition and the value or worth of the coalition . the underlying assumption is that each coalition can guarantee its members a certain amount called the `` value of a coalition '' . _ the value of a coalition measures the worth the coalition possesses and is characterized as the payoff which the coalition can assure for itself by selecting an appropriate strategy , whereas the ` odd man ' can prevent the coalition from getting more than this amount _ . using this idea we study cooperative games in quantum settings to see how the advantages of making coalitions can be influenced in the new settings . the scheme preferable to us for playing a quantum game has been proposed recently by marinatto and weber . in this scheme an initial quantum state is prepared by an arbiter and forwarded to the players . each player possesses two quantum unitary and hermitian operators i.e.
the identity and the inversion or pauli spin - flip operator .players apply the operators with classical probabilities on the initial quantum state and send the quantum state to the ` measuring agent ' who decides the payoffs the players should get .interesting feature in this scheme is that the classical game is reproduced when the initial quantum state becomes unentangled .classical game is therefore embedded in the quantum version of the game . in this paper using marinatto and weber s scheme we find a quantum form of a symmetric cooperative game played by three players . in classical form of this gameany two players out of three get an advantage when they successfully form a coalition and play the same strategy .we find a quantum form of this game where the advantage for coalition forming is lost and players are left with no motivation to cooperate .a classical three person normal form game is given by three non - empty sets , , and , the strategy sets of the players , , and and three real valued functions , , and defined on .the product space is the set of all tuples with , and .a strategy is understood as such a tuple and , , are payoff functions of the three players .the game is usually denoted as .let be the set of players and be an arbitrary subset of .the players in may form a coalition so that , for all practical purposes , the coalition appears as a single player .it is expected that players in will form an opposing coalition and the game has two opposing `` coalition players '' i.e. and .we study quantum version of an example of a classical three player cooperative game discussed in ref .each of three players and chooses one of the two strategies .if the three players choose the same strategy there is no payoff ; otherwise , the two players who have chosen the same strategy receive one unit of money each from the ` odd man . 'payoff functions , and for players and respectively are given as with similar expressions for and .suppose ; hence .the coalition game represented by is given by the following payoff matrix {lllll}% & & & & \\ & & & & \\ & & & & \\ & & & & \\ & & & & % \end{tabular}\ ] ] here the strategies ] are dominated by ] .after eliminating these dominated strategies the payoff matrix becomes {lll}% & & \\ & & \\ & & % \end{tabular}\ ] ] it is seen that the mixed strategies + \frac{1}{2}\left [ 22\right ] \label{cltc}\\ & \frac{1}{2}\left [ 1\right ] + \frac{1}{2}\left [ 2\right ] \label{lftc}%\end{aligned}\ ] ] are optimal for and respectively . with these strategies a payoff for players assured for all strategies of the opponent ; hence , the value of the coalition is i.e. . since is a zero - sum game can also be used to find as .the game is also symmetric and one can write in quantum form of this three player game the players implement their strategies by applying the identity operators in their possession with probabilities and respectively on the initial quantum state . in marinatto and weber sscheme the pauli spin - flip or simply the inversion operator is then applied with probabilities and by players , and respectively .if is the density matrix corresponding to initial quantum state the final state after players have played their strategies corresponds to where the unitary and hermitian operator can be either or . , and are the probabilities with which players , and apply the operator on the initial state respectively . 
corresponds to a convex combination of all possible quantum operations .let the arbiter prepares the following three qubit pure initial quantum state where the eight basis vectors of this quantum state are for .the initial state ( [ instate ] ) can be imagined as a global state ( in a dimensional hilbert space ) of three two - state quantum systems or ` qubits ' .a player applies the unitary operators and with classical probabilities on during his ` move ' or ` strategy ' operation . fig .1 shows the scheme to play this three player quantum game where players and form a coalition and player is ` leftout ' .[ ptb ] let the matrix of three player game be given by constants with .we write the payoff operators for players and as payoffs to players and are then obtained as mean values of these operators \ ] ] where , for convenience , we identify the players moves only by the numbers and . the cooperative game of eq .( [ payoffs ] ) with the classical payoff functions , and for players and respectively , together with the definition of payoff operators for these players in eq .( [ payoper ] ) , imply that with these constants the payoff to player , for example , can be found as {c}% -4rq-2p+2pr+2pq+r+q\\ -4rq+2p-2pr-2pq+3r+3q-2\\ 4rq+2pr-2pq-3r - q+1\\ 4rq-2pr+2pq - r-3q+1 \end{array } \right ] \left [ \begin{array } [ c]{c}% \left| c_{111}\right| ^{2}+\left| c_{222}\right| ^{2}\\ \left| c_{211}\right| ^{2}+\left| c_{122}\right| ^{2}\\ \left| c_{121}\right| ^{2}+\left| c_{212}\right| ^{2}\\ \left| c_{112}\right| ^{2}+\left| c_{221}\right| ^{2}% \end{array } \right ] \label{poff}%\ ] ] similarly payoffs to players and can be obtained .classical mixed strategy payoffs can be recovered from the eq .( [ poff ] ) by taking .the classical game is therefore imbedded in its quantum form .the classical form of this game is symmetric in the sense that payoff to a player depends on his / her strategy and not on his / her identity .these requirements making symmetric the three - player game are written as now in this quantum form of the game becomes same as when and then payoff to a player remains same when other two players interchange their strategies .the symmetry conditions ( [ rqmnts ] ) hold if , together with eqs .( [ rqmnts1 ] ) , following relations are also true {cc}% \alpha_{1}=\beta_{1}=\gamma_{1 } , & \alpha_{5}=\beta_{6}=\gamma_{7}\\ \alpha_{2}=\beta_{3}=\gamma_{4 } , & \alpha_{6}=\beta_{5}=\gamma_{6}\\ \alpha_{3}=\beta_{2}=\gamma_{3 } , & \alpha_{7}=\beta_{7}=\gamma_{5}\\ \alpha_{4}=\beta_{4}=\gamma_{2 } , & \alpha_{8}=\beta_{8}=\gamma_{8}% \end{array}\ ] ] these form the extra restrictions on the constants of payoff matrix and , together with the conditions ( [ rqmnts1 ] ) , give a three player symmetric game in a quantum form .no subscript in a payoff expression is then needed and represents the payoff to a player against two other players playing and .the payoff is found as the term ` mixed strategy ' in the quantum form of this game is defined as being a convex combination of quantum strategies with classical probabilities . for this assume that the pure strategies ] correspond to and respectively .the mixed strategy + ( 1-n)\left [ 2\right ] ] is played with probability and ] means that both players in the coalition apply the identity operator with zero probability. similarly the strategy ] with probability and ] is the payoff to when all three players play i.e. 
the strategy ] is coalition payoff when coalition players play and the player in plays .now from eq .( [ qpayoff ] ) we get } & = 2p(0,0,0)=-4(\left| c_{211}\right| ^{2}+\left| c_{122}\right| ^{2})\nonumber\\ p_{\wp\lbrack112 ] } & = 2p(0,0,1)=2(\left| c_{111}\right| ^{2}+\left| c_{222}\right| ^{2}+\left| c_{211}\right| ^{2}+\left| c_{122}\right| ^{2})\nonumber\\ p_{\wp\lbrack221 ] } & = 2p(1,1,0)=2(\left| c_{111}\right| ^{2}+\left| c_{222}\right| ^{2}+\left| c_{211}\right| ^{2}+\left| c_{122}\right| ^{2})\nonumber\\ p_{\wp\lbrack222 ] } & = 2p(1,1,1)=-4(\left| c_{211}\right| ^{2}+\left| c_{122}\right| ^{2})\end{aligned}\ ] ] therefore from eq .( [ cltpq ] ) to find the value of coalition in the quantum game we find and equate it to zero i.e. is such a payoff to that the player in can not change it by changing his / her strategy given in eq .( [ lftq ] ) .it gives , interestingly , and the classical optimal strategy of the coalition + \frac{1}{2}\left [ 22\right ] $ ] becomes optimal in the quantum game as well . in the quantum gamethe coalition then secures following payoff , also termed as the value of the coalition similarly we get the value of coalition for as note that these values reduce to their classical counterparts of eq .( [ vcltc ] ) when the initial quantum state becomes unentangled and is given by .classical form of the coalition game is , therefore , a subset of its quantum version .suppose the arbiter now has at his disposal a quantum state such that . in this case becomes a negative quantity and because of the normalization given in eq .( [ instate ] ) .a more interesting case is when the arbiter has the state at his disposal . because now both and are and the players are left with no motivation to form a coalition . a quantum version of this cooperative game ,therefore , exists in which players are deprived of motivation to form a coalition .the payoff to a player against , players in classical mixed strategy game can be obtained from eq .( [ qpayoff ] ) by taking .it gives note that and classical mixed strategy game is zero - sum and .the quantum version of this game is not zero - sum always because from eq .( [ qpayoff ] ) we have and the quantum game becomes zero - sum only when .there may appear several guises in which the players can cooperate in a game .one possibility is that they are able to communicate and , hence , able to correlate their strategies . in certain situationsplayers can make binding commitments before or during the play of a game . even in the post - play behaviorthe commitments can make players to redistribute their final payoffs .the two - player games are different from multi - player games in an important aspect . in two - player games the question beforethe players is whether to cooperate or not . in multi - player casethe players are faced with a more difficult task .each player has to decide which coalition to join .there is also certain uncertainty that the player faces about the extent to which players outside his coalition may coordinate their actions .analysis of cooperative games isolating coalition considerations instead of studying elaborate strategic structures has drawn more attention .recent exciting developments in quantum game theory provide a motivation to see how forming a coalition and its associated advantages can be influenced in already proposed quantum versions of these cooperative games . 
to study this we selected an interesting but simple cooperative game, as well as a recently proposed scheme for playing a quantum game. we allowed the players in the quantum version of the game to form a coalition, similar to the classical game. the underlying assumption in this approach is that because the arbiter, responsible for providing three-qubit pure initial quantum states to be later unitarily manipulated by the players, can forward a quantum state that corresponds to the classical game, other games corresponding to different initial pure quantum states are quantum forms of the classical game. this assumption, for example, reduces the problem of finding a quantum version of the classical coalition game we considered, with the interesting property that the advantage of making a coalition is lost, to finding suitable pure initial quantum states. we showed that such quantum states can be found and, therefore, that there are quantum versions of the three-player coalition game where the motivation for coalition formation is lost. in conclusion, we considered a symmetric cooperative game played by three players in classical and quantum forms. in the classical form of this game, which is also embedded in the quantum form, forming a coalition gives an advantage to the players, and the players are motivated to do so. in the quantum form of the game, however, an initial quantum state can be prepared by the arbiter such that forming a coalition gives no advantage. the interesting function in these situations, the 'value of coalition', is greater for the coalition than for the player outside it when the game is played classically. these values become the same in a quantum form of the game, and the motivation to form a coalition is lost. there is, nevertheless, an essential difference between the two forms of the game: the classical game is zero-sum but its quantum version is not.
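as a concrete check on the discussion above, the following minimal sketch (python; a hypothetical re-implementation, since the original analysis is analytic) simulates the marinatto-weber scheme for this game. because the payoff operators are diagonal in the computational basis, the mixed quantum strategies reduce to drawing a basis state with probability |c_ijk|^2 and flipping each qubit independently; the payoff rule is the classical one stated earlier (the two matching players receive one unit each from the odd man). the trial entangled state used below is an illustrative assumption chosen so that |c_111|^2 + |c_222|^2 = |c_211|^2 + |c_122|^2, which removes the coalition advantage.

```python
import itertools
import numpy as np

def classical_payoffs(bits):
    # if all three strategies coincide nobody is paid; otherwise the two
    # matching players receive one unit each from the 'odd man'
    if bits[0] == bits[1] == bits[2]:
        return np.zeros(3)
    pay = np.full(3, 1.0)
    odd = [i for i in range(3) if list(bits).count(bits[i]) == 1][0]
    pay[odd] = -2.0
    return pay

def expected_payoffs(c2, p, q, r):
    # marinatto-weber scheme with payoff operators diagonal in the basis:
    # draw |ijk> with probability |c_ijk|^2, then each qubit is left alone
    # with probability p, q, r respectively and flipped otherwise
    keep = (p, q, r)
    pay = np.zeros(3)
    for state, w in c2.items():
        for flips in itertools.product((0, 1), repeat=3):
            prob = w
            for k, f in zip(keep, flips):
                prob *= k if f == 0 else 1.0 - k
            out = tuple((s + f) % 2 for s, f in zip(state, flips))
            pay += prob * classical_payoffs(out)
    return pay

def coalition_value(c2):
    # the coalition plays the classical optimum (1/2)[11] + (1/2)[22];
    # a pure best reply of the odd man suffices since his payoff is
    # linear in his flipping probability
    def coal(p, r):
        pa, pb, _ = expected_payoffs(c2, p, p, r)
        return pa + pb
    return min(0.5 * (coal(0.0, r) + coal(1.0, r)) for r in (0.0, 1.0))

unentangled = {(0, 0, 0): 1.0}             # reproduces the classical game
trial = {(0, 0, 0): 0.25, (1, 1, 1): 0.25,
         (1, 0, 0): 0.25, (0, 1, 1): 0.25}
print(coalition_value(unentangled))        # -> 1.0, the classical advantage
print(coalition_value(trial))              # -> 0.0, no gain from the coalition
```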
we study two forms of a symmetric cooperative game played by three players, one classical and the other quantum. in its classical form, making a coalition gives an advantage to the players and they are motivated to do so. in its quantum form, however, this advantage is lost and the players have no motivation to make a coalition.
the amount of sequence-related data has increased dramatically during the past years. this is due to improvements of high-throughput and computational methods in omics that often yield long lists of gene, protein, or enzyme identifiers (ids). in our laboratory we process different kinds of sequence-based data, e.g., dna-microarray derived gene-expression data. the ultimate purpose of any gene-expression experiment is to produce biological knowledge. independent of the methods used, the result of microarray experiments is, in most cases, a set of genes found to be differentially expressed between two or more conditions under study. the challenge faced by the researcher is to translate this list of differentially regulated genes into a better understanding of the biological phenomena that generate such changes. a good first step in that direction is the translation of the sequence id list into a functional profile. biological pathways can provide key information about the organization of biological systems. major publicly available biological pathway diagram resources, including the kyoto encyclopedia of genes and genomes (kegg), genmapp, and biocarta, can be used to allocate sequence data in pathway maps. with this manuscript we do not intend to present a review of existing solutions but focus on our approach. our project requires the analysis of sequence cluster lists and extends the analysis to a maximum possible number of organisms. kegg currently provides adapted maps for over 380 species covering the following molecular interaction and reaction networks: metabolism, genetic information processing, environmental information processing, cellular processes, and human diseases. in order to use the kegg pathway database to display and map genes to kegg pathways, we developed a web-based tool called orfmapper. orfmapper is an easy-to-use but powerful application that supports data analysis by extracting annotations for given keywords and gene, protein, or enzyme ids, allocating these ids to metabolic pathways, and displaying them on pathway maps. two color codes can be assigned to the ids, which can, e.g., represent sequence properties, organism identifiers, or cluster memberships. these color codes are used in the query output. the query results are displayed in hypertext format as a web page, prepared for download as tab-delimited raw text, and visualized on colored, hyperlinked kegg metabolic pathway maps that can be downloaded in pdf format. together with a version optimized for personal digital assistants, orfmapper provides unique functionality with respect to accessing and displaying kegg pathway data. orfmapper has been entirely developed with php version 4.3.4, an open source scripting language that is especially suited for internet development. creation of pdf output is performed with fpdf version 1.53, a freely available php class that allows the generation of pdf files. orfmapper runs on an apple mac os x version 10.2 operating system with an apache version 1.3.33 http server. the processed kegg data are stored in a local relational mysql (version 4.1.13) database. the database behind orfmapper contains gene identifiers as well as annotation, organism, and pathway information. the database is updated monthly; for this purpose, information from the kegg ftp server and from the kegg web site is parsed. in order to keep orfmapper working and to avoid user query errors during updates, duplicated tables are used.
upon successful download and processing, the updated tables are activated while outdated tables are deactivated. orfmapper was designed for prompt display of metabolic relations between gene products by the use of kegg pathway maps. a detailed online help guides the beginner through the user interface. the user has to specify either annotation keywords (e.g., "hydrogenase protein" or coxa), gene ids (e.g., kegg, ncbi, uniprot), or enzyme ids (i.e., ec-numbers). the user input can either be uploaded as an ascii text file, be exported from spreadsheet applications (e.g., microsoft excel or openoffice calc), or be directly pasted into a text area on the web page. orfmapper is made as flexible as possible in order to handle individual input data formats. the ids can be listed vertically, horizontally, or mixed, and they can be separated by all typical text delimiters, e.g., tabulators, spaces, commas, and semicolons. placing keywords in quotation marks forces orfmapper to perform a boolean and query. by default, all organisms are queried for all entered ids and keywords. in order to restrict output to selected organisms, it is possible to specify those organisms in the first input row. this line must be preceded by an angle bracket character followed by organism names or just parts of organism names (e.g., "droso" instead of "drosophila melanogaster"). the organism names must be separated by commas. if no match to an organism name is found, all organisms are queried. in order to customize visualization, the user may specify colors for individual ids. to this end, either a color name (e.g., yellow, blue, red) or a hexadecimal rgb code (e.g., #ffff00) can be appended to ids and keywords with two underscore characters "__" (e.g., genename__blue, genename__#000080, keyword1__red, "keyword1 keyword2"__green). this colors the enzyme box corresponding to the id on a kegg pathway map. likewise, the user can add one additional value to change the box border color. this is achieved by adding another color, preceded by an underscore character, to the id (e.g., genename__blue_red). coloration is extremely helpful to specify and, in the output, to identify gene products with common properties, such as expression levels or cluster affiliation. large sets of query data are often stored in spreadsheet applications, e.g., microsoft excel, openoffice calc, or microsoft access. thus, we took special care to simplify data import from these applications. if the data are organized in three columns (id, box color, and box border color, respectively), then they can be directly copy-pasted into orfmapper.
upon clicking the _convert tab_ button, all tabulators are converted to underscores, as required. orfmapper creates three forms of output: hypertext, raw tab-delimited text, and graphical pdf pathway maps. the hypertext query result contains all gene annotations, pathway information, and hyperlinks to kegg pathway maps corresponding to the user-defined query (fig. [fig1]). this output is sorted by organism names, metabolic categories, pathways, and gene products. the latter two levels are hyperlinked to the corresponding kegg information pages. this query result can be downloaded as a raw tab-delimited text file for further processing. the first line of the text file contains the ids given by the user. all following lines contain the full set of query results with the following entries: sequence or enzyme id, kegg species:sequence id, annotation with ec-number and kegg orthology id, kegg organism id, species name, kegg pathway map number, metabolic pathway name, box background color, and box border color. upon clicking the document symbol in the hypertext query results, orfmapper creates a pdf version of the corresponding kegg pathway map. the graphical pdf map can be saved locally, is scalable, is optimized for printing, and includes hyperlinks to kegg metabolite and enzyme information. if colors were assigned to sequence ids in the query input, the backgrounds and borders of enzyme boxes are colored in the pdf maps. the pdfs are oriented such that the kegg pathway maps fit perfectly either to portrait or to landscape paper format. orfmapper was designed for displaying metabolic pathway oriented information for keywords and nucleotide, protein, or enzyme ids of sequenced organisms. numerous visualization tools for analyzing biological data are available. orfmapper fills a gap by providing quick access to pathway information via one input field with flexible input formats and output coloration options. kegg itself provides an integrated tool that can be used to color metabolic pathway objects. however, orfmapper has a much broader functionality by allowing cross-species queries, giving a more detailed output, hyperlinking individual genes, and converting the colored pathway maps to pdf format while retaining hyperlinks. a condensed version of orfmapper requiring less screen space and showing reduced output is devoted to palm-sized pdas. its screen size is scaled to 240 pixel width and the output of gene annotations is omitted. if equipped with wlan, this allows on-the-spot information retrieval and mapping of keywords and gene or enzyme ids, e.g., during research seminars. orfmapper's functionality will be expanded continuously. while the simple graphical user interface and query syntax will stay unchanged, extensions with respect to the application of functional characters are planned. we are currently integrating further sequence ids, e.g., from the protein data bank (pdb). furthermore, we are planning to facilitate nucleotide and protein sequence querying. this work is part of the bmbf-funded cologne university bioinformatics center (cubic). we would like to thank professor d. tautz for generous support, toshiaki katayama from kegg for prompt help, and all beta testers for their valuable comments. conflict of interest: none declared
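for illustration, here is a minimal sketch of how the input conventions described above might be tokenized. this is a hypothetical re-implementation in python (orfmapper itself is written in php); the '>' marker for the organism line and the single underscore before the border color follow the description above, but both are assumptions that should be checked against the actual application, and quoted multi-word keywords are not handled.

```python
import re

COLOR = r'[a-z]+|#[0-9a-f]{6}'   # color name or hexadecimal rgb code

def split_colors(token):
    # id__boxcolor or id__boxcolor_bordercolor, per the conventions above
    head, sep, tail = token.partition('__')
    box = border = None
    if sep:
        box, sep2, rest = tail.partition('_')
        border = rest if sep2 else None
        if not re.fullmatch(COLOR, box, re.IGNORECASE):
            box, border = None, None
        elif border is not None and not re.fullmatch(COLOR, border, re.IGNORECASE):
            border = None
    return head, box, border

def parse_input(text):
    lines = text.strip().splitlines()
    organisms = []
    if lines and lines[0].lstrip().startswith('>'):      # assumed marker
        organisms = [o.strip() for o in lines[0].lstrip()[1:].split(',')]
        lines = lines[1:]
    entries = []
    for token in re.split(r'[\t ;,]+', ' '.join(lines)):  # typical delimiters
        if token:
            entries.append(split_colors(token))
    return organisms, entries

orgs, ids = parse_input("> droso, human\ncoxa__blue_red\tatpb__#ffff00, sdha")
# orgs -> ['droso', 'human']
# ids  -> [('coxa', 'blue', 'red'), ('atpb', '#ffff00', None), ('sdha', None, None)]
```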
computational analyses of, e.g., genomic, proteomic, or metabolomic data commonly result in one or more sets of candidate genes, proteins, or enzymes. these sets are often the outcome of clustering algorithms. subsequently, it has to be tested whether, e.g., the candidate gene products are members of known metabolic processes. with orfmapper we provide a powerful but easy-to-use, web-based database application that supports such analyses. all services provided by orfmapper are freely available at http://www.orfmapper.com .
consider the following ( random ) game : a deck of cards is divided into several piles . then , for each pile , we leave it intact with probability and remove one card from there with probability ( ] .then the operator which transforms the configuration in the game of random bulgarian solitaire with parameter is defined by denote also by the result of independent applications of to ; clearly , the process is conservative in the sense that for all .suppose that .as remarked above , for the stochastic process is an irreducible aperiodic markov chain with finite state space .we denote by its stationary measure .to formulate our results , we need also to find a way to define sets of configurations that are close to a specific triangular configuration . to this end , for two configurations , define the distance by with the convention for all and for all .next , we define the triangular configuration by , . when is fixed and , we can write , . finally , for ( which may depend on ) define the set of `` roughly triangular '' configurations by let , and define the configuration , where note that ( for example , for we have ) . for the particular case say that is nondegenerate if it contains the configuration , as well as all the configurations with .it is easy to see that for any fixed there exists such that is nondegenerate for all , and the same is true when e.g. , .now we are ready to formulate the main results of this paper .first , we state the result about the time to approximate the triangular configuration for the deterministic bulgarian solitaire ( i.e. , with ) .[ det_bs ] take and suppose that is large enough to guarantee that is nondegenerate .suppose that the initial configuration with has the following properties : and for some .then there exists such that we have for all . in words, this result means that if the initial configuration is `` reasonable '' , then the number of moves required to approximate the triangle is .now , we turn our attention to random bulgarian solitaire : [ ran_bs ] suppose that . then for any there exist positive constants and such that for all in section [ s_fr ] there are some more comments and open problems related to the bulgarian solitaire ( both deterministic and random ) .also , the reader may find it interesting to look at java simulation of the random bulgarian solitaire ( with ) on the internet page of kyle petersen at + http://people.brandeis.edu/~tkpeters/reach/stuff/reachthis section is organized in the following way . in section [ s_etienne ]we introduce the notion of _ etienne diagram _ , which is just another way to represent the configurations of the game .then , we show how the moves of bulgarian solitaire are performed on this diagram and discuss its other properties . in section [ s_proof_det ]we prove theorem [ det_bs ] , and in section [ s_proof_rand ] we prove theorem [ ran_bs ] ( in section [ s_proof_rand ] some results and technique from sections [ s_etienne ] and [ s_proof_det ] are used , most notably the inequality ( [ iteration ] ) ) . before starting the proofs, we need to describe another representation of a particular state of ( deterministic ) bulgarian solitaire , which we call an etienne diagram ( cf . ) . 
in this approachthe cards are identified with particles living in the cells of the set , with at most one particle per cell .we write when the cell is occupied and when the cell is empty .clearly , is a half - quadrant of , but we would like to visualize in a little bit unconventional way ( see figure [ p1 ] ) : the cell lies in the base and supports the column , while the diagonal goes in the nw direction ( so the rows of are enumerated from right to left ; notice that at this point we deviate from , where the rows were enumerated from left to right ) .now , a configuration is represented as follows ( as on figure [ p1 ] ) : we put for and for all other pairs .from the fact that we immediately deduce that for any and one of the advantages of the representation via etienne diagram is that it makes it more clear how the process approaches the triangular configuration . to see what we mean , first note that the move of the bulgarian solitaire consists in applying the following two substeps to the corresponding etienne diagram ( see figure [ p2 ] ) : *apply the cyclic shift ( from left to right ) to each row of the diagram ; * if after the shift there is a particle that is placed above an empty cell , then the particle falls there ; this procedure is repeated until no further fall is possible .speaking formally , let .then the etienne diagram of is constructed using the following procedure : * for all put for and . *suppose that for the array we can find such that , . then construct the new array by , , and for .* repeat the previous procedure with instead of , and so on . at some momentwe will obtain an array for which we can not find such that , .then for all put .now , suppose that .on the etienne diagram the triangular configuration corresponds to the configuration . note also that if the first rows of the diagram are occupied , then they will remain occupied during all the subsequent evolution .this shows that the falls of particles `` help '' to reach the stable configuration ( more and more rows become all occupied ) .moreover , in many concrete situations it is possible to know how many moves are needed to fill out some region which was originally empty .arguments of this kind will be heavily used in the course of the proof of our results .consider the etienne diagram of a configuration .since the system is conservative , there is a natural correspondence between particles ( holes ) in that diagram and particles ( holes ) in the diagram of the configuration .this shows that for each particle ( hole ) on the original diagram we can define its trajectory , i.e. , we know its position after moves of the game .let be the second coordinate of the particle ( hole ) from after moves , and let be the number of falls ( movements upwards ) that the particle ( hole ) from was subjected to during moves .that means that , if , then are the coordinates of the particle from after moves , while if , then are the coordinates of the hole from after moves .it seems to be very difficult to calculate exactly and ( except in trivial situations , when , e.g. 
, and for all ) .however , we can establish some relation between these quantities by defining first in words , would be the position of particle ( hole ) from at time if we know that ( the quantities depend also on , but we do not indicate that in our notations ) .[ jm ] if and is such that , or and is such that , then _ proof ._ suppose for example that .denote .since is such that , we have that .the lemma then follows from the fact that should be somewhere in between and .the other case is treated analogously .next , we define some quantities which concern the geometric structure of the representation via etienne diagram , and prove some relations between them . for define when , we have . using the etienne representation of a configuration , define and put .the quantity can be thought of as the `` energy '' of the configuration : the bigger is , the `` more distant '' ( not necessarily in the sense of the distance ) is from .denote also .the next lemma establishes some elementary properties of the energy .[ prop_e ] * there exists a constant such that for all and all we have .* for all it holds that .moreover , is equal to the number of falls of particles during the second substep of the move of the bulgarian solitaire represented by the etienne diagram ( i.e. , it is equal to in * ( iii ) * ) ._ define . from ( [ poln_tri ] )one easily gets that there exists such that for all we have .the proof of ( i ) then reduces to an elementary computation ( roughly speaking , to compute we have at worst terms , each of order ) . as for the proof of ( ii ) , note first that the operation of cyclic shift does not change the quantities defined in ( [ def_e-])([def_e+ ] ) .then , it is straightforward to see that each particle s fall decreases by one unit , which concludes the proof of the lemma .* is the maximal vertical distance between and the holes below ; * is the maximal vertical distance between and the particles above which also lie inside the area indicated by the dashed lines on figure [ p3 ] ; * is the total area covered by the holes below ; * is the total area covered by the particles above . similarly to the energy , all those quantities could be used to measure the deviation of from the `` almost triangular '' configuration . consider also the normalized quantities , , and , .[ alphas ] for all there exist constants ( depending on ) such that _ proof ._ it is elementary to obtain the inequalities ( [ a12 ] ) and ( [ a56 ] ) from ( [ pust_tri ] ) .analogously , to obtain ( [ a34 ] ) and ( [ a78 ] ) , one can use ( [ poln_tri ] ) and the fact that together with the following observation . if for some , then either or . consider a configuration such that . by definition of , there exists a constant such that also , we will always tacitly assume that , i.e. , we will not consider configurations that are `` too close '' to the triangle . in this case there are constants such that ( note also that if is a triangular number , then for any ) . using ( [ a12 ] ) , ( [ a56 ] ) , and ( [ beta12 ] ) , we obtain and , by ( [ a34 ] ) , .this shows that there is a constant such that for all .analogously , we obtain that for some it holds that and . 
by lemma [ prop_e ] ( i ) the quantity bounded on , so there is such that , which implies that for some .finally , we use lemma [ prop_e ] ( i ) once again to obtain that there exists such that when .first , the idea is to prove that after moves , the `` normalized energy '' will decrease by a considerable amount .consider a configuration with .abbreviate ; by ( [ minh ] ) , we can find such that and .moreover , without loss of generality one can suppose that is divisible by .define also , , and .define two sets by ( see figure [ p4 ] ) .note that from ( [ pust_tri ] ) and ( [ poln_tri ] ) it follows that for all and for all .abbreviate also , .now , the idea is to consider the evolution of sets at times , .first , note that , for any and . then , each time we make a complete turn ( i.e. , moves ) a particle which was on the level will be units to the left of its initial position ( provided it did not fall ) .this shows that there exists such that \cap [ m_1,j_1 ] \big| \geq \frac{2\hh}{5}\ ] ] ( when , by ] ) .take such that \subset \big([{\hat j}_{i'_0,j_2}(k_0i_0),{\hat j}_{i'_0,m_2}(k_0i_0 ) ] \cap [ m_1,j_1]\big)\ ] ] and .we consider two cases : : at time in the set , j\in[j_3,j_3 - 1+\hh/5]\}\ ] ] there is at least one hole , i.e. , for at least one . in this case , by ( [ pust_tri ] ) no particle can be in the set , j\in[j_4-\hh/5,j_4]\},\ ] ] i.e. , for all we have that .note that , so the `` image '' of after turns completely covers . on the other hand , be completely empty , so there should have been a lot of particle falls in order to avoid . in what follows we estimate the minimal number of falls necessary ( and , consequently , we find the minimal amount by which the energy should decrease ) .define the set , j\in[j_4-\hh/10,j_4]\ } \subset u'_2.\ ] ] for any there is a unique such that , and , by the above observation , , so the cell originally contained a particle . to guarantee that that particle is not in at time , at least one of the following two possibilities must occur : * either , * or , but in this case , by lemma [ jm ] , .denote ; for the both of the above possibilities , we obtained in fact that .since the number of cells in the set is at least , the number of particle falls until time should be at least . by lemma [ prop_e ] ( ii ) , it means that , for the case 1 , : there are no holes at time in the set , i.e. , for all . using the duality between holes and particles , this case can be treated quite analogously to the case 1 .namely , we note first that the `` image '' of after turns completely covers .so , in order to escape , the holes that are `` candidates '' to be there must make a sufficient number of movements in the upwards direction . in the same way as in the case 1, one can work out all the details to obtain that ( [ hjk ] ) is valid for the case 2 as well .we continue proving theorem [ det_bs ] . by ( [ minh ] ) and ( [ hjk ] ) ,there exist such that where ( the formula ( [ iteration ] ) will play an important role in the proof of theorem [ ran_bs ] as well ) .consider now the initial configuration ; by lemma [ prop_e ] ( i ) , for some .fix an arbitrary and define ; then there exists ( depending only on ) such that . by ( [ iteration ] ) this means that where , i.e. 
, after moves we will arrive to a configuration with small normalized energy .now we are almost done with the proof of theorem [ det_bs ] , and it remains only to make one small effort : we have to prove that if the energy is small , then either is already close to the triangular configuration ( in the sense of the distance ) , or it will come close to after moves .define the sets it is elementary to see that , for fixed and for all large enough we need the following [ srezaem_hvosty ] suppose that , and put + 2 ] . on the other hand ,if , then clearly =0 ] .for any , let us define configurations in the following way : if , let where are chosen in such a way that and for any we have .define the operator by i.e. , making the -move consists of making a move of deterministic bulgarian solitaire , and then adding cards to the new pile ( so that ) . for the simplicity of notations, we do not indicate in the dependence on and ; note also that in the above display we do not assume that , so need not apply to only . for two configurations , say that if and for all .[ dominacao ] suppose that and ( where are the quantities from lemma [ reasonable ] ) .then for any there exists such that \geq 1 - n^m e^{-\sigma_2n^{\delta_0}}.\ ] ] _ proof ._ let us refer to the piles of and as and respectively .then , the piles born at the moment are referred to as and . using the notation , for , let and stand for the sizes of the piles and at the moment , respectively ( if a pile is emptied at some moment , then we mean that the size remains for all ) .clearly , the event where we define the event by define also the event ; by lemma [ reasonable ] ( ii ) we know that \geq 1 - n^m e^{-\sigma_1{n^{1/2}}} ] ._ first , each particle added to changes the energy by at most , so we have for some constants using the same sort of argument and the fact that for any , with the help of lemma [ nao_fugira ] and ( [ asd ] ) we obtain introduce the event . by lemma [ dominacao ]we have \geq 1-n'_s e^{-\sigma_2n^{\delta_0}},\ ] ] and , since analogously to ( [ asd])([asd ] ) we obtain that on now , we have that , and . since , we obtain the proof of ( [ eq_l2.8 ] ) from ( [ iteration ] ) , ( [ asd ] ) , ( [ fff * * * ] ) , and ( [ fff * * * * ] ) . as for the second claim of lemma [ iter_rand ] , we note that for , by lemma [ prop_e ] ( ii ) it holds that , and then use the same kind of estimates as used above. now we are ready to finish the proof of theorem [ ran_bs ] . by lemma [ reasonable ] ( i ) there are such that ( [ sdf ] ) holds .note that there exists such that if , then .define .let and for .take , , and define ( cf .( [ a12 ] ) and ( [ a34 ] ) ) .let ; since , by examining the iteration scheme we obtain that there exists such that .let \},\ ] ] and define also , . take any and denote . by ( [ osn_fakt ] ) andlemma [ reasonable ] ( i ) we can write where now , by lemma [ iter_rand ] , the left - hand side of ( [ linha1 ] ) can be bounded from below as follows : \nonumber\\ & \geq&\pi_{p , n}(\l_n ) ( 1-e^{-\sigma_3n^{\delta_1 } } ) . \label{yyy}\end{aligned}\ ] ] again using lemma [ iter_rand ] , we write using now ( [ yyy ] ) and ( [ yyy ] ) together with the trivial bound , we obtain from ( [ linha1])([linha2 ] ) that for some by induction , we then obtain that there is such that for any so , since , taking summation in ( [ qqq ] ) we obtain for some that now , the last step of the proof of theorem [ ran_bs ] is analogous to what was done in lemma [ srezaem_hvosty ] . 
note that if , then so if , then , thus showing that define take any , and denote ( recall that ) . using ( [ osn_fakt ] ) , we write where observe that if and is small enough , then after moves there will be no particles in the set , with probability at least for some .so , for the left - hand side of ( [ lll1 ] ) we can write on the other hand , the same argument implies that and the bound is trivial .so , using ( [ oc_en ] ) and ( [ lhs1 ] ) , we obtain from ( [ lll1 ] ) that abbreviate and define \big\},\end{aligned}\ ] ] , .analogously to ( [ linha1])([linha2 ] ) and ( [ lll1])([lll2 ] ) , we write where the following fact can be deduced from ( [ poln_tri ] ) : if and for some , then .then , by examining the evolution of on the etienne diagram and using lemma [ dominacao ] , it is elementary to obtain that for any , \geq 1 - e^{-c_7{n^{1/2}}}.\ ] ] using that fact , one can bound the left - hand side of ( [ zzz1 ] ) from below by and the term can be bounded from above by . then , it is straightforward to write , .denoting now , analogously to ( [ qqq])([qqq ] ) we obtain summing over and taking ( [ jsueh ] ) and ( [ oc_h0 ] ) into account , we finally obtain that for some ( depending on ) since and is arbitrary , we complete the proof of theorem [ ran_bs ] ( note that for ) .a natural question that one may ask is : starting from an initial configuration with , how many steps ( of the deterministic game ) are necessary to reach where as . from the proof of theorem [ det_bs ] it can be deduced that if , , then moves suffice . however , this result is only nontrivial when ( since moves are always enough to reach the `` exact '' triangle ) , and even then it is almost certainly far from being precise . also , loosely speaking , theorem [ ran_bs ] shows that the typical deviation from the triangle is of order at most .again , we do not believe that that result is the best possible one .in fact , the author has strong reasons to conjecture that the typical deviation should be of order ; however , the proof of that is still beyond our reach .the author is thankful to pablo ferrari for many useful discussions about the random bulgarian solitaire , and to ira gessel , who posed the problem of finding the limiting shape for that model during the open problems session at the conference _ discrete random walks 2003 _ ( ihp , paris ) .also , the author thanks the anonymous referees for careful reading of the manuscript and useful comments and suggestions .
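for readers who wish to experiment, here is a minimal simulation sketch in python. it assumes the convention stated in the introduction: each nonempty pile independently loses one card with probability p, and the removed cards form a new pile (p = 1 recovers the deterministic game); the distance to the triangular configuration is taken as the l1 distance between sorted pile sizes, and the parameter choices are illustrative.

```python
import random

def move(piles, p=1.0):
    # one move of random bulgarian solitaire: every nonempty pile loses one
    # card with probability p (independently); the removed cards form a new
    # pile. p = 1.0 recovers the deterministic game.
    hit = [random.random() < p for _ in piles]
    new = [x - 1 for x, h in zip(piles, hit) if h and x > 1]   # piles of size 1 vanish
    new += [x for x, h in zip(piles, hit) if not h]
    removed = sum(hit)
    if removed:
        new.append(removed)
    return sorted(new, reverse=True)

def dist_to_triangle(piles, k):
    # l1 distance to the triangular configuration (k, k-1, ..., 1)
    tri = list(range(k, 0, -1))
    m = max(len(piles), len(tri))
    a = piles + [0] * (m - len(piles))
    b = tri + [0] * (m - len(tri))
    return sum(abs(x - y) for x, y in zip(a, b))

k = 30
n = k * (k + 1) // 2          # a triangular number of cards
piles = [n]                   # start from a single pile
for _ in range(10 * n):
    piles = move(piles, p=0.7)
print(dist_to_triangle(piles, k), 'vs n =', n)
# the deviation typically stays small compared to n, in line with the
# concentration of the stationary measure stated in theorem [ran_bs]
```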
we consider a stochastic variant of the game of bulgarian solitaire. for the stationary measure of the random bulgarian solitaire, we prove that most of its mass is concentrated on (roughly) triangular configurations of a certain type. + *keywords:* shape theorem, triangular configuration, markov chain, stationary measure departamento de estatística, instituto de matemática e estatística, universidade de são paulo, rua do matão 1010, cep 05508-090, são paulo sp, brasil. + e-mail: popov.usp.br
in the past, idealized models for the turbulent fluctuations found in the solar wind plasma or in the interstellar medium have been proposed (e.g. matthaeus et al.). we are concerned with statistically axisymmetric models of magnetostatic fluctuations that are transverse to a uniform mean magnetic field. if solar wind turbulence is considered, the mean field might be identified with the magnetic field of the sun. the total magnetic field is a superposition of this mean field and the fluctuations. whereas we usually approximate the mean field by a constant field aligned parallel to the z-axis, the turbulent contribution has to be replaced by turbulence models. some prominent examples are slab, 2d, and two-component models that include both slab and 2d contributions (e.g. matthaeus et al. 1990). there are recent spacecraft measurements of magnetic correlations in the solar wind (see e.g. matthaeus et al. 2005, dasso et al. 2007). such measurements are very interesting and important since they allow an improved understanding of turbulence. for instance, characteristic length scales of turbulence such as the correlation length, the bendover scale, and the dissipation scale can be obtained from such observations. also the investigation of spectral anisotropy by using data from different spacecraft missions such as wind and ace is possible. these properties of solar wind turbulence are very important for several investigations (heating and damping of the solar wind plasma, transport of charged cosmic rays). a further important turbulence property is the turbulence dynamics (the time dependence of the stochastic magnetic fields). in principle, data sets from wind and ace can also be used to compute dynamical correlation functions to explore the turbulence dynamics. in a recent article (shalchi 2008), magnetic correlation functions were computed analytically. such analytical forms of magnetic correlations complement data analysis results such as matthaeus et al. (2005) and dasso et al. (2007). since we expect that future data analysis work will also allow the investigation of temporal correlation functions, we explore theoretically (numerically and analytically) the forms of these eulerian correlations. these results can be compared with data analysis results as soon as they are available. the organization of the paper is as follows: in section 2 we define and discuss the basic parameters which are useful for describing turbulence. furthermore, we explain the slab, the 2d, and the slab/2d composite model. in section 3 we review different models for the turbulence dynamics. in section 4 we compute eulerian correlation functions numerically and analytically. in section 5 the results of this article are summarized. the key function in turbulence theory is the two-point-two-time correlation tensor. for homogeneous turbulence its components are r_{lm}(\vec{r},t) = \langle \delta b_l(\vec{x},t_0) \, \delta b_m(\vec{x}+\vec{r},t_0+t) \rangle. [s1e1] the brackets used here denote the ensemble average. it is convenient to introduce the correlation tensor in wavevector space. by using the fourier representation \delta b_l(\vec{x},t) = \int d^3k \, \delta b_l(\vec{k},t) \, e^{i \vec{k} \cdot \vec{x}} [s1e2] we find r_{lm}(\vec{r},t) = \int d^3k \int d^3k' \, \langle \delta b_l^*(\vec{k},t_0) \, \delta b_m(\vec{k}',t_0+t) \rangle \, e^{i (\vec{k}'-\vec{k}) \cdot \vec{x}} \, e^{i \vec{k}' \cdot \vec{r}}. [s1e3] for homogeneous turbulence we have \langle \delta b_l^*(\vec{k},t_0) \, \delta b_m(\vec{k}',t_0+t) \rangle = p_{lm}(\vec{k},t) \, \delta(\vec{k}-\vec{k}') [s1e4] with the correlation tensor p_{lm}(\vec{k},t) in wavevector space. by assuming the same temporal behaviour of all tensor components, we can write p_{lm}(\vec{k},t) = p_{lm}(\vec{k}) \, \gamma(\vec{k},t) with the dynamical correlation function \gamma(\vec{k},t). eq. ([s1e3]) then becomes r_{lm}(\vec{r},t) = \int d^3k \, p_{lm}(\vec{k}) \, \gamma(\vec{k},t) \, e^{i \vec{k} \cdot \vec{r}} [s1e5] with the magnetostatic tensor p_{lm}(\vec{k}).
in this paragraph we discuss the static tensor defined in eq. ([s1e5]). matthaeus & smith (1981) have investigated axisymmetric turbulence and derived a general form of the correlation tensor for this special case. in our case the symmetry axis has to be identified with the axis of the uniform mean magnetic field. for most applications (e.g. plasma containment devices, the interplanetary medium) the condition of axisymmetry should be well satisfied. furthermore, we neglect magnetic helicity and we assume that the parallel component of the turbulent fields is zero or negligibly small. in this case the correlation tensor has the form p_{lm}(\vec{k}) = a(k_\parallel,k_\perp) \left( \delta_{lm} - \frac{k_l k_m}{k^2} \right), \quad l,m = x,y \quad [s1e20] and p_{lz} = p_{zl} = p_{zz} = 0. the function a(k_\parallel,k_\perp) is controlled by two turbulence properties: the turbulence geometry and the turbulence wave spectrum. the geometry describes how a depends on the direction of the wave vector with respect to the mean field. there are at least three established models for the turbulence geometry: 1. the slab model: here we assume the form a^{slab}(k_\parallel,k_\perp) = g^{slab}(k_\parallel) \, \frac{\delta(k_\perp)}{k_\perp}. [s1e21] in this model the wave vectors are aligned parallel to the mean field. 2. the 2d model: here we instead take a^{2d}(k_\parallel,k_\perp) = g^{2d}(k_\perp) \, \frac{\delta(k_\parallel)}{k_\perp}. [s1e22] in this model the wave vectors are aligned perpendicular to the mean field and are therefore in a two-dimensional (2d) plane. 3. the slab/2d composite (or two-component) model: in reality the turbulent fields can depend on all three coordinates of space. a quasi-three-dimensional model is the so-called slab/2d composite model, where we assume a superposition of slab and 2d fluctuations. the correlation tensor then has the form p_{lm}^{comp}(\vec{k}) = p_{lm}^{slab}(\vec{k}) + p_{lm}^{2d}(\vec{k}). [s4e2] in the composite model the total strength of the fluctuations is \delta b^2 = \delta b_{slab}^2 + \delta b_{2d}^2. the composite model is often used to model solar wind turbulence. it was demonstrated by several authors (e.g. bieber et al. 1994, 1996) that the slab/2d composite model should be realistic in the solar wind at 1 au heliocentric distance. the wave spectrum describes the wave number dependence of a. in the slab model the spectrum is described by the function g^{slab}(k_\parallel) and in the 2d model by g^{2d}(k_\perp). as demonstrated in shalchi (2008), the combined correlation function (defined as r_\perp = r_{xx} + r_{yy}) for pure slab turbulence is given by r_\perp^{slab}(z) = 8\pi \int_0^\infty dk_\parallel \, g^{slab}(k_\parallel) \cos(k_\parallel z) [corrslab] and the correlation function for pure 2d turbulence is r_\perp^{2d}(\rho) = 2\pi \int_0^\infty dk_\perp \, g^{2d}(k_\perp) \, j_0(k_\perp \rho). [s3e12] here z is the distance parallel to the mean magnetic field and \rho denotes the distance in the perpendicular direction. to evaluate these formulas we have to specify the two wave spectra g^{slab} and g^{2d}. in a cosmic ray propagation study, bieber et al. (1994) proposed spectra of the form g^{slab}(k_\parallel) = \frac{c(\nu)}{2\pi} \, l_{slab} \, \delta b_{slab}^2 \, (1 + k_\parallel^2 l_{slab}^2)^{-\nu} and g^{2d}(k_\perp) = \frac{2 c(\nu)}{\pi} \, l_{2d} \, \delta b_{2d}^2 \, (1 + k_\perp^2 l_{2d}^2)^{-\nu} with the inertial range spectral index \nu, the two bendover length scales l_{slab} and l_{2d}, and the strengths of the slab and the 2d fluctuations \delta b_{slab} and \delta b_{2d}. by requiring normalization of the spectra, \delta b^2 = \delta b_x^2 + \delta b_y^2 + \delta b_z^2 = \int d^3k \, [p_{xx}(\vec{k}) + p_{yy}(\vec{k}) + p_{zz}(\vec{k})], [s2e4] we find c(\nu) = \frac{1}{2\sqrt{\pi}} \frac{\gamma(\nu)}{\gamma(\nu - 1/2)}. [s2e5] by combining these spectra with eqs. ([corrslab]) and ([s3e12]), the slab correlation function r_\perp^{slab}(z) = 4 c(\nu) \, \delta b_{slab}^2 \, l_{slab} \int_0^\infty dk_\parallel \, (1+k_\parallel^2 l_{slab}^2)^{-\nu} \cos(k_\parallel z) [s2e6] as well as the 2d correlation function r_\perp^{2d}(\rho) = 4 c(\nu) \, \delta b_{2d}^2 \, l_{2d} \int_0^\infty dk_\perp \, (1+k_\perp^2 l_{2d}^2)^{-\nu} \, j_0(k_\perp \rho) [s3e13] can be calculated. in eq. ([s3e13]) we have used the bessel function j_0.
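as a quick numerical illustration, the static correlation functions (s2e6) and (s3e13) can be evaluated by quadrature. the sketch below (python with scipy) assumes the kolmogorov value 2ν = 5/3 and unit bendover scales, computes the correlations up to the common prefactor 4c(ν)δb²l (which cancels after normalization to the value at the origin), and replaces the infinite upper limit of the 2d integral by a large cutoff.

```python
import numpy as np
from scipy import integrate, special

nu = 5.0 / 6.0     # assumed kolmogorov inertial-range index: 2*nu = 5/3

def r_slab(z, l=1.0):
    # integrand of eq. (s2e6) without the prefactor 4*c(nu)*db^2*l
    f = lambda k: (1.0 + (k * l) ** 2) ** (-nu)
    if z == 0.0:
        val, _ = integrate.quad(f, 0.0, np.inf)
    else:
        # oscillatory fourier integral: use the dedicated cosine weight
        val, _ = integrate.quad(f, 0.0, np.inf, weight='cos', wvar=z)
    return val

def r_2d(rho, l=1.0, kmax=500.0):
    # integrand of eq. (s3e13) without the prefactor 4*c(nu)*db^2*l;
    # the large cutoff kmax stands in for the infinite upper limit
    f = lambda k: (1.0 + (k * l) ** 2) ** (-nu) * special.j0(k * rho)
    val, _ = integrate.quad(f, 0.0, kmax, limit=500)
    return val

for x in (0.0, 0.5, 1.0, 2.0, 5.0):
    print(x, r_slab(x) / r_slab(0.0), r_2d(x) / r_2d(0.0))
```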
in shalchi (2008) such calculations, valid for magnetostatic turbulence, are presented. for dynamical turbulence the slab and the 2d correlation functions from eqs. ([corrslab]) and ([s3e12]) become r_\perp^{slab}(z,t) = 8\pi \int_0^\infty dk_\parallel \, g^{slab}(k_\parallel) \cos(k_\parallel z) \, \gamma^{slab}(k_\parallel,t) and r_\perp^{2d}(\rho,t) = 2\pi \int_0^\infty dk_\perp \, g^{2d}(k_\perp) \, j_0(k_\perp \rho) \, \gamma^{2d}(k_\perp,t). [generalcorr] for the model spectrum defined in the previous paragraph these formulas become r_\perp^{slab}(z,t) = 4 c(\nu) \, \delta b_{slab}^2 \, l_{slab} \int_0^\infty dk_\parallel \, (1+k_\parallel^2 l_{slab}^2)^{-\nu} \cos(k_\parallel z) \, \gamma^{slab}(k_\parallel,t) and r_\perp^{2d}(\rho,t) = 4 c(\nu) \, \delta b_{2d}^2 \, l_{2d} \int_0^\infty dk_\perp \, (1+k_\perp^2 l_{2d}^2)^{-\nu} \, j_0(k_\perp \rho) \, \gamma^{2d}(k_\perp,t). [corrdyn2] to evaluate these equations we have to specify the dynamical correlation functions \gamma^{slab} and \gamma^{2d}, which is done in the next section. in the following we discuss several models for the dynamical correlation function. in table [dyntab] the different models for the dynamical correlation function are summarized and compared with each other. __different models for the dynamical correlation function. here, v_a is the alfvén speed and \alpha is a parameter that allows one to adjust the strength of dynamical effects. the parameters of the nadt model are defined in eq. ([c2s6e3]).__ by comparing the results of this article with spacecraft measurements we can find out whether modern models like the nadt model are realistic or not. this would be very useful for testing our understanding of turbulence. some results of this article, such as eqs. ([generalcorr]), are quite general and can easily be applied to other turbulence models (e.g. other wave spectra). _this research was supported by deutsche forschungsgemeinschaft (dfg) under the emmy-noether program (grant sh 93/3-1). as a member of the _junges kolleg_, a. shalchi also acknowledges support by the nordrhein-westfälische akademie der wissenschaften._ by using abramowitz & stegun (1974) and gradshteyn & ryzhik (2000) we can compute analytically the different eulerian correlation functions defined in eqs. ([c2s5e9]) and ([c2s5e10]). for the plasma wave model the results are given in the main part of the paper. for the damping model of dynamical turbulence we can use \int_0^\infty dx \, (1+x^2)^{-\nu} \, e^{-a_i x} = \dots, where bessel functions and the struve function enter on the right-hand side. by making use of this result we find for the eulerian correlation function e_\perp^{i,dt}(t) = 4 c(\nu) \, \delta b_i^2 \, \dots with a_{slab} = \dots and a_{2d} = \dots for the damping model of dynamical turbulence. for the random sweeping model we can employ \int_0^\infty dx \, (1+x^2)^{-\nu} \, e^{-(a_i x)^2} = u(\dots, \dots, a_i^2 \dots^2) with the confluent hypergeometric function u. by employing this result we find for the eulerian correlation function e_\perp^{i,rs}(t) = 2 c(\nu) \, \delta b_i^2 \, u(\dots, \dots, a_i^2 \dots^2) for the random sweeping model. for the nadt model we only have to explore the 2d fluctuations (the slab result is trivial and discussed in the main part of this paper). eq. ([c2s5e10]) can be rewritten as e_\perp^{2d} = 4 c(\nu) \, \delta b_{2d}^2 \, [\dots]. the first integral is trivial; the second one can be expressed by an exponential integral function: e_\perp^{2d} = 4 c(\nu) \, \delta b_{2d}^2 \, [\dots]. [ananadt2d] this is the final result for the eulerian correlation function of the 2d fluctuations. to evaluate this expression for late times we can approximate the exponential integral function by using \int_1^\infty \dots \, e^{-\dots x} \, dx \approx \dots. [approxen] here we assumed that the main contribution to the integral comes from the smallest values of the integration variable.
by combining eq. ([approxen]) with eq. ([ananadt2d]) we find approximately e_\perp^{2d} \approx 4 c(\nu) \, \delta b_{2d}^2 \, e^{-\dots}, corresponding to an exponential behavior of the eulerian correlation function. for the correlation time scale we find \dots. a further discussion of these results can be found in the main part of the text.
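the appendix integrals can also be checked numerically. the sketch below (python with scipy) evaluates the normalized eulerian correlation e(t)/e(0) for the damping and random sweeping models, i.e., the integrals ∫₀^∞ dx (1+x²)^{-ν} e^{-a x} and ∫₀^∞ dx (1+x²)^{-ν} e^{-(a x)²} shown above; writing a = γt with a single rate parameter γ is a simplifying assumption that absorbs the model-dependent constants a_slab and a_2d.

```python
import numpy as np
from scipy import integrate

nu = 5.0 / 6.0   # same inertial-range index as before

def eulerian(t, gamma=1.0, model='damping'):
    # normalized e(t)/e(0): 'damping' uses exp(-a*x), 'sweeping' uses
    # exp(-(a*x)^2), matching the appendix integrals over x = k*l;
    # a = gamma*t is a simplifying assumption (see lead-in)
    a = gamma * t
    if model == 'damping':
        g = lambda x: np.exp(-a * x)
    else:
        g = lambda x: np.exp(-(a * x) ** 2)
    num, _ = integrate.quad(lambda x: (1 + x * x) ** (-nu) * g(x), 0, np.inf)
    den, _ = integrate.quad(lambda x: (1 + x * x) ** (-nu), 0, np.inf)
    return num / den

for t in (0.0, 0.5, 1.0, 2.0, 5.0):
    print(t, eulerian(t, model='damping'), eulerian(t, model='sweeping'))
# the gaussian random sweeping factor suppresses the correlation faster at
# late times than the exponential damping factor
```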
current spacecraft missions such as wind and ace can be used to determine magnetic correlation functions in the solar wind. data sets from these missions can, in principle, also be used to compute so-called eulerian correlation functions. these temporal correlations are essential for understanding the dynamics of solar wind turbulence. in the present article we calculate these dynamical correlations by using well-established methods. the results are very useful for comparison with eulerian correlations obtained from spacecraft missions.
this note is about two classical problems in nonparametric statistical analysis of recurrent event data, both formalised within the framework of a simple, stationary renewal process. we first consider observation around a fixed time point, i.e., we observe a backward recurrence time and a forward recurrence time. it is well known that the nonparametric maximum likelihood estimator of the gap-time distribution is the cox-vardi estimator (cox 1969, vardi 1985) derived from the length-biased distribution of the gap time. however, winter & földes (1988) proposed to use a product-limit estimator based on , with delayed entry given by . keiding & gill (1988) clarified the relation of that estimator to the standard left-truncation problem. unfortunately this discussion was omitted from the published version (keiding & gill, 1990). since these simple relationships do not seem to be on record elsewhere, we offer them here. the second observation scheme considers a stationary renewal process observed in a finite interval where the left endpoint does not necessarily correspond to an event. the full likelihood function is complicated, and we briefly survey possibilities for restricting attention to various partial likelihoods, in the nonparametric case again allowing the use of simple product-limit estimators. winter & földes (1988) studied the following estimation problem. consider independent renewal processes in equilibrium with underlying distribution function , which we shall assume absolutely continuous with density , minimal support interval , and hazard , . the reason for our unconventional choice for the hazard rate belonging to will become apparent later. corresponding to a fixed time, say , the backward and forward recurrence times and , , are observed; their sums are length-biased observations from , i.e., their density is proportional to . let denote a generic triple . we quote the following distribution results: let be the expectation value corresponding to the distribution ; then the joint distribution of and has density , the marginal distributions of and are equal with _density_ , and the marginal distribution of has density , the length-biased density corresponding to . winter and földes considered the product-limit estimator where is the _number at risk_ at time . this estimator is the same as the kaplan-meier estimator for iid survival data left-truncated at (kaplan & meier 1958, andersen et al.). winter & földes showed that it is strongly consistent for the _underlying_ survival function . we shall show how the derivation of this estimator follows from a simple markov process model similar to the one used by keiding & gill (1990) to study the random truncation model. first notice that the conditional distribution of given that has density , that is, intensity (hazard) , , which is just the hazard corresponding to the underlying distribution left-truncated at . now define corresponding to a stochastic process on ] it should be required that in the terminology of keiding & gill (1990, sec. 5c), and using and , the integrability condition translates into , or finiteness of , where has the underlying ("length-unbiased") interarrival time distribution. it may easily be seen from gill et al. (1988) that the same condition is needed to ensure weak convergence of the cox-vardi estimator.
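a small sketch of the winter-földes product-limit estimator with delayed entry, applied to simulated equilibrium renewal data (python; the gamma gap-time distribution and the length-biased sampling construction below are illustrative assumptions used only to generate test data).

```python
import numpy as np

def product_limit(entry, exit_):
    # winter-foldes estimator: subject i is at risk on (entry[i], exit_[i]];
    # s(t) is the product of (1 - 1/y(x_i)) over event times x_i <= t,
    # with y(u) the number at risk just before u
    entry = np.asarray(entry, dtype=float)
    exit_ = np.asarray(exit_, dtype=float)
    order = np.argsort(exit_)
    times, surv, s = [], [], 1.0
    for t in exit_[order]:
        at_risk = np.sum((entry < t) & (exit_ >= t))
        s *= 1.0 - 1.0 / at_risk
        times.append(t)
        surv.append(s)
    return np.array(times), np.array(surv)

# simulated equilibrium renewal data with gamma(shape) gap times: the
# observed total gap x = l + r is length biased, i.e. gamma(shape + 1),
# and given x the backward recurrence time l is uniform on (0, x)
rng = np.random.default_rng(1)
shape, n = 2.0, 2000
x = rng.gamma(shape + 1.0, 1.0, size=n)     # length-biased gaps
l = rng.uniform(size=n) * x                 # backward recurrence times
t, s = product_limit(l, x)
# s estimates the survival function of the *underlying* gamma(shape)
# distribution, not of the length-biased one
```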
a variation of the observation scheme of this section would be to allow also right censoring of the . this can be immediately included in the markov-process / counting-process approach leading to the inefficient product-limit type estimator; the delayed-entry observations are simultaneously right-censored. see vardi (1985, 1989) and asgharian et al. (2002) for treatment of the full nonparametric maximum likelihood estimator of , extending the cox-vardi estimator to allow right censoring. other ad hoc estimators and the rich relationships with a number of other important nonparametric estimation problems are discussed by denby and vardi (1985) and vardi (1989). we consider again a stationary renewal process on the whole line and assume that we observe it in some interval ], actually a right-censored version of 3., contributing factors of the form to the likelihood. + mcclean & devine (1995) studied nonparametric maximum likelihood estimation in the conditional distribution given that there is at least one renewal in the interval, i.e., that there are no observations of type 4. our interest is in basing the estimation only on complete or right-censored gap times, i.e., observations of type 1 or 2. when this is possible, we have simple product-limit estimators in the one-sample situation, and we may use well-established regression models (such as cox regression) to account for covariates. peña et al. (2001) assumed that observation started at a renewal (thereby defining away observations of types 3 and 4) and gave a comprehensive discussion of exact and asymptotic properties of product-limit estimators with comparisons to alternatives, building in particular on results of gill (1980, 1981) and sellke (1988). the crucial point here is that calendar time and time since last renewal both need to be taken into account, so the straightforward martingale approach displayed by andersen et al. (1993) is not available. peña et al. also studied robustness to deviations from the assumption of independent gap times. as noted by aalen & husebye (1991) in their attractive non-technical discussion of observation patterns, observation does however often start between renewals. (in the example of keiding et al. (1998), auto insurance claims were considered in a fixed calendar period.) as long as observation starts at a stopping time, inference is still valid, so by starting observation at the first renewal in the interval we can essentially refer back to peña et al. a more formal argument could be based on the concept of the _aalen filter_, see andersen et al. (1993, p. 164). the resulting product-limit estimators will not be fully efficient, since the information in the backward recurrence time (types 3 and 4) is ignored. it is important to realize that the validity of this way of reducing the data depends critically on the independence assumptions of the model. keiding et al. (1998), cf.
Keiding et al. (1998) (cf. Keiding (2002) for details) used this fact to base a goodness-of-fit test on a comparison of the full nonparametric maximum likelihood estimator with the product-limit estimator.

Similar terms appear in another model, the _Laslett line segment problem_ (Laslett, 1982). Suppose one has a stationary Poisson process of points on the real line, with constant intensity. We think of the real line as a calendar time axis, and the points of the Poisson process will be called _pseudo renewal times_ or _birth times_ of some population of individuals. Suppose the individuals have independent and identically distributed lifetimes, each one starting at the corresponding birth time; the calendar time of the end of each lifetime can of course be called a _death time_. Now suppose that _all we can observe_ are the intersections of individuals' lifetimes (thought of as time segments on the time axis) with an observational window. Four types of observation arise:

1. Complete _proper_ lifetimes, corresponding to births within the window for which death occurred before the window closed.
2. Censored _proper_ lifetimes, corresponding to births within the window for which death occurred after the window closed.
3. Complete _residual_ lifetimes, corresponding to births which occurred at an unknown moment before the window opened, and for which death occurred inside the window.
4. Censored _residual_ lifetimes, corresponding to births which occurred at an unknown moment before the window opened, for which death occurred after the window closed, and which are therefore censored at its end.

A minimal programmatic classification of these four types is sketched below. The _number_ of at least partially observed lifetimes (proper or residual) is random, and Poisson distributed with mean equal to the intensity of the underlying Poisson process of birth times multiplied by a factor depending on the lifetime distribution and the window length. This provides a fifth, "Poisson", factor in the nonparametric likelihood function for the intensity and the lifetime distribution $F$, based on all the available data. Maximizing over both, the mean of the Poisson distribution is estimated by the observed number of partially observed lifetimes. Thus we find that the _profile likelihood_ for $F$, and the _marginal likelihood_ for $F$ based only on contributions 1.-4., are proportional to one another. Nonparametric maximum likelihood estimation of $F$ was studied by Wijers (1995) and van der Laan (1996), cf. van der Laan & Gill (1999). Some of their results, and the calculations leading to this likelihood, were surveyed by Gill (1994, pp. 190 ff.). The nonparametric maximum likelihood estimator is consistent; whether or not it converges in distribution as the window length tends to infinity is unknown. The model has a singularity coming from the vanishing probability density of complete lifetimes just larger than the length of the observation window, corresponding to births just before the start of the observation window and deaths just after its end. Van der Laan showed that a mild reduction of the data by grouping or binning leads to a much better behaved nonparametric maximum likelihood estimator.
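For concreteness, here is a minimal sketch of how observed segments split into the four types (the function name and return keys are ours, not notation from the paper); birth and death times are known to the simulator, while the observer sees only the returned durations.

```python
def classify_segments(births, deaths, t1, t2):
    """Split lifetimes [birth, death) by how they intersect the observation
    window [t1, t2]; returns observed (possibly censored) durations by type.
    Segments that do not meet the window are invisible to the observer."""
    out = {"complete_proper": [], "censored_proper": [],
           "complete_residual": [], "censored_residual": []}
    for b, d in zip(births, deaths):
        if d < t1 or b > t2:
            continue                                      # unobserved
        if b >= t1:                                       # birth inside window
            if d <= t2:
                out["complete_proper"].append(d - b)      # type 1
            else:
                out["censored_proper"].append(t2 - b)     # type 2
        else:                                             # birth before window
            if d <= t2:
                out["complete_residual"].append(d - t1)   # type 3
            else:
                out["censored_residual"].append(t2 - t1)  # type 4
    return out
```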
If the amount of binning decreases at an appropriate rate as the window length increases, this leads to an asymptotically efficient estimator of $F$. This procedure can be thought of as _regularization_, often needed in nonparametric inverse statistical problems, where maximum likelihood can be too greedy. Both the unregularized and the regularized estimators are easy to compute with the EM algorithm, and the algorithm is not as painfully slow as in other inverse problems, since this is still a problem where root-$n$ rate estimation is possible. The problem allows, just as we have seen in earlier sections, all the same inefficient but rapidly computable product-limit type estimators based on various marginal likelihoods. Moreover, since the direction of time is basically irrelevant to the model, one can also look at the process backwards, leading to another plethora of inefficient but easy estimators. One can even combine, in a formal way, the censored survival data from a forward and a backward time point of view, which comes down to counting all uncensored observations twice, all singly censored observations once, and discarding all doubly censored data; a sketch of this construction is given below. (This idea was essentially suggested much earlier by R. C. Palmer and D. R. Cox, cf. Palmer (1948).) The attractive features of this estimator are again the ease of computation, the fact that it discards only the doubly censored data, and its symmetry under time reversal. The asymptotic distribution theory of this estimator is of course not standard, but using the nonparametric delta method one can fairly easily derive formulas for asymptotic variances and covariances. In practice one could easily and correctly use the nonparametric bootstrap, resampling from the partially observed lifetimes, where again a resampled complete lifetime is entered twice into the estimate. The Laslett line segment problem has rather important extensions to observation of line segments (e.g., cracks in a rock surface) observed through an observational window in the plane. Under the assumption of a homogeneous Poisson line segment process one can write down nonparametric likelihoods and maximize them with the EM algorithm; it seems that regularization may well be necessary to get optimal root-$n$ behaviour, but in principle it is clear how this might be done. Again, we have the same plethora of inefficient but easy product-limit type estimators. Van Zwet (2004) studied the behaviour of such estimators when the line segment process is not Poisson, but merely stationary. The idea is to use the Poisson process likelihood as a quasi-likelihood, i.e., as a basis for generating estimating equations, which will be unbiased but not efficient, just as in parametric quasi-likelihood. Van Zwet shows that this procedure works fine. Coming full circle, one can apply these ideas to the renewal process we first described in this section, and to the other models described in earlier sections. All of them generate stationary line segment processes observed through a finite time window on the line. Thus the nonparametric quasi-likelihood approach can be used there too.
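A minimal sketch of the combined forward-backward construction just described (names and data format are ours): each partially observed lifetime is a pair (duration, number of censored ends), and the product-limit computation below assumes no ties.

```python
import numpy as np

def forward_backward_km(observations):
    """Combined forward-backward product-limit estimate: complete durations
    enter twice, singly censored durations once (as censored), and doubly
    censored durations are discarded. `observations` is an iterable of
    (duration, n_censored_ends) pairs."""
    times, events = [], []
    for dur, n_cens in observations:
        if n_cens == 0:
            times += [dur, dur]; events += [1, 1]  # uncensored: counted twice
        elif n_cens == 1:
            times += [dur]; events += [0]          # singly censored: once
        # n_cens == 2: doubly censored, dropped
    order = np.argsort(times)
    times = np.asarray(times, float)[order]
    events = np.asarray(events)[order]
    at_risk = len(times) - np.arange(len(times))
    factors = np.where(events == 1, 1.0 - 1.0 / at_risk, 1.0)
    return times, np.cumprod(factors)
```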
Since, in the renewal process case, we are ignoring the fact that the intensity of the point process of births equals the inverse mean lifetime, we do not get full efficiency. It is therefore disputable whether it is worth using an inefficient ad hoc estimator which is difficult to compute, when we have the options of Soon and Woodroofe's fully efficient (but hard to compute) full nonparametric maximum likelihood estimator, and the many inefficient but easy and robust product-limit type estimators of this paper.

This research was partially supported by a grant (R01CA54706-12) from the National Cancer Institute and by the Danish Natural Sciences Council grant 272-06-0442 "Point process modelling and statistical inference".

Aalen, O. O. & Husebye, E. (1991). Statistical analysis of repeated events forming renewal processes. _Statistics in Medicine_ **10**, 1227-1240.

Keiding, N. (2002). Two nonstandard examples of the classical stratification approach to graphically assessing proportionality of hazards. In: _Goodness-of-Fit Tests and Model Validity_ (eds. C. Huber-Carol, N. Balakrishnan, M. S. Nikulin and M. Mesbah). Boston, Birkhäuser, 301-308.

Sellke, T. (1988). Weak convergence of the Aalen estimator for a censored renewal process. In: _Statistical Decision Theory and Related Topics IV_ (eds. S. Gupta & J. Berger), Springer-Verlag, New York, Vol. 2, 183-194.
Nonparametric estimation of the gap-time distribution in a simple renewal process may be considered a problem in survival analysis under particular sampling frames corresponding to how the renewal process is observed. This note describes several such situations where simple product-limit estimators, though inefficient, may still be useful.
For a long time, tunneling of a particle through a one-dimensional time-independent potential barrier was considered in quantum mechanics a representative of well-understood phenomena. By now it has been realized that this is not the case. The standard wave-packet analysis (SWPA) inherent to quantum theory, in which the study of the temporal aspects of tunneling is reduced to timing the motion of the center of "mass" (CM) of the corresponding wave packet, provides a clear prescription neither for interpreting the scattering of wave packets of finite spatial extent nor for introducing characteristic times for a tunneling particle. The latter is known as the tunneling time problem (TTP), which has been of great interest for the last decades. As is known, the main difficulties arising in interpreting the wave packet's tunneling are connected to the fact that there is no causal link between the transmitted (or reflected) and incident wave packets. One visible consequence is that the average kinetic energy of the particle differs between the transmitted, reflected and incident wave packets. For example, in the case of an opaque rectangular barrier, the velocity of the CM of the transmitted wave packet is larger than that of the incident one. Evidently, this fact needs a proper physical explanation: it would be strange to interpret this property of wave packets as evidence that the static potential barrier accelerates the particle in the asymptotic regions. One has also to point to the well-known Hartman effect, related to the acceleration of the CM of the transmitted wave packet to superluminal velocities. In many respects, the present interpretation of this property of wave packets is still controversial. Note that for wave packets that are wide (strictly speaking, infinite) in space, the average kinetic energy of particles before and after the interaction is the same; however, a causal link between the transmitted and incident wave packets does not appear in this limiting case either. Perhaps this is a basic reason why many physicists appraise the phase times introduced in the SWPA as ill-defined; at least, one recent review devoted to the TTP seems to be the last in which the SWPA is considered in a positive context. Apart from the SWPA, dealing with the CM of a wave packet, a variety of alternative approaches to introduce various characteristic times for a tunneling particle have been developed, in the same or a different setting of the tunneling problem. Among the alternative concepts, of interest are the dwell time, the Larmor time giving a way of measuring the dwell time, and the time of arrival, based on introducing either a suitable time operator or a positive operator valued measure.
Besides, of importance are studies of the temporal aspects of tunneling on the basis of the Feynman, Bohmian and Wigner approaches, which deal with random trajectories of particles. One should also mention the papers where the TTP is studied beyond the framework of the standard setting of the scattering problem. We have to stress, however, that in the standard setting, when the initial wave packet may include a zero-momentum component, none of the alternative approaches has led to commonly accepted characteristic times. Recent papers presenting new versions of the dwell time and of complex tunneling times evidence, too, that up to now there is no preferred time concept for a tunneling particle. There is an opinion that the TTP, in the standard setting of the one-dimensional scattering problem, is ill-defined, since it does not include the measurement process. We think that such an opinion is very questionable. Of course, in some cases the measurement process can essentially modify the original scattering process, and hence the study of its possible influence on the temporal aspects of this process can be very useful. At the same time, to state that any measurement-independent setting of the TTP is ill-defined is unacceptable in principle: otherwise, all Hamiltonians describing measurement-independent scattering processes (including the motion of a free particle, and the tunneling process) would have to be considered as having no physical sense. Any quantum scattering process, like a classical one, proceeds, irrespective of our assistance, in some space-time framework, so that quantum theory should give a clear and unambiguous prescription for defining both the spatial and the temporal limits of this process. The main question of the TTP, which implies a unique answer, is that of the (average) time spent by a quantum particle inside a finite barrier region. There is a particular case when answering this question is trivial: tunneling of a particle through the δ-potential barrier. Indeed, one can say _a priori_ that this characteristic time should be equal to zero for this potential,
for the probability of finding a particle in the barrier region of the δ-potential is zero. Note that the TTP is very often treated as the problem of introducing characteristic times for _a wave packet_ passing through a quantum-mechanical barrier. From the very outset, this formulation implies timing a lengthy object whose spatial size is comparable with (or even much larger than) the width of the potential barrier. Such a vision of the TTP is widespread; therefore it is no mere chance that the nonzero phase transmission time obtained in the SWPA for the δ-potential is viewed by many physicists as a fully expected result, allegedly speaking for the non-locality of a quantum scattering process. However, this result, derived in the SWPA by timing the motion of the CM of a wave packet, is _a priori_ inconsistent: any selected point of a wave packet should cross the point-like support of the δ-potential instantaneously. As regards the non-locality of tunneling, the example of the δ-potential shows explicitly that the time spent by a quantum particle in the barrier region provides insufficient information about a quantum one-particle scattering process. It is useful also to define the time interval during which the probability of finding a particle crossing the barrier region is sufficiently large. The necessity of this additional time scale is associated ultimately with the fact that the time of arrival of a particle at some point can be predicted, in quantum theory, only with an error amounting to the half-width of the corresponding wave packet. It is this characteristic time that must be derived with the wave packet's width taken into account. This time, which can be treated as the time of interaction of a quantum ensemble of particles with the barrier, is always greater than the time spent in the barrier region by each particle of the ensemble; it is this quantity that must be nonzero for the δ-potential. So, due to the uncertainty in the position of a tunneling particle, the (average) time spent by the particle in the barrier region is insufficient to give full information about its interaction with the barrier. However, there is one more peculiarity of the quantum description of one-dimensional scattering that drastically complicates solving the TTP. In classical theory, in timing a scattering particle for a given initial condition, we deal with a single trajectory, corresponding either to transmission or to reflection. In the quantum description we deal with a wave function that includes information about both alternative possibilities. Therefore every physicist addressing the TTP first has to resolve the dilemma of whether to introduce individual (transmission and reflection) times or to solve the TTP without distinguishing between transmission and reflection. One should recognize that at present this question is still open. Most time concepts, such as the time of arrival as well as the dwell, Larmor and phase tunneling times, suggest introducing individual characteristic times for transmission and reflection. As Nussenzveig points out, "[if some characteristic time] does not distinguish between reflected and transmitted particles, [this is] usually taken as a defect".
At the same time, Nussenzveig himself believes (ibid.) that "[a joint description of the whole ensemble of tunneling particles] is actually a virtue, since transmission and reflection are inextricably intertwined; only the characteristic times averaged over transmitted and reflected particles have a physical sense". The intrigue is that there are forcible arguments in both cases. On the one hand, quantum mechanics, as it stands, indeed provides no prescription for separating to-be-transmitted and to-be-reflected particles at the early stages of scattering. Thus, having no information about the behaviour of the two kinds of particles in the barrier region, it is impossible to find the average time spent in this region by particles of each kind; knowledge of their behaviour after the scattering event is insufficient for this purpose. On the other hand, the final state of a tunneling particle evidences that tunneling in fact consists of two alternative processes, transmission and reflection. Born's formula, underlying the statistical interpretation of quantum mechanics, fails in this case: the average values of the particle's position and momentum calculated over the whole ensemble of particles cannot be interpreted as the expectation values of these quantities. We consider this fact a poor basis for introducing characteristic times averaged over transmitted and reflected particles. In fact, the above controversy says that usual quantum mechanics provides neither a joint nor a separate description of transmitted and reflected particles. It enables one to study in detail the temporal behaviour of the wave packets describing the tunneling process, but it gives no basis for extracting from these detailed data the expectation values of the particle's position and momentum, nor for introducing its characteristic times. Its basic tools, Born's formula for calculating expectation values and the standard timing procedure, prove useless in studying a tunneling particle. The main idea of this paper is that in order to learn to calculate expectation values of physical observables for a tunneling particle (and solve the TTP on this basis) one needs to correct our understanding of the nature of a quantum one-particle scattering state and of the correspondence principle. It is now generally accepted that any quantum time-dependent one-particle state can be considered, in principle, as the quantum counterpart of some classical one-particle trajectory. However, generally speaking, this is not the case. In our approach, all quantum one-particle scattering processes described by the Schrödinger equation are divided into two classes, combined and elementary. If the wave packet describing a quantum one-particle scattering process represents at some time a disconnected object (in other words, when the set of spatial points where the probability of finding the particle is nonzero is disconnected), then we deal with a combined process; otherwise the process is elementary. Only in the latter case are Born's formula and the standard timing procedure applicable. In our approach, only an elementary time-dependent one-particle scattering state can be considered as the quantum counterpart of some classical one-particle trajectory.
As regards a combined time-dependent state, it can be associated with several one-particle trajectories. On the basis of this idea we develop a renewed wave-packet analysis in which we treat the one-dimensional scattering of a particle on a static potential barrier as a combined process consisting of two alternative ones, transmission and reflection. We hope that this approach will be useful for a deeper understanding of the nature of quantum one-particle scattering processes and, in particular, of the tunneling effect. The paper is organized as follows. In Section [a1] we pose a complete one-dimensional scattering problem for a particle. Shortcomings of the SWPA are analyzed in Section [a12]. In Section [a2] we present a renewed wave-packet analysis in which transmission and reflection are treated separately. In Section [a3] we define the average (exact and asymptotic) transmission and reflection times and consider the cases of rectangular barriers and δ-potentials. In the last section some aspects of our approach are discussed in detail.

Let us consider a particle tunneling through a time-independent potential barrier confined to a finite spatial interval. The equations for the arrival times corresponding to the extreme points of the barrier region read as before; considering (73) and (64), we obtain from them the transmission time, and similarly, for the reflection time, considering (74) and (65), one can easily obtain the corresponding expression. The resulting inputs will be named below the asymptotic transmission and reflection times for the barrier region; the word "asymptotic" points to the fact that these quantities are obtained by making use of the corresponding in and out asymptotes. Unlike the exact tunneling times, the asymptotic times can be negative. The corresponding lengths can be treated as the effective barrier widths for transmission and reflection, respectively.

Let us consider the case of a rectangular barrier (or well) of height $V_0$ and obtain explicit expressions for the effective width (now common to transmission and reflection, since the barrier is symmetric) for a particle with a given $k$, together with the expectation value $x_c$ of the starting point of this particle. In terms of the above asymptotic times for a particle with well-defined momentum, and using the expressions for the real tunneling parameters, one can show that for the below-barrier case ($k<\kappa_0$) the effective width is proportional to
$$\frac{\kappa_0^2\sinh(\kappa d)-k^2\,\kappa d}{4k^2\kappa^2+\kappa_0^4\sinh^2(\kappa d)},\qquad \kappa=\sqrt{\kappa_0^2-k^2},$$
while for the above-barrier case ($k>\kappa_0$) it is proportional to
$$\frac{k^2\,\kappa d-\beta\,\kappa_0^2\sin(\kappa d)}{4k^2\kappa^2+\kappa_0^4\sin^2(\kappa d)},\qquad \kappa=\sqrt{k^2-\kappa_0^2},$$
where $\beta$ takes the values $1$ or $-1$; in both cases $\kappa_0^2=2mV_0/\hbar^2$. The limiting behaviour of the effective width and the starting point as $d\to 0$ guarantees that for wave packets infinitely narrow in $k$-space the average starting points of both subensembles coincide with that of the whole ensemble of particles. Note that for wells the corresponding values and, as a consequence, the corresponding asymptotic tunneling times are negative in the appropriate limit. Note also that for sufficiently narrow barriers and wells the expressions simplify, with separate starting-point expressions for barriers and for wells, respectively.
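The overall prefactor of the effective width is not recoverable from the text above and is not reconstructed here. As a consistency check, however, the common denominator $4k^2\kappa^2+\kappa_0^4\sinh^2(\kappa d)$ equals $4k^2\kappa^2/T$, where $T$ is the standard transmission probability of a rectangular barrier. A minimal numerical sketch (SI units; the helper name is ours, and the sample values follow the figure parameters quoted later):

```python
import numpy as np

hbar = 1.0545718e-34    # J s
m_e  = 9.1093837e-31    # electron mass, kg
eV   = 1.602176634e-19  # J

def barrier_quantities(E, V0, d):
    """Below-barrier (E < V0) quantities for a rectangular barrier of width d.
    Returns the standard transmission probability T and the bracketed ratio
    displayed above (its lost overall prefactor is *not* reconstructed)."""
    k   = np.sqrt(2.0 * m_e * E) / hbar    # wavenumber outside the barrier
    k0  = np.sqrt(2.0 * m_e * V0) / hbar   # kappa_0
    kap = np.sqrt(k0**2 - k**2)            # under-barrier decay constant
    denom = 4.0 * k**2 * kap**2 + k0**4 * np.sinh(kap * d)**2
    T = 4.0 * k**2 * kap**2 / denom        # standard rectangular-barrier T
    ratio = (k0**2 * np.sinh(kap * d) - k**2 * kap * d) / denom
    return T, ratio

T, ratio = barrier_quantities(E=0.25 * eV, V0=0.3 * eV, d=5e-9)
```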
For wide barriers and wells the corresponding limiting expressions hold, separately for barriers and for wells. It is important that for the δ-potential the effective width vanishes; that is, like the dwell and Larmor times, the asymptotic transmission time is zero in this case. Thus, though the ensemble of identically prepared particles spends a nonzero time passing through this potential, each quantum particle of the ensemble spends no time in its barrier region: while the quantum ensemble interacts with the δ-potential, there is a nonzero probability of finding a particle near the barrier. Note that, unlike the first spatial derivative of the wave function for reflection, that of the wave function for transmission has equal one-sided limits at the barrier point. The average force calculated for a particle in the transmission state is zero for this potential; that is, this state describes the part of the incident wave packet which does not experience the action of the δ-potential. Transmitted particles start, on the average, from the corresponding point and then move freely.

The main idea underlying this paper is that tunneling of a particle through a one-dimensional static potential barrier is a combined stochastic process consisting of two alternative elementary ones, transmission and reflection. We showed that the wave function describing the tunneling process can be uniquely decomposed into two solutions of the Schrödinger equation for the given potential, which describe transmission and reflection separately. We found both solutions in the case of symmetric potential barriers and introduced, according to the standard timing procedure, the average (exact and asymptotic) transmission and reflection times. In our approach, most quantum one-particle scattering processes are combined: each represents a complex stochastic process consisting of several alternative elementary ones. The decomposition of a combined process into elementary ones can be performed uniquely; accordingly, the wave function describing the combined process can be uniquely presented as a sum of those describing the elementary ones. The main peculiarity of combined states is that averaging the particle's position and momentum over such states with the help of Born's formula, intended for calculating the expectation values of physical observables, does not in reality give expectation values of these quantities; both averages behave non-causally in time. Strictly speaking, in the case of combined states the particle's position and momentum (though their operators are Hermitian) lose their primary status of physical observables, and timing a particle in such states is meaningless too. Only for elementary quantum processes and states are Born's formula and the timing procedure valid. In other words, only for elementary states do the particle's position and momentum (and other physical quantities with linear Hermitian operators) retain their primary status of observables and, as a consequence, pose no problem for timing the motion of a particle. As shown, the peculiarity of the wave functions for transmission and reflection is that each of them contains only one incident and only one scattered wave packet.
At the same time, it is evident that if a particle were prepared in the combined state, the incident packet would be divided by the barrier into two parts. Of course, in the case considered the initial combined state of the particle is the full wave function, not one of its components. Nevertheless, we have to clarify the principal difference between a wave function describing a combined process and those describing the elementary ones involved in it. Suppose that, for the given potential, in addition to the problem at hand (800), where the amplitudes of the incoming and outgoing waves are given, we have two auxiliary scattering problems with amplitudes (801) and (802); the transfer matrix (50) is evidently the same for all three problems. In the first auxiliary problem the only outgoing wave coincides with the reflected wave arising in (800), and in the second the only outgoing wave coincides with the transmitted wave in (800). Evidently, the sum of these two functions results exactly in the function describing the state of a particle in the original tunneling problem. The main peculiarity of the superposition of these two probability fields is that, due to interference, their incoming waves in the transmission region fully annihilate each other (note that in the corresponding reverse motion they are outgoing waves); the corresponding flux of particles is reoriented into the incidence region. Thus, the initial probability fields (801) and (802), associated with the transmitted and reflected wave packets, are radically modified under superposition. In this case, the wave packet causally connected to the transmitted (reflected) one is precisely the transmission (reflection) wave function. Thus, we see that the sum of the wave functions (801) and (802) can be presented as that of the stationary-state RWF and TWF. As a result of the reorientation of the probability fields, the squared amplitude of the incoming wave associated with reflection increases, due to interference, from its initial value in (801) to its value in the RWF; in the case of transmission, the corresponding quantity increases from its initial value in (802) to its value in the TWF. In contrast to the probability fields (801) and (802), the RWF and TWF should be considered an inseparable pair: they cannot evolve separately. Of course, one might therefore doubt the reality of these fields: they cannot be observed separately, and, being involved in the combined state, they cannot be directly observed at the early stages of scattering because of interference. However, for any combined process, it is precisely the interference between the wave fields describing the elementary sub-processes that provides all the information needed to justify their existence. Indeed, let the measured probability density be the result of an experiment on the combined state.
Then, using the distributions $|\psi_{tr}|^2$ and $|\psi_{ref}|^2$ calculated beforehand, we can extract from the experimental data the difference describing the interference between the two wave fields. In our approach, it must have two important properties: (1) the integral of this difference over the whole region must be zero; (2) this difference must be nonzero only at the first stages of scattering, and only in front of the barrier. The first property means that the whole ensemble of particles in this scattering problem can indeed be divided into two subensembles described by the distributions $|\psi_{tr}|^2$ and $|\psi_{ref}|^2$. The second property means that one of them is indeed connected causally to the transmitted wave packet, while the other evolves causally into the reflected one; this means, in turn, that the above decomposition is unique. Note that this property is inherent only to combined processes. The elementary states $\psi_{tr}$ and $\psi_{ref}$ themselves represent decompositions into orthogonal stationary states; the latter, however, cannot be treated as elementary states, for the interference between them does not have the above two properties, and they cannot be separated in principle. So the wave functions $\psi_{tr}$ and $\psi_{ref}$ describe two real processes proceeding simultaneously. Taking into account that a wave function describes a beam (or ensemble) of identically prepared particles rather than a single quantum particle, the found decomposition can be interpreted as follows: $\psi_{tr}$ describes the part of the beam of mutually non-interacting particles, prepared in the initial state, which is transmitted through the barrier, and $\psi_{ref}$ the reflected part of this beam. At early times both parts move in the same spatial region, and at this stage of scattering the study of their motion reduces to the analysis of the interference between them. At all stages they evolve irrespective of each other, not "seeing" their counterparts; it is $\psi_{tr}$ ($\psi_{ref}$), not the full wave function, that is causally connected to the transmitted (reflected) wave packet considered separately. Just as freely moving wave packets in superposition do not destroy each other (after meeting in some spatial region they move on unaltered), so in the superposition of the modified wave fields $\psi_{tr}$ and $\psi_{ref}$ they do not influence each other either. It is not surprising that particles of both parts start, on the average, from spatial points differing from the average starting point calculated for the whole beam: firstly because of interference, the weights being the norms of $\psi_{tr}$ and $\psi_{ref}$, and, more importantly, because among these three average quantities only the two subensemble averages have the physical meaning of expectation values of the particle's position. The behaviour of the whole-beam average, taken over two alternative processes, is not causal and cannot be interpreted as the expectation value of the particle's position. The main point of our research is that any combined state represents a superposition of elementary states which are distinguishable. As a consequence, in our approach, the experimental study of the probability density for a particle taking part in a combined process means, in fact, the observation of the interference between the elementary states involved in the combined process. However, maxima and minima of the interference pattern behave non-causally.
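Given numerically propagated wave functions, the two properties stated above can be checked directly. A minimal sketch (single time slice on a spatial grid; the function name is ours, and `x_split` stands for the right edge of the barrier region):

```python
import numpy as np

def interference_diagnostics(psi_full, psi_tr, psi_ref, x, x_split):
    """Interference term I(x) = |psi_full|^2 - |psi_tr|^2 - |psi_ref|^2.
    Property (1): its integral over all x should vanish (the two norms sum
    to one). Property (2): at early times it should be negligible behind
    the barrier, i.e. for x > x_split."""
    I = np.abs(psi_full)**2 - np.abs(psi_tr)**2 - np.abs(psi_ref)**2
    total = np.trapz(I, x)                              # should be ~0
    behind = np.trapz(I[x > x_split], x[x > x_split])   # should be ~0 early on
    return total, behind
```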
Or, more correctly, only when we know all the information about each elementary state, which does behave causally, can we unambiguously interpret the behaviour of the interference pattern. All this takes place, in particular, in the case of tunneling. In this approach, the non-causal behaviour of a tunneling wave packet, which has been pointed out in the literature, is explained by the fact that tunneling is a combined process; an exhaustive explanation of this quantum effect can be achieved only within a separate description of transmission and reflection, for only these processes are elementary and, as a consequence, only they (and the corresponding probability densities) behave causally. A simple analysis shows that the definitions of the asymptotic tunneling times given in our approach differ essentially from their analogues known in the literature. At this point we note once more that correct timing of transmitted and reflected particles implies the availability of complete information about these subensembles at all stages of scattering. Using, as the alternative approaches do, the incident wave packet as the counterpart of the transmitted (or reflected) wave packet at the first stage of scattering is clearly an inconsistent step, for the former is not causally connected to the latter. It is precisely the wave functions for transmission and reflection found in our approach that provide all the needed information; we therefore think that our definitions of tunneling times have a more solid basis. Of course, a final decision in the long-lived controversy over the TTP should be made by a reliable experiment. In this connection, the main ideas of approaches whose formalism involves the peculiarities of the measurement process may be very useful in the further study of tunneling and other scattering processes treated as combined ones. Indeed, the fact that our definitions of the tunneling times do not coincide with those obtained in other approaches does not at all mean that the underlying ideas contradict each other; rather, they are mutually complementary, and it is in combination that all these ideas will be useful in studying quantum scattering processes. It would thus be very useful to define the Larmor time and the time-of-arrival distribution on the basis of the wave functions for transmission and reflection. Evidently, the influence of an external magnetic field (or absorbing potential) on transmitted and reflected particles should differ; hence the study of the interference between transmission and reflection at the first stages of scattering might permit us to check both our idea of separating these elementary processes and ways of introducing characteristic times that differ from the standard timing procedure. For the first case, the same magnetic field (or absorbing potential) might be localized in two equivalent spatial regions lying at the same distance from the midpoint of a symmetric potential barrier; then the symmetry of the original potential remains unaltered, and there is no principal problem in finding the wave functions for transmission and reflection. As regards further development of our approach, it can be applied, in principle, to any potential localized in a finite spatial region.
In one dimension it is applicable to potential steps and asymmetric potential barriers. No principal difficulties should arise in separating transmission and reflection in the case of quasi-one-dimensional structures, where the potential energy of a particle depends on only one coordinate. As regards the scattering problem with two slits in an opaque screen, it seems to involve four elementary processes: transmission and reflection for each of the two slits. Besides, scattering of a particle on a point-like obstacle with a spherically symmetric potential seems to involve two alternative elementary processes; in this case there is a plane separating the two processes, parallel to the corresponding vectors.

[Figure: $|\psi_{full}(x,t)|^2$, $|\psi_{tr}(x,t)|^2$ and $|\psi_{ref}(x,t)|^2$ versus $x$ at $t=0$, for $l_0=7.5$ nm, 0.25 eV, $V_0=0.3$ eV, $a=500$ nm, $b=505$ nm.]

[Figure: the same quantities for a well, $V_0=-0.3$ eV, with the other parameters unchanged.]
We present a renewed wave-packet analysis based on the following ideas. If a quantum one-particle scattering process and the corresponding state are described by an indivisible wave packet moving as a whole at all stages of scattering, then they are elementary; otherwise they are combined. Each combined process consists of several alternative elementary ones proceeding simultaneously, and the corresponding (normed) state can be uniquely presented as the sum of elementary ones whose (constant) norms sum to unity. Born's formula, intended for calculating the _expectation_ values of physical observables, as well as the standard timing procedure, are valid only for elementary states and processes; only an elementary time-dependent state can be considered as the quantum counterpart of some classical one-particle trajectory. In our approach, tunneling of a non-relativistic particle through a static one-dimensional potential barrier is a combined process consisting of two elementary ones, transmission and reflection. In the standard setting of the problem, we find a unique pair of solutions to the Schrödinger equation which describe transmission and reflection separately. On this basis we introduce (exact and asymptotic) characteristic times for transmission and reflection.
Localization is the process of obtaining physical locations from radio signals exchanged between radios at known and unknown positions. Radios at known positions are called reference radios; radios at unknown positions are called blind radios. Localization of blind radios reduces to fitting the measured radio signals to appropriate propagation models, which express the distance between two radios as a function of the measured signals. These measured signals are often modeled as deterministic signals with noise, using a large variety of empirical statistical models. Radio localization usually involves non-linear numerical optimization techniques that fit parameters in the propagation model given the joint probability distribution of the ensemble of measured signals. Localization precision is usually expressed as the root mean squared error (RMSE). In the field of wireless sensor networks, estimation bounds on localization precision are often calculated via the Cramér-Rao lower bound (CRLB) from empirical signal models with independent noise. In general, localization precision depends on whether the measured radio signals contain phase information. Phase information can only be retrieved from measurements that are instantaneous on a time scale short relative to the oscillation period of the signals; the smallest measurable position difference then depends on how well phase can be resolved, which is usually limited by the speed and noise of the system's electronics. Less complex and less expensive localization systems are based on measurements of time-averaged power flows, or received signal strengths (RSS). Time-averaging is usually performed over timespans that are large compared to the coherence time of the radios, so that phase information is lost; RSS localization is an example of such less complex systems. Hence, when determining bounds on localization precision, it makes sense to distinguish between the time scales of the signal measurements, i.e., between instantaneous and time-averaged signal and noise processing. Speckle theory describes the phenomenon of power-flow fluctuations due to random roughness of emitting surfaces. It shows that independent sources generate power-flow fluctuations in the far field that are always cross-correlated over a so-called spatial coherence region, whose linear dimension equals the correlation length. This correlation length has a lower bound in far-field radiation.
In all practical cases of interest, the lower bound on the correlation length of this far-field coherence region is of the order of half the mean wavelength of the radiation. This lower bound is called the diffraction limit, or Rayleigh criterion. It is not obvious that it holds in our wireless sensor network setup, with its characteristically small cylindrical antennas; we therefore derive this lower bound on the correlation length, together with the corresponding cross-correlation function, from the Maxwell equations, following the IEEE formalism. Our derivation leads to the well-known van Cittert-Zernike theorem, familiar from other areas of signal processing such as radar, sonar and optics. Our novel experimental setup validates that power-flow fluctuations are cross-correlated over a correlation length of half a wavelength, by increasing the density of power-flow measurements from one to 25 measurements per wavelength. In the field of wireless sensor networks, estimation bounds on localization precision are usually determined from empirical signal models with noise independent over space and time; the correlation length of the radiation is thus implicitly assumed to be infinitely small. This representation retains practical relevance as long as the inverse of the sampling rate is large compared to the correlation length, so that correlations between measurements are negligibly small. This paper investigates, experimentally and theoretically, radio localization in the far field when sufficient measurements are available to reveal the bounds on localization precision. We use Bienaymé's theorem to show that an ensemble of correlated power-flow measurements admits only a finite number of independent measurements over a finite measuring range, and that this number depends on the correlation length. When we account for the finite number of independent measurements in the Fisher information, the CRLB on localization precision deviates by 2-3% from our experimental results. We show that this approach gives practically identical results to the CRLB for signals with correlated Gaussian noise, given the lower bound on correlation length. Hence, the lower bound on correlation length determines the upper bound on localization precision. A few papers assume spatially cross-correlated Gaussian noise on power-flow measurements, considering cross-correlations caused by shadowing that extend over many wavelengths; it has been noted, however, that such cross-correlation functions do not satisfy the propagation equations, and they bear no relation to the wavelength of the carrier waves, since the diffraction limit is not embedded in the propagation model as it is in our derivation. Correlation, coherence and speckle properties of far-field radiation are governed by the spreads of Fourier pairs of wave variables that appear in the classical propagation equations of electromagnetic waves. The products of these spreads express uncertainty relations in quantum mechanics, which are formulated as bandwidth relations in classical mechanics and signal processing.
There is a direct mathematical relationship between Fisher information, the CRLB and uncertainty: as uncertainty is lower bounded by diffraction, Fisher information is upper bounded and the CRLB is lower bounded. Our experimental work provides quantitative evidence validating this theoretical relationship. This paper is organized as follows. Section [sec:experimental_setup] describes the experimental setup. Section [sec:theory] derives the propagation and noise models from first principles for each individual member of the ensemble of measurements. Section [sec:model] describes the signal model, maximum likelihood estimator (MLE) and CRLB analysis for the ensemble of measurements, using the results of Section [sec:theory]. Section [sec:experimental] presents the experimental results in terms of spatial correlations and upper bounds on localization precision. Finally, Section [sec:discussion] provides a discussion and Section [sec:conclusions] summarizes the conclusions.

Figs. [fig:measurement_setup] and [fig:photo] show the two-dimensional experimental setup. Fig. [fig:measurement_setup] shows a square that represents the localization surface. We distinguish two types of radios in our measurement setup: one reference radio and one blind radio. The reference radio is successively placed at known positions and is used for estimating the position of the blind radio. The crosses represent the manually set positions of the reference radio and are uniformly distributed along the circumference of the square (one position every half centimeter). Rather than placing a two-dimensional array of reference radios inside the square, it may suffice to place a significantly smaller number on the circumference of the rectangle and obtain similar localization performance. That measuring field amplitudes on a circumference can replace measuring across a surface is expressed theoretically by Green's theorem, as shown in Section [sec:theory]; whether sampling power flows on circumferences instead of across two-dimensional surfaces suffices has yet to be verified by experiment. This paper aims to show the practical feasibility of this novel technique.

[Figure: reference-radio positions along the circumference at which RSS to the blind radio is measured; the blind radio is located at the origin. For illustrative purposes, only part of the measurement positions is shown.]

[Figure: photo of the setup; the inset on the right shows a close-up of the radio.]

The red circle represents the position of the blind radio, which we place at an unsymmetrical position. We use only one blind radio and one reference radio, to minimize the influence of hardware differences between radios. Both radios are mains powered, to minimize voltage fluctuations. The blind radio has a power amplifier and broadcasts messages at the maximum power allowed by ETSI, to maximize SNR. Both radios have an external dipole antenna; the antennas have the same vertical orientation and are in line of sight (LOS) for best reception. This implies that the polarization is vertical, perpendicular to the two-dimensional localization plane. The length of each antenna is half a wavelength, and its diameter is roughly one twentieth of a wavelength. We keep the relative orientation of the printed circuit boards on which the antennas are mounted constant by realigning them every cm.
To minimize interference from ground reflections, we place the radios directly on the ground, so that their antennas are within one wavelength of it. The floor consists of reinforced concrete covered by industrial vinyl. We minimize interference from ceiling reflections by placing an aluminum plate one centimeter above the blind radio antenna. The ground and the aluminum plate minimize the influence of signals in the z-direction, so that we only have to consider signals in two dimensions. All reference radio positions are in the far field of the blind radio. A photo of our setup is shown in Fig. [fig:photo]. At each reference radio position, the reference and blind radios perform a measurement round. A measurement round consists of repetitive multiplexed RSS measurements, which allow us to investigate and quantify the measurement noise and to apply CRLB analysis to it. Each measurement round consists of measurement sets, each set consisting of RSS measurements on an unmodulated carrier transmitted by the blind radio. Between measurement sets, the reference and blind radios automatically turn off and on again (recalibrating the radios). Although we did not expect to find any difference between these two forms of multiplexing RSS signals, our experiments should verify this, and they did. Measurement rounds and sets are indexed as $p_{m,n,k}$: index $m$ identifies the position of the reference radio and thus the measurement round, index $n$ identifies the measurement set, and index $k$ identifies the individual RSS measurement within each set. Each $p_{m,n,k}$ represents the measured power in dBm, time-averaged according to the radio chip specification. The coherence time implies a coherence length of the order of kilometers, and the averaging time is a factor of five larger than the specified coherence time of the carrier wave of a typical 802.15.4 radio; practically, this means that these power measurements lose all phase information. In theory, it means that we measure the time-averaged power flow, or Poynting vector, as the cross-sections of the antennas are the same. The blind radio transmits in an IEEE 802.15.4 channel chosen to minimize interference with the busiest Wi-Fi channels. The reference radio performs power-flow measurements in the same channel and sends the raw data over USB to a laptop, which logs the data; we use MATLAB to analyze the logged data. Between measurement rounds, we change the position of the reference radio by half a centimeter and push a button to start a new round; note that this step is well within the diffraction limit of half the mean wavelength. In summary, each measurement set takes one second, each measurement round takes roughly one minute, and the experiment, consisting of all the measurement rounds, takes many hours in total.
In practice, we spent almost three weeks of throughput time on this complete data collection.

This section formulates the propagation and noise model of radio localization systems operating in the far field. The model holds for each individual member of the ensemble of power-flow measurements over space described in Section [sec:experimental_setup]. Our propagation model is based upon electromagnetic inverse source theory, where we follow the IEEE formalism; this formalism is needed to derive the lower bound on the correlation length of wireless networks using power-flow measurements on a time scale long compared to the coherence time of the hardware. The measurement configuration of practical interest is composed of the wave-carrying medium, i.e. the atmosphere, a number of source domains and one receiver domain. Our measuring chamber contains a variety of contrasting obstacles: a ceiling, walls, a ground floor, pillars, and a small aluminum table hiding the ceiling from the transmitter. Most of these secondary sources are so far away from the receiver that they are neglected in our theoretical representation. We account for the following domains:

* The wave-carrying medium, with electric permittivity, magnetic permeability and electromagnetic wave speed; it is locally reacting, spatially invariant, time invariant and lossless. To locate positions in space, the observer employs an ordered sequence of Cartesian coordinates with respect to a given origin, while distances are measured through the Euclidean norm; the fourth dimension is time.
* A transmitter with bounded support, comprising a spatial support carrying electric currents with volume density and a spatial support carrying magnetic currents with volume density. We assume the associated volume noise to be negligible in our setup, as we identify it with the thermal noise in all contrasting media in the measuring chamber.
* Surface scatterers (surface noise) with bounded spatial support, comprising surface boundaries carrying electric and magnetic currents with the corresponding surface densities.

The fields are accessible to measurement through the received time-averaged power flow, obtained by averaging the Poynting vector over the observation time $T$,
$$\bar S = \frac{1}{T}\int_{t}^{t+T}\mathbf{E}\times\mathbf{H}\,dt ,$$
where the receiver measures the power flow of an unmodulated carrier with an observation time long compared to the coherence time. The mapping from sources to field is unique if the physical condition of causality is invoked. For the received signal at the point of observation, the relation between the electromagnetic vector and scalar potentials and the field applies; the relevant field component is directed perpendicular to the z-axis and equals the far-field electric field strength multiplied by the spreading factor. In the far-field approximation, the time-averaged power flow becomes inversely proportional to the squared distance, so that without any noise the model reduces to
$$\bar S(r) = \bar S_0\left(\frac{r_0}{r}\right)^{2},$$
where $\bar S$ denotes the absolute value of the time-averaged Poynting vector and $\bar S_0$ the reference power flow at a reference far-field distance $r_0$ of, say, 1 m. This equation can be read as the non-logarithmic gain model.
In the inverse-source problem formulation pertaining to our measurement configuration, information is extracted from an ensemble of measured, time-averaged power-flow signals. This information reveals the nature and location of the scattering volume and surface sources. The mapping is known to be non-unique; a detailed analysis in this respect can be found in the literature. In radio localization, an algorithm is used that is expected to lead to results with a reasonable degree of confidence. Such an algorithm is based on iterative minimization of the norm of the mismatch between the response of an assumed propagation model for each individual measurement and an ensemble of observed power-flow signals. The influence of noise can be accounted for by perturbing the input power-flow signal with an additive noise signal, as is usually done in the log-normal shadowing model (LNSM). For independent, uncorrelated noise, the model reduces to the non-logarithmic gain model with a small multiplicative perturbation. Under the condition that the perturbation is small, the time average can be replaced by an average over the Fourier-conjugated space, because the noise is assumed to be a wide-sense stationary (WSS) random process. In one dimension, the expectation value of the power-flow fluctuations defines a measurement variance at each individual reference-radio position; as we shall see in the next subsection, this measurement variance has a fundamental lower bound within the time scale of our measurements. Although the correlation function derived here only holds for correlations in the close neighborhood of each point of observation, it can be extended to larger distances when the path losses are accounted for, as is usually done in the LNSM. The correlation function adopted in the literature is an exponentially decaying function; it has been noted, however, that the exponentially decaying correlation function does not satisfy the propagation equations of electromagnetic theory. Our correlation functions hold for each individual measurement position of the reference radio. The extension to an ensemble of measurement positions, with a derivation of the measurement variance of this ensemble, is given in Section [sec:model], which also compares the computed measurement variances for the two correlation functions of power flows as a function of the sampling rate. The model contains two products of the Fourier pairs [angular frequency, time] and [spatial frequency, position]. These two sets of conjugate wave variables naturally lead to an uncertainty relation, in physics called the Heisenberg uncertainty principle; uncertainty is a basic property of Fourier transform pairs. The well-known uncertainty relations for classical wave variables concern the Fourier pairs [energy, time] and [momentum, position], in which energy and momentum are proportional to angular frequency and wave number, the constant of proportionality being the reduced Planck constant. Although these uncertainty relations originally stem from quantum mechanics, they fully hold in classical mechanics as bandwidth relations. In information theory, they are usually given in the time domain and are called bandwidth measurement relations.
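The effective number of independent measurements implied by a finite correlation length follows directly from Bienaymé's theorem. A minimal sketch (the sinc-squared covariance below is an assumed illustrative stand-in for the diffraction-limited correlation function, chosen so that its first zero lies at half a wavelength; all names and numbers are ours):

```python
import numpy as np

def effective_sample_size(C):
    """Bienayme: var(mean of N correlated samples) = sum(C) / N^2. Equating
    this to sigma^2 / N_eff defines an effective number of independent
    measurements; C is a covariance matrix with constant diagonal sigma^2."""
    n = C.shape[0]
    sigma2 = float(np.mean(np.diag(C)))
    return sigma2 * n**2 / float(np.sum(C))

# samples every 5 mm along a line, carrier wavelength 12.5 cm (2.4 GHz band)
lam, step, n = 0.125, 0.005, 200
pos = step * np.arange(n)
dist = np.abs(pos[:, None] - pos[None, :])
C = np.sinc(2.0 * dist / lam) ** 2   # np.sinc(u) = sin(pi u) / (pi u)
print(effective_sample_size(C))      # well below n once step << lambda / 2
```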
in fourier optics , they are usually given in the space domain as described by .heuristically , one would expect the lower bounds on uncertainty in time , , and position , , to correspond to half an oscillation cycle , as half a cycle is the lower bound on the period of energy exchange between free - space radiation and a receiving or transmitting antenna giving a stable time - averaged poynting vector .when one defines localization performance or precision as the inverse of the rmse , one would expect the localization precision of the ensemble not to become infinitely large by continuously adding measurements .when the density of measurements crosses the spatial domain of spatial coherence as computed from , the power - flow measurements become mutually dependent .the link between uncertainties and fisher information was first derived by . as we show in section [ sec : experimental ] , our experiments reveal convincing evidence for this link .the first five subsections describe the signal model , mle and crlb usually applied in the field of rss - based radio localization for independent noise and cross - correlated noise .section [ sec : bienayme ] reconciles diffraction and the nyquist sampling theorem with the crlb using cross - correlated noise and bienaym s theorem .in addition , it introduces the notion of the maximum number of independent rss measurements and relates this to fisher information . in section [ sec : sim ] , simulations verify that the estimator is unbiased and efficient in the setup as described in section [ sec : experimental_setup ] .we extend the notations introduced in section [ sec : experimental_setup ] and [ sec : theory ] that describe our measurement configuration with the bold - faced signal and estimator vectors usually employed in signal processing .the experimental setup consists of a reference radio that measures power at positions of an unmodulated carrier transmitted by the blind radio . at each position, the reference radio performs repetitive multiplexed power measurements .these measurements are used to estimate the blind radio position , which is located at the origin .we adopt the empirical lnsm for modeling the power over distance decay of our rss measurements . as the cross - sections of the blind transmitter and reference receiverare given and equal , the power as well as the power - flow measurements are assumed to satisfy the empirical lnsm where in and , identifies the power - flow measurement . denotes the ensemble mean of power - flow measurements at far - field distance in dbm . represents the power flow at reference distance in dbm . represents the path - loss exponent . represents the noise of the model in db due to fading effects . follows a zero - mean gaussian distribution with variance and is invariant with distance .equations and are equivalent with the of , where the path - loss exponent equals two .one usually assumes spatially independent gaussian noise . in ,the cross - correlation between power - flow measurements is independent of wavelength and is modeled with an exponentially decaying function over space by \text{,}\ ] ] where denotes the correlation length .section [ sec : noise ] shows that the lower bound on correlation length depends on the wavelength and equals half the mean wavelength , . 
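a hedged sketch of the lnsm with correlated shadowing follows: the mean power-over-distance model plus one noise realization drawn from an exponential covariance with correlation length half the mean wavelength, via a cholesky factor. the 1-d track geometry, seed and parameter values are hypothetical, not the paper's calibration:

```python
import numpy as np

rng = np.random.default_rng(0)

def lnsm_mean(d, p_r0=-40.0, eta=2.0, d0=1.0):
    """ensemble-mean received power (dbm) of the log-normal
    shadowing model at far-field distance d."""
    return p_r0 - 10.0 * eta * np.log10(d / d0)

def sample_rss(x, x_blind=0.0, sigma=3.0, d_c=0.0625):
    """one realization of rss measurements at 1-d track positions x,
    with exponentially cross-correlated gaussian shadowing of
    correlation length d_c (here lam / 2 = 6.25 cm)."""
    x = np.asarray(x, dtype=float)
    c = sigma ** 2 * np.exp(-np.abs(x[:, None] - x[None, :]) / d_c)
    l = np.linalg.cholesky(c + 1e-10 * np.eye(len(x)))
    d = np.abs(x - x_blind)
    return lnsm_mean(d) + l @ rng.standard_normal(len(x))

x = np.linspace(0.5, 3.0, 64)        # reference positions (m)
print(np.round(sample_rss(x)[:5], 2))
```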
in section [ sec : noise ] , shows that the cross - correlation function of power - flow measurements takes on the form of the diffraction pattern in section [ sec : crlb ] , we show the implications of using the cross - correlation function of and the cross - correlation function of that satisfies the propagation model derived from first principles .the non - linear least squares problem assuming the lnsm with correlated noise is denoted by where denotes the vector of power - flow measurements \text{,}\ ] ] where denotes the vector of power - flow measurements calculated by the lnsm as expressed by \text{,}\ ] ] where denotes the vector of distances between the reference radio and the blind radio \text{.}\ ] ] here , denotes power flow at far - field distance as expressed by . in, denotes the set of unknown parameters , where ] of the lnsm are calibrated for a given localization setup .the blind radio position is assumed to be known when the lnsm is calibrated .the lnsm assumes that is equal for each rss measurement , so that and can be estimated independent of = \arg \min_{\boldsymbol{\theta } = [ p_{r_{0 } } , \eta ] } v(\boldsymbol{\theta } ) \text{.}\ ] ] in , is defined as the standard deviation between the measurements and the fitted lnsm using and .equation gives for all our power - flow measurements the best fit when the lnsm parameters are calibrated at we use these calibrated lnsm parameter values to calculate the bias and efficiency of our estimator from simulations .in addition , we use these calibrated lnsm parameter values to calculate the cross - correlations in the far - field and crlb .we use the mle as proposed by , with the physical condition of causality invoked on and with boundary conditions on , to estimate the blind radio position = \arg \min_{\boldsymbol{\theta } = [ x , y , p_{r_{0 } } , \eta ] } v(\boldsymbol{\theta } ) \nonumber \\ & \text{subject to } \nonumber \\ & \eta>0 \nonumber \\ & -1 \leq x \leq 2 \nonumber \\ & -2 \leq y \leq 1 \text{,}\end{aligned}\ ] ] our estimator processes the noise as independent , so that .section [ sec : sim ] shows that this estimator is unbiased and efficient in our simulation environments with uncorrelated and correlated noise .crlb analysis provides a lower bound on the spreads of unbiased estimators of unknown deterministic parameters .this lower bound implies that the covariance of any unbiased estimator , , is bounded from below by the inverse of the fisher information matrix ( fim ) as given by in , represents the set of unknown parameters , and represents the estimator of these parameters . in case of multivariate gaussian distributions ,the elements of the fim are given by {a , b } = \left [ \frac{\partial \mathbf{\bar{p}}}{\partial \theta_{a } } \right]^{t } c^{-1 } \left [ \frac{\partial \mathbf{\bar{p}}}{\partial \theta_{b } } \right ] + \\\frac{1}{2 } \left [ c^{-1 } \frac{\partial c}{\partial \theta_{a } } c^{-1 } \frac{\partial c}{\partial \theta_{b}}\right]\text{.}\end{gathered}\ ] ] in our case , the covariance matrix is not a function of unknown parameter set so that the second term equals zero .the elements of the fim associated with the estimator defined by , ] .the crlb on rmse for unbiased estimators is computed from here represents the trace of the matrix .we use the calibrated lnsm parameters as given by .,title="fig:",scaledwidth=50.0% ] fig . 
[fig : crlb ] shows the lower bound on the rmse calculated by and as a function of the number of rss measurements .when we assume independent rss measurements , , the rmse decreases to zero with an ever increasing number of independent rss measurements .this is in accordance with the theoretical bound analysis presented by .hence , there is no bound on localization precision with increasing sampling rate .when we account for the spatial cross - correlations between rss measurements , , the bound on radio localization precision reveals itself when sufficient rss measurements are available .we evaluate the crlb with the cross - correlation function expressed by the diffraction pattern in and the exponentially decaying cross - correlation function as expressed by .they basically converge to the same bound on localization precision . apparently , the bound on localization precision depends on the bound on the correlation length rather than on the form of the cross - correlation functions .the determinant of the covariance matrix starts to decrease when correlations start to increase , until the fisher information and thus localization precision stabilizes on a certain value . when the cross - correlations are equal to the diffraction pattern as expressed by , the covariance matrix becomes singular when the sampling rate goes above a certain threshold , which equals the inverse of the diffraction limit or the nyquist sampling rate .hence , the multivariate gaussian distribution becomes degenerate and the crlb can not be computed without regularization .this section connects diffraction to the sampling theorem , covariance and fisher information using bienaym s theorem .bienaym s theorem offers a statistical technique to estimate measurement variance of correlated signals .it states that when an estimator is a linear combination of measurements the covariance of this linear combination of measurements equals and is equivalent to the measurement variance . in, is a weighting factor , represents the standard deviation of measurement , and represents the correlation coefficient between measurement and . when all measurements have equal variance and equal weights , reduces to in , is the spatially averaged correlation of measurement pairs and is given by and represents the number of measurements that effectively decreases estimator covariance and is given by shows in the time domain that the cross - correlation of bandwidth limited fluctuations on signals with a periodic wave character is proportional to the point - spread - function ( psf ) of a measurement point . in the space domain ,the cross - correlation of far - field power - flow measurements takes on the form expressed by the diffraction pattern in .in addition , shows that the radius , , of the first zero equals the inverse of the minimum ( nyquist ) sampling rate .the sampling theorem determines the sampling rate that captures all information of a signal with finite bandwidth . for signals with a periodic wave character measured in the far field ,noise is cross - correlated over the far - field spatial coherence region of the signal .this far - field coherence region assumes the far - field diffraction pattern and is equivalent to the psf of power - flow measurements . the lower bound on the far - field region of spatially correlated signals and noise is equal to the diffraction limit and to the inverse of the upper bound on spatial frequencies divided by , i.e. . 
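the crlb machinery of the previous subsection and bienaymé's effective-number-of-measurements argument can be combined in a short numerical sketch. everything below (parameter values, the circle geometry with the blind radio at the centre, the ridge regularization, and the function names) is our own illustration under the stated assumptions, not the paper's code:

```python
import numpy as np

lam, sigma, eta = 0.125, 3.0, 2.0        # hypothetical values

def corr_matrix(pos, kind="exp"):
    r = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=2)
    if kind == "sinc2":                  # diffraction-pattern form
        return np.sinc(2.0 * r / lam) ** 2
    return np.exp(-2.0 * r / lam)        # exponential, d_c = lam / 2

def crlb_rmse(pos, kind="exp"):
    """crlb on position rmse; theta = (x, y, p_r0, eta), blind radio
    at the origin. the second fim term vanishes because the
    covariance does not depend on theta."""
    d = np.linalg.norm(pos, axis=1)
    j = np.zeros((len(d), 4))            # jacobian of the lnsm mean
    j[:, :2] = (10.0 * eta / np.log(10.0)) * pos / d[:, None] ** 2
    j[:, 2] = 1.0
    j[:, 3] = -10.0 * np.log10(d)
    c = sigma ** 2 * corr_matrix(pos, kind) + 1e-9 * np.eye(len(d))
    fim = j.T @ np.linalg.solve(c, j)
    return np.sqrt(np.trace(np.linalg.inv(fim)[:2, :2]))

def n_eff(pos, kind="exp"):
    """bienayme: n / (1 + (n - 1) * mean off-diagonal correlation)."""
    rho, n = corr_matrix(pos, kind), len(pos)
    rho_bar = (rho.sum() - n) / (n * (n - 1))
    return n / (1.0 + (n - 1) * rho_bar)

# reference radios on a circle of radius 1.5 m around the blind radio
for n in (16, 64, 256, 1024):
    a = 2.0 * np.pi * np.arange(n) / n
    pos = 1.5 * np.column_stack((np.cos(a), np.sin(a)))
    print(n, round(crlb_rmse(pos), 4), round(n_eff(pos), 1))
```

if the model above behaves as the text describes, the printed rmse should stop improving once n exceeds roughly the circumference of the circle divided by half the mean wavelength, mirroring the saturation of fig. [ fig : crlb ].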
the finite spatial-frequency bandwidth of the far field relates diffraction to the sampling theorem and the whittaker-shannon interpolation formula, or cardinal series. hence, this coherence region determines the maximum number of independent measurements over a finite measuring range. this maximum number of independent measurements is also known as the degrees of freedom. we decompose the estimator covariance into two factors, where the first represents the ratio of the total number of measurements to the effective number of measurements, and the second represents the covariance of the estimator assuming that all measurements are independent. the global spatial average of the correlation coefficients can be computed from the correlation function for any localization setup by assuming an ever increasing sampling rate. for our localization setup of fig. [ fig : measurement_setup ], we compute it by processing all rss measurements over space. to account for the effects of spatial correlations on the covariance of non-linear unbiased estimators, we heuristically rewrite this decomposition into the following postulate, in which the second factor is equal to the fisher information assuming independent measurements. our postulate rigorously holds when the variance, and thus the fisher information, of each wave variable at each measurement point is equal, assuming that the crlb holds as defined above. our postulate is in agreement with the concept that the degrees of freedom of signals with a periodic wave character are independent of the estimator. the postulate is also in line with the observation that crlb efficiency is approximately maintained over nonlinear transformations if the data record is large enough. hence, it is a good performance indicator when the correlation length is small relative to the linear dimensions of the localization space. fig. [ fig : crlb ] shows the lower bound on the rmse calculated with the two cross-correlation functions introduced above; the solid curves represent the results of the postulate. our approach differs by less than 1 mm from the crlb for signals with correlated gaussian noise over the entire range shown by fig. [ fig : crlb ]. the influence of correlated measurements on the crlb is apparent, as the rmse converges asymptotically to roughly half the mean wavelength for both cross-correlation functions. hence, the correlation length, and thus the finite number of independent rss measurements, determines the bound on localization precision. note that the effective number of measurements converges to practically identical values for both cross-correlation functions. the sinc-squared cross-correlation function is bandwidth limited, so that the covariance matrix becomes degenerate at roughly the nyquist sampling rate. the exponentially decaying cross-correlation function is not bandwidth limited, so that the covariance matrix stays full rank.
in this case, the fisher information per individual rss measurement decreases with increasing sampling density until the localization precision stabilizes. when the spatial correlations are negligible, the postulate provides practically the same results as the crlb for independent measurements (mm-level difference in all cases). hence, one can assume independent measurements as long as the inverse of the sampling rate is large compared to the correlation length. the bound on estimator covariance becomes of interest when the density of measurements surpasses the bound imposed by the spatial coherence region of the far-field radiation of the transmitters. when the number of measurements goes to infinity, the estimator covariance reduces to a finite limit which does not approach zero. hence, bienaymé's theorem reveals the link of measurement variance to diffraction, the sampling theorem, and fisher information. we performed simulation runs using the measurement setup described in fig. [ fig : measurement_setup ] to quantify (1) the bias and (2) the efficiency of our estimator. we performed two sets of simulations, one with independent noise and one with correlated noise. we did not perform simulations with the sinc-squared cross-correlation function, because the covariance matrix is singular above the nyquist sampling rate, and thus for our measurement setup. note that these simulations do not provide an experimental validation of the model used. an estimator is unbiased if its expectation value equals the true parameter value. we are interested in the bias of the estimated blind radio position, so we define the bias as the deviation of the mean estimated position from the true position. simulations show that the bias is of the order of mm for independent and correlated noise. estimator efficiency is defined as the difference between the covariance of the estimator and the crlb; we quantify this difference by comparing the simulated estimator covariance with the crlb. simulations show that the estimator covariance and the crlb differ at most by mm for both independent and correlated noise. hence, our estimator has a negligible bias and a high efficiency in our simulation environments with independent and correlated noise. this section presents the experimental results of the two-dimensional measurement setup described in section [ sec : experimental_setup ]. the first subsection presents the far-field spatial cross-correlation function of the rss signals and estimates the spread of these spatial correlations. we then determine the spatial fourier transform of these cross-correlations to look for an upper bound at the spatial frequency corresponding to the diffraction limit, in line with the spatial resolution being bounded by the diffraction limit. finally, we determine the global rmse of the position estimates and show its asymptotic behavior towards the diffraction limit with increasing density of sampling points. we compare this experimentally determined rmse with the rmse determined by the crlb for independent and correlated noise. we show that the crlb for independent noise underestimates the rmse computed from our measurements. fig. [ fig : coherence ] shows the spatial cross-covariance function between power flows as a function of distance in wavelengths. the spatial cross-covariance function is calculated using the deviations from the lnsm with the calibrated propagation parameters. hence, the spatial cross-covariance, and thus the cross-correlations, are made distance independent by cross-correlating the deviations from the calibrated lnsm.
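the residual-based estimate just described can be sketched as follows; the function name, bin width and the synthetic smoke test are our own choices:

```python
import numpy as np

def spatial_cross_covariance(pos, resid, bin_width=0.01):
    """average resid_i * resid_j over bins of |pos_i - pos_j|;
    pos is an (n, 2) array of reference-radio positions (m)."""
    i, j = np.triu_indices(len(resid), k=1)
    r = np.linalg.norm(pos[i] - pos[j], axis=1)
    bins = (r / bin_width).astype(int)
    n_pairs = np.bincount(bins)
    cov = (np.bincount(bins, weights=resid[i] * resid[j])
           / np.maximum(n_pairs, 1))
    lags = (np.arange(len(cov)) + 0.5) * bin_width
    return lags, cov

# resid would be the measured rss minus the calibrated-lnsm
# prediction; here a smoke test with uncorrelated residuals:
rng = np.random.default_rng(1)
pos = rng.uniform(0.0, 1.0, (300, 2))
lags, cov = spatial_cross_covariance(pos, rng.standard_normal(300))
print(np.round(cov[:4], 3))          # should scatter around zero
```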
fig. [ fig : coherence ] shows that the spatial cross-correlations go to a minimum over a distance of roughly half the mean wavelength, which corresponds to the diffraction limit. the small difference between the black and red curves indicates that noise resulting from repetitive multiplexed measurements over time is negligible compared to the cross-correlated noise in the far field.

[ figure ( fig : coherence ): measured spatial cross-correlations; the black curve uses all 60 million rss measurements, the red curve uses one rss measurement per reference radio position. ]

[ figure ( fig : fft ): measured power spectrum of the cross-correlated noise. theoretically, it is represented by tri(k), as it is the fourier transform of the sinc-squared correlation function. the vertical line represents the spatial frequency that corresponds to the diffraction limit, which forms an upper bound in the spectrum. ]

in the case of fisher information, and thus crlb analysis, information is additive when measurements are independent. one usually assumes signal models with independent noise. hence, localization precision increases with an ever increasing number of independent measurements over a finite measuring range. however, when measurements are correlated, the information and localization precision gain decrease with increasing correlations. fig. [ fig : coherence ] shows that when space-measurement intervals become as small as the diffraction limit, measurements become spatially correlated and mutually dependent. our measurements show that rss signal measurements are spatially correlated over a single-sided region of roughly half the mean wavelength. this corresponds to the diffraction pattern derived in section [ sec : noise ]. therefore, increasing the reference radio density beyond one per half a wavelength has a negligible influence on the fisher information gain, and thus on localization precision. our experimental results in the next subsection confirm this. in the case of independent measurements, fig. [ fig : coherence ] would show an infinitely sharp pulse (dirac delta function). fig. [ fig : fft ] shows the measured power spectrum of the cross-correlated noise in the rss signals, i.e. the spatial fourier transform of the cross-covariance of fig. [ fig : coherence ]. theoretically, it is represented by the spatial fourier transform of the sinc-squared correlation function. this figure shows that the energy is mainly located in the lower spatial frequencies, and that it diminishes over a single-sided interval bounded by the inverse of the diffraction limit. this upper bound corresponds to the diffraction limit. the nyquist sampling rate provides an estimate of the minimum sampling rate to fully reconstruct the power-flow signal over space without loss of information, which equals the single-sided bandwidth of our power spectrum. the vertical black line represents the spatial frequency associated with the nyquist sampling rate. our experimental results in the next subsection confirm this by showing that the localization precision does not increase by sampling beyond this sampling rate.
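a sketch of the corresponding spectral check: take the (evenly sampled) covariance estimate, extend it symmetrically, and look for the single-sided band edge at the inverse of the diffraction limit (2/λ = 16 cycles/m for λ = 12.5 cm). the symmetric two-sided extension is an approximation of ours:

```python
import numpy as np

def power_spectrum(lags, cov):
    """spatial power spectrum of an evenly sampled covariance
    estimate; cycles per metre on the frequency axis."""
    dx = lags[1] - lags[0]
    # approximate two-sided symmetric extension of the covariance
    full = np.concatenate((cov[::-1], cov[1:]))
    k = np.fft.rfftfreq(len(full), d=dx)
    spec = np.abs(np.fft.rfft(full)) * dx
    return k, spec

lam = 0.125
print("expected single-sided band edge: %.1f cycles/m" % (2.0 / lam))
```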
in the case of independent measurements, fig. [ fig : fft ] would show a uniform distribution. fig. [ fig : localization ] shows the experimental results of our measurement setup described in section [ sec : experimental_setup ]. localization precision is given as the inverse of the rmse.

[ figure ( fig : localization ): localization rmse as a function of the number of rss measurements per wavelength; the green curve uses one independent rss measurement per reference radio position. ]

the red curve in fig. [ fig : localization ] shows the rmse as a function of the number of measurements per wavelength. the rmse decreases with an increasing number of rss measurements over space until sufficient measurements are available. at a critical measurement density, the bound on localization precision becomes of interest. the rmse (red and black curves) converges asymptotically to roughly half the mean wavelength, as one would expect from the diffraction limit. the rmse represented by the red curve is based on processing all rss signal measurements per reference radio position. the rmse represented by the black curve is based on processing one rss signal measurement per reference radio position. the negligible difference between the red and black curves shows that the number of repeated rss signal measurements per reference radio has a negligible influence on the rmse. the crlb for independent measurements starts deviating from the rmse (red and black curves) when the sampling rate is increased beyond one rss signal per half the mean wavelength (see the black dotted curve), as one would expect from the diffraction limit and the nyquist sampling rate over space. spatial correlations between rss signals increase rapidly with increasing rss measurement density beyond one sample per half the mean wavelength (fig. [ fig : coherence ]). correlated rss signals cannot be considered as independent. it has been shown that fisher information is upper bounded by uncertainty, in line with the uncertainty relations above. as the far field is upper bounded in spatial frequencies, the spread or uncertainty in position is lower bounded; hence its inverse is upper bounded, as is the fisher information. coherence and speckle theory have shown that uncertainty is lower bounded by the diffraction limit. hence, the crlb at a sampling density of one sample per half the mean wavelength (vertical dotted black curve) should equal the measured bound on the rmse. we define the measured bound on the rmse as the rmse obtained by processing all 60 million measurements. our measurements show that the difference between this crlb and the measured bound on the rmse is 2-3% (1 mm). hence, our experiments validate these theoretical concepts. our experiments reveal evidence that the crlb cannot be further decreased by increasing the number of measurements. on the other hand, at high rss measurement densities per wavelength, the rmse is a factor of four higher than the one calculated by the crlb for independent noise. this difference cannot be explained by the difference between the covariance of the estimator and the crlb (see section [ sec : model ]). the difference can, however, be explained by calculating the degrees of freedom of our localization setup using bienaymé's theorem and the lower bound on the correlation length. when one substitutes this lower bound into the effective number of measurements, one obtains the asymptotic behavior of the crlb with correlated noise, as is shown in fig. [ fig : crlb ]. the measured asymptote in fig.
[ fig : localization ] deviates 2-3% from bienaymé's theorem and from the crlb for correlated noise. all 60 million power measurements and matlab files are arranged in a database at linköping university. our novel localization setup, using reference radios on the circumference of a two-dimensional localization area instead of setting them up in a two-dimensional array, worked well. this implies that one does not have to know the phases and amplitudes on closed surfaces around extended sources to reconstruct the positions of these extended sources. our experiments show that it suffices to measure time-averaged power flows when a localization precision of about half a wavelength (in our case about 6 cm) is required. time averaging on a time scale that is long compared to the temporal coherence time is usually employed in rss localization, so that phase information is lost. we expect such a setup to work well when all radio positions are in los; for a practical implementation in nlos environments, we refer to the literature. the propagation equations of section [ sec : theory ] are then applied to design an efficient setup and algorithm to locate the blind radios. in section [ sec : theory ], we distinguished between primary and secondary cross-correlated noise in the far field. in our experimental setup, where signal levels are large compared to noise levels, it is reasonable to neglect thermal and quantum noise. scattering from the surface roughness of the primary source cannot provide the spatial frequency bandwidth of fig. [ fig : fft ], because of the small size of the transmitting antenna. we expect that the cross-correlated noise originates from the surface roughness of the large area between the transmitter and receiver. hence, our time-averaged rss measurements are correlated over space in line with the derived correlation model, as was verified by our experiments. noise can originate from a variety of sources. in our setup, the noise level of the antenna plus electronics on a typical radio is well below the signal level, as specified in the radio's data sheet. the reproducible ripples on our rss signals originated in part from small interference effects from undue reflections from mostly hidden metallic obstacles in the measuring chamber; such obstacles, like reinforced concrete pillars, could not be removed. in a real indoor office environment, these interference effects usually dominate rss signals. in these environments, the cross-correlated noise in rss signals is still determined by the diffraction limit, which, in turn, determines the bound on localization precision. multi-path effects and fading do not change the stochastic properties of the secondary extended sources where the noise is generated.
as explained in the literature, such models are usually clarified by ray-tracing. ray-tracing implies the geometrical-optics approximation, so that the wavelength is set to zero and diffraction effects are not considered. when we set the correlation length of the exponentially decaying correlations equal to the diffraction limit, the crlb converges to practically the same bound on localization precision. it is remarkable that this bound is revealed by our simple experiment using various noise and signal models. our experiments reveal evidence that the diffraction limit determines (1) the bound on localization precision, (2) the sampling rate that provides optimal signal reconstruction, (3) the size of the coherence region and the uncertainty, (4) the upper bound on spatial frequencies, and (5) the upper bound on fisher information and the lower bound on the crlb. with earlier mathematical work, a rigorous link between fisher information, the crlb and uncertainty was established. as uncertainty has a lower bound according to coherence and speckle theory, fisher information must have an upper bound, so that the crlb has a lower bound, all when the noise processes are assumed to be gaussian distributed. our experiments were able to validate those theoretical concepts. this is further confirmed by applying bienaymé's theorem. an interesting observation is that, despite the fact that power flows have twice the spatial bandwidth of the radiation flows determined by amplitudes, their spatial coherence regions are the same in size, as are their degrees of freedom. our paper describes a novel experimental and theoretical framework for estimating the lower bound on uncertainty for localization setups based on classical wave theory, without any other prior knowledge. bienaymé's theorem and the existing crlb for signals with correlated gaussian noise support our postulate, so that the lower bound on uncertainty corresponds to the upper bound on fisher information. our experimental results cannot be explained by existing propagation models with independent noise. it took almost three weeks in throughput time to perform the 60 million measurements, generating repetitive multiplexed measurements. we tried to minimize multi-path effects by avoiding the interfering influence of ground and ceiling reflections. minimizing reflections from other metallic obstacles in the measuring chamber was challenging, but could be overcome. this allowed us to reveal a performance bound, in theory and experiment, for measurements without phase information. our novel two-dimensional localization setup, where we positioned the reference radios on the circumference of the localization area rather than spreading them out in a two-dimensional array over this area, worked well in our los setup. our measurements show that localization performance does not increase indefinitely with an ever increasing number of multiplexed rss signals (space and time), but is limited by the spatial correlations of the far-field rss signals. when sufficient measurements are available to minimize the influence of measurement noise on localization performance, the bound on localization precision is revealed as the region of spatially correlated far-field radiation noise. the determination of this region of spatial correlations is straightforward and can be calculated directly from rss signals.
within this region of correlated rss signals , assumptions of independent measurementsare invalidated , so that the crlb for independent noise underestimates the bounds on radio localization precision .this underestimation is removed by accounting for the correlations in the noise .the bound on the correlations is given by the fundamental bound on the correlation length that we derived from first principles from the propagation equations .the crlb is linked to the uncertainty principle as measurement variance is directly related to this principle as we showed .the bounds on the precision of rss- and tof - based localization are expected to be equal and of the order of half a wavelength of the radiation as can be concluded from our experiments and underlying theoretical modeling .sampling beyond the diffraction limit or the nyquist sampling rate does not further resolve the oscillation period unless near - instantaneous measurements are performed with the a priori knowledge that signal processing is assumed to be based on non - linear mixing .future research is aimed at the inclusion of strong interference effects such as show up in practically any indoor environment .the authors would like to thank adrianus t. de hoop , h. a. lorentz chair emeritus professor , faculty of electrical engineering , mathematics and computer sciences , delft university of technology , delft , the netherlands , for his comments on applying green s theorem as a propagation model in electromagnetic theory .secondly , they would like to thank carlos r. stroud , jr .of the institute of optics , rochester , ny , for sharing his insight on fourier pairs and uncertainties .finally , they would like to thank gustaf hendeby and carsten fritsche of linkping university , sweden , for their comments .35 g. toraldo di francia , `` resolving power and information , '' j. optical soc .45 , no . 7 , 1955. a. j. stam , `` some inequalities satisfied by the quantities of information of fisher and shannon , '' inform . and control , vol .2 , pp . 101112 , 1959 .a. papoulis , `` error analysis in sampling theory , '' proc .ieee , vol .54 , no . 7 , pp .947 - 955 , july 1966 .m. gudmundson , `` correlation model for shadow fading in mobile radio systems , '' ieee electronics letters , 21452146 , 7 nov .h. hashemi , `` the indoor radio propagation channel , '' proc .ieee , vol .81 , no . 7 , pp .943 - 968 , july 1993 .s. kay , `` fundamentals of statistical signal processing : estimation theory , '' prentice hall , 1993 .j. s. lee , i. jurkevich , p. dewaele , p. wambacq , a. costerlinck , `` speckle filtering of synthetic aperture radar images : a review '' , remote sensing reviews , vol .313 - 340 , 1994 .l. cohen , `` time - frequency analysis , '' prentice hall , 1994 .e. s. keeping , `` introduction to statistical inference , '' dover publications , 1995 .m. v. de hoop and a. t. de hoop , `` wavefield reciprocity and optimization in remote sensing , '' proceedings of the royal society of london , pp .641682 , 2000 . m. a. pinsky , `` introduction to fourier analysis and wavelets , '' amer .soc . , 2002 .n. patwari , a. o. h. iii , m. perkins , n. s. correal , and r. j. odea , `` relative location estimation in wireless sensor networks , '' ieee trans .signal process .51 , no . 8 , pp . 21372148 , aug .2003 . wireless medium access control ( mac ) and physical layer ( phy ) specifications for low rate wireless personal area networks ( wpan ) , 802.15.4 - 2003 , 2003 .j. w. 
goodman , `` introduction to fourier optics , '' roberts and company publishers , 2004 .j. w. goodman , `` speckle phenomena in optics : theory and applications , '' roberts and company publishers , 2006 .n. patwari and a. o. hero iii , `` signal strength localization bounds in ad hoc and sensor networks when transmit powers are random , '' in ieee workshop on sensor array and multichannel processing , july 2006 .r. a. malaney , `` nuisance parameters and location accuracy in log - normal fading models , '' ieee trans .wireless commun ., vol . 6 , pp .937 - 947 , march 2007 .n. patwari and p. agrawal , `` effects of correlated shadowing : connectivity , localization , and rf tomography , '' ieee / acm int .conf . on inf .processing in sensor networks , april 2008 .f. gustafsson , f. gunnarsson , `` localization based on observations linear in log range , '' proc .17th ifac world congress , seoul , 2008 .y. shen , and m. z. win , `` fundamental limits of wideband localization part i : a general framework , '' ieee trans .theory , vol .4956 4980 , oct . 2010 .b. j. dil and p. j. m. havinga , `` rss - based self - adaptive localization in dynamic environments , '' int .internet of things , pp .5562 , oct .l. mandel and e. wolf , `` optical coherence and quantum optics , '' cambridge university press , 2013 .b. j. dil , `` lost in space , '' ph.d .dissertation , dept ., twente univ . ,enschede , netherlands , 2013 .a.t . de hoop , `` electromagnetic field theory in ( n+1)-spacetime : a modern time - domain tensor / array introduction , '' proceedings of the ieee 101 , pp . 434450 , 2013 .a. herczyski , `` bound charges and currents , '' american journal of physics , vol .3 , 202205 , 2013 . g. brooker , `` modern classical optics , '' oxford university press , 2014 .t. szabo , `` diagnostic ultrasound imaging : inside out , '' 2014 .j. w. goodman , `` statistical optics , '' wiley , 2015 .p. lpez - dekker , m. rodriguez - cassola , f. de zan , g. krieger , and a. moreira , `` correlating synthetic aperture radar ( cosar ) , '' ieee trans .geoscience and remote sensing , no .99 , pp.1 - 17 , 2015 .http://www.ti.com/lit/ds/symlink/cc2420.pdf , p. 55 , march 2015 .b. j. dil , `` database of fundamental bound measurements and matlab files , '' linkping university , sweden , 2015 .
this paper experimentally and theoretically investigates the fundamental bounds on the radio localization precision of far-field received signal strength (rss) measurements. rss measurements are proportional to power-flow measurements time-averaged over periods long compared to the coherence time of the radiation. our experiments are performed in a novel localization setup using 2.4 ghz quasi-monochromatic radiation, which corresponds to a mean wavelength of 12.5 cm. we experimentally and theoretically show that rss measurements are cross-correlated over a minimum distance that approaches the diffraction limit, which equals half the mean wavelength of the radiation. our experiments show that measuring rss beyond a sampling density of one sample per half the mean wavelength does not increase localization precision, as the root-mean-squared error (rmse) converges asymptotically to roughly half the mean wavelength. this adds to the evidence that the diffraction limit determines (1) the lower bound on localization precision and (2) the sampling density that provides optimal localization precision. we experimentally validate the theoretical relations between fisher information, the cramér-rao lower bound (crlb) and uncertainty, where uncertainty is lower bounded by diffraction as derived from coherence and speckle theory. when we reconcile fisher information with diffraction, the crlb matches the experimental results with an accuracy of 97-98%. radio localization, cramér-rao bounds, fisher information, bienaymé's theorem, sampling theorem, speckles, uncertainty principle.
there has been intense research interest in phase transitions in mass - transport and growth models involving adsorption and desorption , fragmentation , diffusion and aggregation .these processes are ubiquitous in nature and arise in a large number of seemingly diverse systems such as growing interfaces , colloidal suspensions , polymer gels , river networks , granular materials , traffic flows , etc . in these systems ,different nonequilibrium states arise if the rates of the underlying microscopic processes are varied .conservation laws also play an important role in determining both the time - dependent behavior and steady states of such systems . as these steady statesare usually not described by the gibbs distribution , they are hard to determine .however , much insight on this issue has been gained by studying lattice models . due to their simplistic nature, these models can be treated either exactly or via a mean - field ( mf ) approach .they are also simple to implement numerically using monte carlo ( mc ) techniques . in this paper ,we present a pedagogical discussion of the modeling and simulation of mass - transport and growth phenomena .we discuss analytical and numerical techniques in the context of mass - transport models where the elementary move is the fragmentation of mass , and its subsequent diffusion to a neighboring site where it aggregates .the -chip models that we study here are interesting in physical situations where the deposited material consists of polymers .we study the mf limit of these models , focusing on the steady - state mass distribution [ vs. , which is characterized by branches .we also compare the mf results with mc simulations in .this paper is organized as follows . in sec .[ s2 ] , we present a framework for mass - transport models in terms of the rate ( evolution ) equations for , the probability that a site has mass at time .we then discuss various systems which can be described within this framework . in sec .[ s3 ] , we introduce -chip models and obtain analytical results for the mf versions of these models . in sec .[ s4 ] , we present mc results for mass - transport models , and compare them with the corresponding mf solutions .we conclude this paper with a summary and discussion in sec .the appendices contain details of calculations and mc procedures .we consider lattice models of mass transport with the processes of fragmentation , diffusion and aggregation . for simplicity, we describe the models on a 1-dimensional lattice with periodic boundary conditions .( the generalization to higher dimensions is straightforward . ) to begin with , masses are placed randomly at each site with an overall mass density .let be the mass at site at time .the mass variables assume discrete values 0,1,2,3 , etc .the evolution of the system is as follows .a piece of mass chips off a site having mass with rate .this piece deposits on the right neighbor with probability , or on the left neighbor with probability .the mass of the chosen neighbor adds up , while that of the departure site decreases , with the total mass of the system remaining conserved .figure 1 is a schematic depiction of the above model . 
to facilitate mc simulations ,the update rules can be rewritten as follows : + 1 ) randomly pick a site at time with mass .the site is updated as with rate .+ 2 ) the neighboring sites are updated as with probability , or with probability .we also study the above models within a mf approximation which keeps track of the distribution of masses , ignoring correlations in the occupancy of adjacent sites .although the mf theory has this shortcoming , our mc simulations show that it gives an accurate description of the above model , even in the 1-dimensional case .let denote the probability that a site has mass at time . in the mf limit, evolves as follows : these equations enumerate all possible ways in which a site with mass may change its mass . the first term on the right - hand - side ( rhs ) of eq .( [ rate1 ] ) is the `` loss '' of mass due to chipping , i.e. , a site with mass may lose a fragment of mass ( ) to a neighbor . the second term on the rhs represents the loss due to transfer of mass from a neighbor chipping .the third and fourth terms are the `` gain '' terms which represent the ways in which a site with mass greater ( lesser ) than can lose ( gain ) the excess ( deficit ) to yield mass .the terms of eq .( [ rate10 ] ) can be interpreted similarly . in order to ensure that all loss and gain terms have been included in the rate equations ,it is useful to check the sum rule with some algebra , it can be shown that eqs .( [ rate1])-([rate10 ] ) do indeed satisfy the above rule .we provide the steps for this check in appendix a. if other microscopic processes are present , additional terms will have to be included in the rate equations ( [ rate1 ] ) and ( [ rate10 ] ) .for example , if adsorption of a unit mass at a site occurs with rate , we require additional terms and in eq .( [ rate1 ] ) , and in eq .( [ rate10 ] ) . while it is simple to write realistic rate equations by including all relevant microscopic processes , it is often difficult to solve them analytically to obtain either time - dependent or steady - state solutions .several mf models studied earlier may be obtained as special cases of eqs .( [ rate1])-([rate10 ] ) by an appropriate choice of the chipping kernel .some interesting issues which have been addressed in these studies , in addition to obtaining steady - state mass distributions , are the possibilities of phase transitions in these models .we mention two representative examples to highlight the typical questions which are addressed in this area .+ 1 ) majumdar et al . studied a conserved - mass model in which either a single unit or the entire mass could dissociate from a site .thus , has the form where is the relative rate of the 1-chip process .the corresponding rate equations are obtained by substituting eq .( [ ssm ] ) in eqs .( [ rate1])-([rate10 ] ) as follows : where . the steady - state mass distributions [ vs. for eqs .( [ snm1])-([snm0 ] ) were calculated by majumdar et al . as a function of the density of the system .the relevant analytical techniques are described in sec .they observed a _ dynamical phase transition _ as was varied ( being fixed ) , with the different phases being characterized by different steady - state distributions . 
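as a concrete instance of eqs. ( [ rate1 ] )-( [ rate10 ] ), the sketch below integrates the mean-field equations specialised to the 1-chip kernel (treated in detail later in the text) and checks both the sum rule and convergence to the geometric steady state derived there. the equations in the docstring are our reconstruction of that special case (they reproduce the steady state quoted in the text); the truncation m_max, the time horizon and the tolerances are our choices:

```python
import numpy as np
from scipy.integrate import solve_ivp

M_MAX = 60                       # truncation of the mass axis (ours)

def rhs(t, p):
    """mean-field 1-chip equations (our reconstruction):
    dp(0)/dt = p(1) - s1 p(0),
    dp(m)/dt = p(m+1) - p(m) + s1 [p(m-1) - p(m)],  m >= 1,
    with s1 = prob(mass > 0) = 1 - p(0)."""
    s1 = 1.0 - p[0]
    dp = np.zeros_like(p)
    dp[0] = p[1] - s1 * p[0]
    dp[1:-1] = p[2:] - p[1:-1] + s1 * (p[:-2] - p[1:-1])
    return dp

p0 = np.zeros(M_MAX + 1)
p0[1] = 1.0                      # unit mass on every site: rho = 1
sol = solve_ivp(rhs, (0.0, 500.0), p0, rtol=1e-9, atol=1e-12)
p_inf = sol.y[:, -1]
m = np.arange(M_MAX + 1)
s1 = 0.5                         # rho / (1 + rho) for rho = 1
print("sum rule:", p_inf.sum())              # stays 1
print("density :", (m * p_inf).sum())        # stays rho = 1
print(np.round(p_inf[:5], 4))                # numerical steady state
print(np.round((1 - s1) * s1 ** m[:5], 4))   # (1 - s1) s1^m
```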
for , decayed exponentially for large .for , showed a power - law decay , with a universal exponent .finally , the high - density " phase arising for was characterized by the formation of an infinite aggregate ( at ) .the aggregate coexisted with smaller clusters , and their mass distribution showed a power - law decay , .+ 2 ) rajesh et al . studied a system of fragmenting and coagulating particles with mass - dependent diffusion rates . in this model , the case = 0 corresponds to the model in eq .( [ ssm ] ) .the corresponding rate equations are where .for , rajesh et al .showed that there is no dynamical phase transition .the high - density phase with an infinite aggregate disappears , although its imprint in the form of a large aggregate is observed in finite systems .further , the steady - state probability distribution decays exponentially with for all and . before concluding this discussion , a few words regarding the condensation transition are in order .the condensation observed in the model in eqs .( [ snm1])-([snm0 ] ) occurs due to the dynamical rules of evolution and not due to an `` attraction '' between the masses .though it shares analogies with _ bose - einstein condensation _ ( bec ) , an important difference is that these condensates are formed in real space rather than momentum space as is the case in bec . as a matter of fact, condensation occurs in a variety of seemingly diverse systems which are governed by nonequilibrium dynamics .we now discuss some physical applications of mass - transport models .the aim here is to stress the general nature of the questions addressed in a variety of physical situations involving mass transport .recently , there has been much research interest in suspensions of single - domain _ magnetic nanoparticles _ ( mnp ) , which have a wide range of technological applications , e.g. , memory devices , magnetic resonance imaging , targeted drug delivery , bio - markers and bio - sensors . a major reason for the utility of mnpsis the ease with which they can be detected and manipulated by an external magnetic field .their response times are strongly size - dependent , thus introducing the possibility of controlling particle sizes to obtain desired response times .an inherent property of mnp suspensions is cluster formation , due to the presence of attractive interactions of varying strengths between the constituent particles .therefore , mass - transport models with fragmentation and aggregation have been traditionally employed to study clustering dynamics in these systems .the steady - state cluster - size distributions and the average cluster size are determined by the interplay between aggregation ( due to attractive interactions ) and fragmentation ( due to repulsive interactions and thermal noise ) .assuming that the number of particles is , and denoting the number of clusters containing particles at time by , the rate equations in the mf approximation are as follows : in eq .( [ reqn ] ) , and are the aggregation and fragmentation kernels , respectively .the aggregation kernel describes the coalescence of two clusters containing and particles to yield a larger cluster with particles . 
in many models, it is assumed to have a mass - dependent form , .this accounts for the reduced mobility of large clusters .the fragmentation kernel describes the loss of one particle from a cluster with particles , and has also been assumed to have a mass - dependent form , .equation ( [ reqn ] ) can be rewritten in terms of probability distributions [ cf .( [ rate1])-([rate10 ] ) ] by introducing a normalization factor , .our second example is in the context of traffic models . in this context, we discuss the so - called _ bus route model _( brm ) . here, one is interested in the initial conditions or parameters which result in a clustering of buses or a traffic jam .the model is defined on a 1-dimensional lattice of size .each site has two associated variables and : ( i ) if a site is occupied by a bus , 1 ; otherwise 0 .( ii ) if a site has passengers , then 1 ; otherwise 0 .a site can not have both 1 , i.e. , .if there are buses , the bus density is a conserved quantity .however , the total number of sites with passengers is not conserved .the update rules are as follows : ( i ) pick a site at random .( ii ) if 0 , then set 1 with rate , i.e. , a passenger arrives at an empty site with rate .( iii ) if 1 and 0 , a bus hops onto a site with no passengers ( 0 ) with rate , and to a site with passengers ( 1 ) with rate .thus , the variables 0 and 1 and 0 with rate or , as the case may be . usually , as the buses slow down when passengers are being picked up .a jam in the system is a gap between buses of size , which is stable in the thermodynamic limit .the mf approximation of this model considers the distribution of gaps , ignoring the time - correlations in the hopping of buses .it should be noted here that , unlike the mass - transport models described in sec .[ s2a ] , the brm is asymmetric .thus , the movement of the buses is unidirectional although the hop rate is proportional to the size of the gap .these features put the brm in a class of models which are referred to as _ zero - range processes _ ( zrp ) .the important property of a zrp is that it yields a steady - state as a product of marginals calculated using well - defined procedures .further , mf calculations are exact for this class of models . from simulations of the discrete model , heuristic arguments and mftheory , oloan et al .obtain evidence of a jamming transition as a function of the density of buses . in terms of buses and passengers ,the jam may be interpreted as follows .an ideal situation is one where the buses are evenly distributed along the route so that each bus picks up approximately the same number of passengers .jamming or clustering of buses may occur if one of the buses gets delayed due to some fluctuation at a pick - up point .subsequently , the buses which follow catch up with the delayed bus resulting in a jam !an important observation here is that the jamming is a consequence of a local , stochastic dynamics which couples the conserved variable ( buses ) and the nonconserved variable ( passengers ) .the transition is reminiscent of the condensation transition described earlier , and has also been useful in describing _ clogging _ in the transport of sticky particles down a pipe . 
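a minimal random-sequential sketch of the brm rules above follows. the hop rates are interpreted here as acceptance probabilities per attempted move, with the passenger-site rate smaller than the empty-site rate (buses slow down while picking up passengers); the lattice size, bus number, rate values and seed are our own illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(2)

def brm_step(bus, psg, lam=0.1, alpha=1.0, beta=0.5):
    """one random-sequential update of the bus route model.
    bus[i] = 1 if site i holds a bus; psg[i] = 1 if it holds
    waiting passengers (a site cannot hold both)."""
    L = len(bus)
    i = rng.integers(L)
    if bus[i] == 0 and psg[i] == 0:
        if rng.random() < lam:            # passengers arrive
            psg[i] = 1
    elif bus[i] == 1:
        j = (i + 1) % L                   # buses move one way only
        if bus[j] == 0:
            rate = beta if psg[j] else alpha
            if rng.random() < rate:
                bus[j], bus[i] = 1, 0
                psg[j] = 0                # passengers are picked up

L, M = 200, 20
bus = np.zeros(L, dtype=int)
bus[rng.choice(L, M, replace=False)] = 1
psg = np.zeros(L, dtype=int)
for _ in range(200 * L):
    brm_step(bus, psg)
gaps = np.diff(np.flatnonzero(bus))       # headways (wrap-around
print(gaps.max(), round(gaps.mean(), 1))  # gap ignored for brevity)
```

a growing largest gap relative to the mean headway signals the clustering of buses discussed above.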
as a final example , we consider the packing of granular materials , which is important in many technological processes .the crucial issue in these problems is understanding the complex network of forces which is responsible for the static structure and properties of granular materials .one such system which has been subjected to experiments , simulations and analysis is a pack of spherical beads in a compression cell .the bead pack is modeled as a regular lattice of sites , each having a particle of unit mass .the mechanisms which lead to the formation of force chains in the system are summarized in the rules defined below : ( i ) each site in layer is connected to sites in layer .( ii ) only vertical forces are considered explicitly .a fraction of the total weight supported by particle in layer is transmitted to particle in layer .thus , the weight supported by the particle at the site in layer satisfies the stochastic equation the are independently - distributed random variables which satisfy the constraint 1 , required for enforcing the force - balance condition on each particle . in general, the values of at neighboring sites in layer are not independent .the mf approximation of this model ignores these correlations . defining a normalized weight variable , we want to obtain the force distribution , i.e. , the probability that a site at depth is subject to a vertical force . within the mf approximation , it is possible to obtain a recursive equation for .coppersmith et al . found that , for almost all distributions of , the distribution of forces decays exponentially .however , a power - law decay was also observed in some cases .let us first consider the 1-chip model .the chipping kernel has the simple form with the above kernel , eqs .( [ rate1])-([rate10 ] ) become absorbing into the definition of time , these equations simplify to the following form : here , we have defined as the probability of occupancy of a site with mass .consequently , the probability of a site being empty is .the above rate equations were obtained earlier in refs . and were solved exactly .we recall this calculation to illustrate the _ generating - function approach _ for obtaining steady - state solutions of such rate equations . defining the generating function , an equation for be obtained from eq .( [ rate3 ] ) by multiplying both sides by and summing over : +s_{1}z\left[q+p(0,t)\right].\end{aligned}\ ] ] setting , and substituting from the steady - state version of eq .( [ rate30 ] ) , we obtain the value of is fixed by mass conservation , which requires that = , where is the mass density . putting = , we obtain the steady - state distribution is the coefficient of in , and can be obtained by taylor - expanding about .this yields for a more complicated function , we can obtain by inverting .it is useful to illustrate this for the simple form of in eq .( [ gen1 ] ) .thus , here , the closed contour encircles the origin in the complex plane counter - clockwise and lies inside the circle .the integral is calculated using the residue theorem . only those singular points which lie within ( viz ., , which is a pole of order ) contribute to this evaluation .the associated residue is thus , the steady - state mass distribution is notice that , so eq . 
( [ soln1 ] ) is also valid for .using eq .( [ rho1 ] ) , the above mass distribution can be rewritten as where in the case of simple chipping kernels , as in eq .( [ kernel1 ] ) , the above solution can also be obtained directly from the difference equations ( [ rate3])-([rate30 ] ) by setting the left - hand - side ( lhs ) to zero .we can then write down expressions for the first few terms of : which is identical to eq .( [ soln1 ] ) .again , the mass conservation condition = results in eq .( [ rho1 ] ) , as expected .the 1-chip solution in eq .( [ soln11 ] ) is important because of its universal nature . as a matter of fact , it is a steady - state solution for all mf models where the chipping kernel is independent of the mass of the departure site , . to confirm this , we consider eqs .( [ rate1])-([rate10 ] ) with replaced by .the corresponding rate equations are in the steady state , the above equations may be combined to obtain substituting on the rhs of eq .( [ arbk1ss ] ) , we obtain the first and fourth terms cancel , and the second and third terms cancel , so .this confirms that is a solution of eqs .( [ arbk1])-([arbk10 ] ) .the constants and are fixed by the requirements of probability normalization ] and odd values of ] . using eqs .( [ rate1])-([rate10 ] ) , &=&\frac{d}{dt}\left[\sum_{m=1}^{\infty}p(m , t)+p(0,t)\right]\nonumber\\ & = & -\sum_{m=1}^{\infty}p(m , t)\sum_{m_{1}=1}^{m}g_{m}(m_{1 } ) -\sum_{m=1}^{\infty}p(m , t)\sum_{m_{2}=1}^{\infty}p(m_{2},t)\sum_{m_{1}=1}^{m_{2}}g_{m_{2}}(m_{1})\nonumber\\ & & + \sum_{m=1}^{\infty}\sum_{m_{1}=1}^{\infty}p(m+m_{1},t)g_{m+m_{1}}(m_{1})\nonumber\\ & & + \sum_{m=1}^{\infty}\sum_{m_{1}=1}^{m}p(m - m_{1},t)\sum_{m_{2}=m_{1}}^{\infty}p(m_{2},t)g_{m_{2}}(m_{1})\nonumber\\ & & -p(0,t)\sum_{m_{2}=1}^{\infty}p(m_{2},t)\sum_{m_{1}=1}^{m_{2}}g_{m_{2}}(m_{1 } ) + \sum_{m_{1}=1}^{\infty}p(m_{1},t)g_{m_{1}}(m_{1}).\end{aligned}\ ] ] we regroup terms and write \nonumber\\ & & + \left[\sum_{m=1}^{\infty}\sum_{m_{1}=1}^{\infty}p(m+m_{1},t)g_{m+m_{1}}(m_{1 } ) + \sum_{m_{1}=1}^{\infty}p(m_{1},t)g_{m_{1}}(m_{1})\right]\nonumber\\ & & + \sum_{m=1}^{\infty}\sum_{m_{1}=1}^{m}p(m - m_{1},t)\sum_{m_{2}=m_{1}}^{\infty}p(m_{2},t)g_{m_{2}}(m_{1})\\ \label{sum22 } & = & -\sum_{m=1}^{\infty}p(m , t)\sum_{m_{1}=1}^{m}g_{m}(m_{1 } ) -\sum_{m=0}^{\infty}p(m , t)\sum_{m_{2}=1}^{\infty}p(m_{2},t)\sum_{m_{1}=1}^{m_{2}}g_{m_{2}}(m_{1})\nonumber\\ & & + \sum_{m=0}^{\infty}\sum_{m_{1}=1}^{\infty}p(m+m_{1},t)g_{m+m_{1}}(m_{1})\nonumber\\ & & + \sum_{m=1}^{\infty}\sum_{m_{1}=1}^{m}p(m - m_{1},t)\sum_{m_{2}=m_{1}}^{\infty}p(m_{2},t)g_{m_{2}}(m_{1 } ) .\end{aligned}\ ] ] our initial condition for eqs .( [ rate1])-([rate10 ] ) satisfies .therefore , we set on the rhs of eq .( [ sum22 ] ) .this is justified subsequently as it results in /d t=0 ] .the generating function for the 3-chip model is [ cf .( [ genk ] ) ] where . from eq . ( [ steady1 ] ), we have the closed contour encircles the origin and lies inside the region defined by . 
as usual , only res contributes to the integral , and is evaluated as \bigg|_{z=0 } \nonumber\\ \label{res3c } & = & \frac{(1-s_{1})}{3}\left[1 + 2\cos\left(\frac{2\pi(m-3)}{3}\right)\right]s_{3}^{m/3}\nonumber\\ & & + \frac{(s_{1}-s_{2})}{3}\left[1 + 2\cos\left(\frac{2\pi(m-1)}{3}\right)\right]s_{3}^{(m-1)/3 } \nonumber\\ & & + \frac{(s_{2}-s_{3})}{3}\left[1 + 2\cos\left(\frac{2\pi(m-2)}{3}\right)\right]s_{3}^{(m-2)/3}.\end{aligned}\ ] ] then , the steady - state mass distribution is {3}^{m/3 } \nonumber\\ & & + \frac{(s_{1}-s_{2})}{3}\left[1 + 2\cos\left(\frac{2\pi(m-1)}{3}\right)\right]s_{3}^{(m-1)/3 } \nonumber\\ & & + \frac{(s_{2}-s_{3})}{3}\left[1 + 2\cos\left(\frac{2\pi(m-2)}{3}\right)\right]s_{3}^{(m-2)/3 } \nonumber\\ \label{3csol3 } & = & ( 1-s_{1}){s_{3}}^{m/3}\delta_{{\rm mod}(m,3),0 } + ( s_{1}-s_{2}){s_{3}}^{\left(m-1\right)/3}\delta_{{\rm mod}(m,3),1}\nonumber\\ & & + ( s_{2}-s_{3}){s_{3}}^{\left(m-2\right)/3}\delta_{{\rm mod}(m,3),2}.\end{aligned}\ ] ] the relation between the mass density and the s can be obtained directly by using eq .( [ 3csol3 ] ) .thus as for the 2-chip model , the s can be obtained as a function of and the probability sums on each of the branches in eq .( [ conserv ] ) .these result in the following expressions : . initializing the lattice : integer masses are placed on the lattice sites in accordance with the chosen such that . a simple procedure for achieving this is as follows : 1 . choose an integer random number nran( ) from the range ] , where refers to the integer part of .2 . chipping and aggregation : a site is chosen at random . if is non - zero , a unit mass chips and aggregates with a randomly - chosen neighbor .thus , and or with a probability 1/2 . 3 .repeat step 2 for times , which corresponds to one mc step ( mcs ) .4 . compute the mass distribution of lattice sites .repeat steps 2 - 4 for several mcs , storing the mass distribution at intermediate mcs .repeat steps 2 - 5 for a large number of independent lattice configurations , generated via step 1 .7 . compute the configuration - averaged steady - state mass distribution , vs. . 1 .define a matrix of dimension , chosen specific to .for example , if , the rate of chipping a mass of size 1000 is .we can assume that chipping of masses greater than 1000 occurs with a very small rate , and hence these events may be ignored .thus we may set .initialize to zero .the row index corresponds to the mass of the lattice site chosen to fragment .masses greater than may be treated as for the computation of fragmentation rates for reasons discussed in 1 above .the column index refers to the chipped mass .thus the matrix has a triangular form , with the non - zero entries calculated using .for example , if , the rates for the row are [ 1 , 0.25 , 0.1111 , 0.0625 , 0.04 , 0.02777 , 0.0204 , 0.0156 , 0.0123 , 0.01 , 0 , 0 , ... , 0 ] . to connect these to probabilities we normalize these numbers by . 1 .a site is chosen at random .if is non - zero , draw a random number in the interval ( 0,1 ) .2 . go to the row of the table and check the two consecutive entries which sandwich .the column number of the larger entry is the number of mass particles that chip from .thus , and or with probability 1/2 .kuiri , b. joseph , h.p .lenka , g. sahu , j. ghatak , d. kanjilal and d.p .mahapatra , observation of a universal aggregation mechanism and a possible phase transition in au sputtered by swift heavy ions " , phys .* 100 * , 245501 ( 2008 ) . vs. 
[ figure : steady - state mass distribution vs. mass from 1-d mc simulations with three different functional forms for the chipping kernel , plotted on a linear - logarithmic scale . the details of the mc simulations are provided in the text . the solid line denotes the 1-chip solution in eq . ( [ soln11 ] ) . ]
[ figure : ( a ) mass distribution vs. mass on a linear - log scale , from mc simulations ; the initial conditions were characterized by their average density . the solid line denotes the solution in eq . ( [ steady2 ] ) . ( b ) analogous to ( a ) , but the initial conditions for the mc simulations have a mixture of both even and odd masses ; the solid line denotes the solution in eq . ( [ steady2 ] ) , and the inset highlights the staircase structure of the probability distribution . ]
[ figure : ( a ) mass distribution vs. mass from mc simulations of the 3-chip model ; the solid line denotes the result in eq . ( [ steadym ] ) , with the branch parameters calculated from eq . ( [ 3chs ] ) . ( b ) the same plot for a second set of initial conditions ; the solid line again denotes the result in eq . ( [ steadym ] ) . ]
we present a review of nonequilibrium phase transitions in mass - transport models with kinetic processes like fragmentation , diffusion , aggregation , etc . these models have been used extensively to study a wide range of physical problems . we provide a detailed discussion of the analytical and numerical techniques used to study mass - transport phenomena .
most analyses of wireless networks consider a 2d analytical model of the network . however , since requirements in terms of qos and performance keep increasing , and because of the scarcity of the frequency bandwidth , performance forecasts need to be more and more accurate . this is the reason why more accurate analytical models are being developed , in particular 3d models of wireless networks , which should be closer to a real network . a large literature has been developed on the modeling of wireless networks , with the aim of analyzing the capacity , throughput , coverage , and more generally the performance of different types of wireless systems , such as sensor , cellular and ad - hoc networks . these models of networks are based on i ) stochastic geometry : the transmitters are distributed according to a spatial poisson process ; ii ) a hexagonal pattern : the transmitting base stations constitute a regular infinite hexagonal grid ; iii ) a fluid approach : the interfering transmitters are replaced by a continuum . the hexagonal wireless network model is the most used one . this model seems rather reasonable for regular deployments of base stations . however , it is a two - dimensional one : it takes into account neither the height of the antennas nor the propagation in three dimensions . moreover , from an analytical point of view , it is intractable ; therefore , extensive computations are needed to establish performance . among the techniques developed to perform such computations , monte carlo simulations are widely used in conjunction with this model , as are numerical computations in hexagonal networks . more generally , most of the works focus on 2d models of wireless networks . nevertheless , 3d models of wireless networks have been developed and analyzed in terms of capacity . the authors of present a 3d geometrical propagation mathematical model ; based on this model , a 3d reference model for mimo mobile - to - mobile fading channels is proposed . in , the authors propose an exact form of the coverage probability in two cases : i ) interferers form a 3d poisson point process , and ii ) interferers form a 3d modified matérn process ; the results established are then compared with a 2d case . the authors of develop a three - dimensional channel model between radar and cellular base stations , in which the radar uses a two - dimensional antenna array and the base station uses a one - dimensional antenna array ; their approach allows them to evaluate the interference impact . the 3d wireless model for sensor networks developed in allows the authors to develop an analysis where the coverage can be improved , and the tilt angle is the cornerstone of an algorithm which uses the 3d model they propose . in this paper , we develop a three - dimensional analytical model of wireless networks . we establish a closed - form formula of the sinr , and we show that this formula allows analyzing wireless networks with more accuracy than a 2d approach . comparisons of coverage , performance and qos results , established with this 3d spatial fluid model and with a classical 2d model , show the strong interest of using this approach instead of a 2d one . the paper is organized as follows . in section [ model ] , we develop the 3d network model . in section [ 3dfluid ] , the analytical expression of the sinr is established by using this 3d analytical fluid model .
in section [ valid3d ] , the validation of this analytical 3d model is done by comparison with monte carlo simulations . section [ conclusion ] concludes the paper . we consider a wireless network consisting of geographical sites , each one composed of 3 base stations . each antenna covers a sectored cell . we focus our analysis on the downlink , in the context of an ofdma - based wireless network with frequency reuse 1 . let us consider : * the set of geographic sites , uniformly and regularly distributed over a two - dimensional plane ; * the set of base stations , uniformly and regularly distributed over the two - dimensional plane ; the base stations are equipped with directional antennas : = 3 ; * the antenna height , denoted ; * sub - carriers , where we denote the bandwidth of each sub - carrier ; * the transmitted power assigned by the base station to sub - carrier towards user ; * the propagation gain between transmitter and user in sub - carrier . we assume that time is divided into slots , each slot consisting of a given sequence of ofdma symbols . as usual at the network level , we assume that there is no inter - carrier interference ( ici ) , so that there is no intra - cell interference . the total amount of power received by a ue connected to the base station on sub - carrier is given by the sum of a useful signal , an interference power due to the other transmitters , and the thermal noise power . we consider the sinr , defined by : , as the criterion of radio quality . we investigate the quality of service and performance issues of a network composed of sites equipped with 3d directional transmitting antennas . the analyzed scenarios consider that all the sub - carriers are allocated to ues ( full - load scenario ) . consequently , each sub - carrier of any base station is used and can interfere with the ones of other sites . all sub - carriers are independent , so we can focus on a generic one and drop the index . let us consider the path - gain model , where is a constant , is the distance between a transmitter and a receiver , and is the path - loss exponent ( larger than 2 ) . the parameter is the antenna gain ( assuming that receivers have a 0 dbi antenna gain ) . therefore , for a user located at distance from its serving base station , the sinr ( [ sinr ] ) can be expressed , for each sub - carrier ( dropping the index ) , as : where : * is the transmitted power ; * is the pattern of the 3d transmitting antenna of the base station , and is the maximum antenna gain ; * is the horizontal angle between the ue and the principal direction of the antenna ; * is the vertical angle between the ue and the antenna ( see fig . [ antennes3d-2 ] ) ; * , where represents the projection of on the ground . the gain of an antenna in a given direction is defined as the ratio between the power radiated in that direction and the power that an isotropic , lossless antenna would radiate ; this property characterizes the ability of an antenna to focus the radiated power in one direction . the parameter in ( [ sinrdirect ] ) is particularly important for a beamforming analysis . let us notice that it is determined by considering that the power , which would be transmitted in all directions by a non - directive antenna ( over a solid angle of ) , is transmitted in the solid angle given by the horizontal and vertical apertures of the antenna .
in the ideal case where the antenna emits in a cone defined by and , the gain is given by . in our analysis , we conform to the model of for the antenna pattern ( gain , side - lobe level ) . the antenna pattern applied to our scheme is computed as , where and correspond respectively to the horizontal and the vertical antenna patterns . the horizontal antenna pattern used for each base station is given by , where : * is the half - power beamwidth ( 3 db beamwidth ) ; * is the maximum attenuation . the vertical antenna pattern is given by , where : * is the downtilt angle ; * is the 3 db beamwidth .
[ figure : a ue at distance from its serving antenna ; it receives a useful power from that antenna and interference power from the other antennas . ]
each site consists of 3 antennas ( 3 sectors ) . therefore , for any site of the network , we have : where and represent the angles relative to the antenna for the site . for the sake of simplicity , in expression ( [ sinrdirect ] ) we sum over the base stations ( not over the sites ) and denote by and the angles relative to the antenna . for a ue at distance from the antenna , the vertical angle can be expressed as : for interfering antennas , it can be noticed that since , we have , and . therefore the vertical antenna pattern ( [ eq : v_pattern ] ) can be written as $$g_{v}\approx-\min\left[12\left(\frac{\phi_{tilt}}{\phi_{3db}}\right)^{2},a_{m}\right]=g_{v_{db}},$$ where . so we have $$g_{v_{db}}=-\min\left[12\left(\frac{\phi_{tilt}}{\phi_{3db}}\right)^{2},a_{m}\right].$$ therefore , we establish that in this case the vertical antenna gain only depends on the angle . the main assumption of the fluid network modeling consists in replacing a fixed finite number of enbs by an equivalent continuum of enbs spatially distributed in the network . therefore , the transmitting power of the set of interfering enbs is considered as a continuum field all over the network .
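before moving to the fluid model , the sectored pattern described above can be sketched directly ; the 3gpp - like parabolic form follows the surviving formulas , but the numerical beamwidths , downtilt , maximum attenuation and maximum gain below are placeholders , not the paper's settings .

```python
def antenna_gain_db(theta, phi, theta3db=70.0, phi3db=10.0,
                    phi_tilt=8.0, a_m=25.0, g_max_db=17.0):
    """combined horizontal/vertical sectored pattern (angles in degrees);
    every numerical value here is an illustrative assumption."""
    a_h = -min(12.0 * (theta / theta3db) ** 2, a_m)            # horizontal cut
    a_v = -min(12.0 * ((phi - phi_tilt) / phi3db) ** 2, a_m)   # vertical cut
    return g_max_db - min(-(a_h + a_v), a_m)

# ideal-cone reading of the text: g_max ~ 4*pi / (horizontal aperture *
# vertical aperture), apertures in steradians (assumption)

# for a far interferer phi -> 0, so the vertical term freezes at the constant
g_v_db = -min(12.0 * (8.0 / 10.0) ** 2, 25.0)
print(antenna_gain_db(0.0, 8.0), antenna_gain_db(35.0, 0.0), g_v_db)
```

the last line illustrates the approximation used above : for interferers far from the ue , the vertical factor reduces to the constant g_v_db , so only the horizontal pattern varies .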
considering a density of sites , and following the approach developed in , let us consider a ue located at in the area covered by the enb . since each site is equipped with 3 antennas , we can express the denominator of ( [ sinrdirect ] ) as : where the integral represents the interference due to all the other sites of the network , and the discrete sum represents the interference due to the 2 antennas ( enb ) co - localized with the enb ; the index holds for these 2 antennas . this can be further written as : since , for the other enb of the network , the distance , we have , and the interference can be approximated by using ( [ eq : hv_pattern2 ] ) : the approach developed in makes it possible to express as , where represents the inter - site distance ( isd ) . we refer the reader to for the detailed explanation and the validation through monte carlo simulations . therefore , ( [ interference2 ] ) can be expressed as : for a ue located at ( dropping the index ) , relative to its serving enb , the inverse of the sinr is finally given by the expression : where the index holds for the 2 antennas ( enb ) co - localized with the serving antenna ( enb ) . the closed - form formula ( [ sinrfluid2 ] ) allows the calculation of the sinr in an easy way . first of all , it only depends on the distance of a ue to its serving enb , not on the distances to each enb as in the sinr expression ( [ sinrdirect ] ) . this formula also highlights the network characteristic parameters which have an impact on the sinr ( path - loss parameter , inter - site distance , antenna gain ) . only a simple numerical calculation is needed , due to the tractability of the formula . the sinr allows calculating the maximum theoretical achievable throughput of a ue by using the shannon expression ; for a bandwidth , it can be written : * remark : * in the case of realistic wireless network systems , it can be noticed that the mapping between the sinr and the achievable throughput is established by means of _ level curves _ .
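since the exact 3d closed form ( [ sinrfluid2 ] ) is not reproduced in this extract , the sketch below shows only the structure of the fluid - model computation in its classical 2d form ( kelif et al . ) : the interfering continuum is integrated from distance 2 r_c - r , with r_c half the inter - site distance . the antenna - gain factors and the co - sited antennas of the 3d formula are deliberately omitted , so this is a structural illustration , not the paper's result .

```python
import numpy as np

def sinr_fluid_2d(r, isd=0.75, eta=3.5, noise=0.0):
    """classical 2d fluid-model sinr; a structural sketch only.
    r and isd in km, unit transmit power, eta > 2 required."""
    rc = isd / 2.0
    rho_s = 2.0 / (np.sqrt(3.0) * isd ** 2)   # site density, hexagonal layout
    # interference = integral_{2 rc - r}^{inf} rho_s * x**(-eta) * 2 pi x dx
    interference = 2.0 * np.pi * rho_s * (2.0 * rc - r) ** (2.0 - eta) / (eta - 2.0)
    return r ** (-eta) / (interference + noise)

for r in (0.05, 0.15, 0.3):
    print(r, round(10.0 * np.log10(sinr_fluid_2d(r)), 2), "db")
# the shannon throughput then follows as bandwidth * log2(1 + sinr)
```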
moreover , the sinr values established by the two methods are drawn on figures representing a site with three antennas . we present two types of comparisons . we first establish the cdf of the sinr ; indeed , the cdf of the sinr provides a lot of information about the network characteristics : the coverage and the outage probability , the performance distribution , and the quality of service that can be reached by the system . as an example , figure 3 shows that for an outage probability target of 10% , the sinr reaches -8 db , which corresponds to a given throughput . a second comparison , focused on the values of the sinr at each location of the cell , establishes a map of the sinr over the cell .
[ figure : map of the sinr over a cell ; the and axes represent the coordinates , in meters . the inter - site distance in this example is 750 m. ]
for the validation , we compare the two methods by considering realistic values of the network parameters : an urban environment with realistic propagation parameters is simulated , and different tilts and apertures are considered . the scenarios , summarized in tab . [ tab_scenariofigures ] , show that the 3d fluid analytical model and the simulations provide very close values of the sinr .
[ table : scenarios and figures . ]
the figures of scenario 1 ( fig . 3 ) , scenario 2 ( figs . 5 and 6 ) , scenario 3 ( fig . 8 ) , scenario 4 ( fig . 10 ) , scenario 5 ( fig . 12 ) and scenario 6 ( fig . 14 ) show that the analytical model ( blue curves ) and the simulations ( red curves ) provide very close cdf - of - sinr curves . the figures of scenario 1 ( fig . 4 ) , scenario 2 ( fig . 7 ) , scenario 3 ( fig . 9 ) , scenario 4 ( fig . 11 ) , scenario 5 ( fig . 13 ) and scenario 6 ( fig . 15 ) represent the values of the sinr at each location of a cell , where the and axes represent the coordinates ( in meters ) . these figures show that the analytical model ( right side ) and the simulations ( left side ) provide very close maps of the sinr .
[ figures : cdf of the sinr and sinr maps for scenarios 1 - 6 , each characterized by a tilt , a vertical aperture and a horizontal aperture . ]
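a compact sketch of the monte carlo side of the validation : ues are dropped uniformly around the central site of a hexagonal grid , each attaches to its nearest base station , and the cdf of the resulting sinr is read off . antenna patterns , heights and tilts are omitted here , so this only illustrates the simulation skeleton , not the paper's full 3d setup .

```python
import numpy as np

rng = np.random.default_rng(0)
isd, eta, rings, n_ue = 0.75, 3.5, 4, 20000    # km, path loss, layout, samples

# hexagonal site positions around the origin (assumed layout)
sites = np.array([(isd * (i + 0.5 * j), isd * (np.sqrt(3.0) / 2.0) * j)
                  for i in range(-rings, rings + 1)
                  for j in range(-rings, rings + 1)])

ues = rng.uniform(-isd / 2.0, isd / 2.0, size=(n_ue, 2))
d = np.linalg.norm(ues[:, None, :] - sites[None, :, :], axis=2)
p_rx = d ** (-eta)                              # unit power, isotropic antennas
serving = d.argmin(axis=1)                      # attach to the nearest site
signal = p_rx[np.arange(n_ue), serving]
sinr_db = 10.0 * np.log10(signal / (p_rx.sum(axis=1) - signal))

cdf = np.sort(sinr_db)                          # empirical cdf of the sinr
print("10% outage sinr:", round(cdf[int(0.1 * n_ue)], 2), "db")
```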
the aim of our analysis is to propose a model allowing to evaluate the performance , quality of service and coverage reachable in a cell whose standard antennas are replaced by 3d antennas , taking into account the height of the antenna and its tilt . therefore , the whole antenna energy is focused on the cell . this implies that the angle has to be lower than , otherwise ues belonging to other cells could be served by this antenna ; the validation process was done according to this constraint . the analytical closed - form formula ( [ sinrfluid2 ] ) makes it possible to establish cdfs of the sinr very close to the simulated ones , for the different values of , vertical apertures and horizontal apertures , as soon as . moreover , the sinr maps given by the simulations and by the formula are also very close . therefore , the formula is particularly well adapted to the analysis of 3d wireless networks . we develop , and validate , a three - dimensional analytical wireless network model . this model allows us to establish a closed - form formula of the sinr reached by a ue at any location of a cell , for a 3d wireless network . the validation of this model , by comparison with monte carlo simulation results , shows that the two approaches establish very close results , in terms of the cdf of the sinr and also in terms of the sinr map of the cell . this model may be used to analyze wireless networks , in a simple way , with a higher accuracy than a classical 2d approach . jean - marc kelif , marceau coupechoux , philippe godlewski , `` a fluid model for performance analysis in cellular networks '' , eurasip journal on wireless communications and networking , vol . 2010 , article id 435189 , doi:10.1155/2010/435189
in this article we develop a three - dimensional ( 3d ) analytical model of wireless networks . we establish an analytical expression of the sinr ( signal to interference plus noise ratio ) of user equipments ( ue ) by using a 3d fluid model approach of the network . this model makes it possible to evaluate , in a simple way and with high accuracy , the cumulative distribution function of the sinr , and therefore the performance , the quality of service and the coverage of wireless networks . using this 3d wireless network model instead of a standard two - dimensional one to analyze wireless networks is particularly interesting : the 3d model establishes more accurate performance and quality of service results than a 2d one .
geometrical - optical illusions ( gois ) were discovered in the 19th century by german psychologists ( oppel , 1854 ; hering , 1878 ) and have been defined as situations in which there is an awareness of a mismatch of geometrical properties between an item in object space and its associated percept . the distinguishing feature of these illusions is that they relate to misjudgements of geometrical properties of contours , and they show up equally for dark configurations on a bright background and vice versa . for the interested reader , a historical survey of the discovery of geometrical - optical illusions is included in appendix i of . our intention here is not to make a classification of these phenomena , which is already widely present in the literature ( coren and girgus , 1978 ; robinson , 1998 ; wade , 1982 ) . the aim of this paper is to propose a mathematical model for gois based on the functional architecture of the low - level visual cortex ( v1/v2 ) . this neuro - mathematical model will allow us to interpret at a neural level the origin of gois and to reproduce the resulting percept for this class of phenomena . the main idea is to adopt the model of the functional geometry of v1 provided in and to consider that the image stimulus modulates the connectivity . when projected onto the visual space , the modulated connectivity gives rise to a riemannian metric which is at the origin of the visual space deformation . the displacement vector field at every point of the stimulus is mathematically computed by solving a poisson problem , and the perceived image is finally reproduced . the considered phenomena consist , as shown in figure [ fig:1ills ] , of straight lines over different backgrounds ( radial lines , concentric circles , etc . ) . the interaction between target and context either induces an effect of curvature of the straight lines ( figs . [ fig:1:4 ] , [ fig:1:5 ] , [ fig:1:1 ] ) , eliminates the bending effect ( fig . [ fig:1:6 ] ) , or induces an effect of unparallelism ( fig . [ fig:1:2 ] ) . the paper is organised as follows : in section [ sec:1 ] we review the state of the art concerning the previous mathematical models proposed . in section [ sec:2 ] we briefly recall the functional architecture of the visual cortex and the cortical - based model introduced by citti and sarti in . in section [ sec:3bis ] we introduce the neuro - mathematical model proposed for gois , taking into account the modulation of the functional architecture induced by the stimulus . in [ sec:3 ] the numerical implementation of the mathematical model is explained and applied to a number of examples . results are finally discussed . in psychology , the distal stimulus is defined as _ the light reflected off a physical object in the external world _ ; when we look at an image ( distal stimulus ) we can not actually experience the image physically with vision , we can only experience it in our mind as a proximal stimulus . geometrical optical illusions arise when the distal stimulus and its percept differ in a perceivable way .
as explained by westheimer in , we can conveniently divide illusions into those in which spatial deformations are a consequence of the exigencies of processing in the domain of brightness , and the true geometrical - optical illusions , which are misperceptions of geometrical properties of contours in simple figures . some of the most famous geometric illusions of this last type are shown in figure [ fig:1ills ] . the importance of this study lies in the possibility , through the analysis of these phenomena combined with physiological recordings , of helping to guide neuroscientific research ( eagleman ) in understanding the role of lateral inhibition and of feedback mechanisms between different layers of the visual process , and of leading to new experiments and hypotheses on the receptive fields of v1 and v2 . many studies , relying on neuro - physiological and imaging data , show evidence that neurons in at least two visual areas , v1 and v2 , carry signals related to illusory contours , and that signals in v2 are more robust than in v1 ( see the reviews in ) . a more recent study on the tilt illusion , in which the perceived orientation of a grating differs from its physical orientation when surrounded by a tilted context , measured the activated connectivity in and between areas of early visual cortices . these findings suggest that these areas may be involved in gois as well . neurophysiology can help to provide a physical basis for the phenomenological experience of gois , opening the possibility of mathematically modeling them and of integrating subjective and objective experiences . the pioneering work of hoffman dealt with illusions of angle ( i.e. , the ones involving the phenomenon of angular expansion , which is the tendency to perceive , under certain conditions , acute angles as larger and obtuse ones as smaller ) , modeling the generated perceived curves as orbits of a lie transformation group acting on the plane . the proposed model makes it possible to classify the perceptual invariance of the considered phenomena in terms of lie derivatives , and to predict the slope . another model , mathematically equivalent to the one proposed by hoffman , has been proposed by smith , who stated that the apparent curve of geometrical optical illusions of angle can be modeled by a first - order differential equation depending on a single parameter ; by computing this value , an apparent curve can be corrected and plotted in a way that makes the illusion no longer perceived ( see for example fig . 8 of ) . this permits a _ quantitative _ analysis of the perceived distortion . ehm and wackermann in started from the assumption that gois depend on the context of the image , which plays an active role in altering components of the figure . on this basis , they provided a variational approach computing the deformed lines as minima of a functional depending on the length of the curve and on the deflection from orthogonality along the curve ; this last requirement is in accordance with the phenomenological property of regression to the right angle . one of the problems pointed out by the authors is that the approach does not take into account the underlying neurophysiological mechanisms . an entire branch of neural activity modeling , the bayesian framework , has its basis in helmholtz s theory : _ our percepts are our best guess as to what is in the world , given both sensory data and prior experience . _
the described idea of unconscious inference is at the basis of bayesian statistical decision theory , a principled method for determining optimal performance in a given perceptual task . these methods consist in attributing a probability to each possible true state of the environment given the stimulus on the retina , and then in establishing the way prior experience influences the final guess , the built proximal stimulus ( see for examples of bayesian models in perception ) . an application of this theory to motion illusions has been provided by weiss et al . in , and a review in . fermüller and malm in attributed the perception of geometric optical illusions to the statistics of visual computations : noise ( uncertainty of measurements ) is the reason why systematic errors occur in the estimation of the features ( intensity of the image points , positions of points , and orientations of edge elements ) , and illusions arise as the result of errors due to quantization . walker tried to combine the neural theory of receptive field excitation with mathematical tools to provide an equation able to determine the disparity between the apparent line of an illusion and its corresponding actual line , in order to reproduce the perceptual errors that occur in gois ( the ones involving straight lines ) . in our model we aim to combine psycho - physical evidence and neurophysiological findings , in order to provide a neuro - mathematical model able to interpret and simulate gois . the visual process is the result of several retinal and cortical mechanisms which act on the visual signal . the retina is the first part of the visual system responsible for the transmission of the signal , which passes through the lateral geniculate nucleus , where a pre - processing is performed , and arrives in the visual cortex , where it is further processed . the receptive field ( rf ) of a cortical neuron is the portion of the retina which the neuron reacts to , and the receptive profile ( rp ) is the function that models the activation of a cortical neuron when a stimulus is applied to a point of the retinal plane . simple cells of the visual cortices v1 and v2 are sensitive to the position and orientation of the contrast gradient of an image ; their properties have been experimentally described by de angelis in , see figure [ fig:2e ] . from the neurophysiological point of view , the orientation selectivity and the spatial and temporal frequency of cells in v2 differ little from those in v1 , while receptive fields in v2 are larger than those in v1 . considering a basic geometric model , the set of simple - cell rps can be obtained via translations by a vector and rotations by an angle from a unique mother profile . daugman , jones and palmer showed that gabor filters are a good approximation of the receptive profiles of simple cells in the primary visual cortices v1 and v2 ; another approach is to model receptive profiles as gaussian derivatives , as introduced by young in and koenderink in , but for our purposes the two approaches are equivalent . a good expression for the mother gabor filter is : where is the ratio between and the spatial wavelength of the cosine factor . translations and rotations can be expressed as : hence a general rp can be expressed as : a set of rps generated with equation [ group_law ] is shown in figure [ fig43332 ] . the retinal plane is identified with the -plane , whose local coordinates will be denoted with .
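to make the construction above concrete , the following sketch generates a bank of rps by rotating and translating a mother gabor filter , in the spirit of equation [ group_law ] ; the grid size , the width sigma and the wavelength are illustrative assumptions , with the sigma - to - wavelength ratio playing the role of the parameter mentioned in the text .

```python
import numpy as np

def gabor_rp(theta, x0=0.0, y0=0.0, size=32, sigma=4.0, lam=10.0):
    """receptive profile: rotation by theta and translation by (x0, y0)
    of a mother gabor filter. parameter values are assumptions."""
    y, x = np.mgrid[-size // 2:size // 2, -size // 2:size // 2].astype(float)
    xr = (x - x0) * np.cos(theta) + (y - y0) * np.sin(theta)
    yr = -(x - x0) * np.sin(theta) + (y - y0) * np.cos(theta)
    return np.exp(-(xr ** 2 + yr ** 2) / (2.0 * sigma ** 2)) \
        * np.exp(1j * 2.0 * np.pi * yr / lam)

# a hypercolumn: one filter per orientation over the same retinal point
bank = [gabor_rp(t) for t in np.linspace(0.0, np.pi, 32, endpoint=False)]
# simple-cell output: convolve the stimulus with each filter; the odd
# (imaginary) parts act as boundary detectors
```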
when a visual stimulus of intensity activates the retinal layer of photoreceptors , the neurons whose rfs intersect spike , and their spike frequencies can be modeled ( taking into account just the linear contributions ) as the integral of the signal against the set of gabor filters . the expression for this output is : on the right - hand side of the equation , the integral of the signal with the real and imaginary parts of the gabor filter is expressed . the two families of cells have different shapes , hence they detect different features ; in particular , odd cells will be responsible for boundary detection . the term _ functional architecture _ refers to the organisation of the cells of the primary visual cortex in structures . the hypercolumnar structure , discovered by the neurophysiologists hubel and wiesel in the 60s , organizes the cells of v1/v2 in columns ( called hypercolumns ) , each covering a small part of the visual field and corresponding to parameters such as orientation , scale , direction of movement and color , for a fixed retinal position .
[ figure : over each retinal point we consider the set of all possible orientations . ]
in our framework , over each retinal point we will consider a whole hypercolumn of cells , each one sensitive to a specific instance of orientation . hence to each position of the retina we associate a whole set of filters . this expression associates to each point of the proximal stimulus all possible feature orientations in the space of features , and defines a fiber over each point . in this way the hypercolumnar structure is described in terms of differential geometry , but we need to explain how the orientation selectivity is performed by the cortical areas in the space of features . physiologically , the orientation selectivity is the action of short - range connections between simple cells belonging to the same hypercolumn , which select the most probable response from the energy of the receptive profiles . horizontal connections are long - ranged and connect cells of approximately _ the same orientation _ . since the connectivity between cells is defined on the tangent bundle , we now define the generators of this space . the change of variable defined through in acts on the basis for the tangent bundle , giving as frame in polar coordinates : as presented in , the whole space of features is described in terms of a 3-dimensional fiber bundle , whose generators are for the base and for the fiber . these vector fields generate the tangent bundle of . since horizontal connectivity is very anisotropic , the three generators are weighted by a strongly anisotropic metric . we now introduce the sub - riemannian metric with which citti and sarti in proposed to endow the group , to model the long - range connectivity of the primary visual cortex v1 .
starting from the vector fields , and , we define a metric for which the inverse ( responsible for the connectivity in the cortex ) is : with . cortical curves in v1 will be a linear combination of the vector fields and , the generators of the 2-dimensional horizontal space , while they will have a vanishing component along . the scale parameter varies depending on the image resolution and is set in accordance with the stimulus being processed . it is taken quite large in all examples , in such a way as to obtain a smooth tensor field covering all points of the image . this is in accordance with the hypothesis , previously introduced , that mechanisms in v2 , where the receptive fields of simple cells are larger than in v1 , play a role in such phenomena . the constant has been chosen for all the examples as . the differential problem in is approximated with a central finite difference scheme and solved with a classical pde linear solver . we now discuss all the results obtained through the presented algorithm . the hering illusion , introduced in 1861 by hering , a german physiologist , is presented in figure [ fig:1:4 ] . in this illusion , two vertical straight lines are presented in front of a radial background , so that the lines appear as if they were bowed outwards . in order to help the reader , in figure [ fig:11:1 ] we superpose on the initial illusion two red vertical lines , which indeed coincide with the ones present in the stimulus . as described in the previous sections , we first convolve the distal stimulus with the entire bank of gabor filters : we take 32 orientations selected in , pixels . following the process , we compute using equation , then solve equation , obtaining the perceived displacement ; once it is applied to the initial stimulus , the proximal stimulus is recovered . the result of the computation is shown in figure [ her_5_fig1 ] : the distorted image folds the parallel lines ( in black ) against the straight lines ( in red ) of the original stimulus ( figure [ fig:11:4 ] ) . a variant of the hering illusion , introduced by wundt in the 19th century , is presented in figure [ fig:1:5 ] . in this illusion , two straight horizontal lines look as if they were bowed inwards , due to the distortion induced by the crooked lines on the background . for the convolution of the distal stimulus with gabor filters we select 32 orientations in , pixels . we then apply the previous model and obtain the result presented in figure [ wundt_sec5 ] . the computed vector fields are concentrated in the central part of the image and point toward the center ; they indicate the direction of the displacement , which bends the parallel lines inwards . in figure [ fig:15:4 ] the proximal stimulus is computed through the expression : . in black we indicate the displaced dots of the initial stimulus ; details of the distances between the bent curves and the original straight lines are shown .
[ figure : the wundt illusion . ( a ) the distal stimulus , with two red reference lines clarifying that the horizontal lines present in the image are indeed straight . ( b ) representation of the projection onto the retinal plane of the polarized connectivity in ( [ ptensor ] ) : in blue the tensor field , in cyan the eigenvector related to the first eigenvalue , whose direction is tangent to the level lines of the distal stimulus . ( c ) the computed displacement field . ( d ) the displacement applied to the image : in black the proximal stimulus , represented as displaced points of the distal stimulus ; in red two straight reference lines , to put in evidence the curvature of the target lines . ]
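the poisson step mentioned above ( a central finite difference scheme plus a classical linear solver ) can be sketched as follows ; the homogeneous dirichlet boundary values and the generic right - hand side are simplifying assumptions , since the exact form of the differential problem is not reproduced in this extract .

```python
import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.linalg import spsolve

def solve_poisson(f, h=1.0):
    """solve laplace(u) = f with the 5-point stencil, u = 0 on the boundary."""
    n, m = f.shape
    A = lil_matrix((n * m, n * m))
    for i in range(n):
        for j in range(m):
            k = i * m + j
            A[k, k] = -4.0
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                if 0 <= i + di < n and 0 <= j + dj < m:
                    A[k, (i + di) * m + (j + dj)] = 1.0  # boundary terms drop out
    return spsolve(A.tocsr(), f.ravel() * h * h).reshape(n, m)

# each displacement component gets its own right-hand side, derived from the
# stimulus-modulated metric, and the two components are solved independently
```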
this illusion , introduced by ehm and wackermann in , consists in presenting a square over a background of concentric circles ( figure [ fig:1:1 ] ) . this context , the same that we find in the ehrenstein illusion , bends the edges of the square ( red lines in [ fig:19:1 ] ) toward the center of the image . here we take the same number of orientations , 32 , selected in , and pixels . the resulting distortion is shown in figure [ fig:19:4 ] .
[ figure : the square illusion of ehm and wackermann . panels as in the previous figure : the distal stimulus ; the projection onto the retinal plane of the polarized connectivity in ( [ ptensor ] ) , with the tensor field in blue and the first eigenvector in cyan ; the computed displacement field ; and the displacement applied to the image , with a red square as reference to put in evidence the curvature of the target lines . ]
here we present three modified hering illusions ( see figure [ mod_her ] ) : in the first one , the straight lines are positioned further from the center than in the classical hering illusion ; in the second one , the straight lines are positioned nearer to the center than in the reference hering illusion . for coherence with the hering example , the orientations selected are 32 in , and pixels . all other parameters are fixed during these three experiments . in the proposed modified hering illusions the vertical lines are straight and parallel as in the hering , but since they are located further from / nearer to the center of the image , the perceived bending turns out to be less / more intense . in accordance with the displacement vector fields shown in figure [ fig:11:3 ] , as we move away from / approach the center , the magnitude of the computed displacement decreases / increases . in figure [ fig:22:4 ] two straight lines are put over an incoherent background , composed of randomly oriented segments ; as we can see from [ fig:23:4 ] , no displacement is perceived nor computed by the present algorithm . the wundt - hering illusion ( figure [ fig:1:6 ] ) combines the effect of the backgrounds of the hering and wundt illusions . in this illusion , two straight horizontal lines are presented in front of inducers which bow them outwards and inwards at the same time , inhibiting the bending effect ; as a consequence , the horizontal lines are indeed perceived as straight . as previously explained for the modified hering illusion , this phenomenon too can be interpreted in terms of lateral interaction between cells belonging to the same neighborhood . here we take 32 orientations selected in the interval , pixels .
[ figure : the wundt - hering illusion . panels as above : the distal stimulus ; the projection onto the retinal plane of the polarized connectivity in ( [ ptensor ] ) , with the tensor field in blue and the first eigenvector in cyan ; the computed displacement field ; and the displacement applied to the image , with two straight red lines as reference . ]
the zöllner illusion ( figure [ fig:1:2 ] ) consists in a pattern of oblique segments surrounding parallel lines , which creates an effect of unparallelism . as in the previous experiments , in figure [ fig:27:1 ] we superimpose two red lines to identify the straight lines . here we take 32 orientations selected in the interval , pixels .
[ figure : the zöllner illusion . panels as above : the distal stimulus ; the projection onto the retinal plane of the polarized connectivity in ( [ ptensor ] ) , with the tensor field in blue and the first eigenvector in cyan ; the computed displacement field ; and the displacement applied to the image , with two straight red lines as reference to put in evidence the unparallelism of the target lines . ]
in this paper we presented a neuro - mathematical model , based on the functional architecture of the visual cortex , to explain and simulate the perceptual distortion due to geometrical - optical illusions in their geometrical context . in our model , perceptual distortion is due to the riemannian metric induced on the image plane by the connectivity activated by the image stimulus . its inverse is interpreted as a strain tensor , and we computed the deformation in terms of a displacement field which arises as the solution of . this technique has been applied to a number of test cases , and the results are qualitatively in good agreement with human perception . in the future , this work could be extended to functional architectures involving the feature of scale , starting from the models provided by sarti , citti and petitot in , ; this will make it possible to provide a model for scale illusions , such as the delboeuf illusion , see . another direction for future work will be to provide a quantitative analysis of the described phenomena , such as the one proposed by smith , and to directly compare the developed theory with observations of gois through neuro - imaging techniques . this project has received funding from the european union s seventh framework programme , marie curie actions - initial training network , under grant agreement n. 607643 , `` metric analysis for emergent technologies ( manet ) '' . we would like to thank b. ter haar romeny , university of technology eindhoven , and m. ursino , university of bologna , for their important comments and remarks on the present work . angelucci , a. , levitt , j.b . , walton , e.j . , hupe , j.m . , bullier , j. , lund , j.s . : circuits for local and global signal integration in primary visual cortex . the journal of neuroscience * 22*(19 ) , 8633 - 8646 ( 2002 ) . bosking , w.h . , zhang , y. , schofield , b. , fitzpatrick , d. : orientation selectivity and the arrangement of horizontal connections in tree shrew striate cortex . the journal of neuroscience * 17*(6 ) , 2112 - 2127 ( 1997 ) . murray , m.m . , wylie , g.r . , higgins , b.a . , javitt , d.c . , schroeder , c.e . , foxe , j.j . : the spatiotemporal dynamics of illusory contour processing : combined high - density electrical mapping , source analysis , and functional magnetic resonance imaging . the journal of neuroscience * 22*(12 ) , 5055 - 5073 ( 2002 ) . song , c. , schwarzkopf , d.s . , lutti , a. , li , b. , kanai , r. , rees , g.
: effective connectivity within human primary visual cortex predicts interindividual diversity in illusory perception . the journal of neuroscience * 33*(48 ) , 18781 - 18791 ( 2013 ) .
geometrical optical illusions have been the object of many studies due to the possibility they offer to understand the behaviour of low - level visual processing . they consist in situations in which the perceived geometrical properties of an object differ from those of the object in the visual stimulus . starting from the geometrical model introduced by citti and sarti in , we provide a mathematical model and a computational algorithm which allow us to interpret these phenomena and to qualitatively reproduce the perceived misperception .
we report our analysis of the web traffic of approximately one thousand residential users over a two - month period .this data set preserves the distinctions between individual users , making possible detailed per - user analysis .we believe this is the largest study to date to examine the complete click streams of so many users in their place of residence for an extended period of time , allowing us to observe how actual users navigate a hyperlinked information space while not under direct observation .the first contributions of this work include the discoveries that the popularity of web sites as measured by distinct visitors is unbounded ; that many of the power - law distributions previously observed in web traffic are aggregates of log - normal distributions at the user level ; and that there exist two populations of users who are distinguished by whether or not their web activity is largely mediated by portal sites .a second set of contributions concerns our analysis of browsing sessions within the click streams of individual users .the concept of a web session is critical to modeling real - world navigation of hypertext , understanding the impact of search engines , developing techniques to identify automated navigation and retrieval , and creating means of anonymizing ( and de - anonymizing ) user activity on the web .we show that a simple timeout - based approach is inadequate for identifying sessions and present an algorithm for segmenting a click stream into _ logical sessions _ based on referrer information .we use the properties of these logical sessions to show that actual users navigate hypertext in ways that violate a stateless random surfer model and require the addition of backtracking or branching .finally , we emphasize which aspects of this data present possible opportunities for anomaly detection in web traffic .robust anomaly detection using these properties makes it possible to uncover `` bots '' masquerading as legitimate user agents .it may also undermine the effectiveness of anonymization tools , making it necessary to obscure additional properties of a user s web surfing to avoid betraying their identity .in the remainder of this paper , after some background and related work , we describe the source and collection procedures of our web traffic data .the raw data set includes over 400 million http requests generated by over a thousand residential users over the course of two months , and we believe it to provide the most accurate picture to date of the hypertext browsing behavior of individual users as observed directly from the network .our main contributions are organized into three sections : * we confirm earlier findings of scale - free distributions for various per - site traffic properties aggregated across users .we show this also holds for site popularity as measured by the number of unique vistors .( [ section - host ] ) * we offer the first characterization of individual traffic patterns involving continuous collection from a large population .we find that properties such as jump frequency , browsing rates , and the use of portals are not scale - free , but rather log - normally distributed .only when aggregated across users do these properties exhibit scale - free behavior .( [ section - user ] ) * we investigate the notion of a web `` session , '' showing that neither a simple timeout nor a rolling average provide a robust definition .we propose an alternative notion of _ logical _ session and provide an algorithm for its construction . 
while logical sessions have no inherent temporal scale , they are amenable to the addition of a timeout with little net effect on their statistical properties .( [ section - session ] ) we conclude with a discussion of the limitations of our data , the implications of this work for modeling and anomaly detection , and potential future work in the area .internet researchers have been quick to recognize that structural analysis of the web becomes far more useful when combined with actual _ behavioral _ data .the link structure of the web can differ greatly from the set of paths that are actually navigated , and it tells us little about the behavior of individual users .a variety of behavioral data sources exist that can allow researchers to identify these paths and improve web models accordingly .the earliest efforts have used browser logs to characterize user navigation patterns , time spent on pages , bookmark usage , page revisit frequencies , and overlap among user paths .the most direct source of behavioral data comes from the logs of web servers , which have been used for applications such as personalization and improving caching behavior .more recent efforts involving server logs have met with notable success in describing typical user behavior . because search engines serve a central role in users navigation , their log data is particularly useful in improving search results based on user behavior .other researchers have turned to the internet itself as a source of data on web behavior .network flow data generated by routers , which incorporates high - level details of internet connections without revealing the contents of individual packets , has been used to identify statistical properties of web user behavior and discriminate peer - to - peer traffic from genuine web activity .the most detailed source of behavioral data consists of actual web traffic captured from a running network , as we do here .the present study most closely relates to the work of qiu _ et al . _ , who used captured http packet traces to investigate a variety of statistical properties of users browsing behavior , especially the extent on which they appear to rely on search engines in their navigation of the web .we have also used captured http requests in our previous work to describe ways in which pagerank s random - surfer model fails to approximate actual user behavior , which calls into question its use for ranking search results .one way of overcoming these shortcomings is to substitute actual traffic data for ranking pages .however , this may create a feedback cycle in which traffic grows super - linearly with popularity , leading to a situation ( sometimes called `` googlearchy '' ) in which a few popular sites dominate the web and lesser known sites are difficult to discover .more importantly for the present work , simply accepting traffic data as a given does not further our understanding of user behavior .we can also overcome the deficiencies of the random - surfer model by improving the model itself . this paper offers analysis of key features of observed behavior to support the development of improved agent - based models of web traffic . 
the present study also relates to work in anomaly detection and anonymization software for the web .the web tap project , for example , attempted to discover anomalous traffic requests using metrics such as request regularity and interrequest delay time , quantities which we discuss in the present work .the success of systems that aim to preserve the anonymity of web users is known to be dependent on a variety of empirical properties of behavioral data , some of which we directly address here .the click data we use in this study was gathered from a dedicated freebsd server located in the central routing facility of the bloomington campus of indiana university ( figure [ fig : architecture ] ) .this system had a 1 gbps ethernet port that received a mirror of all outbound network traffic from one of the undergraduate dormitories .this dormitory consists of four wings of five floors each and is home to just over a thousand undergraduates .its population is split roughly evenly between men and women , and its location causes it to have a somewhat greater proportion of music and education students than other campus housing . to obtain information on individual http requests passing over this interface, we first use a berkeley packet filter to capture only packets destined for tcp port 80 . while this eliminates from consideration all web traffic running on non - standard ports, it does give us access to the great majority of it .we make no attempt to capture or analyze encrypted ( https ) traffic using tcp port 443 .once we have obtained a packet destined for port 80 , we use a regular expression search against the payload of the packet to determine whether it contains an http get request . if we do find an http get request in the packet, we analyze the packet further to determine the virtual host contacted , the path requested , the referring url , and the advertised identity of the user agent .we then write a record to our raw data files that contains the mac address of the client system , a timestamp , the virtual host , the path requested , the referring url , and a flag indicating whether the user agent matches a mainstream browser ( internet explorer , mozilla / firefox , safari , or opera ) .we maintain record of the mac address only in order to distinguish the traffic of individual users .we thus assume that most computers in the building have a single primary user , which is reasonable in light of the connectedness of the student population ( only a small number of public workstations are available in the dormitory ) . furthermore, as long as the users do not replace the network interface in their computer , this information remains constant .the aggregate traffic of the dormitory was sufficiently low so that our sniffing system could maintain a full rate of collection without dropping packets .while our collection system offers a rare opportunity to capture the complete browsing activity of a large user population , we do recognize some potential disadvantages of our data source . because we do not perform tcp stream reassembly, we can only analyze http requests that fit in a single 1,500 byte ethernet frame . while the vast majority of requests do so , some get - based web services generate extremely long urls . without streamreassembly , we can not log the web server s response to each request : some requests will result in redirections or server errors , and we are unable to determine which ones . 
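a minimal sketch of the collection step described above , assuming the scapy library is available ; the header names , the regular expressions and the use of the source mac address as a user key mirror the text , while everything else ( stream reassembly , logging format ) is simplified away .

```python
import re
import time
from scapy.all import sniff, Ether, TCP, Raw   # needs capture privileges

GET_RE = re.compile(rb'^GET (\S+) HTTP/1\.[01]\r\n', re.S)
HDR_RE = re.compile(rb'^(Host|Referer|User-Agent): *(.*?)\r\n', re.M | re.I)

def handle(pkt):
    if pkt.haslayer(TCP) and pkt[TCP].dport == 80 and pkt.haslayer(Raw):
        payload = pkt[Raw].load
        m = GET_RE.match(payload)          # the request must fit in one frame
        if m:
            hdr = {k.lower(): v for k, v in HDR_RE.findall(payload)}
            print(time.time(), pkt[Ether].src,   # mac distinguishes users
                  hdr.get(b'host', b''), m.group(1), hdr.get(b'referer', b''))

sniff(filter='tcp dst port 80', prn=handle, store=False)
```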
finally , a user can spoof the http referrer field ; we assume that few students do so , and those who do generate a small portion of the overall traffic .the click data was collected over a period of about two months , from march 5 , 2008 through may 3 , 2008 .this period included a week - long vacation during which no students were present in the building . during the full data collection period , we logged nearly 408 million http requests from a total of 1,083 unique mac addresses .not every http request from a client is indicative of an actual human being trying to fetch a web page ; in fact , such requests actually constitute a minority of all http requests . for this reason , we retain only those urls that are likely to be requests for actual web pages , as opposed to media files , style sheets , javascript code , images , and so forth .this determination is based on the extension of the url requested , which is imprecise but functions well as a heuristic in the absence of access to the http _ content - type _ header in the server responses .we also filtered out a small subset of users with negligible activity ; their traffic consisted largely of automated windows update requests and did not provide meaningful data about user activity .finally , we also discovered the presence of a poorly - written anonymization service that was attempting to obscure traffic to a particular adult chat site by spoofing requests from hundreds of uninvolved clients .these requests were also removed from the data set .we found that some web clients issue duplicate http requests ( same referring url and same target url ) in nearly simultaneous bursts .these bursts occur independently of the type of url being requested and are less than a single second wide .we conjecture that they may involve checking for updated content , but we are unable to confirm this without access to the original http headers . because this behavior is so rapid that it can not reflect deliberate activity of individual users , we also removed the duplicate requests from the data set .privacy concerns and our agreement with the human subjects committee of our institution also obliged us to try to remove all identifying information from the referring and target urls .one means of doing so is to strip off all identifiable query parameters from the urls . applying this anonymization procedure affects roughly one - third of the remaining requests .the resulting data set ( summarized in table [ table : data ] ) is the basis for all of the description and analysis that follows .
[ table : data caption : approximate dimensions of the filtered and anonymized data set . ]
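the filtering steps just described ( keeping page - like urls , dropping near - simultaneous duplicate bursts , and stripping query parameters ) can be summarized as follows . the extension whitelist and the one - second duplicate window are assumptions consistent with the text , not the exact values used in the study .

```python
# sketch of the request-filtering pipeline described above ; the extension
# list and the duplicate window are illustrative assumptions .
from urllib.parse import urlsplit, urlunsplit

PAGE_EXTS = {"", ".html", ".htm", ".php", ".asp", ".aspx", ".jsp", ".cgi"}
DUP_WINDOW = 1.0  # duplicate bursts are "less than a single second wide"

def looks_like_page(url):
    """extension-based heuristic for 'actual web page' requests."""
    path = urlsplit(url).path
    dot = path.rfind(".")
    ext = path[dot:].lower() if dot > path.rfind("/") else ""
    return ext in PAGE_EXTS

def anonymize(url):
    """strip identifiable query parameters, keeping scheme/host/path."""
    parts = urlsplit(url)
    return urlunsplit((parts.scheme, parts.netloc, parts.path, "", ""))

def filter_stream(records):
    """records: time-ordered (mac, timestamp, url, referrer) tuples."""
    last_seen = {}  # (mac, url, referrer) -> timestamp of last kept copy
    for mac, ts, url, ref in records:
        if not looks_like_page(url):
            continue  # media files, style sheets, scripts, images, ...
        key = (mac, url, ref)
        if ts - last_seen.get(key, float("-inf")) < DUP_WINDOW:
            continue  # near-simultaneous duplicate burst
        last_seen[key] = ts
        yield mac, ts, anonymize(url), anonymize(ref) if ref else ref
```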
even though we have defined sessions logically , they can still be considered from the perspective of time .if we calculate the difference between the timestamp of the request that first created the session and the timestamp of the most recent request to add a leaf node , we obtain the duration of the logical session .when we examine the distribution of the durations of the sessions of a user , we encounter the same situation as for the case of interclick times : power - law distributions for every user .furthermore , when we consider the exponent of the best power - law fit of each user s data , we find the values are normally distributed with a mean value , as shown in figure [ fig : bracket_gamma ] .no user has a well - defined mean duration for their logical sessions ; as also suggested by the statistics of interclick times , the presence of strong regularity in a user s session behavior would be anomalous .
[ fig : bracket_gamma caption : distribution of the exponent of the best power - law approximation to the distribution of logical session duration for each user . the fit is a normal distribution with mean and standard deviation . these low values indicate unbounded variance and the lack of any central tendency in the duration of a logical session . ]
it is natural to speculate that we can get the best of both worlds by extending the definition of a logical session to include a timeout , as was done in previous work on referrer trees .such a change is quite straightforward to implement : we simply modify the algorithm so that a request can not attach to an existing session unless the attachment point was itself added within the timeout ( a sketch of this rule in code is given below ) .this allows us to have one branch of the browsing tree time out while still allowing attachment on a more active branch . while the idea is reasonable , we unfortunately find that the addition of such a timeout mechanism once again makes the statistics of the sessions strongly dependent on the particular timeout selected .as shown in figure [ fig : logical_stat ] , the number of sessions per user , mean node count , mean depth , and ratio of nodes to tree depth are all dependent on the timeout .
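the modified attachment rule is easy to state in code . the sketch below is our reading of the algorithm ( one referrer tree per logical session , attachment allowed only if the target node is fresh enough ) ; the data layout and the 15 - minute default are illustrative assumptions .

```python
# sketch of logical-session construction with a per-node timeout , as
# described above ; names and the default timeout are illustrative .
TIMEOUT = 15 * 60  # seconds ; figure [ fig : logical_stat ] suggests ~15 min

class Node:
    def __init__(self, url, ts, session):
        self.url, self.ts, self.session = url, ts, session
        self.children = []

def build_sessions(requests):
    """requests: time-ordered (timestamp, target_url, referrer_url) tuples
    for a single user. returns the list of session roots (referrer trees)."""
    sessions = []
    latest = {}  # url -> most recently added node bearing that url
    for ts, url, ref in requests:
        parent = latest.get(ref) if ref else None
        # a request cannot attach unless the attachment point was itself
        # added within the timeout ; one branch of a tree may time out
        # while a more active branch still accepts attachments .
        if parent is not None and ts - parent.ts <= TIMEOUT:
            node = Node(url, ts, parent.session)
            parent.children.append(node)
        else:
            node = Node(url, ts, None)
            node.session = node  # empty or stale referrer starts a session
            sessions.append(node)
        latest[url] = node
    return sessions
```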
on the other hand , in contrast to sessions defined purely by timeout , this dependence becomes smaller as the timeout increases , suggesting that logical sessions with a timeout of around 15 minutes may be a reasonable compromise for modeling and further analysis .in this paper we have built on the network - sniffing approach to gathering web traffic that we first explored in , extending it to track the behavior of individual users .the resulting data set provides an unprecedented and accurate picture of human browsing behavior in a hypertext information space as manifested by over a thousand undergraduate students in their residences .the data confirm previous findings about long - tailed distributions in site traffic and reveal that the popularity of sites is likewise unbounded and without any central tendency .they also show that while many aspects of web traffic have been shown to obey power laws , these power - law distributions often represent the aggregate of distributions that are actually log - normal at the user level .the lack of any regularity in interclick times for web users leads to the conclusion that sessions can not be meaningfully defined with a simple timeout , leading to our presentation of logical sessions and an algorithm for deriving them from a click stream .these logical sessions illustrate further drawbacks of the random surfer model and can be modified to incorporate timeouts in a relatively robust way .these findings have direct bearing on future work in modeling user behavior in hypertext navigation .the stability of the proportion of empty - referrer requests across all users implies that although not every page is equally likely to be the cause of a jump , the overall chance of a jump occurring is constant in the long run .the finding that the branching factor of the logical sessions is definitely greater than one means that plausible agent - based models for random walks must incorporate state , either through backtracking or branching .our indications as to which distributions show central tendencies and which do not are of critical importance for anomaly detection and anonymization . to appear plausibly human , an agent must not stray too far from the expected rate of requests , proportion of empty - referrer requests , referrer - to - host ratio , and node count and tree depth values for logical sessions . because these are log - normal distributions , agents can not deviate more than a multiplicative factor away from their means . at the same time , a clever agent must mimic the heavy - tailed distributions of the spacing between requests and duration of sessions ; too _ much _ regularity appears artificial .although our method of collection affords us a large volume of data , it suffers from several disadvantages which we are working to overcome in future studies .first , our use of the file extension ( if any ) in requested urls is a noisy indicator of whether a request truly represents a page fetch .we are also unable to detect whether any request is actually satisfied or not ; many of the requests may actually result in server errors or redirects .
both of these problems could be largely mitigated without much overhead by capturing the first packet of the server s response , which should indicate an http response code and a content type in the case of successful requests .this data set is inspiring the development of an agent - based model that replaces the uniform distributions of pagerank with more realistic distributions and incorporates bookmarking behavior to capture the branching behavior observed in logical sessions .the authors would like to thank the advanced network management laboratory at indiana university and dr. jean camp of the iu school of informatics for support and infrastructure .we also thank the network engineers of indiana university for their support in deploying and managing the data collection system .special thanks are due to alessandro flammini for his insight and support during the analysis of this data .this work was produced in part with support from the institute for information infrastructure protection research program .the i3p is managed by dartmouth college and supported under award number 2003-tk - tx-0003 from the u.s .dhs , science and technology directorate .this material is based upon work supported by the national science foundation under award number 0705676 .this work was supported in part by a gift from google .opinions , findings , conclusions , recommendations or points of view in this document are those of the authors and do not necessarily represent the official position of the u.s .department of homeland security , science and technology directorate , i3p , national science foundation , indiana university , google , or dartmouth college .m. bouklit and f. mathieu .backrank : an alternative for pagerank ? in _ www 05 : special interest tracks and posters of the 14th international conference on world wide web _ , pages 1122 - 1123 , new york , ny , usa , 2005 .acm .s. pandey , s. roy , c. olston , j. cho , and s. chakrabarti .shuffling a stacked deck : the case for partially randomized ranking of search engine results . in k. böhm , c. s. jensen , l. m. haas , m. l. kersten , p .- å . larson , and b. c. ooi , editors , _ proc . 31st international conference on very large databases ( vldb ) _ , pages 781 - 792 , 2005 .f. qiu , z. liu , and j. cho .analysis of user web traffic with a focus on search activities . in a. doan , f. neven , r. mccann , and g. j. bex , editors , _ proc . 8th international workshop on the web and databases ( webdb ) _ , pages 103 - 108 , 2005 .
we examine the properties of all http requests generated by a thousand undergraduates over a span of two months . preserving user identity in the data set allows us to discover novel properties of web traffic that directly affect models of hypertext navigation . we find that the popularity of web sites ( the number of users who contribute to their traffic ) lacks any intrinsic mean and may be unbounded . further , many aspects of the browsing behavior of individual users can be approximated by log - normal distributions even though their aggregate behavior is scale - free . finally , we show that users click streams can not be cleanly segmented into sessions using timeouts , affecting any attempt to model hypertext navigation using statistics of individual sessions . we propose a strictly logical definition of sessions based on browsing activity as revealed by referrer urls ; a user may have several active sessions in their click stream at any one time . we demonstrate that applying a timeout to these logical sessions affects their statistics to a lesser extent than a purely timeout - based mechanism .
we derive information on the physical properties of the solar atmosphere by interpreting the polarization profiles of spectral lines .extracting this information is not easy , because the observed profiles depend on the atmospheric parameters in a highly non - linear manner through the absorption matrix and the source function vector . to solve the problem , least - squares inversion techniques ( its ) based on analytical or numerical solutions of the radiative transfer equation were developed in the past .these methods compare the observed stokes profiles with synthetic profiles emerging from an initial guess model atmosphere .the misfit is used to modify the atmospheric parameters until the synthetic profiles match the observed ones .this yields a model atmosphere capable of explaining the measurements , within the assumptions and limitations of the model .the first its were proposed by and .many other codes have been developed since then .today we have an it for almost any application we may be interested in : lte or non - lte line formation , one - component or multi - component model atmospheres , photospheric or chromospheric lines , etc .these codes have been used intensively to study the magnetism of the solar atmosphere , and now they are essential tools for the analysis of spectro - polarimetric measurements .this paper concentrates on recent achievements and future challenges of its .additional information on its can be found in the reviews by del toro iniesta & ruiz cobo ( 1996 ) , socas - navarro ( 2001 ) , and del toro iniesta ( 2003 ) .there have been no significant improvements of classical least - squares its in recent years : no new algorithms have appeared , and existing codes have not been optimized for speed .however , the experience accumulated has been used to develop new codes for specific purposes .the complexity of both atmospheric and line - formation models has also increased .for example , a few years ago the most sophisticated inversions of stokes profiles from sunspots were based on one - component models with gradients of the physical parameters ( westendorp plaza et al .2001 ) , while two - component inversions are now performed on a routine basis .even more complex models are necessary to explain the net circular polarization ( ncp ) of spectral lines emerging from sunspot penumbrae . used micro - structured magnetic atmospheres , assuming that the penumbra is formed by optically thin magnetic fibrils .adopted an uncombed penumbral model with two different components representing inclined flux tubes embedded in a more vertical ambient field . both models successfully reproduce the anomalous stokes profiles observed near the neutral line , for visible and infrared ( ir ) lines considered separately .the synthetic ncps , however , are a bit smaller than the observed ones , implying that there is still room for improvement .
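before turning to these developments , the least - squares loop common to all such codes can be illustrated on a toy problem . the sketch below fits a gaussian absorption line ( standing in for the full radiative transfer synthesis ) with scipy s levenberg - marquardt solver ; the model and all parameter values are illustrative , not any published it .

```python
# toy illustration of the least-squares inversion loop : synthesize a
# profile from guess parameters , compare with the "observed" one , update ,
# repeat . a gaussian line stands in for the radiative transfer synthesis .
import numpy as np
from scipy.optimize import least_squares

wav = np.linspace(-0.5, 0.5, 135)          # wavelength offsets

def synth(params, wav):
    depth, shift, width = params
    return 1.0 - depth * np.exp(-((wav - shift) / width) ** 2)

rng = np.random.default_rng(0)
truth = np.array([0.6, 0.05, 0.12])
observed = synth(truth, wav) + 1e-3 * rng.standard_normal(wav.size)

def residuals(params):
    return synth(params, wav) - observed    # the misfit driving the update

guess = np.array([0.3, 0.0, 0.2])           # initial guess model
fit = least_squares(residuals, guess, method="lm")
print("inferred parameters:", fit.x)        # close to `truth` on this toy
```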
an important advance in the field has been the development of alternative methods for real - time analyses of large data sets .other significant achievements have come from the application of classical its to state - of - the - art observations .these issues are examined in more detail in the next subsections .conceptually , the simplest inversion is one that uses a look - up table .the idea is to create a database of synthetic stokes profiles from a large number of known model atmospheres , and look for the profile in the database which is closest to the observed spectrum .the corresponding model is adopted as representative of the physical conditions of the atmosphere from which the profile emerged . despite its simplicity , this method had seldom been put into practice until drew attention to principal component analysis ( pca ) as a means to accelerate the search in the look - up table . by virtue of pca , the stokes profiles can be expressed in terms of a few coefficients only .the comparison between observed and synthetic profiles is then performed very quickly , because the calculation does not involve the many wavelength points describing the full line profiles .pca lies at the heart of several codes developed in the last years .the database is the most critical component of any it based on look - up tables , as its size increases dramatically with the number of free parameters . to keep this number to a minimum , only milne - eddington ( me ) atmospheres have been used for pca inversions of photospheric lines ( e.g. , socas - navarro , lópez ariste , & lites 2001 ) . even under me conditions , the parameter space can not be sampled very densely .the discrete nature of the database introduces numerical errors , and so pca analyses are less accurate than least - squares me inversions .however , the method gives an idea of the quality of the fit in terms of the so - called pca distance .when this distance is large , the observed profile can not be associated with any profile in the database , making it possible to identify pixels that deserve closer attention .a nice feature of pca inversions is that the search algorithm is independent of the database .the synthesis of stokes profiles may be very time - consuming , but the inversion will always be fast .this opens the door to the analysis of lines for which atomic polarization effects are important .have developed a pca inversion code to exploit the diagnostic potential of the hanle effect in the d line at nm .the database is created using a line formation code which solves the statistical equilibrium of a he atom with 5 terms , in the presence of magnetic fields .coherences between fine - structure levels within each atomic term are accounted for to treat the zeeman and hanle regimes , including level crossing ( incomplete paschen - back effect ) .this code has been applied to prominences and spicules .the speed of pca inversions makes it possible to handle large amounts of data in real time . since 2004 , pca is used at the french - italian themis telescope to derive vector magnetic fields from mtr measurements of the nm lines .full maps are inverted in about 10 min , which is more or less the time needed to take the observations . at the telescope , pca is very useful for quick - look analyses , allowing one to select interesting targets or to continue the observation of interesting regions . while real - time analyses are appealing , it is important to keep in mind their limitations .
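in code , the pca look - up inversion just described reduces to a projection and a nearest - neighbour search . a minimal numpy sketch , assuming the database of synthetic profiles and their model parameters have already been computed :

```python
# minimal sketch of a pca look-up inversion : build eigenprofiles from a
# database of synthetic stokes profiles , then search in coefficient space .
import numpy as np

def build_pca(database, n_coeff=8):
    """database: (n_models, n_wavelengths) array of synthetic profiles."""
    mean = database.mean(axis=0)
    _, _, vt = np.linalg.svd(database - mean, full_matrices=False)
    basis = vt[:n_coeff]                   # leading eigenprofiles
    coeffs = (database - mean) @ basis.T   # each profile -> a few numbers
    return mean, basis, coeffs

def invert(profile, mean, basis, coeffs, models):
    """return the database model closest to `profile` and the pca distance."""
    c = (profile - mean) @ basis.T
    d = np.linalg.norm(coeffs - c, axis=1)  # fast : no wavelength loop
    best = np.argmin(d)
    return models[best], d[best]            # large distance -> flag the pixel
```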
in most cases , a proper interpretation of the observations will require more sophisticated its , which however can use the results of pca methods as initial guesses .another promising technique explored in recent years is stokes inversion based on artificial neural networks ( anns ) .the idea was introduced by carroll & staude ( 2001 ) and developed by . essentially , an ann is an interpolation algorithm .one starts by setting up the structure of the network , i.e. , the number of layers and the number of neurons in each layer .the input layer receives the observations ( the stokes profiles or a suitable combination thereof ) and the last layer outputs the unknown atmospheric parameters .the ann must be trained before applying it to real data . to this end , stokes profiles synthesized using different me atmospheres are presented to the ann in order to find the synaptic weights and biases of the neurons that return the model parameters used to compute the profiles .the training process is very slow but , once accomplished , the ann will invert a full map in a matter of seconds .indeed , anns are the fastest its available nowadays . for the moment , however , they have not been used in any scientific application .several strategies have been explored to optimize the performance of anns .it seems that the best choice is to invert one parameter at a time with a dedicated ann .finding all atmospheric parameters with a single ann is possible , but requires a larger number of neurons and the training process becomes very complicated . has shown that the mean field strengths inferred with the help of anns are reasonably accurate from a statistical point of view .on average they resemble those provided by me inversions .however , the errors can be very large for individual pixels , as evidenced by the large r.m.s .uncertainties ( fig . 8 in that paper indicates an uncertainty of 0.3 kg for fields of kg , i.e. , a relative error of 25% ; the error is even larger for weaker fields ) .this means that anns may be appropriate for quick - look analyses and other applications where high precision is not required . for detailed studies of physical processes , current anns seem not to be accurate enough .a limitation of both pca - based inversions and anns is the use of me atmospheres , which precludes the determination of gradients of physical parameters from the observed profiles .it remains to be seen whether these methods can be modified in order to reliably recover gradients along the line of sight ( los ) .as discussed in sect .3.3 , gradients appear to be essential for the analysis of observations at very high spatial resolution .major breakthroughs have come from the application of its to state - of - the - art observations , including spectro - polarimetry in the near ir , simultaneous observations of visible and ir lines , spectro - polarimetry of molecular lines , and observations at very high spatial resolution .the development of polarimeters for the ir , most notably the tenerife infrared polarimeter ( tip )
, has opened a new window with lines that offer excellent magnetic sensitivity and chromospheric coverage .one example is the triplet at nm , which has become an essential tool to investigate the upper chromosphere .the formation of the triplet is complex and not really well understood .however , since the triplet lines are nearly optically thin , me atmospheres provide a good description of their shapes .an inversion code specifically designed for nm , helix , was presented by .it is based on the unno - rachkovsky solution of the radiative transfer equation and includes an empirical treatment of the hanle effect . outside active regions , the linear polarization profiles of nm show the signatures of the hanle effect , which needs to be taken into account for correct retrievals of vector magnetic fields .helix implements the pikaia genetic algorithm , rather than the more common marquardt algorithm employed by other least - squares its .using the magnetic information obtained with helix , were able to trace individual coronal loops in an emerging flux region .they found upflows at the apex of the loops and downdrafts near the footpoints , which is what one expects for magnetic loops rising from deeper layers .other applications of the code have been presented by , , and .routine observations of the ir triplet of at nm are now possible with the mtr mode of themis and the spectro - polarimeter for infrared and optical regions ( spinor ) mounted on the dunn solar telescope ( dst ) of the national solar observatory at sacramento peak .the ca ir triplet lines are excellent diagnostics of the chromosphere , with the advantage that their interpretation is much simpler than that of other chromospheric lines such as h , k , and h .however , they still require non - lte computations .non - lte inversions of the ca ir triplet lines observed with spinor have been presented by and .although these analyses are very demanding in terms of computational resources , they hold great promise for quantitative diagnostics of the thermal and magnetic structure of the solar chromosphere .
[ fig1 caption : profiles of the pair of lines at 630 nm _ ( left ) _ and the lines at 1565 nm _ ( right ) _ observed simultaneously with polis and tip in the limb - side penumbra of ar10425 , near the neutral line . dots indicate the observations , solid lines the best - fit profiles resulting from a simultaneous inversion of the data . _ bottom : _ uncombed penumbral model inferred from the inversion . from left to right : temperature , los velocity , field strength , and field inclination in the background ( solid ) and the flux - tube ( dashed ) components ( from beck , bellot rubio , & schlichenmaier , in preparation ) . ]
simultaneous observations of visible and ir lines improve the accuracy of inversion results due to their different sensitivities to the various atmospheric parameters .several instruments are capable of such observations , including tip and polis ( polarimetric littrow spectrograph ) at the german vtt in tenerife , and spinor at the dst .figure 1 shows examples of stokes profiles of the lines at nm , nm , nm , and nm observed in the penumbra of ar10425 near the neutral line .
these profiles were taken strictly simultaneously with tip and polis on august 9 , 2003 .both the visible and ir lines exhibit large asymmetries and even pathological shapes , suggesting the presence of two magnetic components in the resolution element .we have carried out an analysis of these profiles in terms of an uncombed penumbral model using the code described by .the best - fit profiles from the simultaneous inversion are represented by the solid lines .the quality of the fits is certainly remarkable , but it should be stressed that even better fits are achieved from the inversion of only the visible or the ir lines . indeed , fitting both sets of lines simultaneously is much more difficult , because a model atmosphere appropriate for the visible lines may not be appropriate for the ir lines , and vice versa .measurements in two or more spectral regions thus constrain the range of acceptable solutions .the lower panels of fig . 1 show the uncombed model resulting from the inversion ( beck , bellot rubio , & schlichenmaier , in preparation ) .the dashed lines represent a penumbral flux tube with larger los velocities than the background atmosphere , indicated by the solid lines .the field is weaker and more horizontal in the tube .these inversions confirm that the uncombed model is able to explain the observed shapes of both visible and ir lines . at the same time , they allow us to determine the position and width of the penumbral tubes , which is not easy from visible or ir lines alone . other examples of the analysis of simultaneous measurements in the visible and ir are given by and .the new polarimeters have also extended our capabilities to observe molecular lines , providing increased thermal sensitivity .molecular lines are mostly seen in sunspot umbrae , because higher temperatures dissociate the parent molecules .usually , they show smaller polarization signals than atomic lines .two codes capable of inverting molecular lines are spinor and the one developed by .spinor has been used to analyze the oh lines at nm and nm , improving the determination of umbral temperatures .the second code has been employed to invert the cn lines at nm .these lines are very interesting because in the umbra they show large linear polarization signals but very small stokes signals , just the opposite behavior of atomic lines . finally , major progress has come from the application of its to high spatial resolution spectroscopic and spectro - polarimetric measurements .the main advantage of high spatial resolution is that the results are less dependent on filling factor issues .an example of the inversion of high spatial resolution stokes profiles is the work of , who derived the magnetic and thermal properties of umbral dots from observations taken with the la palma stokes polarimeter at a resolution of about 0.7 .another promising type of measurement is that provided by fabry - pérot interferometers such as the interferometric bidimensional spectrometer ( ibis ) and the telecentric solar spectrometer ( tesos ) , which has recently been equipped with the kis / iaa visible imaging polarimeter ( vip ) .
combined with adaptive optics systems , these instruments perform 2d vector spectro - polarimetry at high spatial , spectral , and temporal resolutions , which is necessary to investigate fast processes in large fields of view .an example of the inversion of 2d spectroscopic measurements with tesos is the derivation of the thermal and kinematic properties of a sunspot penumbra at different heights in the atmosphere .the angular resolution of these observations is 0.5 . in the future , significant progress may come from the routine inversion of lines showing hyperfine structure , such as nm and nm ( lópez ariste , tomczyk , & casini 2002 ) .these lines exhibit sign reversals in the core of stokes and multiple peaks in stokes and for weak fields .interestingly , the shape of the anomalies depends on the magnetic field strength , rather than on the magnetic flux . such an unusual behavior can be used to investigate the magnetism of the quiet sun .in fact , from the shape of the observed profiles it would be possible to determine directly the strength of the magnetic field , even in the weak field regime . to exploit the diagnostic potential of these lines , however , it is necessary to implement the appropriate zeeman patterns in existing its and to lower the noise level of current observations , which is barely enough to detect the subtle signatures induced by hyperfine structure .its have proven to be essential tools to characterize the properties of the solar atmosphere .their application to high precision spectro - polarimetric measurements , however , has started to raise concerns about the limitations of some spectral lines .this is an important problem that deserves further investigation .other challenges facing its in the near future include the implementation of more realistic model atmospheres and the development of strategies for the analysis of the large amounts of data to be delivered by upcoming instruments . have shown that it is possible to fit a given set of stokes profiles of the pair of lines at nm with very different field strengths by slightly changing the temperature stratification and the microturbulent velocity , for typical conditions of quiet sun internetwork regions .the reason is the different formation height of the two lines .this quite unexpected result suggests that one can not determine reliable internetwork field strengths from nm and nm without a prior knowledge of the actual temperature stratification .other lines such as nm and nm could provide the necessary information . we have found a similar problem with the nm lines even in the umbra , where the strong field regime applies . more specifically , we have detected a cross - talk problem between the stray light coefficient , the temperature , and the magnetic field strength and inclination .the results of one - component inversions of umbral profiles with the stray light contamination as a free parameter do differ from those in which the stray light factor is fixed to the value inferred from a simultaneous inversion of visible and ir lines .
with a mere difference of 7% in the stray light coefficient , the temperatures at from the two inversions may differ by up to , and the field strength by about g .the fits are equally good in both cases , so it is not possible to decide which inversion is better .it appears that the relatively small zeeman splitting of the nm lines does not allow one to clearly distinguish between larger stray light factors and weaker fields , which produces cross - talk among the various atmospheric parameters .given these concerns , more detailed studies of the limitations of visible and ir lines seem warranted , for a better understanding of the results obtained from them . to minimize the risk of cross - talk problems in the inversion , it is desirable to use simultaneous measurements in different spectral ranges . this will require modifications of current inversion codes to account for different stray light levels and different instrumental profiles in the different spectral ranges .the new observational capabilities , in particular the availability of simultaneous observations of visible and ir lines , offer us a unique opportunity to increase the realism of the atmospheric models implemented in existing its .the need for better models is indicated by the small ( sometimes systematic ) residuals observed in inversions of profiles emerging from complex magnetic structures .as an example , consider the uncombed penumbral model .right now we use two different lines of sight to represent the background and flux - tube atmospheres ( cf . ) , but this is a very simplistic approximation .the ambient field lines have to wrap around the flux tube , hence the properties of the background can not be the same far from the tube and close to it .this may have important consequences for the generation of asymmetrical stokes profiles . in much the same way , the flux tube is not always at the same height within the resolution element , because the magnetic field is not exactly horizontal .therefore , different rays will find the tube at different heights .finally , lines of sight crossing the center of the tube sense the properties of the tube over a larger optical - depth range than lines of sight crossing the tube at a distance from its axis .neither of these effects is modeled by current inversion codes . probably , the subtle differences between observed and best - fit profiles ( cf .[ fig1 ] ) would disappear with a more complex treatment of the magnetic topology of sunspot penumbrae .stokes polarimetry at the diffraction limit is needed to study the physical processes occurring in the solar atmosphere at their intrinsic spatial scales .we are pushing our technological capabilities to the limit by building grating spectro - polarimeters and filter magnetographs for diffraction - limited observations . on the ground , examples of already operational or upcoming state - of - the - art instruments include tip , polis , dlsp , spinor , ibis and tesos+vip . among space - borne instruments we have the spectro - polarimeter and filter polarimeter onboard solar - b , imax onboard sunrise , hmi onboard sdo , and vim onboard solar orbiter .these instruments will deliver data of unprecedented quality in terms of spatial and spectral resolution .we hope to further our understanding of the solar magnetism with them .
however , the success of this endeavor will critically depend on our ability to extract in an appropriate way the information contained in the observations .we not only want to investigate the morphology and temporal evolution of the various solar structures from diffraction - limited images , but also to derive their magnetic and kinematic properties accurately using polarization measurements .reliable determinations of vector magnetic fields call for least - squares inversions .the problem is the enormous data flows expected : classical least - squares its are considered to be too slow for real - time analyses of the observations .this is the reason why it is taken for granted that the most sophisticated inversions of the data will be based on me models .the question naturally arises as to whether or not me atmospheres are appropriate for the interpretation of stokes measurements at very high spatial resolution .
[ fig2 caption : _ left : _ stokes profiles of 525.06 nm emerging from an intergranular lane as computed from mhd simulations ( solid ) . dotted and dashed lines represent the best - fit profiles from a me inversion and a sir inversion with gradients , respectively . _ middle and right : _ stratifications of atmospheric parameters used for the spectral synthesis ( solid ) . the results of the me inversion and the sir inversion are given by the dotted and dashed lines , respectively . ]
to shed some light on this issue , i have used the mhd simulations of to synthesize the stokes profiles of the imax line ( nm ) emerging from a typical magnetic concentration in an intergranular lane .the atmospheric parameters needed for the calculation have been taken from a simulation run with average magnetic flux density of g .figure [ fig2 ] displays the atmospheric stratifications and the corresponding stokes profiles at 0.1 resolution ( solid lines ) .the dotted lines show the results of a me inversion of the synthetic profiles . as can be seen , the fits to stokes and are not very successful , due to the extreme asymmetries of the profiles .the atmospheric parameters inferred from the me inversion are some kind of average of the real stratifications , but the significance and usefulness of these average values are questionable when the parameters feature such strong variations along the los .an analysis of the same profiles with sir allowing for vertical gradients of field strength and velocity yields much better fits to stokes and ( dashed lines ) .although the fits can still be improved , the important point is that this simple sir inversion is able to recover the gradients of field strength and velocity with fewer free parameters than the me inversion ( 8 as opposed to 9 ) .this additional information could be essential to understand many physical processes , so it is important to have it .present - day computing resources are sufficient to determine gradients from the high spatial resolution observations delivered by _ grating instruments _ such as polis and the solar - b spectro - polarimeter ( solar - b / sp ; lites , elmore , & streander 2001 ) .a sir inversion of the four stokes profiles of 2 spectral lines ( 10 free parameters , 135 wavelength points , model atmosphere discretized in 41 grid points ) takes on a dual xeon workstation running at .optimizing the code , a cadence of may easily be reached .the real - time analysis of a full polis slit ( 450 pixels in ) would then require 20 such workstations .the analysis of solar - b / sp data ( 1000 pixels every ) would require 50 workstations . in both cases , the total cost would be a minor fraction of the cost of the instruments themselves .the situation is rather different for vector magnetographs .these instruments measure only a few wavelength points , i.e. , line profiles are not available .probably , the most we can do with this kind of data is a full least - squares me inversion .an additional complication is that the data rates will be huge , much larger than those expected from grating instruments .for example , hmi will observe about pixels every 80 .
to cope with such data flows , pca methods and anns are being proposed as the only option to invert the observations in real time .we have already mentioned that on average the results of these methods coincide with those from me inversions .however , since large errors occur for many individual pixels , it is clear that me inversions would be preferable over pca or ann analyses .but , how to perform me inversions of vector magnetograph data at the required speed ?the solution could be hardware inversion on field programmable gate arrays ( fpgas ) , which is about 10 times faster than software inversion depending on the frequency of the processor and the implementation of the algorithm . at the iaa , we are studying the feasibility of such an electronic inversion for the analysis of vim data .the first working prototype is expected to be ready by the end of 2007 .inversion techniques ( its ) have become essential tools to investigate the magnetism of the solar atmosphere .nowadays , they represent the best option to extract the information contained in high precision polarimetric measurements .the reliability and robustness of least - squares stokes inversions have been confirmed many times with the help of numerical tests .part of the community , however , is still concerned with uniqueness issues .these concerns will hopefully disappear with the implementation of more realistic model atmospheres . during the last years , major progress in the field has resulted from the application of its to state - of - the - art observations .the advent of spectro - polarimeters for the near ir has represented a breakthrough , allowing the observation of atomic and molecular lines that provide increased magnetic and thermal sensitivity , and extended chromospheric coverage .the potential of simultaneous observations of visible and ir lines for precise diagnostics of solar magnetic fields has just started to be exploited .visible and ir lines constrain the range of acceptable solutions , which is especially useful for the investigation of complex structures with different magnetic components and/or discontinuities along the los .finally , we have begun to invert spectro - polarimetric observations at very high spatial resolution , with the aim of reaching the diffraction limit of current solar telescopes ( 0.1 - 0.2 ) .high spatial resolution allows one to separate different magnetic components that might coexist side by side , thus facilitating the determination of their properties .the application of its to these observations is casting doubt on the capabilities of certain lines for investigating particular aspects of solar magnetism .
a detailed study of the limitations of spectral lines , in particular the often used pair at nm , seems necessary to clarify their range of usability .an obvious cure for any problem that might affect the observables is to invert visible and ir lines simultaneously .this will require modifications of current its to account for different instrumental effects in the different spectral ranges .perhaps the most important challenge facing its in the next years is the analysis of the enormous data sets expected from upcoming space - borne polarimeters .so far , the efforts have concentrated on the development of fast pca methods and anns for real - time inversions of the data .however , the unprecedented quality of these observations in terms of spectral and spatial resolution makes it necessary to explore the feasibility of more complex inversions capable of determining gradients of field strength and velocity along the los .tests with numerical simulations demonstrate the importance of gradients to reproduce the very large asymmetries of the stokes profiles expected at a resolution of 0.1 - 0.2 .current computational resources allow us to determine gradients from full line profiles observed with grating spectro - polarimeters such as the one onboard solar - b , at a very reasonable cost .gradients might also be recovered from high - resolution filtergraph observations if sufficient wavelength points are available .interestingly , real - time me inversions of data with limited wavelength sampling seem possible using fpgas .the feasibility of such electronic me inversions needs to be assessed . at the same time , it is important to continue the development of pca and ann methods to provide the more complex me inversions with good initial guesses .bellot rubio , l. r. , schlichenmaier , r. , & tritschler , a. 2006 , , 453 , 1117 bellot rubio , l. r. , tritschler , a. , kentischer , t. , beck , c. , & del toro iniesta , j. c. 2006 , 26th meeting of the iau , 16 - 17 august , 2006 , prague , jd03 , # 58 martínez pillet , v. , collados , m. , sánchez almeida , j. , et al . in asp conf .183 , high resolution solar physics : theory , observations , and techniques , ed .t. r. rimmele , k. s. balasubramaniam & r. r. radick ( san francisco : asp ) , 264 orozco suárez , d. , lagg , a. , & solanki , s. k. 2005 , in esa sp-596 , proc . intl . sci . conf . chromospheric and coronal magnetic fields , ed .d. e. innes , a. lagg , s. k. solanki & d. danesy ( noordwijk : esa ) , 59
inversion techniques ( its ) allow us to infer the magnetic , dynamic , and thermal properties of the solar atmosphere from polarization line profiles . in recent years , major progress has come from the application of its to state - of - the - art observations . this paper summarizes the main results achieved both in the photosphere and in the chromosphere . it also discusses the challenges facing its in the near future . understanding the limitations of spectral lines , implementing more complex atmospheric models , and devising efficient strategies of data analysis for upcoming ground - based and space - borne instruments , are among the most important issues that need to be addressed . it is argued that proper interpretations of diffraction - limited stokes profiles will not be possible without accounting for gradients of the atmospheric parameters along the line of sight . the feasibility of determining gradients in real time from space - borne observations is examined .
since adler s seminal paper , several groups have reported the formation and the propagation of concentration waves in bacteria suspensions . typically , a suspension of swimming bacteria such as _ e. coli _ self - concentrates in regions where the environment is slightly different such as the entry ports of the chamber ( more exposed to oxygen ) or regions of different temperatures .after their formation , these high concentration regions propagate along the channel , within the suspension .it is commonly admitted that chemotaxis ( motion of cells directed by a chemical signal ) is one of the key ingredients triggering the formation of these pulses .we refer to for a complete review of experimental assays and mathematical approaches to model these issues and to for all biological aspects of _ e. coli_. our goal is to derive a macroscopic model for chemotactic pulses based on a mesoscopic underlying description ( made of kinetic theory adapted to the specific run - and - tumble process that bacteria undergo ) .we base our approach on recent experimental evidence for traveling pulses ( see fig . [fig : wavechannel ] ) .these traveling pulses possess the following features which we are able to recover numerically : constant speed , constant amount of cells , short timescale ( cell division being negligible ) , and strong asymmetry in the profile .we describe as usual the population of bacteria by its density ( at time and position ) .we restrict our attention to the one - dimensional case due to the specific geometry of the channels .the cell density follows a drift - diffusion equation , combining brownian diffusion together with directed fluxes being the chemotactic contributions .this is coupled to reaction - diffusion equations driving the external chemical concentrations . 
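schematically , and anticipating the notation introduced below , the coupled system can be written as follows . this transcription is ours : the symbols for the density , the two chemical concentrations and the rates are assumptions chosen for readability , not the paper s original notation .

```latex
% schematic one-dimensional form of the model described in the text ;
% \rho is the cell density , S the chemoattractant , N the nutrient , and
% \alpha , \beta , \gamma are the generic rates mentioned below .
\begin{align*}
 \partial_t \rho &= D_\rho\,\partial_{xx}\rho
    - \partial_x\big( (u_S + u_N)\,\rho \big) ,\\
 \partial_t S &= D_S\,\partial_{xx} S + \beta\rho - \alpha S ,
    && \text{(secretion and degradation of the chemoattractant)}\\
 \partial_t N &= D_N\,\partial_{xx} N - \gamma\rho N .
    && \text{(consumption of the nutrient)}
\end{align*}
```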
in this paper we consider the influence of two chemical species , namely the chemoattractant signal , and the nutrient . although this is a very general framework , it has been shown in close but different conditions that glycine can play the role of the chemoattractant .similarly , glucose is presumed to be the nutrient .the exact nature of the chemical species has very little influence on our modeling process .in fact there is no need to know precisely the mechanisms of signal integration at this stage .the model reads as follows : the chemoattractant is assumed to be secreted by the bacteria ( at a constant rate ) , and is naturally degraded at rate , whereas the nutrient is consumed at rate .both chemical species diffuse with possibly different molecular diffusion coefficients .we assume a linear integration of the signal at the microscopic scale , resulting in a summation of two independent contributions for the directed part of the motion expressed by the fluxes and .we expect that the flux will contribute to gather the cell density and create a pulse .the flux will be responsible for the motion of this pulse towards higher nutrient levels .several systems of this type have been proposed and the most classical is the so - called keller - segel equation , for which fluxes are proportional to the gradient of the chemical : in the absence of nutrient , such systems enhance a positive feedback which counteracts dispersion of individuals and may eventually lead to aggregation .there is a large amount of literature dealing with this subtle mathematical phenomenon ( see and references therein ) .self - induced chemotaxis following the keller - segel model has been shown successful for modeling self - organization of various cell populations undergoing aggregation ( slime mold amoebae , bacterial colonies , ) . in the absence of a chemoattractant produced internally , this model can be used to describe traveling pulses .however it is required that the chemosensitivity function is singular at . following the work of nagai and ikeda , horstmann and stevens have constructed a class of such chemotaxis problems which admit traveling pulse solutions , assuming the consumption of the ( nutrient ) signal together with a singular chemosensitivity .we also refer to for a presentation of various contributions to this problem , and to for recent developments concerning the stability of traveling waves in some parabolic - hyperbolic chemotaxis system .in addition , the contribution of cell division to the dynamics of keller - segel systems ( and especially traveling waves ) has been considered by many authors ( see and the references therein ) .however these constraints ( including singular chemosensitivity or growth terms ) seem unreasonable in view of the experimental setting we aim at describing .an extension of the keller - segel model was also proposed in the seminal paper by brenner _ et al . _ for the self - organization of _e. coli_.
production of the chemoattractant by the bacteria triggers consumption of an external field ( namely the succinate ) .their objective is to accurately describe aggregation of bacteria along rings or spots , as observed in earlier experiments by budrene and berg that were performed over the surface of gels .one phase of the analysis consists in resolving a traveling ansatz for the motion of those bacterial rings .however the simple scenario they first adopt can not resolve the propagation of traveling pulses .the authors subsequently give two possible directions of modeling : either observed traveling rings are transient , or they result from a switch in metabolism far behind the front .the experimental setting we are based on is quite different from budrene and berg s experiments ( in particular regarding the dynamics ) : for the experiments discussed in the present paper , the bacteria swim in a liquid medium and not on agar plates. therefore we will not follow .on the other hand salman et al . consider a very similar experimental setting .however the model they introduce to account for their observations is not expected to exhibit pulse waves ( although the mathematical analysis would be more complex in its entire form than in ) .actually fig .5 in is not compatible with a traveling pulse ansatz ( because the pulse amplitude is increasing for the time of numerical experiments ) .traveling bands have also been reported in other cell species , and especially the slime mold _ dictyostelium discoideum _ .notice that the original model by keller and segel was indeed motivated by the observation of traveling pulses in _ dictyostelium _ populations under starvation .this question has been developed more recently by höfer et al . using the keller - segel model , as well as dolak and schmeiser and erban and othmer using kinetic equations for chemotaxis .according to these models , the propagating pulse waves of chemoattractant ( namely camp ) are sustained by an excitable medium .the cells respond chemotactically to these waves by moving up the gradient of camp .great efforts have been successfully performed to resolve the `` back - of - the - wave paradox '' : the polarized cells are supposed not to turn back when the front has passed ( this would result in a net motion outwards from the pulsatile centers of camp ) .although we are also focusing on the description of pulse waves , the medium is not expected to be excitable and the bacteria are not polarized .nevertheless , we will retain from these approaches the kinetic description originally due to alt and coauthors .this mathematical framework is well - suited for describing bacterial motion following a microscopic run - and - tumble process .a new class of models for the collective motion of bacteria has emerged recently .it differs significantly from the classical keller - segel model . rather than following intuitive rules ( or first order approximations ) , the chemotactic fluxes are derived analytically from a mesoscopic description of the run - and - tumble dynamics at the individual level and possibly internal molecular pathways , see . the scaling limit which links the macroscopic flux ( or similarly ) to the kinetic description is now well understood since the pioneering works .here we propose to follow the analysis in , which is based on the temporal response of bacteria , denoted by in appendix .namely we write these fluxes as : where is a ( small ) parameter issued from the microscopic description of motion .
namely is the ratio between the pulse speed and the speed of individual cells ( they differ by one order of magnitude at least according to experimental measurements ) . the function contains the microscopic features that stem from the precise response of a bacterium to a change in the environment ( see the appendix ) . it mainly results from the so - called response function at the kinetic level that describes how a single bacterium responds to a change in the concentration of the chemoattractant in its surrounding environment . we give below analytical and numerical evidence that traveling pulses exist following such a modeling framework . we also investigate the characteristic features of those traveling pulses in the light of experimental observations . the experiments presented in the present paper will be described in more detail in a subsequent paper . briefly , in a setup placed under a low - magnification fluorescence microscope maintained at , we fill polymer microchannels ( section : ca . , length : ca . ) with a suspension of fluorescent _ e. coli _ bacteria , strain rp437 , considered wild - type for motility and chemotaxis , transformed with a pze1r - gfp plasmid allowing quantitative measurement of the bacteria concentration inside the channel . we concentrate the cells at the extremity of the channel and monitor the progression of the subsequent concentration wave along the channel . in particular , we dynamically extract the shape of the front and its velocity ( see fig . [ fig : wavechannel ] ) . coupling the model with the formula results in a parabolic - type partial differential equation for the bacterial density , such as in the keller - segel system . it significantly differs from it however , as it derives in our case from a kinetic description of motion . especially , the flux is uniformly bounded , whereas the chemotactic flux in the keller - segel model generally becomes unbounded when the aggregative instability occurs , which is a strong obstacle to the existence of traveling pulses . it is usually impossible to compute traveling pulse solutions explicitly for general systems such as . to obtain qualitative properties is also a difficult problem : we refer to for examples of rigorous results . here , we are able to handle analytical computations in the limiting case of a stiff signal response function , when the fluxes are given by the expression ( see the appendix ) : in other words , a specific expression for in is considered in this section . it eventually reduces to as . we seek traveling pulses , in other words particular solutions of the form , , where denotes the speed of the wave . this reduces ( [ eq : full model ] ) to a new system with a single variable : we prescribe the following conditions at infinity : we impose without loss of generality . this means that the fresh nutrient is located on the right side , and thus we look for an increasing nutrient concentration . we expect that the chemoattractant profile exhibits a maximum coinciding with the cell density peak ( say at ) , and we look for a solution where changes sign only once , at . then , the fluxes - are expressed under the traveling wave ansatz as : integrating the cell density equation once , we obtain that the flux takes two values ( with a jump at ) , whereas the flux is constant . therefore the cell density is a combination of two exponential distributions : this combination of two exponentials matches perfectly with the numerical simulations ( fig . [ fig : front propagation ] ) .
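a minimal sketch of the two - exponential profile just described , with illustrative decay rates ( the actual rates are determined by the jump of the flux at the peak ) :

```python
import numpy as np

def pulse_profile(z, rho_max=1.0, lam_back=0.15, lam_front=0.45):
    """two-exponential traveling-pulse profile in the wave frame z = x - sigma*t.

    the decay rates differ on either side of the peak at z = 0 because the
    flux takes two distinct values there; the numbers are illustrative,
    not fitted to the experiments.
    """
    return np.where(z < 0.0,
                    rho_max * np.exp(lam_back * z),    # one tail of the pulse
                    rho_max * np.exp(-lam_front * z))  # the other, steeper tail

z = np.linspace(-40.0, 40.0, 4001)
rho = pulse_profile(z)
mass = rho.sum() * (z[1] - z[0])      # approximates rho_max*(1/lam_back + 1/lam_front)
print(mass, 1.0 / 0.15 + 1.0 / 0.45)  # asymmetry of the two tails
```

the total mass constraint mass = rho_max * ( 1 / lam_back + 1 / lam_front ) is one of the two relations used in the next paragraph to pin down the unknowns .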
+ to close the analysis it remains to derive the two unknowns : the maximum cell density and the speed , given the mass and the constraint that vanishes at ( because reaches a maximum ) . on the one hand , the total mass of bacteria is given by . on the other hand , the chemotactic field is given by , where the fundamental solution of the equation for is . to match the transition in monotonicity , the chemical signal should satisfy , that is , which leads to the following equation that we shall invert to obtain the front speed : from this relation we infer : we deduce from monotonicity arguments that there is a unique possible traveling speed . according to , the expected pulse speed does not depend upon the total number of cells when the response function is stiff . this can be related to a recent work by mittal _ et al . _ , where the authors observe experimentally such a fact in a different context ( see section [ sec : cluster ] below ) . in the case of a smooth tumbling kernel in , our model would predict a dependency of the speed upon the quantity of cells . but this analysis suggests that the number of cells is presumably not a sensitive biophysical parameter . observe that the speed does not depend on the bacterial diffusion coefficient either . therefore we expect to get the same formula if we follow the hyperbolic approach of in order to derive a macroscopic model . indeed , the main difference at the macroscopic level lies in the diffusion coefficient , which is very small in the hyperbolic system . nevertheless , the density distribution would be very different , being much more confined in the hyperbolic system . furthermore , scaling back the system to its original variables , we would obtain a pulse speed comparable to the individual speed of bacteria ( see appendix ) . this is clearly not the case . mittal _ et al . _ have presented remarkable experiments where bacteria _ e. coli _ self - organize in coherent aggregated structures due to chemotaxis . the cluster diameters are shown essentially not to depend on the quantity of cells being trapped . this experimental observation can be recovered from direct numerical simulations of random walks . we can recover this feature in our context using a model similar to , derived from a kinetic description . following section [ sec : analytical ] we compute the solutions of in the absence of nutrient ( assuming again a stiff response function ) .
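since the speed relation is monotone , a simple bisection suffices to invert it numerically ; the placeholder function below stands in for the ( elided ) speed equation and is an assumption for illustration only :

```python
import numpy as np

def bisect_speed(F, lo, hi, tol=1e-12, maxit=200):
    """invert a monotone speed relation F(sigma) = 0 by bisection.

    the monotonicity argument in the text guarantees a unique root,
    so bisection is enough once F changes sign on [lo, hi].
    """
    flo, fhi = F(lo), F(hi)
    assert flo * fhi < 0.0, "F must change sign on the bracket"
    for _ in range(maxit):
        mid = 0.5 * (lo + hi)
        fmid = F(mid)
        if abs(fmid) < tol or (hi - lo) < tol:
            return mid
        if flo * fmid < 0.0:
            hi = mid               # root lies in [lo, mid]
        else:
            lo, flo = mid, fmid    # root lies in [mid, hi]
    return 0.5 * (lo + hi)

# hypothetical monotone relation standing in for the front-speed equation
F = lambda sigma: np.tanh(sigma) - 0.3
print(bisect_speed(F, 0.0, 5.0))   # the unique admissible traveling speed
```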
observe that stationary solutions correspond here to zero - speed traveling pulses . the problem is reduced to finding solutions of the following system : we assume again that . this simply leads to , which is compatible with the postulate that changes sign only once , at ( the source being even ) . the typical size of the clusters is of the order , which does not depend on the total number of cells . this is in good agreement with experiments exhibited in . the fact that we can recover them from numerical simulations indicates that these stationary states are expected to be stable . cluster formation provides a good framework for investigating the situation where we relax the stiffness assumption of the response function . below is characterized by the stiffness parameter through ( see appendix ) . consider the caricatural model ( in nondimensional form ) : here denotes the range of action of the chemical signal . we investigate the linear stability of the constant stationary state where is the mean value over the domain , but the results contained in this paper do not depend on this particular choice . kinetic models of chemotaxis have been studied recently in . the turning kernel describes the frequency of changing trajectories , from to . it expresses the way external chemicals may influence cell trajectories . a single bacterium is able to sense time variations of a chemical along its trajectory ( through a time convolution whose kernel is well described since the experiments performed by segall _ et al . _ ) . for the sake of simplicity we neglect any memory effect , and we assume that a cell is able to sense the variation of the chemical concentration along its trajectory . following , this is to say that is given by the expression ( v ' \to v ) \mapsto \psi\left(\frac{ds}{dt}\right ) = \psi\left(\partial_t s + v ' \cdot \nabla_x s\right ) ( eq . [ eq : temporal response ] ) . the signal integration function is non - negative and decreasing , expressing that cells are less likely to tumble ( thus perform longer runs ) when the external chemical signal increases ( see fig . [ fig : tumbling ] for such a tumbling kernel in the context of the present application ) . it is expected to have a stiff transition at 0 , when the directional time derivative of the signal changes sign . our study in section [ sec : num ] boils down to the influence of the stiffness , by introducing a one - parameter family of functions . ( caption of fig . [ fig : tumbling ] : the tumbling probability is higher when moving to the left ( upper dashed line ) at the back of the pulse , whereas the tumbling probability when moving to the right is lower ( upper plain line ) , resulting in a net flux towards the right , as the pulse travels ( see fig . [ fig : limited nutrient profile ] ) . notice that these two curves are not symmetric w.r.t . the basal rate 1 , but the symmetry defect is of lower order ( ) . the peak location is also shown for the sake of completeness ( lower plain line ) . )
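the independence of the cluster size from the number of trapped cells can be reproduced with a direct run - and - tumble simulation . the sketch below assumes a quasi - stationary exponential signal kernel ( playing the role of the fundamental solution used above ) and a stiff two - valued response ; every numerical value is illustrative :

```python
import numpy as np

rng = np.random.default_rng(0)

def cluster_width(n_cells, steps=1500, dt=0.05, eps=0.3, srange=1.0):
    """run-and-tumble sketch of self-induced cluster formation (no nutrient).

    velocities are +1/-1; cells tumble at rate 1 - eps*sign(ds/dt), i.e. they
    tumble less when the self-generated signal increases along their run.
    s(x) = sum_j exp(-|x - x_j|/srange) is an assumed quasi-stationary field.
    """
    x = rng.normal(0.0, 2.0, n_cells)
    v = rng.choice([-1.0, 1.0], n_cells)
    for _ in range(steps):
        dxm = x[:, None] - x[None, :]                     # pairwise separations
        grad_s = -(np.sign(dxm) / srange
                   * np.exp(-np.abs(dxm) / srange)).sum(axis=1)
        ds_dt = v * grad_s                                # directional derivative
        tumble = rng.random(n_cells) < (1.0 - eps * np.sign(ds_dt)) * dt
        v = np.where(tumble, rng.choice([-1.0, 1.0], n_cells), v)
        x = x + v * dt
    return x.std()                                        # crude cluster size

for n in (100, 200, 400):
    print(n, cluster_width(n))   # the width stays roughly independent of n
```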
the main parameters of the model are the total number of bacteria which is conserved , the maximum speed of a single bacterium , and the mean turning frequency ( where denotes the dimension of space according to our discussion above ) . the main unknown is the speed of the traveling pulse , denoted by . we rescale the kinetic model into a nondimensional form as follows : we aim at describing traveling pulses in the regime . experimental evidence shows that the bulk velocity is much lower than the speed of a single bacterium . this motivates the introduction of the ratio . according to experimental measurements , we have . the kinetic equation reads : where . following the experimental setting ( see introduction , fig . [ fig : wavechannel ] and fig . [ fig : front propagation ] ) and the biological knowledge , we choose the scales , , and . therefore we rewrite this ratio as : where the nondimensional coefficient is of order 1 . to perform a drift - diffusion limit when ( see , and for other scaling limits , _ e.g. _ hyperbolic ) , we shall assume that the variations of around its mean value are of amplitude at most . in nondimensional form this reads as follows : . hence the chemotactic contribution is a perturbation of order of an unbiased process which is constant in our case because the turning kernel does not depend on the posterior velocity and the first - order contribution is required to be symmetric with respect to . this hypothesis is in agreement with early biological measurements . it is also relevant from the mathematical viewpoint as we are looking for a traveling pulse regime where the speed of the expected pulse is much slower than the speed of a single individual . this argues in favour of a parabolic scaling as performed in this appendix . the rest of this appendix is devoted to the derivation of the keller - segel type model in one dimension of space : unlike the classical keller - segel model ( used for instance by salman et al . ) , singularities cannot form ( excessively populated aggregates ) with the chemotactic flux given in below . this is because the latter remains uniformly bounded ( see also mittal _ et al . _ where clusters emerge which are plateaux and thus not as singular as described for the ks system in a mathematical sense ) . we start from the nondimensional kinetic equation : \epsilon^2 \partial_t f + \epsilon v \partial_x f = \left\{ \int |v'| \left ( 1 + \epsilon \phi_\delta[s](v')\right ) f(t , x , v')\ , dv '
- |v| \left ( 1 + \epsilon \phi_\delta[s](v)\right ) f(t , x , v) \right\} , which reads as follows : \epsilon^2 \partial_t f + \epsilon v \partial_x f = \left ( \int |v'| f(t , x , v')\ , dv ' - |v| f(t , x , v) \right ) + \epsilon \left ( \int |v'| \phi_\delta[s](v ' ) f(t , x , v')\ , dv ' - |v| \phi_\delta[s](v ) f(t , x , v)\right ) ( eq . [ eq : parabolic scaling ] ) . therefore the dominant contribution in the tumbling operator is a relaxation towards a uniform distribution in velocity at each position : as , where . notice that more involved velocity profiles can be handled , but this is irrelevant in our setting as the tumbling frequency does not depend on the posterior velocity . the space density remains to be determined . for this purpose we first integrate with respect to velocity and we obtain the equation of motion for the local density : to determine the bacterial flow we integrate against , which brings in the biased moment \int v \ , \phi_\delta[s](v ) f(t , x , v)\ , dv . we obtain formally , as : J = - D_\rho \ , \partial_x \rho + \rho \ , u_s . finally , the drift - diffusion limit equation reads in one dimension of space : \partial_t \rho = \left ( \int_{v\in [ -1,1 ] } |v|^2\ , dv \right ) \partial^2_{xx } \rho + \partial_x \left ( \rho \int_{v\in [ -1,1 ] } v \ , \phi_\delta\left ( \epsilon \partial_t s + v \partial_x s \right ) \ , \frac{dv}{2 } \right ) . to sum up , we have derived a macroscopic drift - diffusion equation , where the bacterial diffusion coefficient and the chemotactic flux are given by : D_\rho = \int_{v\in [ -1,1 ] } |v|^2\ , dv \ , , \quad u_s = - \int_{v\in [ -1,1 ] } v \ , \phi_\delta\left ( \epsilon \partial_t s + v \partial_x s \right ) \ , \frac{dv}{2 } ( eq . [ eq : kinflux ] ) . in the limiting case where the internal response function is two - valued : , the flux rewrites simply as : for the sake of comparison , we highlight the corresponding expressions which have been obtained by dolak and schmeiser . in , the authors perform a hyperbolic scaling limit leading to the following chemotactic equation for the density of bacteria : where is an anisotropic diffusion tensor and the chemotactic flux is given by : for some renormalizing factor . the two approaches do not differ that much at first glance ( especially when is two - valued ) . notice however that the `` small '' parameter does not appear at the same location : in front of the diffusion coefficient in the hyperbolic limit and inside the chemotactic flux in the parabolic limit . n. bournaveas , v. calvez , s. gutiérrez and b. perthame , _ global existence for a kinetic model of chemotaxis via dispersion and strichartz estimates _ , comm . partial differential equations * 33 * ( 2008 ) , 79 - 95 . k. a. landman , m. j. simpson , j. l. slater , and d. f. newgreen , _ diffusive and chemotactic cellular migration : smooth and discontinuous traveling wave solutions _ , siam j. appl . math . * 65 * ( 2005 ) , 1420 . s. park , p. m. wolanin , e. a. yuzbashyan , h. lin , n. c. darnton , j. b. stock , p. silberzan and r. austin , _ influence of topology on bacterial social interaction _ , proc . natl . acad . sci . usa * 100 * ( 2003 ) , 139105 . b. perthame , _ pde models for chemotactic movements : parabolic , hyperbolic and kinetic _ , appl . math . * 49 * ( 2004 ) , 539 - 564 . h. salman , a. zilman , c. loverdo , m. jeffroy , and a. libchaber , _ solitary modes of bacterial culture in a temperature gradient _ , phys . rev . lett . * 97 * ( 2006 ) , 118101 . j. e. segall , s. m. block and h. c. berg , _ temporal comparisons in bacterial chemotaxis _ , proc . natl . acad . sci . usa * 83 * ( 1986 ) , 8987 - 8991 .
the keller - segel system has been widely proposed as a model for bacterial waves driven by chemotactic processes . current experiments on _ e. coli _ have shown the precise structure of traveling pulses . we present here an alternative mathematical description of traveling pulses at a macroscopic scale . this modeling task is complemented with numerical simulations in accordance with the experimental observations . our model is derived from an accurate kinetic description of the mesoscopic run - and - tumble process performed by bacteria . this model can account for recent experimental observations with _ e. coli _ . qualitative agreements include the asymmetry of the pulse and the transition in the collective behaviour ( clustered motion versus dispersion ) . in addition , we can capture quantitatively the main characteristics of the pulse , such as the speed and the relative size of the tails . this work opens several experimental and theoretical perspectives . coefficients at the macroscopic level are derived from considerations at the cellular scale . for instance , the stiffness of the signal integration process turns out to have a strong effect on collective motion . furthermore , the bottom - up scaling allows us to perform preliminary mathematical analysis and write efficient numerical schemes . this model is intended as a predictive tool for the investigation of bacterial collective motion .
molecular communication is a paradigm that aims to develop communication systems at the nanoscale . in order to ensure efficiency and biocompatibility , the objective of this new communication paradigm is to develop communication systems by utilizing components that are found in nature . such a communication system will include at least one transmitter nanomachine which encodes information into molecules ( e.g. , ions , deoxyribonucleic acid [ dna ] molecules ) . these molecules will be transported to the receiver and decoded . example models of molecular communication that have been proposed include molecular diffusion of information molecules and those using active carriers such as bacteria . enabling communication at the nanoscale and interconnecting the molecular nanonetworks to the internet could provide opportunities for a new generation of _ smart city _ and _ health - care _ applications . examples of these applications include : * _ environmental sensing _ : the future smart city envisions more accurate and efficient sensing techniques for the environment . this sensing process may include early detection of pathogens that may affect the crops or livestock . since bacteria are found widespread within the environment , they can serve as information carriers between nano sensors , and collect information at a fine granular scale . * _ biofuel quality monitoring _ : one alternative source of energy is the conversion of biomass into fuel . recently , scientists have experimentally shown how engineered bacteria could turn glucose into hydrocarbons that are structurally identical to commercial fuel . therefore , utilizing bacterial nanonetworks could improve the quality of biofuel production , and at the same time provide accurate quality control . * _ personalized health - care _ : the process of early disease detection within the human body is a major challenge . detecting diseases at an early stage can provide opportunities for curing the condition and preventing further spreading of the disease . since bacteria are found in the gut flora , embedding nanonetworks into the intestine can provide fine granular sensing at the molecular scale . besides sensing , the bacterial nanonetworks can also provide new methods for targeted drug delivery . in this article , we focus on the use of bacteria to transport dna - encoded information between the nanomachines . in a bacterial nanonetwork , bacteria are kept inside the nanomachines and then released to commence the information transmission process . while numerous works have investigated the feasibility of bacterial nanonetworks ( e.g. , ) , the communication models used in the earlier works have not considered bacterial social behavior . bacteria usually co - exist as a community , which at times could consist of multi - cellular communities . the community structure enables bacteria to cooperate and co - exist in varying environmental conditions . however , extreme environmental conditions ( e.g. , scarce resources ) could also lead to competitive and non - cooperative behavior among the bacteria species . this usually results in each species developing strategies for survival .
since bacterial nanonetworks will rely on bacteria carrying messages between the different nanomachines , the social properties can affect the performance and reliability of the bacterial nanonetworks . we provide an overview of various bacterial social behaviors and the challenges as well as opportunities they create in the context of the reliability of communication in bacterial nanonetworks . an analogy can be drawn between the social - based bacterial nanonetworks and the social - based delay - tolerant networks ( dtns ) , where the social behavior of people can affect the performance of mobile ad hoc networks . the key contributions of this article can be summarized as follows : * we review and analyze the impact of bacterial social behaviors on the performance of the nanonetworks . we describe the various challenges and opportunities that arise due to the bacterial social behavior in such networks . * using computer simulations , we demonstrate the use of bacterial cooperative social behavior that helps to entice the bacterial motility towards the destination . the results from the simulations show that the cooperation can substantially improve the network performance . * this article creates a new direction of research to address the challenges in future molecular nanonetworks that utilize bacteria as information carriers . in particular , the article provides a guideline to exploit bacterial social properties in a dynamic environment to improve the communication performance . the remainder of the article is organized as follows : after an overview of the bacterial nanonetworks , we provide an introduction to the communication mechanisms among different bacterial species . this is followed by a review of the social properties of the bacterial community . then , we describe the challenges and opportunities that arise due to the bacterial social behavior from the perspective of the communication performance in bacterial nanonetworks . we present results from simulation studies to evaluate the effect of dynamic social behavior on the performance of the communication nanonetwork . finally , we highlight several future research scopes in this emerging multi - disciplinary field before we conclude the article . the social behaviors of bacteria result from their communication capabilities . this communication , in turn , results from bacterial linguistics , which is enabled by emitting various biochemical agents . the communication process of the bacteria is not only limited to bacteria of the same species , but it can also extend to multi - colony and inter - species communication . recent studies have identified that inter - species message - passing occurs quite regularly in multi - species _ biofilms _ . the biofilms refer to surface - attached , densely populated communities formed by the bacteria . for instance , a larger population of antibiotic - resistant cells within a bacterial population can emit chemical signals ( e.g. , small molecules ) to increase antibiotic resistance in less resistant cells . these small molecules are not limited to protecting the cells within the same species , but can also extend to other species . as described in the introduction , our aim is to utilize bacteria as an information carrier between the nanomachines in order to enable molecular communication . however , the uncertain conditions as well as the non - cooperative social behavior could affect the bacteria carrying the message .
on the other hand , the cooperative behavior could be beneficial for the performance of the nanonetworks . the cooperative behavior could lead to population survivability , which implies that this will support the bacteria carrying the message to successfully arrive at the destination nanomachine . as described earlier , an example of this is when the cooperation allows the bacteria to form fluidic boundaries in order to protect other bacteria in the population . a key issue , however , is the non - cooperative behavior of the bacteria which could affect the information transmission probability . in this situation , the bacteria released from the transmitter nanomachine , which are carrying the message , are vulnerable and may not successfully arrive at the receiver nanomachine . in the following we list a number of challenges and opportunities arising due to the social behavior of bacteria that can affect the communication performance . bacteria , similar to most organisms , rely on environmental nutrients for survival . the previous section described how cooperative behavior between the bacteria can enable nutrients to be discovered ( e.g. , sensing ) as well as fairly delivered ( e.g. , foraging ) . however , we have also seen that depletion of nutrients can lead to the bacterial species switching towards negative behavior . this will not only affect multi - species bacteria , where one species may try to kill off another species , but also bacteria within the same species . in the context of molecular communication , the bacterial species that is killed may be responsible for the information transfer . therefore , the design of communication between the nanomachines will need to consider fluctuations of nutrients in the environment , and obtain solutions to cope with the bacteria that are trying to eliminate each other . one approach to mitigate this situation and turn it into an opportunity is to ensure a stable environment . a stable environment with sufficient nutrients minimizes the competition among the bacteria and hence improves communication reliability . the nanomachines that will release the bacteria with the embedded information could also encapsulate nutrients from the nanomachine . these nutrients can be released at the same time as the bacteria with the encoded information . once the nutrients are diffused into the environment , the bacteria with the encoded information can reproduce in numbers . this will enable the species of the bacteria carrying the messages to possibly outnumber the other competing species , in the event they decide to release toxins to kill the other species . although the changes in the quantity of nutrients can affect the environment , this is not the only factor that can change the social interactions of the bacteria . as has been described earlier , certain bacteria can switch to selfish behavior in order to seek individual benefit . the learning capabilities of the bacteria may also lead to the behavioral switching . for instance , if the bacteria are initially cooperating and sense a high enough density of population within the environment , they may decide to switch their behavior believing that their change may not be detected by the general population . in such a case , if a nanonetwork is embedded within a biofilm and this biofilm structure fails , this could lead to a full breakdown of the nanonetwork . one solution to mitigate this problem is to ensure that the environment contains an optimum density of nanomachines forming the network so that the network will be robust
under failures . therefore , in the event of biofilm breakdown , the nanonetwork may be subdivided into sub - networks . previous discussions have described the destructive effects of non - cooperative bacteria on the communication performance . one method to improve the communication performance is to apply antibiotics within the environment to kill off bacteria that are harmful . however , the bacteria could develop resistance to the antibiotics , and this resistance could be through a gene within a plasmid . through the conjugation process , these plasmids with resistance to antibiotics could be passed between the bacteria . note that the conjugation process is generally beneficial for bacterial nanonetworks since it increases the quantity of messages that could be delivered to the destination nanomachine . since this could be negatively utilized by the harmful bacteria , both the positive and negative effects of conjugation should be taken into consideration when designing bacterial social networks . in order to curb the non - cooperative behaviors , the nanomachines within the environment could also dispense antibiotics . this will require the bacteria carrying the plasmid with the encoded information to also possess the antibiotic resistance genes . in the event that the bacteria carrying legitimate plasmids are conjugated with the other bacteria , they will also transfer the plasmids with the antibiotic - resistant genes . when the antibiotics are diffused before any transmission , this will ensure that the non - cooperators without the resistance genes are eliminated from the environment . therefore , this will lower the probability of conjugating with the non - cooperators . in order to observe the impact of cooperative bacterial social interaction , in this section we evaluate the communication performance in a bacterial nanonetwork through simulations ( we use matlab to develop the simulator ) . we compare and analyze the results with the bacteria - based nanonetwork approaches that have been proposed in the existing literature ( e.g. , ) , where cooperation is not considered . we simulate a network with two nanomachines , which are the source nanomachine and the receiver nanomachine , separated by some distance as shown in fig . [ fig : bacsimnets ] . we consider _ e. coli _ bacteria as the information carrier . for realistic modeling purposes , we use simulation parameters similar to those used in earlier studies ( e.g. , , ) by mathematical biologists who have developed the models based on experimental results . since the data message ( encoded in a dna plasmid ) is embedded in the bacteria , each bacterium can be considered as an individual data packet . we utilize bacterial chemotaxis in order to attract the bacteria to swim toward the destination nanomachine . this is achieved by the destination nanomachine releasing the chemoattractant ( e.g. , nutrient ) . bacteria move through a biased random _ running _ and _ tumbling _ process and eventually carry the plasmid to the destination . we assume that the source nanomachine transmits in a time - division manner , and if the bacterium does not reach the destination within a fixed timeout duration , the information is considered to be lost . we observe the reliability of the network in terms of the successful transmission probability defined by , where and denote the total number of bacteria released from the source nanomachine and the number of bacteria that reach the destination nanomachine , respectively .
among the different bacterial social interactions , here we consider the cooperative communication process by means of quorum sensing ( qs ) . the cooperative process is established when the bacterium observes increasing chemoattractant density and notifies the others through diffusion of cooperative signaling molecules . the objective is to entice the bacteria carrying the message to bias their directional mobility towards the destination . we assume that in our environment there is no supporting architecture ( e.g. , nanotube ) between nanomachines and the bacteria are freely swimming in the medium . we model the bacterial mobility as follows : where denotes the position of the bacterium at time within the timeout duration ; and denote the direction and magnitude of the bacterial movement , is the step size of the bacterium during one time interval , and is an i.i.d . gaussian random vector representing the tumbling effect and the _ brownian motion _ . brownian motion refers to the random collision of molecules in the medium . due to brownian motion , even in running mode the direction of the bacterium will change in a random manner . the binary decision variable determines whether the bacterium will run or tumble at a time instance . at each time instance , the bacterium decides whether it will run or tumble based on its own ability to make a decision and the information obtained from the environment ( e.g. , from other bacteria ) . if the sequence of decisions for eventually leads the bacterium to the destination nanomachine within the timeout duration , the information transmission process is considered to be successful . ( a minimal monte carlo sketch of this mobility model is given below , after the discussion of the simulation results . ) note that , as mentioned in section [ subsec : coop ] , bacteria can release cooperative molecules , and by sensing the density of the molecules ( released by other bacteria ) , a bacterium biases its mobility accordingly . for the case of cooperative communication , the decision sequence is determined based on both the chemoattractant density and the cooperative molecular signals released by individual bacteria during the qs process . when there is no interaction among the bacteria , the decision sequence is determined based only on the density of chemoattractant observed from the medium . we consider a steady - state chemoattractant density ( i.e. , the density of the chemoattractant will not change over time ) . however , the density observed by the bacterium will vary according to the distance between the current position of the bacterium and the chemoattractant source ( e.g. , receiver nanomachine ) . we also assume a stable environment with sufficient nutrients . therefore , this will lead to minimal non - cooperative behaviors and competition for nutrients among the bacteria , and the bacterial behavior will not change during the communication process . in fig . [ fig : distance_eff ] , we vary the distance between the source and destination nanomachines . for a fixed timeout duration , when the distance between the nanomachines is large , the bacteria are unable to reach the destination , which reduces the probability of successful transmission . note that , even when there is no cooperation [ e.g. , dotted curve in fig . [ fig : distance_eff ] ] , a small number of bacteria can still reach the destination using their own sensing abilities ( e.g.
, utilizing the chemotaxis process ) . cooperative communication among bacteria helps to attract them toward the chemoattractant gradient . for example , a bacterium obtains additional information about the chemoattractant sources from the other bacteria in the environment and adapts its decision of running and tumbling accordingly . however , at larger distances between the source and destination , the effect of cooperation is less prominent due to the fact that the cooperative signaling molecules spread too far and have minimal influence on the bacteria sensing . in fig . [ fig : bacteria_dyn ] , we vary the number of bacteria and observe the effect of cooperative signaling molecules on the communication performance in terms of successful transmission probability . we define the relative gain of cooperation as , where and denote the observed network reliability due to cooperative communication and without cooperation , respectively . note that although increasing the number of cooperating bacteria improves the relative gain , there comes a point when the cooperative behavior leads to a declining gain . for example , when the number of bacteria , the relative gain of cooperation is around , which is considerably less than that when the population size ( e.g. , ) . in such a scenario , although increasing the number of bacteria results in an increased number of bacteria at the receiver , the reliability ( in terms of successful transmission probability ) does not increase substantially , which leads to a lower gain . fig . [ fig : pop_dyn ] shows the individual bacterial behavior with different chemoattractant density profiles . we consider a situation where a fraction of the population cooperates by producing cooperative signaling molecules that bias the bacteria toward the chemoattractant gradient . however , the rest of the population uses only the chemoattractant gradient . note that increasing the distance limits the success probability . when the nutrient density is reduced to half ( i.e. , from to , as represented in the dotted curves ) , the success probability drops significantly , especially for shorter distances . for shorter distances , the bacteria are close to the chemoattractant source . as a result , the effect of changes in the cooperative signaling molecules is more prominent . in low - density scenarios , the bacteria are unable to observe the gradient of the chemoattractant nutrient ( and hence also fail to signal and cooperate with the other bacteria ) , which leads to a lower success rate . an interesting observation is that the percentage of bacteria that do not participate in the cooperation , but are able to reach the destination , is higher compared to the percentage of bacteria that cooperate and reach the destination . we can explain this fact as follows : the bacteria that are not part of the cooperative group can still benefit from the diffused molecules released by the cooperative bacteria . this demonstrates how the non - cooperative bacteria can benefit from the cooperative bacteria . in addition to their own sensing capability , the bacteria that do not cooperate also benefit from the information diffused by others . as a result , a higher percentage of bacteria will arrive at the destination . note that even though certain bacteria diffuse cooperative signaling molecules to the other bacteria , this does not guarantee that those bacteria will reach the destination . fig . [ fig : chemo_den ] shows the communication performance under varying chemoattractant density .
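the following is the monte carlo sketch of the mobility model announced above : the run / tumble decision is biased by the sensed chemoattractant , and a coop term crudely stands in for the cooperative signaling molecules . all parameter values are illustrative , not those of the article 's matlab simulator :

```python
import numpy as np

rng = np.random.default_rng(1)

def success_probability(n_bact=200, dist=60.0, steps=2000, step_len=1.0,
                        noise=0.5, coop=0.0):
    """estimate p_s = n_r / n_t for the biased run-and-tumble link model."""
    pos = np.zeros((n_bact, 2))                   # all released at the source
    goal = np.array([dist, 0.0])                  # receiver nanomachine
    heading = rng.uniform(0.0, 2.0 * np.pi, n_bact)
    arrived = np.zeros(n_bact, dtype=bool)
    for _ in range(steps):
        to_goal = goal - pos
        d = np.linalg.norm(to_goal, axis=1) + 1e-12
        # sensed chemoattractant increases toward the receiver, so run
        # (keep the heading) more often when moving up the gradient
        up_gradient = (np.cos(heading) * to_goal[:, 0]
                       + np.sin(heading) * to_goal[:, 1]) / d
        p_run = np.clip(0.5 + (0.3 + coop) * up_gradient, 0.05, 0.95)
        tumble = rng.random(n_bact) > p_run
        heading = np.where(tumble,
                           rng.uniform(0.0, 2.0 * np.pi, n_bact), heading)
        pos += step_len * np.c_[np.cos(heading), np.sin(heading)]
        pos += noise * rng.normal(size=pos.shape)  # brownian contribution
        arrived |= d < 2.0                         # within reach of the receiver
    return arrived.mean()

p0 = success_probability(coop=0.0)   # chemotaxis only
p1 = success_probability(coop=0.2)   # with cooperative signaling bias
print(p0, p1, (p1 - p0) / max(p0, 1e-9))   # relative gain of cooperation
```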
as fig . [ fig : density_vs_prob ] shows , increasing the density of the chemoattractant leads to a higher success rate of information transfer . in high - density conditions , the bacteria are able to sense the gradient of the chemoattractant more rapidly , which enables them to reach the destination successfully . however , we can still see the benefits of cooperative signaling , which helps to bias the directional movement of the bacteria toward the destination . the relative gain in terms of the successful transmission probability ( due to cooperation ) with varying chemoattractant density is illustrated in fig . [ fig : density_vs_gain ] . under low - density conditions , the effect of cooperative communication is more significant . we can attribute this to the fact that under low chemoattractant density conditions , the bacteria are unable to sense the chemoattractant gradient efficiently , especially when they are far from the chemoattractant source . in such cases , the cooperative signaling molecules aid and compensate for the low chemoattractant density , leading to higher gains . although cooperative signaling molecules help the bacteria compared to the case when there is no cooperation , their influence on the bacteria is far less compared to a situation with a high density of chemoattractant . among the numerous research opportunities that will emerge from this new multi - disciplinary research field , we list a few examples below . the first research opportunity is the increased research synergy between ict researchers and molecular biologists , in particular for the development of wet lab experimental platforms . the _ nsf monaco _ project has begun developing an experimental platform that brings together communication engineers , microfluidic experts , and molecular biologists . however , the project is only limited to validating bacterial nanonetworks using molecule - based communication . therefore , future wet lab experimental validations can take on the dna - based communication in bacterial nanonetworks . by developing experimental platforms for dna - based communication , a new collaborative synergy can be established between ict researchers , experimental bacteriologists , and biotechnologists . the experimental validations can lead to the potential applications that have been described in the introduction , e.g. , environmental sensing , biofuel quality monitoring , or new solutions for personalized health - care . another research prospect is to integrate the bacterial nanonetworks with the established solutions found in present nanotechnology research and/or industrial products . a number of research efforts have been dedicated to producing nanoscale components that can be assembled into nanomachines . these nanomachines can perform limited functionalities such as sensing and releasing drug payloads to the diseased cells .
incorporating the bacterial delivery process through the nanonetworks can enhance the probability of delivering the elements to the targeted location . lastly , the area of bacterial nanonetworks along with molecular communication can play a major role in the field of synthetic biology . the objective of synthetic biology is the artificial creation of biological components and systems that are tailored to perform specific functions . therefore , using existing knowledge and tools in synthetic biology can help design tailored bacterial nanonetworks that have a certain performance reliability for a specific application . the use of bacteria as an information carrier has been proposed for molecular communication . utilizing the bacterial properties , such as their ability to carry plasmids ( these could represent the information that has been encoded ) and their mobility , could enable information to be transferred between the different nanomachines . similar to most organisms , bacteria also exhibit social properties , which include both cooperative and non - cooperative behavior . in this article , we have presented an overview of the various communication mechanisms as well as the social properties of bacteria . we have discussed the challenges that arise due to these mechanisms , which can affect the information transfer performance in the bacterial nanonetworks . in particular , the challenges due to non - cooperation and the opportunities due to cooperation have been discussed . these opportunities can be exploited in designing nanomachines . for example , the cooperative and non - cooperative behaviors can be modeled using _ game theory _ , and the bacterial nanonetworks can be engineered to achieve the optimal outcome , for example , by using _ mechanism design _ . simulation results have been presented to evaluate the impact of bacterial cooperative behavior in improving the information transfer performance in a single - link nanonetwork . the results have shown improvement in the communication performance for varying distances between the source and destination nanomachines , as well as in situations when the chemoattractant density is varied . the solutions to the fundamental research challenges in conventional ad hoc networks , such as social - based dtns , can provide lessons for analyzing communication networks at the nanoscale ( e.g. , bacterial nanonetworks ) . the commonality between these two different networks is that the nodes and the organisms , respectively , which carry the information , exhibit social behavior . a new direction of research to address the research challenges in future social - based molecular nanonetworks can thus be envisaged . this work was supported in part by a discovery grant from the natural sciences and engineering research council of canada ( nserc ) , in part by the academy of finland fidipro program `` nanocommunication networks , '' 2012 - 2016 , and in part by the academy research fellow program ( project no . 284531 ) . t. p. howard , s. middelhaufe , k. moore , c. edner , d. m. kolak , g. n. taylor , d. a. parker , r. lee , n. smirnoff , s. j. aves _ et al . _ , `` synthesis of customized petroleum - replica fuel molecules by targeted modification of free fatty acid pools in _ escherichia coli _ , '' _ proceedings of the national academy of sciences _ , vol . 110 , no . 19 , pp . 7636 - 7641 , 2013 . m. gregori and i.
akyildiz , `` a new nanonetwork architecture using flagellated bacteria and catalytic nanomotors , '' _ ieee journal on selected areas in communications _ , vol .28 , no . 4 , pp .612619 , may 2010 .e. b. jacob , y. shapira , and a. i. tauber , `` seeking the foundations of cognition in bacteria : from schrdinger s negative entropy to latent information , '' _ physica a : statistical mechanics and its applications _ , vol .359 , no . 0 , pp . 495524 , 2006 .r. popat , s. a. crusz , m. messina , p. williams , s. a. west , and s. p. diggle , `` quorum - sensing and cheating in bacterial biofilms , '' _ proceedings of the royal society b : biological sciences _ ,1748 , pp . 47654771 , 2012 .j. chen , x. zhao , and a. sayed , `` bacterial motility via diffusion adaptation , '' in _ conference record of the forty fourth asilomar conference on signals , systems and computers ( asilomar ) _, 2010 , pp . 19301934 .monowar hasan ( s13 ) is currently working toward his m.sc .degree in the department of electrical and computer engineering at the university of manitoba , winnipeg , canada .he has been awarded the university of manitoba graduate fellowship .monowar received his b.sc .degree in computer science and engineering from bangladesh university of engineering and technology ( buet ) , dhaka , in 2012 .his current research interests include internet of things , wireless network virtualization , and resource allocation in 5 g cellular networks .he served as a reviewer for several major ieee journals and conferences .ekram hossain ( s98-m01-sm06 ) is currently a professor in the department of electrical and computer engineering at university of manitoba , winnipeg , canada .he received his ph.d . in electrical engineering from university of victoria , canada , in 2001 .hossain s current research interests include design , analysis , and optimization of wireless / mobile communications networks , cognitive radio systems , and network economics .he has authored / edited several books in these areas ( http://home.cc.umanitoba.ca/~hossaina ) .hossain serves as the editor - in - chief for the _ ieee communications surveys and tutorials _ , an editor for _ ieee wireless communications_. 
also , currently he serves on the ieee press editorial board .previously , he served as the area editor for the _ ieee transactions on wireless communications _ in the area of `` resource management and multiple access '' from 2009 - 2011 , an editor for the _ ieee transactions on mobile computing _ from 2007 - 2012 , and an editor for the _ ieee journal on selected areas in communications _ - cognitive radio series from 2011 - 2014 .hossain has won several research awards including the university of manitoba merit award in 2010 and 2014 ( for research and scholarly activities ) , the 2011 ieee communications society fred ellersick prize paper award , and the ieee wireless communications and networking conference 2012 ( wcnc12 ) best paper award .he is a distinguished lecturer of the ieee communications society for the term 2012 - 2015 .hossain is a registered professional engineer in the province of manitoba , canada .sasitharan balasubramaniam ( sm14 ) received his bachelor ( electrical and electronic engineering ) and ph.d .degrees from the university of queensland in 1998 and 2005 , respectively , and the master s ( computer and communication engineering ) degree in 1999 from queensland university of technology .he is currently a senior research fellow at the nano communication centre , department of electronic and communication engineering , tampere university of technology ( tut ) , finland .sasitharan was the tpc co - chair for _ acm nanocom _ 2014 and _ ieee monacom _ 2011 .he is currently an editor for _ ieee internet of things _ and elsevier s _ nano communication networks_. his current research interests include bio - inspired communication networks , as well as molecular communication .yevgeni koucheryavy ( sm08 ) is a full professor and lab director at the department of electronics and communications engineering at the tampere university of technology ( tut ) , finland .he received his ph.d .degree ( 2004 ) from the tut .yevgeni is the author of numerous publications in the field of advanced wired and wireless networking and communications .his current research interests include various aspects in heterogeneous wireless communication networks and systems , the internet of things and its standardization , and nanocommunications .yevgeni is an associate technical editor of _ ieee communications magazine _ and editor of _ ieee communications surveys and tutorials_.
molecular communication promises to enable communication between nanomachines with a view to increasing their functionalities and opening up new possible applications . due to some of their biological properties , bacteria have been proposed as a possible information carrier for molecular communication , and the corresponding communication networks are known as _ bacterial nanonetworks _ . the biological properties include the ability for bacteria to mobilize between locations and carry the information encoded in deoxyribonucleic acid ( dna ) molecules . however , similar to most organisms , bacteria have complex social properties that govern their colony . these social characteristics enable the bacteria to evolve through various fluctuating environmental conditions by utilizing cooperative and non - cooperative behaviors . this article provides an overview of the different types of cooperative and non - cooperative social behavior of bacteria . the challenges ( due to non - cooperation ) and the opportunities ( due to cooperation ) these behaviors can bring to the reliability of communication in bacterial nanonetworks are also discussed . finally , simulation results on the impact of bacterial cooperative social behavior on the end - to - end reliability of a single - link bacterial nanonetwork are presented . the article concludes by highlighting the potential future research opportunities in this emerging field . molecular communication , bacterial nanonetwork , social behavior .
the recurrence quantification analysis ( rqa ) quantifies structures found in recurrence plots ( rps ) to yield a deeper understanding of the underlying process of a given time series . even though this method is widely applied , the scarce mathematical description is a main drawback . first steps in the direction of an analytical description were made by faure et al . , who gave analytical results for the cumulative distribution of diagonals in rps in the case of chaotic maps and linked the slope of this distribution to the kolmogorov - sinai entropy . gao and cai related the distribution to the largest lyapunov exponent and the information dimension . + in this paper we give an analytical expression for the distribution of diagonals in rps in the case of stochastic processes and extend the results of to chaotic flows . further , we compare our approach with the grassberger - procaccia ( g - p ) algorithm and show some advantages of the rp method when estimating some invariants of the dynamics , such as the correlation entropy . one of the most remarkable differences between our approach and the g - p algorithm is that we find two different scaling regions for chaotic flows , such as the rössler system , instead of the single one obtained with the g - p algorithm . this new scaling region can be linked to the geometry of the attractor and defines another characteristic time scale of the system . beyond this , we propose optimized measures for the identification of relevant structures in the rp . + the outline of this paper is as follows . in sec . [ 2 ] we briefly introduce rps . after considering in sec . [ 3 ] the rps of white noise , we proceed to general chaotic systems ( sec . [ unknown ] ) . then , we exemplify our theoretical results for the rössler system ( sec . [ roesslersystem ] ) and present the two different scaling regions that characterize the system . finally , we propose to estimate main characteristics of nonlinear systems from the rp , which extends the importance of the rqa ( sec . ) . rps were introduced to simply visualize the behavior of trajectories in phase space . suppose we have a dynamical system represented by the trajectory for in a -dimensional phase space . then we compute the matrix where is a predefined threshold and is the heaviside function . the choice of the norm is in principle arbitrary : for theoretical reasons that we will present later , it is preferable to use the maximum norm ; however , the numerical simulations of this paper are based on the euclidean norm to make the results comparable with the literature , and the theoretical results of this paper hold for both choices of the norm . the graphical representation of , called the recurrence plot , is yielded by encoding the value one as a black point and zero as a white point . a homogeneous plot with mainly single points may indicate a mainly stochastic system . paling away from the main diagonal may indicate a drift , i.e. , non - stationarity of the time series . a main advantage of this method is that it can be applied to nonstationary data . + to quantify the structures that are found in rps , the recurrence quantification analysis ( rqa ) was proposed . there are different measures that can be considered in the rqa . one crucial point for these measures is the distribution of the lengths of the diagonal lines that are found in the plot . in the case of deterministic systems the diagonal lines mean that trajectories in phase space are close to each other on time scales that correspond to the lengths of the diagonals .
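a minimal sketch of the recurrence matrix just defined , using the maximum norm preferred by the theoretical part ( the logistic - map trajectory is only an illustrative input ) :

```python
import numpy as np

def recurrence_matrix(x, eps):
    """recurrence matrix r_ij = theta(eps - ||x_i - x_j||), maximum norm.

    x is an (n, d) array of phase-space states; black points of the
    recurrence plot correspond to the entries equal to one.
    """
    dist = np.abs(x[:, None, :] - x[None, :, :]).max(axis=2)
    return (dist <= eps).astype(np.uint8)

# illustrative trajectory: the fully chaotic logistic map
n = 500
x = np.empty(n)
x[0] = 0.4
for i in range(1, n):
    x[i] = 4.0 * x[i - 1] * (1.0 - x[i - 1])

R = recurrence_matrix(x[:, None], eps=0.05)
print(R.shape, R.mean())   # mean = recurrence rate, the fraction of black points
```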
in the next sections we show that there is a relationship between and the correlation entropy . on the other hand , we compute the distribution of diagonals for random processes to see that even in this case there are some diagonals , which can lead to pitfalls in the interpretation of the rqa because noise is inevitable in experimental systems . a more detailed discussion of this problem is given in . in this section we compute analytically the probability to find a black or recurrence point and the distribution of diagonals of length in the rp in the case of independent noise . the probability to find a recurrence point in the rp is given by and the probability to find a diagonal of at least length in the rp is defined as where stands for cumulative . note that . + we consider a random variable with probability density . suppose that for is a realization of and we are interested in the distribution of the distances of each point to all other points of the time series . this can be done by computing the convolution of the density ; is then gained by integrating over . for uniformly distributed noise , is given by and hence the probability for rps and crps is given by . for gaussian white noise one finds , where is the standard deviation . now it is straightforward to compute in crps ( in rps only asymptotically ) . as the noise is independent , we obtain . the probability to find a recurrence point is in both rps and crps independent of the preceding point on the diagonal ( except in the main diagonal ) . eq . ( [ decaypl ] ) shows that the probability to find a line of length decreases exponentially with . for our example of uniformly distributed noise we get . note that in this case the exponential decay depends on . we present in this section an approach for chaotic systems . it is an extension of the results presented in for chaotic maps and also covers general chaotic flows . to estimate the distribution of the diagonals in the rp , we start with the correlation integral . note that the definition of coincides with the definition of the correlation integral ; this fact allows us to link the known results about the correlation integral and the structures in rps . + we consider a trajectory in the basin of attraction of an attractor in the -dimensional phase space , and the state of the system is measured at time intervals . let be a partition of the attractor into boxes of size . then denotes the joint probability that is in the box , is in the box , ... , and is in the box . the order-2 rényi entropy is then defined as . we can approximate by the probability of finding a sequence of points in boxes of length about , , ... , . assuming that the system is ergodic , which is always the case for chaotic systems as they are mixing , we obtain , where represents the probability of being in the box at time , in the box at time , ... and in the box at time . further , we can express by means of the recurrence matrix . hence we obtain an estimator for the order-2 rényi entropy by means of the rp . note that is the cumulative distribution of diagonal lines ( eq .
( [ p_cum ] ) ) . therefore , if we represent in a logarithmic scale versus , we should obtain a straight line with slope for large values of . + on the other hand , in the g - p algorithm the -dimensional correlation integral is defined as . grassberger and procaccia state that due to the exponential divergence of the trajectories , requiring is essentially equivalent to , which leads to the ansatz : . further , they make use of takens embedding theorem and reconstruct the whole trajectory from measurements of any single coordinate . hence they consider and use the same ansatz eq . ( [ gpansatz ] ) for . then , the g - p algorithm obtains an estimator of by considering . due to the similarity of the rp approach to the g - p one , we state that the difference between both approaches is that in we further consider information about vectors , whereas in we have just information about coordinates . besides this , in the rp approach is a length in the plot , whereas in the g - p algorithm it means the embedding dimension . as is defined for , the rp approach seems to be more appropriate than the g - p one , as it is always problematic to use very high embedding dimensions . + a further advantage of the rp method is that it does not make use of the approximation that eq . ( [ naeherung ] ) is essentially equivalent to eq . ( [ bedin1 ] ) . the quantity that enters the rps is directly linked to the conditions of eq . ( [ bedin1 ] ) and hence uses one approximation less than the g - p method . + one open question for both methods is the determination of the scaling regions . it is somewhat subjective and makes a rigorous error estimation problematic . for the cases considered in this paper we have found that 10,000 data points assure reliable results for both methods . even 5,000 data points allow for a reasonable estimation , whereas 3,000 data points or less yield very small scaling regions that are difficult to identify . however , the rp method is advantageous for the estimation of as the representation is more direct . the most important advantage is presented in the next section : rps allow us to detect a new scaling region in the rössler attractor that cannot be observed with the g - p algorithm . we analyze the rössler system with standard parameters . we generate 15,000 data points based on the fourth - order runge - kutta method and neglect the first 5,000 . the integration step is and the sampling rate is . + first , we estimate by means of the g - p algorithm . fig . [ gproessler ] shows the results for the correlation integral in dependence on ( varies from ( top ) to ( bottom ) in steps of ) . there is one well - expressed scaling region for each embedding dimension . then we get from the vertical distances between the lines an estimate of ( fig . [ k2gproessler ] : for the rössler system with the g - p algorithm ; the line is plotted to guide the eye ) . next , we calculate the cumulative distribution of the diagonal lines of the rp in dependence on the length of the lines ( fig .
[ rproessler ] ) .varies logarithmically from to ( bottom to top)[rproessler],scaledwidth=65.0% ] for large and small the scaling breaks down as there are not enough lines in the rp .the most remarkable fact in this figure is the existence of two well differentiated scaling regions .the first one is found for and the second one for .the existence of two scaling regions is a new and striking point obtained from this analysis and is not observed with the g - p method .the estimate of from the slope of the first part of the lines is ( fig . [ slope1roessler ] ) and the one from the second part is ( fig .[ slope2roessler ] ) . in the first region for three different choices of the scaling region in .[slope1roessler],scaledwidth=65.0% ] in the second region for three different choices of the scaling region in [slope2roessler],scaledwidth=65.0% ] hence , is between 3 - 4 times higher than . as is defined for , the second slope yields the estimation of the entropy .+ however , the first part of the curve is interesting too , as it is also independent of .the region characterizes the short term dynamics of the system up to three cycles around the fix point and corresponds in absolute units to a time of , as we use a sampling rate of .these three cycles reflect a characteristic period of the system that we will call _ recurrence period _ .it is different from the dominant `` phase period '' , which is given by the dominant frequency of the power density spectrum . however , is given by recurrences to the same state in phase space .recurrences are represented in the plot by vertical ( or horizontal , as the plot is symmetric ) white lines .such a white line occurs at the coordinates if the trajectory for times is compared to the point .then the structure given by eq . [ whiteline ] can be interpreted as follows . at time the trajectory falls within an -box of .then for it moves outside of the box , until at it recurs to the -box of .hence , the length of the white line is proportional to the time that the trajectory needs to recur close to . + in fig .[ realrecurrence ] we represent the distribution of white vertical lines in the rp . and based on 60,000 data points.[realrecurrence],scaledwidth=65.0% ] the period of about 28 points corresponds to .however , the highest peak is found at a lag of about 87 points ( the second scaling region begins at ) .this means that after this time most of the points recur close to their initial state .this time also defines the recurrence period . for the rssler attractor with standard parameters we find . + for predictions on time scales below the recurrence period , is a better estimate of the prediction horizon than .this interesting result means that the possibility to predict the next value within an -range is in the first part by a factor of more than times worse than it is in the second part , i.e. 
there exist two time scales that characterize the attractor .the first slope is greater than the second one because it is more difficult to predict the next step if we have only information about a piece the trajectory for less than the recurrence period .once we have scanned the trajectory for more than , the predictability increases and the slope of in the logarithmic plot decreases .hence the first slope , as well as the time scale at which the second slope begins , reveal important characteristics of the attractor .+ to investigate how the length of the first scaling region depends on the form of the attractor , we have varied the parameter of the rssler system with fixed , so that different types of attractors appear . especially we have studied the cases , which yields , and , which gives . in both casesthe length of the first scaling region corresponds as expected to .+ on the other hand , the existence of the two scalings may be linked to the nonhyperbolic nature of the rssler system for this attractor type , because the resulting two time scales have been also recently found by anishchenko et al .based on a rather subtle method .this effect also is detectable in other oscillating nonhyperbolic systems like the lorenz system and will be studied in more detail in a forthcoming paper .with regard to our theoretical findings in sec . [ unknown ] we have to assess the quality of the possible results of the rqa .+ the measures considered in the rqa are not invariants of the dynamical system , i.e. they usually change under coordinate transformations , and especially , they are in general modified by embedding . hence , we propose new measures to quantify the structures in the rp , that are invariants of the dynamical system . +* the first measure * we propose , is the slope of the cumulative distribution of the diagonals for large .we have seen that it is ( after dividing by ) an estimator of the rnyi entropy of second order , which is a known invariant of the dynamics . on the other hand, we also can consider the slope of the distribution for small s , as this slope shows a clear scaling region , too .the inverse of these two quantities , is then related to the forecasting time at different horizons .especially the transition point from the first to the second scaling region is an interesting characteristic of the system . + * the second measure * we introduce , is the vertical distance between for different . from eq .( [ main ] ) one can derive this is an estimator of the correlation dimension .the result for the rssler system is represented in fig .[ corrdimrp ] . for the rssler attractor by the rp method .the parameters used for the rssler system and the integration step are the same as in sec .[ roessler].[corrdimrp],scaledwidth=65.0% ] the mean value of is in this case .this result is in accordance with the estimation of by the g - p algorithm given in , where the value is obtained . with a modified g - p algorithm a value of reported .+ * the third measure * we suggest , is an estimator of the generalized mutual information of order , where are the generalized rnyi s second order entropy ( also correlation entropy ) and its corresponding joint second order entropy .this measure can be estimated using the g - p algorithm as follows instead , we can estimate using the recurrence matrix . 
as discussed in the preceding sections , one can estimate as .\ ] ] analogously we can estimate the joint second order entropy by means of the recurrence matrix .\ ] ] we compare the estimation of based on the g - p algorithm with the one obtained by the rp method in fig .[ mi ] ..[mi],scaledwidth=65.0% ] we see , that the rp method yields systematically higher estimates of the mutual information , as in the case of the estimation of the correlation entropy .however , the structure of the curves is qualitatively the same ( it is just shifted to higher values by about ) .a more exhaustive inspection shows , that the difference is due to the use of the euclidean norm .the estimate based on the rp method is almost independent of the norm , whereas the estimate based on the g - p algorithm clearly depends on the special choice . if the maximum norm is used ( in g - p and rp )both curves coincide .+ note that the estimators for the invariants we propose are different from the ones of the g - p algorithm .therefore , the obtained values are slightly different , too .+ the three measures that we have proposed , are not only applicable for chaotic systems but also for stochastic ones as the invariants are equally defined for both kinds of systems .in this paper we have presented an analytical expression for the the distribution of diagonals for stochastic systems and chaotic flows , extending the results presented in .we have shown that is linked to the 2-order rnyi entropy rather than to the lyapunov exponent .further we have found in the logarithmic plot of two different scaling regions with respect to , that characterize the dynamical system and are also related to the geometry of the attractor . this is a new point that can not be seen by the g - p algorithm andwill be studied in more detail in a forthcoming paper .the first scaling region defines a new time horizon for the description of the system for short time scales . beyond the rp methoddoes not make use of high embedding dimensions , and the computational effort compared with the g - p algorithm is decreased .therefore the rp method is rather advantageous than the g - p one for the analysis of rather small and/or noisy data sets . besides this, we have proposed different measures for the rqa , like estimators of the second order rnyi entropy , the correlation dimension and the mutual information , that are , in contrast to the usual ones , invariants of the dynamics .
we present an analytical description of the distribution of diagonal lines in recurrence plots ( rps ) for white noise and chaotic systems , and find that the latter one is linked to the correlation entropy . further we identify two scaling regions in the distribution of diagonals for oscillatory chaotic systems that are hinged to two prediction horizons and to the geometry of the attractor . these scaling regions can not be observed with the grassberger - procaccia algorithm . finally , we propose methods to estimate dynamical invariants from rps .
on 16 september 1927 , bohr introduced the concept of complementarity into physics at the international conference on physics at como . in his lecture _ the quantum postulate and the recent development of atomic theory _bohr argued that the postulate of an essential discontinuity in atomic processes entails deep consequences for their proper description . discussing the infamous wave - particle ( light - matter ) problem in quantum systems , he stated a `` reciprocal relation between the maximum sharpness of definition of the space - time and energy - momentum vectors associated with an individual '' event . more generally , bohr referred to this reciprocal relation as a `` complementarity of the space - time description and the claims of causality '' ( bohr 1928 , p. 582 ) . in classical physics , waves and particlesare considered as mutually incompatible entities .therefore , if a system is correctly described in terms of one of these entities , the description in terms of the other one must be incorrect . in quantum physics ,however , waves and particles are no ontological entities but manifestations of a system under particular ( e.g. empirical ) contexts .although they mutually exclude one another , they are together necessary to describe the system completely .this is the basic deviation from classical thinking that bohr suggested to cover by the term complementarity . today we know that he imported the notion of complementarity from psychology and philosophy ( for details see holton 1970 ) .with this background it is not astonishing that bohr was always keen to expand the significance of complementarity beyond physics ( cf .favrholdt 1999 ) . in his _ atomic theory and the description of nature _( bohr 1934 , p. 5 ) he wrote : _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ we are concerned with the recognition of physical laws which lie outside the domain of our ordinary experience and which present difficulties to our accustomed forms of perception ._ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ and in his article _ on the notions of causality and complementarity _( bohr 1948 , p. 
318 ) we read : _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ recognition of complementary relationship is not least required in psychology , where the conditions for analysis and synthesis of experience exhibit striking analogy with the situation in atomic physics .in fact , the use of words like `` thoughts '' and `` sentiments '' , equally indispensible to illustrate the diversity of psychical experience , pertain to mutually exclusive situations characterized by a different drawing of the line of separation between subject and object .in particular , the place left for the feeling of volition is afforded by the very circumstance that situations where we experience freedom of will are incompatible with psychological situations where causal analysis is reasonably attempted . 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ the concept of complementarity has played a central role in various versions of the copenhagen interpretation of quantum theory which bohr originally designed together with heisenberg . butbohr s attempts to generalize complementarity beyond physics were joined by only few others , notably pauli and later wheeler .it should take a long time , until the 1990s , before their speculations started to be turned into concrete projects ( see wang _ et al ._ 2013 for a brief overview ) .although the wave - particle duality was the historical root of complementarity in physics , a modern perspective allows us to understand it as a consequence of different representations of a system .the present view , first formulated by bernays ( 1948 ) , states that complementarity can be related to two basic formal features of quantum theory . 1 .it can be based on _ non - commutative algebras of observables _ , pioneered by murray and von neumann in the mid 1930s .non - commuting observables are the formal core of their incompatibility , and a `` maximal '' form of incompatibility defines complementarity ( raggio and rieckers 1983 ) .it can be based on _ non - distributive lattices of propositions _ , pioneered by birkhoff and von neumann ( 1936 ) .non - distributive lattices are the formal core of the non - boolean nature of complementary propositions . it is important to note that both ( i ) and ( ii ) are not restricted to physics in principle , and even less so to quantum physics .while propositions about a system are a very general possibility to characterize it , not every proposition can be turned into a formally well - defined observable . for this reason , ( ii )offers a wider scope to apply complementarity outside physics than ( i ) . in the followingwe discuss how complementary observables and non - boolean propositions can arise in descriptions of systems and their dynamics that are based on state spaces . 
a crucial strategy in this approachis the definition of _ epistemic _ states and associated propositions based on partitions of the relevant space .the way in which the partition is constructed decides whether the resulting descriptions are compatible , resp .boolean , or not .measurements ( or observations ) require the preparation of a state of the system to be measured ( or observed ) , choices of initial and boundary conditions for this state , and the selection of particular measurement setups .they refer to operationally defined _ observables _ which can be deliberately chosen by the experimenter ( pauli 1950 , primas 2007 ) .a classical dynamical system is characterized by the fact that all observables are compatible with each other . however , in general this holds only for a so - called _ ontic description _ ( atmanspacher 2000 ) where the state of a system is considered as if it could be characterized precisely as it is .on such an account , the _ ontic state _ of the system is given by a point in state space .classical observables are real - valued functions , such that is the value of observable in state .a family of observables spans one of many possible _ observation spaces _ ( birkhoff and von neumann 1936 ) . only if all functions spanning the observation space are injective , their pre - images contain exactly one point for all , and can be called _ ontic observables_. by contrast , _ epistemic descriptions _ acquire significance if at least one observable is not injective .they refer to the knowledge that can be obtained about an ontic state ( atmanspacher 2000 ) from representing a measurement result as a point in observation space .states in a state space of a classical system ( left ) and the real numbers as the range of a classical observable ( right ) .epistemically equivalent states belong to the same equivalence class .,height=170 ] figure [ ga : fig1 ] displays a situation in which an observable is not injective , such that different states lead to the same measurement result in this case , the states and are _ epistemically _indistinguishable by means of the observable ( shalizi and moore 2003 , beim graben and atmanspacher 2006 ) .measuring can not tell us whether the system is in state or .the two states are _ epistemically equivalent _ with respect to ( beim graben and atmanspacher 2006 ) . in this way, the observable induces an equivalence relation `` '' on the state space : if .the resulting equivalence classes of ontic states partition the state space into mutually exclusive and jointly exhaustive subsets such that for all and .these subsets can be identified with the _ epistemic states _ that are induced by the observable .more generally , we refer to subsets in state space as to epistemic states .- algebra in measure theory ( beim graben and atmanspacher 2006 ) . for a simplified exposition , which captures the very basic ideas ,set - theoretical concepts are sufficient ( cf .beim graben and atmanspacher 2009 ) . ]the collection of epistemic states is a then state space _partition_. we call an _ epistemic observable _ if the partition is not the _ identity partition _ where every cell is a singleton set containing exactly one element ( shalizi and moore 2003 ) . in this limiting case, is injective and becomes an ontic observable . in the opposite limit, there is only one cell covering the entire state space , and epistemic observables are constant over : for all . 
in this case , all states are epistemically equivalent with one another and belong to the ( same ) equivalence class of the _ trivial partition _ .most interesting for our purposes are finite partitions ( where is finite ) which are neither trivial nor identity .figures [ ga : fig2](a , b ) display two such finite partitions and from which a _ product partition _ as in figure 2(c ) can be constructed .it contains all possible intersections of sets in with sets in : the product partition is a _ refinement _ of both partitions and . the refinement relation introduces a partial ordering relation `` '' among partitions . if is a refinement of , , then there is a `` factor partition '' such that . if neither is a refinement of nor _ vice versa _ ( and ) , the partitions and are called _ incomparable _ ( shalizi and moore 2003 ) . a dynamical system evolves as a function of parameter time . in other words , any present state ( e.g. an initial condition ) in state space , , gives rise to future states .this evolution is described by a flow map . in the simple case of a deterministic dynamics in discrete time, maps any state onto its successor , as illustrated in fig .[ ga : fig3 ] . iterating the map yields a _ trajectory _ for integer positive times .likewise , the inverse map can be iterated if the dynamics is invertible : , again for integer positive times . in this way, the dynamics of an invertible discrete - time system is described by the one - parameter group of integer numbers . in sect .[ stateobs ] , we have described _ instantaneous _measurements by the action of an observable on an ontic state .now we are able to describe _ _ extended measurements__the notion of an extended measurement refers to a series of measurements extending over time . by combining the action of an observable with the dynamics .let the system be in state at time .measuring tells us to which class of epistemically equivalent states in the partition , associated with , the state belongs .suppose that this is the cell .suppose further that measuring in the subsequent state reveals that is contained in another cell .a discrete - time dynamics of a classical system is given by a map which assigns to a state at time its successor at time .,height=151 ] an alternative way to describe this situation is to say that the initial state belongs to the pre - image of .the information about that is gained by measuring is , then , that the initial state was contained in the intersection .continuing the observation of the system over one more instant in time yields that the initial state belonged to the set if the third measurement result was . 
a systematic investigation of extended measurementscan now be based on the definition of the pre - image of a partition , which consists of all pre - images of the cells of the partition .then , an extended measurement over two successive time steps is defined by the product partition , containing all intersections of cells of the original partition with cells of its pre - image .the result of the measurement of over two time steps is .this product partition is called the _ dynamic refinement _ of , illustrated in fig .[ ga : fig4 ] .most information about the state of a system can be gained by an ideal , `` ever - lasting '' extended measurement that began in the infinite past and will terminate in the infinite future .this leads to the _ finest dynamic refinement _ expressed by the action of the `` finest - refinement operator '' upon a partition .it would be desirable that such an ever - lasting measurement yields complete information about the initial condition in state space .this is achieved if the refinement ( [ ga : eq : finestrefine ] ) yields the identity partition , a partition obeying ( [ ga : eq : generator ] ) is called _generating_. generating partitions are structurally stable in the sense that they are robust under the dynamics . in other words , points on bondaries of cells are typically mapped onto points on boundaries of cells . in this waythe epistemic states defined by the cells do not change over time which would be the case for non - generating partitions . for more details concerning the issue of stability in dynamical systems see atmanspacher and beim graben ( 2007 ) .given the ideal finest refinement of a ( generating or non - generating ) partition that is induced by an epistemic observable , we are able to regain a description of extended measurements of arbitrary finite duration by joining subsets of which are visited by the system s trajectory . supplementing the `` join '' operation by the other boolean set operations over leads to a _ partition algebra _ of .then , every set in is an epistemic state measurable by .note that the concept of a generating partition in the ergodic theory of deterministic systems is related to the concept of a _ markov chain _ in the theory of stochastic systems .every deterministic system of first order gives rise to a markov chain which is generally neither ergodic nor irreducible .such markov chains can be obtained by so - called _ markov partitions _ that exist for expanding or hyperbolic dynamical systems ( sinai 1968 , bowen 1970 , ruelle 1989 ) . for non - hyperbolic systemsno corresponding existence theorem is available , and the construction can be even more tedious than for hyperbolic systems ( viana _ et al .for instance , both markov and generating partitions for nonlinear systems are generally non - homogeneous . in contrast to figure [ ga : fig2 ] , their cells are typically of different size and form .note further that every markov partition is generating , but the converse is not necessarily true ( crutchfield 1983 , crutchfield and packard 1983 ) .for the construction of `` optimal '' partitions from empirical data it is often more convenient to approximate them by markov partitions ( froyland 2001 ; see also deuflhard and weber 2005 , gaveau and schulman 2005 ) .see allefeld _et al . _( 2009 ) for a concrete example of how a markov partition can be constructed from empirical data .if a partition is not generating , its finest refinement is not the identity partition . 
in this case , the refinement operator yields a partition with some residual coarse grain .moreover , the cells of a non - generating partition are not stable under the dynamics , so that they become dynamically ill - defined a disaster for any attempt to formulate a robust coarse - grained ( epistemic ) description ( bollt _ et al ._ 2001 , atmanspacher and beim graben 2007 ) .let be an epistemic state of the finest refinement of . because is induced by an observable whose epistemic equivalence classes are the cells of , all cells of can be accessed by extended measurements of .however , as is not the identity partition , the singleton sets representing ontic states in are not accessible by measuring .an arbitrary epistemic state is called _ epistemically accessible with respect to _ ( beim graben and atmanspacher 2006 ) if belongs to the partition algebra produced by the finest refinement of . measuring the observable in all ontic states belonging to an epistemic state yields the same result since is by construction constant over .therefore , the variance of across ( with respect to some probability measure ) vanishes such that is dispersion - free in the epistemic state . in other words, is an eigenstate of .one can now easily construct another observable that is not dispersion - free in such that is not a common eigenstate of and . as a consequence , the observables and are incompatible as they do not share all ( epistemically accessible ) eigenstates .beim graben and atmanspacher ( 2006 ) refer to this construction as an _ epistemic quantization _ of a classical dynamical system . in an ontic description of a classical system , ontic states are common eigenstates of all observables .therefore , classical observables associated with ontic states are always compatible .by contrast , if the ontic states are not epistemically accessible by extended measurements , the smallest epistemically accessible states are cells in the finest refinement of a partition induced by an epistemic observable .these epistemic states are not eigenstates of every observable , such that observables associated with them are incompatible . as in quantum theory ,two observables and are complementary if they do not have any ( epistemically accessible ) eigenstate in common , i.e. if they are maximally incompatible ( raggio and rieckers 1983 ) .beim graben _( 2013 ) demonstrated the incompatibility of position and momentum of a classical harmonic oscillator subjected to time - discretization and spatial coarse - graining .nevertheless , even in an epistemic description , classical observables and can be compatible with one another .this is the case if all ontic states are epistemically accessible with respect to both and .the necessary and sufficient condition for this is that the partitions , be generating ( eq . 
6 ) .this leads to a generalization of the concepts of compatibility and complementarity : two partitions are called compatible if and only if they are both generating : .they are incompatible if , which is always the case if at least one partition is not generating .they are complementary if their finest refinements are disjoint : .already in their seminal paper on quantum logics , birkhoff and von neumann ( 1936 ) asked for the propositional calculus emerging from the epistemic restrictions upon an arbitrarily selected observation space .a proposition such as `` the observable assumes the value in state '' , or briefly `` '' , induces a binary partition of the state space of a classical dynamical system into two subsets of where . because propositions can be combined by the logical connectives `` and '' , `` or '' , and `` not '' , the structure of a _ classical propositional logic _ is that of a boolean algebra of subsets of the state space ( birkhoff and von neumann 1936 , primas 1977 , westmoreland and schumacher 1993 ) . given a classical dynamical system with state space , dynamics , and a representatively chosen epistemic observable , this observable induces a partition whose finest refinement is .the partition algebra , comprising all subsets of that can be formed by the boolean set operations of intersection , union , and negation , applied to the states that are epistemically accessible by means of extended measurements of , is a boolean set algebra describing a classical propositional logic .however , things become much more involved when we add another epistemic observable , that is not compatible with . in that case , the finest refinements yields overlapping partition algebras . if the overlap is not trivial ( i.e. neither nor ) , these partition algebras form a _ partition logic _ , or equivalently a _ partition test space _( dvureenskij _ et al ._ 1995 ) .test spaces have been introduced as _generalized sample spaces _ in a so - called _ operational statistics _( foulis and randal 1972 , randal and foulis 1973 ) in order to clarify problems of incompatible observables in general ( classical ) measurement situations .their argumentation is nicely illustrated by foulis ( 1999 ) `` firefly box '' thought experiment : imagine a firefly caught in a box with only two translucent windows , one at the front and one at one side of the box .assume that both windows allow us to assess the firefly s position only in a coarse - grained manner : either left ( `` l '' ) or right ( `` r '' ) with respect to the front view , and front ( `` f '' ) or back ( `` b '' ) with respect to the side view . because the firefly can be observed either from the front or from the side , these `` measurements '' are mutually exclusive and can give rise to incompatible descriptions .that this is in fact the case results from a third possibility : the firefly does not glow ( `` n '' ) , which creates epistemically equivalent events that are subsumed in the following six propositions : ( 1 ) firefly is somewhere , ( 0 ) firefly is nowhere , firefly is in the left , firefly is in the right, firefly is not glowing , firefly is in the front , firefly is in the back , and the respective negations thereof ( indicated by `` '' ) . 
figure [ ga : fig5 ] displays the lattice diagrams of the resulting boolean logics .boolean propositional lattices for two observations of the firefly in the box : ( a ) front view , ( b ) side view.,height=226 ] non - boolean ( orthomodular ) propositional lattice for joint observations of the firefly in the box.,height=264 ] crucially , the boolean lattices depicted in figures [ ga : fig5](a , b ) can be pasted together along their common overlaps , giving rise to the lattice shown in fig .[ ga : fig6 ] . by this procedure one obtains a so - called _ orthomodular lattice _ ( primas 1977 ) . according to piron s representation theorem ( and under some technical requirements ,see baltag and smets 1995 , and blutner and beim graben to appear ) , orthomodular lattices can be represented by the orthomodular projector lattice of a hilbert space .hence , the firefly example indicates how nontrivially overlapping partition algebras induced by incompatible observables can be pasted together to orthomodular lattices forming the basis of non - boolean logics ( dvureenskij _ et al ._ 1995 , greechie 1968 , blutner and beim graben to appear ) .the case of a non - boolean lattice with boolean sublattices is also called a partial boolean lattice ( primas 2007 and references therein ) .the notion of complementarity and the non - boolean structure of quantum logic are tightly connected to a third basic feature of quantum theory : _entanglement_. in quantum physics proper , entanglement relies on the dispersion of ontic ( pure ) quantum states .while classical ontic ( pure ) states are dispersion - free and , thus , can not be entangled , it is still possible to define a kind of epistemic entanglement for classical epistemic ( mixed ) states based on state space partitions .this was proven , and illustrated by examples , in a recent paper by beim graben _the strategy of the proof rests on properly defined `` epistemically pure '' dispersive classical states , in contrast to `` ontically pure '' dispersive quantum states .epistemically pure classical states arise out of the finest refinement of a partition with respect to all epistemically realizable observables .we demonstrated that , under particular conditions , classical states that are epistemically pure with respect to a global observable may be mixed with respect to a suitably defined local observable .the structure of this argument resembles the definition of quantum entanglement , where the pure state of a system as a whole is not separable into pure states of decomposed subsystems .the decomposition leads to mixed states with nonlocal correlations between the subsystems .it can be demonstrated how a related inseparability occurs for classical dynamical systems with non - generating partitions ( see beim graben _ et al ._ 2013 ) .nonlocal correlations in this case are due to partition cells whose boundaries change dynamically and lead to non - empty intersections among cells .these correlations contribute to the dynamical entropy of the system and imply an underestimation of its kolmogorov - sinai entropy .( 2005 ) studied such a case for brownian motion : they used a homogeneous partition with respect to particle velocities to determine correlations in the system considered .such a partition is not generating , which raises the overall amount of correlations and can lead to the impression of entanglement .this is consistent with their observation that increasing refinement of the partition entails decreasing correlations .our results substantiate 
a recent conjecture of epistemic entanglement by atmanspacher _it should be of interest to all kinds of applications which can be formalized in terms of state space represenations .this applies in particular to situations in cognitive science for which state space representations have generated increasing attention .recent work by allefeld _( 2009 ) shows how generating state space partitions can be constructed so as to avoid quantum features in mental systems .an important criterion for entanglement correlations is the violation of bell - type inequalities .there is considerable recent interest in various kinds of bounds inherent in such inequalities as applied to mental systems ( atmanspacher and filk 2014 , dzhafarov and kujala 2013 ) .one thrilling application is due to bruza _( 2009 ) , who challenged a long - standing dogma in linguistics by proposing non - separable concept combinations in the human mental lexicon .another intriguing example is due to atmanspacher and filk ( 2010 ) who suggested temporally nonlocal phenomena in mental states .temporally nonlocal mental states can be interpreted as states that are not sharply ( pointwise ) localized along the time axis , and their characterization by sharp ( classical ) observables is inappropriate . rather , such states appear to be `` stretched '' over an extended time interval whose length may depend on the specific system considered . within this interval , relations such as `` earlier '' or `` later '' are illegitimate designators and , accordingly , causal connections are ill - defined .there is accumulating evidence that concepts like complementarity , entanglement , dispersive states , and non - boolean logic play significant roles in mental processes ( see wang _ et al . _( 2013 ) for a compact recent overview ) . within the traditional framework of thinking, this would imply that the brain activity correlated with those mental processes is in fact governed by quantum physics .quite a number of quantum approaches to consciousness have been proposed along these lines ( cf .atmanspacher 2011 ) , with little empirical support so far .our results underline that quantum brain dynamics is not the only possible explanation of quantum features in mental systems . assuming that mental states arise from partitions of neural states in such a way that statistical neural states are co - extensive with individual mental states , the nature of mental processes depends strongly on the kind of partition chosen .if the partition is not generating , it is possible or even likely that mental states and observables show features that resemble quantum behavior although the correlated brain activity may be entirely classical quantum mind without quantum brain .large parts of the material in sections 2 and 3 are reproduced from sections 3 and 4 in beim graben and atmanspacher ( 2009 ) , which the present paper refines and develops .10 bollt em , stanford t , lai yc , yczkowski k ( 2001 ) : what symbolic dynamics do we get with a misplaced partition ? on the validity of threshold crossings analysis of chaotic time - series ._ physica d _ * 154 * , 259286 .
the concept of complementarity in combination with a non - boolean calculus of propositions refers to a pivotal feature of quantum systems which has long been regarded as a key to their distinction from classical systems . but a non - boolean logic of complementary features may also apply to classical systems , if their states and observables are defined by partitions of a classical state space . if these partitions do not satisfy certain stability criteria , complementary observables and non - boolean propositional lattices may be the consequence . this is especially the case for non - generating partitions of nonlinear dynamical systems . we show how this can be understood in more detail and indicate some challenging consequences for systems outside quantum physics , including mental processes .
linear mixed models have been studied for a long time theoretically , and have many applications , for example , longitudinal data analysis in biostatistics , panel data analysis in econometrics , and small area estimation in official statistics .the problem of selecting explanatory variables in linear mixed models is important and has been investigated in the literature . is a good survey on model selection in linear mixed models . when the purpose of the variable selection is to find a set of significant variables for a good prediction , akaike - type information criteria are well - known methodshowever , the akaike information criterion ( aic ) based on marginal likelihood , which integrates out likelihood with respect to random effects , is not appropriate when the prediction is focused on random effects .then , proposed considering akaike - type information based on the conditional density given the random effects and proposed the conditional aic ( caic ) . here, we provide a brief explanation of the caic concept .let be a conditional density function of given , where is an observable random vector of the response variables , is a vector of the unknown parameters , and is a random vector of the random effects .let be a density function of .then , proposed measuring the prediction risk of the plug - in predictive density relative to kullback leibler divergence : f(\y|\b,\bta ) \pi(\b|\bta ) \dd\y\dd\b , \label{eqn : ckl}\end{aligned}\ ] ] where is an independent replication of given , and and are some predictor or estimator of and , respectively .the caic is an ( asymptotically ) unbiased estimator of a part of the risk in ( [ eqn : ckl ] ) , which is called the conditional akaike information ( cai ) , given by the caic as a variable selection criterion in linear mixed models has been studied by , , , , , and others .furthermore , the caic has been constructed as a variable selection criterion in generalized linear mixed models by , , , , and others .considering the prediction problem , it is often the case that the values of covariates in the predictive model are different from those in the observed model , which we call covariate shift . here, we call the model in which is the vector of the response variables the `` observed model , '' and we call the model in which is the vector of the response variables the `` predictive model . ''it is noted that the term `` covariate shift '' was first used by , who defined it as the situation in which the distribution of the covariates in the predictive model differs from that in the observed model . in this study , although we treat the covariates as non - random , we use the same term `` covariate shift '' as .even when the information about the covariates in the predictive model can be used , most of the akaike - type information criteria do not use it .this is because it is assumed that the predictive model is the same as the observed model for deriving the criteria .as for the abovementioned caic , the conditional density of given and that of given are the same , both of which are denoted by . on the other hand , under the covariate shift , the conditional density of given is different from that of given and is denoted by .when the aim of the variable selection is to choose the best predictive model , it is not appropriate to use the covariates only in the observed model. then , we redefine the cai under covariate shift , as follows , and construct an information criterion as an asymptotically unbiased estimator of the cai . 
a similar problem in the multivariate linear regression model and proposed a variable selection criterion .it is important to note that we do not assume that the candidate model is overspecified , in other words , that the candidate model includes the true model .although most of the akaike - type information criteria make the overspecified assumption , this is not appropriate for estimating the cai under covariate shift .we discuss this point in section [ subsec : drawback ] . as an important applicable example of covariate shift , we focus on small area prediction , which is based on a finite super - population model .we consider the situation in which we are interested in the finite subpopulation ( area ) mean of some characteristic and that some values in the subpopulation are observed through some sampling procedure .when the sample size in each area is small , the problem is called small area estimation . for details about small area estimation ,see , , , and others .the model - based approach in small area estimation often assumes that the finite population has a super - population with random effects and borrows strength from other areas to estimate ( predict ) the small area ( finite subpopulation ) mean .the well - known unit - level model is the nested error regression model ( nerm ) , which is a kind of linear mixed model , and was introduced by .the nerm can be used when the values of the auxiliary variables for the units with characteristics of interest ( response variables in the model ) are observed through survey sampling .this is the observed model in the framework of our variable selection procedure . on the other hand ,two types of predictive model can be considered .one is the unit - level model , which can be used in the situation in which the values of the auxiliary variables are known for all units .the other is the area - level model , which can be used in the situation in which each mean of the auxiliary variables is known for each small area .the latter is often the case in official statistics and the model introduced by is often used in this case .the rest of this paper is organized as follows . in section [ sec : setup ] , we explain the setup of variable selection problem . in section [ sec : criteria ] , we define the cai under covariate shift in linear mixed models and obtain an asymptotically unbiased estimator of the cai . in section [ sec : ex ] , we provide an example of covariate shift , which is focused on small area prediction . in section[ sec : simu ] , we investigate the numerical performance of the proposed criteria by simulations , one of which is design - based simulation based on a real dataset of land prices .all the proofs are given in the appendix .we focus on the variable selection of the fixed effects .first , we consider the collection of candidate models as follows .let matrix consist of all the explanatory variables and assume that .in order to define candidate models by the index set , suppose that denotes a subset of containing elements , _i.e. 
_ , , and consists of columns of indexed by the elements of .we define the class of the candidate models by , namely , the power set of , in which we call the full model .we assume that the true model exists in the class of the candidate models , which is denoted by .it is noteworthy that the dimension of the true model is , which is abbreviated to .we next introduce the terms `` overspecified '' and `` underspecified '' models .candidate model is overspecified if ] .the set of underspecified models is denoted by .it is important to note that most of the akaike - type information criteria are derived under the assumption that the candidate model is overspecified .however , the assumption is not appropriate for considering the covariate shift , which is explained in section [ subsec : drawback ] .thus , we derive the criterion without the overspecified assumption . in the following two subsections ,we clarify the observed model and predictive model for deriving the criteria .the candidate observed model is the linear mixed model where is an observation vector of response variables , and are and matrixes of covariates , respectively , is a vector of regression coefficients , is a vector of random effects , and is an vector of random errors .let and be mutually independent and , , where and are and positive definite matrixes and is a scalar .we assume that and are known and is unknown .the true observed model is where , and is vector of regression coefficients , whose components are exactly and the rest of the components are not .the fact that we can write the true model as the equation above implies that the true model exists in the class of candidate models .note that is matrix of covariates for the full model .then , the marginal distribution of is where . for the true model , the conditional density function of given and the density function of denoted by and , respectively . the candidate predictive model is the linear mixed model , which has the same regression coefficients and random effects as the candidate observed model , but different covariates , namely , where is random vector of the target of prediction , and are and matrixes of covariates whose columns correspond to those of and , respectively , and is vector of random errors , which is independent of and and is distributed as , where is a known positive definite matrix .we assume that we know the values of and in the predictive model and that they are not necessarily the same as those of and in the observed model .we call this situation covariate shift. the conditional density function of given for the model is denoted by .the true predictive model is where is matrix of covariates and .then , the marginal distribution of is where . for the true model , the conditional density function of given denoted by .the cai introduced by measures the prediction risk of the plug - in predictive density , where and are maximum likelihood estimators of and , respectively , which are given as respectively , and is the empirical bayes estimator of for quadratic loss , which is given by then , the cai under covariate shift is \\ = & \ e^{(\y,\b_*)}e^{\ybt|\b _ * } \left [ m\log(2\pi\sih_j^2 ) + \log|\rbt| + ( \ybt - \xbt(j)\bbeh_j -\zbt\bbh_j)^\tp \rbt^{-1}(\ybt - \xbt(j))\bbeh_j -\zbt\bbh_j ) / \sih_j^2 \right],\end{aligned}\ ] ] where and denote expectation with respect to the joint distribution of and the conditional distribution of given , namely , respectively . 
taking expectation with respect to and for , we obtain ,\ ] ] where most of the akaike - type information criteria are derived under the assumption that the candidate model includes the true model , namely , the overspecified assumption .although the assumption seems to be too strong , the influence is restrictive in practice .this is because the likelihood part of the criterion is a naive estimator of the risk function , namely , the cai in the context of the caic . under the covariate shift situation ,however , we can not construct the likelihood part as a good estimator of the cai .that is , the drawback of overspecified assumption is more serious in the situation of covariate shift than the usual one . in section [ subsec : simu_bias ] , we show that an unbiased estimator of the cai under the overspecified assumption in ( [ eqn : cscaic ] ) has large bias for estimating the cai of the underspecified models .thus , we evaluate and estimate the cai directly both for the overspecified models and underspecified models in the following subsection , which is essential work in selecting variables in covariate shift .we evaluate the cai in ( [ eqn : cscai ] ) both for the overspecified model and for the underspecified model .we assume that the full model is overspecified , that is , the collection of the overspecified models is not an empty set . in addition , we assume that the size of the response variable in the predictive model is of order .when the candidate model is overspecified , follows the chi - squared distribution .then , we can evaluate the expectation in ( [ eqn : cscai ] ) exactly .however , for the underspecified model , this is not true . in this case , we asymptotically approximate the cai as the following theorem .[ thm : caim ] for the overspecified case , it follows that + \log |\rbt| + r^* ] and . for the underspecified case , is approximated as + \log|\rbt| + r^ * + r_1 + r_2 + r_3 + r_4 + o(n^{-1}),\ ] ] where and for , and when the candidate model is overspecified , it follows that , , , and are exactly .because the approximation of cai in ( [ eqn : caiapp ] ) includes unknown parameters , we have to provide an estimator of cai for practical use .first , we obtain estimators of and , which are polynomials of .we define , , and as and when , it follows that where denotes the beta distribution .this implies that for the overspecified case . for the underspecified case ,on the other hand , it follows that = \la^k + o(n^{-1}) ] and = 2(\si_*^2)^2 / ( n - p_\om) ] , where .because the conditional mean and variance of given , denoted by and , are it follows that substituting with , we obtain the empirical best predictor ( ebp ) : where is some estimator of . then , we use the following predictor of : as , we use unbiased estimators of and proposed by and the gls estimator of .we measure the performance of this design - based simulation by mean squared error ( mse ) of the predictor , where and are finite subpopulation mean and its predictor of the sampled data .we construct using the unit - level predictive model ( [ eqn : p1nerm ] ) and the area - level predictive model ( [ eqn : p2nerm ] ) , and let and denote the corresponding mses . to compare the performance ,we compute the ratio of mses as follows : where is the mse of the predictor based on the caic of .figure [ fig : mse ] shows the results . 
although the performance of based on the unit - level predictive model is similar to the caic of , based on the area - level predictive model has much better performances in most areas .it is valuable to point out that the mse of the predictor of the finite subpopulation mean can be improved using our proposed criteria , which motivates us to use them for variable selection in the small area prediction problem .based on area - level predictive model ( solid line ) and to based on unit - level predictive model ( dashed line ) ] * acknowledgments * we are grateful to professor j.n.k .rao for his helpful comment and for holding a seminar at statistics canada during our stay in ottawa .this research was supported by grant - in - aid for scientific research from the japan society for the promotion of science , grant numbers 16k17101 , 16h07406 , 15h01943 and 26330036 .because and are mutually independent , the cai in ( [ eqn : cscai ] ) can be rewritten as + \log|\rbt| + n\cdot e\big [ ( n\sih_j^2 / \si_*^2)^{-1 } \big ] \big\ { \tr(\rbt^{-1}\bla ) + e[\a^\tp\rbt^{-1}\a / \si_*^2 ] \big\},\ ] ] for the overspecified case , we can easily evaluate the cai , noting that follows chi - squared distribution with degrees of freedom and is independent of .thus , it suffices to show that the cai is evaluated as ( [ eqn : caiapp ] ) for the underspecified case . from ( b.4 ) in , we can evaluate ] .let .then , we can rewrite in as next , we can rewrite in as then , we obtain moreover , it follows that thus , ] is expanded as = r_3(\bta _ * ) + b_1(\bta _ * ) + o(n^{-1}),\ ] ] where is given as ( [ eqn : b1 ] ) . because , it follows that , which shows that = r_3 + o(n^{-1}).\ ] ] in the same way , we obtain = e[r_4(\btat ) ] = r_4(\bta _ * ) + o(n^{-1}) ] is expanded as = r_3(\bta _ * ) + b_1(\bta _ * ) + b_2(\bta _ * ) + o(n^{-2}) ] and = b_2(\bta _ * ) + o(n^{-2}) ] . akaike , h. ( 1973 ) .information theory and an extension of the maximum likelihood principle . in _2nd international symposium on information theory _ , ( petrov , b.n . and csaki . , f. , eds . ) , 267281 , akademia kiado , budapest . battese , g.e . , harter , r.m . , and fuller , w.a .( 1988 ) . an error - components model for prediction of county crop areas using survey and satellite data . _ journal of the american statistical association _ , * 83 * , 2836 . saefken , b. , kneib , t. , van waveren , c.s . , and greven , s. ( 2014 ) . a unifying approach to the estimation of the conditional akaike information in generalized linear mixed models ._ electronic journal of statistics _ ,* 8 * , 201225 .yu , d. , zhang , x. , and yau , k.k.w .information based model selection criteria for generalized linear mixed models with unknown variance component parameters . _ journal of multivariate analysis _ , * 116 * , 245262 .
in this study, we consider the problem of selecting explanatory variables of fixed effects in linear mixed models under covariate shift, that is, when the values of the covariates in the predictive model differ from those in the observed model. we construct a variable selection criterion based on the conditional akaike information introduced by . we focus especially on covariate shift in small area prediction and demonstrate the usefulness of the proposed criterion. in addition, the numerical performance of the criterion is investigated through simulations, one of which is a design-based simulation using a real dataset of land prices. keywords: akaike information criterion; conditional aic; covariate shift; linear mixed model; small area estimation; variable selection.
in the era of time - domain survey astronomy , dedicated telescopes scan the sky every night and strategically revisit the same area several times .the raw data are images , but surveys commonly provide , not only image data , but also _ catalogs _ , summaries of the image data that aim to enable a wide variety of studies without requiring users to analyze raw or processed image data. catalogs typically report object properties , based on algorithms that detect sources in images with a measure of statistical significance above some threshold , chosen so that the resulting catalog is likely to be highly pure ( i.e. , with few or no spurious sources ) .the question we address in this paper is how to combine information from a sequence of independent observations to maximize the ability to detect faint objects at or near a chosen detection threshold , ameliorating the data explosion due to false detections from a lower threshold that would be required by a suboptimal method .focusing on the faint objects that typically dominate the collected data is an important and timely problem for a number of ongoing surveys and vital for planning the next - generation data processing pipelines .there are two different ways one can approach the problem .a traditional , resource - intensive approach is to wait until all observations are completed , performing detection by stacking the multi - epoch image data ( with potential complications related to registration , resampling , and point spread function matching ) .an optimal procedure for threshold - based detection with image stacks was introduced by .once a master object catalog is produced from the stacked images , time series of source measurements are created by forced photometry at the master catalog locations .an alternative ( non - exclusive ) approach is to perform source detection independently for each observation , producing a catalog of candidate sources at each epoch . the detection threshold may be different for each epoch .interim object catalogs may be produced by analysis of the series of overlapping source detections potentially associated with a single object using any available data ; a final catalog would be built using the catalogs from all epochs .of course , a catalog based on data from many epochs should be able to include many dim sources that escape confident detection in single - epoch or few - epoch catalogs . to enable construction of a deep multi - epoch catalog ,the single - epoch catalogs must report information for candidate sources with relatively low statistical significance ; i.e. 
, the single - epoch catalogs must have reduced purity .if we set the single - epoch threshold too high , there will be too few detections ; we will not have well - sampled time series for dim sources , and the final catalog will be too small .if , on the other hand , we set the threshold too low , the single - epoch catalogs will be overwhelmed with ( mostly false ) detections that were seen only once , requiring wasteful expenditure of storage and computing resources for constructing multi - epoch catalogs .an optimal threshold might preserve the size and quality of the final catalog , while enabling users to build interim catalogs on - the - fly , potentially tailored to specific , evolving needs .[ tamas ] we here address the second alternative , considering how best to accumulate evidence from possibly marginal detections while the observations are in progress , to prune spurious source detections but keep the sources associated with genuine objects .the study presented here is exploratory , to establish the basic ideas and provide initial metrics for studying the feasibility of the incremental approach . to make the analysis analytically tractable and the results easy to interpret, we restrict ourselves to an idealized setting ; we will present a more general and formal treatment in a subsequent paper .we adopt the terminology of lsst and other time - domain synoptic surveys , using _ source _ to refer to single - epoch detection and measurement results , and _ object _ to refer to a unique underlying physical system ( e.g. , a star or galaxy ) that may be associated with one or more sources .( here we limit ourselves to objects that would appear as a single source . ) for simplicity , we consider observations in a single band unless stated otherwise .consider an object with constant flux and direction ( a unit - norm vector on the sky ) . at each epoch ,analysis of the image data corresponding to a small patch of sky of solid angle produces a _ source likelihood function _( slf ) for the basic observables , flux and direction , of a candidate source in the patch .the slf is the probability for the data as a function of the ( uncertain ) values of the observables , where denotes various contextual assumptions influencing the analysis , e.g. , specification of the photometric model and information about instrumental and sky backgrounds .( since is common to all subsequent probabilities , we henceforth consider it as implicitly present . 
) for example , if photometry is done via point spread function ( psf ) fitting with weighted least squares , and if the noise level and backgrounds are known , then it may be a good approximation to take $ ] , where is the familiar sum of squared weighted residuals as a function of the source flux and direction .we consider a catalog at a given epoch to report summaries of the likelihood functions for candidate sources that have met some detection criteria .the most commonly reported summaries are best - fit fluxes ( or magnitudes ) with a quantification of flux uncertainty ( typically a standard error or the half - width of a 68% confidence region ) , and , separately , best - fit sky coordinates with an uncertainty for each coordinate.we here take these summaries to correspond to a factored approximation of the source likelihood function , where the epoch - specific flux factor , , is a gaussian with mode ( the catalog flux estimate at epoch ) and standard deviation , and the direction factor , , is an azimuthally symmetric bivariate gaussian with mode , and standard deviation . , because direction is a two - dimensional quantity . if is the single - coordinate standard deviation , the angular radius of a 68.3% ( `` '' ) confidence region or flat - prior credible region is . ] this may be a rather crude approximation ; we will address it further elsewhere , here merely noting that it is implicitly adopted for most survey catalogs . for simplicity , we take the flux factors to have the same standard deviation at all epochs , . we adopt a simple source detection criterion : a candidate source with a flux likelihood mode larger than a single - epoch threshold value , , is deemed a detection .the probability for detection in a single - epoch catalog is the probability that source with true flux will produce a single - epoch measurement that falls above the threshold .this probability is just the integral of the gaussian flux likelihood function above the threshold , which we denote by where is the complementary error function .for comparison , consider detection probabilities in the case of stacked exposures from observations .we assume that the objects are stationary and have a constant flux , and that the dominant source of noise is still the sky , so the relative noise is reduced by after stacking . 
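since the complementary-error-function display is garbled in this copy, a small numerical sketch may be clearer; it assumes the standard gaussian exceedance form, with fluxes and thresholds measured in units of the single-epoch uncertainty:

```python
import numpy as np
from scipy.special import erfc

def p_detect(flux, thresh, sigma=1.0):
    """P(measured flux > thresh) for a Gaussian flux likelihood centred
    on the true flux with standard deviation sigma."""
    return 0.5 * erfc((thresh - flux) / (np.sqrt(2.0) * sigma))

# single epoch at a 3-sigma threshold versus a stack of n = 9 epochs,
# where the effective noise shrinks by a factor sqrt(n)
n = 9
flux = np.linspace(0.0, 6.0, 61)
single = p_detect(flux, thresh=3.0)
stacked = p_detect(flux, thresh=3.0, sigma=1.0 / np.sqrt(n))
```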
for a stacked exposure flux threshold ,the probability for detection is figure [ fig:1 ] displays the detection probability as a function of true object flux for single - epoch and stacked data , for various choices of the single - epoch and stacked thresholds .the dotted green lines represent the single - exposure situation as a function of the true flux in units for detection thresholds of 2 , 3 , 4 , and 5 .similarly the solid yellow lines correspond to the stacked detections with exposures .consider two curves corresponding to the same threshold , so .the probability for detection at is 50% for both a single exposure and stacked exposures .but the curve for stacked exposures is much steeper , with a higher probability for detecting sources brighter than , and a lower probability for detecting sources dimmer than .that is , when constructed with a common threshold , the catalog built from stacked data will be more complete above threshold , and will more effectively exclude sources with true flux below threshold .faint sources will not always be detected .the probability for making detections among observations follows the binomial distribution , giving the multi - epoch detection probability , an interesting quantity is the probability that an object would lead to source detections in or more observations .this is simply the sum ( this can be expressed in terms of the incomplete beta function ) . in figure [ fig:2 ]we plot these probabilities as a function of ( again in units ) for observations . from left to right , the solid red curves show the probability for detecting an object of given flux in exactly 1 , 2 , etc ., up to 9 observations . similarly the dashed blue curves correspond to cases ( 1 or more ) , , and so on .( note that the functions coincide for the case . )figure [ fig:3 ] compares detection probability curves for the stacked exposure case ( solid yellow curves , as in figure [ fig:1 ] ) and the multi - epoch , detection case ( dashed blue curves , as in figure [ fig:2 ] ) . for a particular stacked exposure case ,we see there is a multi - epoch case whose detection probability curve displays very similar performance . for example , the stacked exposure curve is very similar to the multi - epoch detection case .this indicates that collecting sources with detections from _single - epoch _ catalogs is nearly equivalent in terms of catalog completeness and purity to producing a separate , new stacked exposure catalog . analyzing the single - epoch catalogshas a number of advantages .it can be implemented in an incremental fashion that follows the schedule of the survey , and the time series data are readily available at a given time ; there is no need to go back to old images and to performed forced photometry at locations that are revealed only in the final stack .the previous calculations addressed detectability of a source of known true flux , . in real - life scenarios ,the problem is quite the opposite we are presented with the observations and would like to understand the properties of the sources . 
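the binomial bookkeeping behind the multi-epoch curves of figures 2 and 3 is short enough to write out; `eps` below stands for the per-epoch detection probability of the previous sketch:

```python
from math import comb

def p_exactly_k(n, k, eps):
    """Probability of exactly k detections in n independent epochs."""
    return comb(n, k) * eps**k * (1.0 - eps)**(n - k)

def p_at_least_k(n, k, eps):
    """Probability of k or more detections (the dashed curves)."""
    return sum(p_exactly_k(n, j, eps) for j in range(k, n + 1))

eps = 0.31                      # e.g. a dim source under a low threshold
print(p_at_least_k(9, 2, eps))  # chance of >= 2 detections in 9 epochs
```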
in this context ,our focus is on how one can reliably distinguish noise peaks from real sources .it is important to emphasize that we have more information than just the fact that a source has been detected ; we also have flux measurements , at multiple epochs .our approach is motivated by bayesian hypothesis testing , where the strength of evidence for presence of a source is quantified by the posterior probability for the source - present hypothesis , or equivalently , by the posterior odds in favor of a source being present vs. being absent ( the odds is the ratio of probabilities for the rival hypotheses ) .the posterior odds is the product of prior odds and the data - dependent _ bayes factor_. the prior odds depends on population properties ; it may be specified a priori when there is sufficient knowledge of the population under study , or learned adaptively by using hierarchical bayesian methods ( for examples of this in the related context of cross - identification , see ) . herewe focus on the bayes factor ; we will address hierarchical modeling in a follow - up paper .the bayes factor is the ratio of marginal likelihoods for the competing hypotheses , one that claims that the sources are associated with a real object , and its complement that assumes there is just noise : each marginal likelihood , , is the integral , with respect to all free parameters for the hypothesis , of the product of the likelihood function and the prior probability density for the parameters .let us now assume that out of observations , we measure detections with measured fluxes .we consider the two competing hypotheses separately .let denote the set of indices for epochs with detections , and denote the set of indices for epochs with nondetections : for a candidate object with source detections among catalogs , the likelihood for a candidate true flux is where is the probability of not detecting an object with true flux , which happens times , and is the flux likelihood function defined above ( gaussians with means equal to ) .the marginal likelihood for the real - object hypothesis is obtained by averaging over all possible true flux values , . for an object that is a member of a population with known flux probability density ,the prior probability for , used for the averaging in the marginal likelihood , is , so that which is a one - dimensional integral that can be analytically or numerically quickly evaluated .( when the population distribution is not known a priori , it may be estimated via joint analysis of the catalog data for many candidate objects , within a hierarchical model , a significant complication that we will elaborate on elsewhere . )the alternative hypothesis is that the detections are simply random noise peaks in the image .the noise hypothesis marginal likelihood , , is the probability for the catalog data presuming no real object is present .for epochs with a candidate source reported in the catalog , the datum is the flux measurement , , and the relevant factor in the marginal likelihood is , the _ noise peak distribution _ , evaluated at .this distribution will depend on the noise statistics for each catalog . 
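the object-hypothesis marginal likelihood described above is a one-dimensional integral over the true flux, and a simple grid version suffices to evaluate it; the grid limits and the prior density are placeholders in this sketch:

```python
import numpy as np
from scipy.special import erfc

def marginal_object(f_hats, n_nondet, sigma, thresh, prior_pdf,
                    f_grid=None):
    """Integral over true flux F of prior(F) times the product of
    Gaussian flux likelihoods at the detected fluxes f_hats and the
    non-detection probability raised to the number of missed epochs."""
    if f_grid is None:
        f_grid = np.linspace(0.0, 10.0, 2001)
    eps = 0.5 * erfc((thresh - f_grid) / (np.sqrt(2.0) * sigma))
    integrand = prior_pdf(f_grid) * (1.0 - eps) ** n_nondet
    for f in f_hats:
        integrand *= np.exp(-0.5 * ((f - f_grid) / sigma) ** 2) \
                     / (np.sqrt(2.0 * np.pi) * sigma)
    return np.trapz(integrand, f_grid)
```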
for epochs with no reported detection , we instead know only that , so the relevant factor is the fraction of _ missed _ noise peaks , , not true flux ; in the gaussian regime assumed here the measured value may be negative , albeit with small probability .the estimated flux would be constrained to be positive via the prior density , , which would multiply the flux likelihood when computing posterior flux estimates . ] the probability for a false detection is then . to compute these quantities comprising , we need to know the noise peak distribution , .this distribution is not trivial to specify ; it will depend both on the noise sources , and on the source detection algorithm .typically , a source finder performs a scan , identifying local peaks of the measured fluxes smoothed with a kernel , e.g. , corresponding to a specified point source aperture . under the noise hypothesis, the source finder will be finding peaks of a smooth random field .the locations and amplitudes of the peaks will form a point process , whose statistical properties can be analytically calculated .the most important consequence is that even though the underlying noise at the pixel level may be independent and gaussian , the source finder output will correspond to sampling from a point process with a more complicated distribution of fluxes . in particular , although the pixel - level noise distributions are symmetric ( about the mean background ) , the distribution for ( falsely ) detected fluxes is skewed toward positive values .the relevant calculation is presented in the appendix .figure [ fig : surface ] shows the surface density of noise peaks as a function of the detection statistic ( in units ) , in the scenario when the sky noise is spatially independent and gaussian .the surface density is in units of objects per , where is the width of the point spread function ( see appendix ) .the noise peak distribution is the normalized version of this function .the surface density has a mode at and its shape is well approximated with a gaussian with standard deviation for all positive values of .the shaded ( magenta ) area highlights the excess density over the gaussian at negative values .as noted above , we obtain the probability for detecting a noise peak , , by integrating above the flux threshold .figure [ fig : frac]a shows the results as a function of the flux threshold in units ( left ) , and on a scale corresponding to an lsst - like magnitude ( right ; see [ sec : disc ] for a description of the magnitude scale ) .we see that the fraction of noise peaks above threshold is about 62% at 1 , dropping quickly to about a few percent at 3 , and becoming negligible at 5 .based on just this figure , it is tempting to set a high detection threshold to reject such `` ghost peaks '' and keep the catalog of detections nearly pure ; but that would mean we lose the opportunity to recover the numerous really faint sources .our multi - epoch approach suggests a different strategy : instead of seeking to make the catalogs for _ each _ epoch pure , we can adopt a lower single - epoch threshold , relying on the fusion of data across epochs to weed out ghosts .the marginal likelihood and bayes factor computations accomplish this data fusion .the marginal likelihood for the noise hypothesis is a product of the terms for the detections and non - detections : we now have all the ingredients for computing the bayes factor of eq .[ eq : bfac ] , providing an objective measure of how much the data prefer a real - source origin to a 
noise peak origin .so far we have only used the flux information in the data .genuine sources should have both consistent fluxes and consistent directions across all epochs . in practice , due to the noise and astrometric errors, the detections of the same object will shift in each exposure , thus the resulting catalogs have to be cross - matched . usinga probabilistic method can be to our direct benefit here , enabling straightforward combination of the flux and direction information .the detections from a real source are all connected , they are just displaced by a random astrometric error ; but noise peaks ( ghosts ) will be independent of each other and their associations can only be by chance . as we are working under the approximation that the flux and sky position estimates are independent ( see eq .[ eq : eplike ] ) , the bayes factor using both the photometric and astrometric information factors , the astrometric cross - match bayes factor , , has been derived in ( see eq .( 17 ) there , and eq . ( 19 ) for the tangent plane gaussian limit that holds for high - precision astrometry ) .that work also discusses generalizations that account for proper motion and other complications . in the following section we assess the discriminative power of multi - epoch source detection by applying it to simulated galaxies and noise peaks , both omitting and including the astrometric datawe here describe simulations that demonstrate the detection capability of our multi - epoch approach in a setting with known ground truth .the simulation parameters were chosen to produce data similar to that provided by modern large - scale optical surveys .we assume that galaxies are brighter than 28 magnitudes and that the 5 detection limit is 24 magnitudes , corresponding roughly to parameters of lsst photometry .panel ( b ) of figure [ fig : frac ] shows the noise peak detection probability as a function of magnitude based on these parameters , in contrast to the dimensionless presentation in panel ( a ) . to compute the marginal likelihood for the source - present hypothesis, we must specify a prior for the source flux , .here we use a standard faint - galaxy number counts law , with the number counts following the empirical formula of ; see .that approximately translates to the properly normalized population distribution of where is the limiting flux that corresponds to the previously defined magnitude limit .we generate sets of random detections for 20,000 galaxies with true fluxes between 28 and 23 magnitudes by simply drawing values from a gaussian centered on the actual fluxes .we also generate 2,000 ghost detection values from by inverting its cumulative distribution ( computed numerically on a grid ) .the number of exposures is set to the previously used with a single - epoch flux threshold of just 1 , deep in the noise . in observations with our specified parameters, the number of ghost detections will greatly outnumber the galaxy detections with this low threshold .the numbers of galaxies and ghosts were chosen to enable display of the distributions of bayes factors for the two classes of detections ( noise and true ) .we first analyze the simulated data considering only the photometric information ( i.e. , ignoring the directional bayes factors ) . 
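the flux-only analysis just described pairs the object-hypothesis integral with the noise-hypothesis factors assembled above; a matching sketch, where `noise_pdf` is the normalized noise-peak flux distribution derived in the appendix and is treated here as a user-supplied function:

```python
import numpy as np

def marginal_noise(f_hats, n_nondet, noise_pdf, thresh, f_grid):
    """Detected fluxes are draws from the noise-peak law; each missed
    epoch contributes the below-threshold (missed-peak) fraction."""
    pdf = noise_pdf(f_grid)
    p_miss = np.trapz(np.where(f_grid < thresh, pdf, 0.0), f_grid)
    m0 = p_miss ** n_nondet
    for f in f_hats:
        m0 *= noise_pdf(f)
    return m0

# flux-only Bayes factor for one candidate object:
# B_photo = marginal_object(...) / marginal_noise(...)
```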
in figure [ fig : bf - photo ] the ( red ) points represent the resulting bayes factors for the real sources ( right of the double dashed vertical lines ) and the noise peaks ( on the left ) .superficially , the bayes factors may appear surprisingly large ; even for dim sources the bayes factors are often , often considered strong evidence in settings where the competing hypotheses are assigned prior odds of unity . but here , the prior odds for a genuine association vs. a noise peak match are extremely small , because chance associations are likely due to the high spatial density of galaxies . , , and discuss how to compute the prior odds in various settings .figure [ fig : bf - photo ] shows that , as one would expect , the weight of evidence is strong for the bright galaxies but weakens for the faint galaxies .the smallest bayes factors arise for galaxies with true magnitudes near 26.5 , which corresponds to the mode of the noise peak distribution , .perhaps surprisingly , sources dimmer than magnitude 26.5 can have larger bayes factors than those with magnitude 26.5 .this happens because peaks away from , i.e. , we do not expect noise peaks to have arbitrarily small measured fluxes ; the peak - finding process biases the noise peak distribution away from zero flux . for the weakest detectable sources , the most likely number of detections among the epochs is one .the flat top of the distribution at the faint end corresponds to very dim sources detected only once , very near threshold .the smaller bayes factors in that region of the plot correspond to unlikely larger numbers of detections near the threshold ; the discreteness in the number of detections produces a subtle banding in the distribution .we now consider the astrometric data , by itself . for simplicity , we assume a constant direction uncertainty of for all detections . ]we simulate the coordinates for the mock galaxies as follows . around the true direction of each object ,we randomly generate points from a 2d gaussian .this flat sky approximation is excellent in this regime ; for such tight scatters , the approximation error is below the limit of the numerical representation of double precision floating point numbers .the coordinates of noise peaks are generated homogeneously .the surface density of the ghosts is analytically calculated and its integral above the 1 detection threshold yields .a simple algorithm is to pick a large enough square , with area , and randomly draw the number of peaks from a poisson distribution with expectation value . out of these ghosts, we pick a number equal to the number of flux detections , with locations such that are closest to the center , where the simulated object is placed .figure [ fig : bf - astro ] shows the distribution of astrometric bayes factors , for the real ( mock ) and noise sources .note the larger span of the ( log ) bayes factor axis .banding due to discreteness in the number of detections is now clearly apparent among the true - object bayes factors ; the 9 levels correspond to the different number of detections with the lowest being 1 .comparing to figure [ fig : bf - photo ] , we see that directional cross - matching is a stronger discriminant between real and noise sources in the dim source regime .the photometric data grow in importance as sources grow brighter .the astrometric bayes factors are essentially constant vs. 
magnitude for a given number of detections among the 9 epochs .this is a consequence of our simplifying assumption of a constant direction uncertainty .as noted in footnote [ fn : const - sig ] , in real surveys the astrometric precision decreases with increasing magnitude ( decreasing flux ) in the weak - source regime ; this would lead to some decrease in the bayes factors with increasing magnitude .figure [ fig : bf - both ] shows the distribution of bayes factors , for the real and noise sources , now combining the photometric and astrometric factors . for the lowest band , corresponding to a single detection , is unity by definition ( no constraint coming from a single detection ) , producing the same bayes factor distribution as in the flux - only calculation . for multiple detections ,the bayes factors for the true sources are greatly enhanced by including astrometric information ( note that the ordinate is logarithmic ) ; just two detections produces quite strong evidence for the true - object hypothesis .the bayes factors for the noise peaks have moved to much lower values , due to the low likelihood of directional coincidences .this computation demonstrates that flux and astrometric catalog data , combined across epochs , can strongly distinguish real objects from spurious detections .we have not addressed what threshold bayes factor to use for producing a multi - epoch catalog , or , in bayesian terms , how to convert bayes factors into posterior probabilities for candidate objects . as noted above, when the object population density ( on the sky and in flux ) is known a priori , the calculation is straightforward ( e.g. , the prior odds will be proportional to the ratio of true object and noise peak sky densities ) .when these quantities are unknown , a possibly complicated hierarchical bayesian calculation can jointly estimate them and the object properties .when many objects are detected , an approximate approach , plugging in empirical estimates of the densities based on the data , is likely to suffice , as described in .this paper presents an exploratory study of a new , incremental approach to the analysis of multi - epoch survey data , based on fusion of single - epoch catalogs produced using a source detection algorithm with a modest or low threshold .although the single - epoch catalogs will include many noise sources ( they may even be dominated by them ) , we show that probabilistic fusion of the single - epoch data can produce interim or final multi - epoch catalogs with properties similar to those expected from catalogs based on image stacking .the approach is essentially a generalization of cross - matching , where object detection corresponds to identifying a set of candidate sources that match in both flux and direction across epochs .using a probabilistic approach directly provides the required quantities , enabling fusion of information both across epochs , and between flux and direction , by straightforward multiplication of the relevant probabilities .the final quantification of strength of evidence is via marginal likelihoods and bayes factors ; these can be used for final thresholding , or for producing posterior probabilities for source detections when population properties such as sky densities are known or can be accurately estimated ( perhaps as part of the catalog analysis ) . 
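referring back to the simulation setup of the previous section, the mock-catalog machinery reduces to a few lines; the epoch count, scatters, and grid below are illustrative stand-ins for the lsst-like values quoted in the text:

```python
import numpy as np

rng = np.random.default_rng(42)
n_epochs, sigma_f, sigma_pos = 9, 1.0, 0.1   # illustrative units

def mock_object(true_flux, true_xy=(0.0, 0.0)):
    """Noisy flux and direction measurements of one real source."""
    f = rng.normal(true_flux, sigma_f, n_epochs)
    xy = rng.normal(true_xy, sigma_pos, size=(n_epochs, 2))
    return f, xy

def mock_ghosts(noise_pdf, f_grid, density, half_width):
    """Ghosts: Poisson count, homogeneous positions, and fluxes drawn
    by numerically inverting the noise-peak cumulative distribution."""
    n = rng.poisson(density * (2.0 * half_width) ** 2)
    cdf = np.cumsum(noise_pdf(f_grid))
    cdf /= cdf[-1]
    f = np.interp(rng.random(n), cdf, f_grid)
    xy = rng.uniform(-half_width, half_width, size=(n, 2))
    return f, xy
```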
the bayes factor compares predictions of the observed data based on true - object and noise - peak hypotheses for the data , and thus requires knowledge of the distribution of noise peaks .we derive the spatial properties of noise peaks that commonly appear in catalogs .the flux - dependent surface density of ghosts is asymmetric in flux , skewed toward positive flux values .it can be accurately approximated by a shifted gaussian for most practical purposes .based on the bayes factor , sources with single - epoch measured fluxes over a threshold start to separate out from the noise peaks when data are combined across just a few epochs .the evidence for a source becomes very strong once the single - epoch fluxes exceed ( 24 mag ) .when considering only the flux measurements , the faintest sources are hard to distinguish from the noise peaks with measurements at just a few epochs ; but astrometric data ( celestial coordinate estimates ) greatly help to separate genuine and spurious detections . in general , the specificity and the selectivity of the proposed discriminator depends on a number of parameters , most of which we discuss as part of the simulated case study .the pixel size , the single - epoch detection threshold , the number of exposures , and the point - spread function all affect the frequency of noise peaks .the population distribution of source properties also directly impacts detectability of faint sources. a hierarchical generalization of the approach could learn important features of the population distribution as part of the analysis .we have treated only the case of detection of constant - flux sources .detecting variable and transient sources can be accommodated by introducing one or more time series models into the flux matching part of the algorithm .models that accurately describe particular classes of sources will produce optimal catalogs , but flexible models perhaps simple stochastic processes , or even histogram or other partion - based models , with appropriate priors on variability may suffice for producing general - purpose multi - epoch catalogs for studying variable sources .this is a potentially complicated generalization of our framework that we plan to explore in future work .of course , variable source detection using image stacks is also an open research problem ; our framework provides an alternative avenue to address it .our exploratory study made simplifying assumptions .a strong assumption was that source candidates are isolated enough that the image space can be partitioned into patches that have at most one candidate source .when source candidates are close to each other , the matching across epochs must account for multiple possible source association hypotheses .similar complications appear when considering classes of objects that may be comprised of multiple sources per epoch , e.g. , radio galaxies .in other work , we have developed techniques for directional cross - matching in contexts with multiple candidate associations , and with complex object structure and object motion .these methods can be extended to include flux matching criteria to generalize the multi - epoch detection framework described here .the strategy we have described is quite different from conventional approaches to producing survey catalogs . 
implementing it will raise new processing and database management challenges ; users of the resulting catalogs will need to think about catalogs in a different way .in particular , a low - threshold single - epoch catalog will contain many spurious sources ; with a low enough threshold , the spurious sources will greatly outnumber real sources .however , evidence mounts quickly as catalogs are merged .if interim catalogs are produced consecutively , cumulative culling of early single - epoch catalogs could reduce the storage burden for catalogs subsequent to the first catalog .such issues , and the generalizations described above , will be topics for future study .the authors gratefully acknowledge valuable and inspiring discussions with andy connolly and robert lupton on various aspects of the topic .this study was supported by the nsf via grants ast-1412566 and ast-1312903 , and the nasa via the awards nng16pj23c and stsci-49721 under nas5 - 26555 .adler , r. j. 1981 , the geometry of random fields , chichester : wiley , 1981 , bardeen , j. m. , bond , j. r. , kaiser , n. , & szalay , a. s. 1986 , , 304 , 15 bond , j. r. , & efstathiou , g. 1987 , , 226 , 655 budavri , t. , & szalay , a. s. 2008 , , 679 , 301 budavri , t. 2011 , , 736 , 155 budavri , t. 2012 , statistical challenges in modern astronomy v , 291302 budavri , t. , & szalay , a. s. 2014 , astronomical data analysis software and systems xxiii , 485 , 207 budavri , t. , & loredo , t. j. 2015 , annual review of statistics and its application , 2 , 113139 kaiser , n. 2004 , `` the likelihood of point sources in pixellated images '' , pan - starrs internal report , psdc-002 - 010-xx kessler , r. , bernstein , j. p. , cinabro , d. , et al .2009 , , 121 , 1028 loredo , t. j. 2013 , statistical challenges in modern astronomy v , 303308 lund , j. , & rudemo , m. 2000 , biometrika , 87 , 2 , pp.235 - 249 ( http://www.jstor.org/stable/2673461 ) kerekes , g. , budavri , t. , csabai , i. , connolly , a. j. , & szalay , a. s. 2010 , , 719 , 59 madau , p. , & thompson , c. 2000 , , 534 , 239 press , w. h. 1997 , unsolved problems in astrophysics , p.49 - 60 , arxiv : astro - ph/9604126 riess , a. g. , press , w. h. , & kirshner , r. p. 1995, , 438 , l17 szalay , a. s. , connolly , a. j. , & szokoly , g. p. 1999, , 117 , 68consider a two dimensional gauusian random field , with a known power spectrum .its gradient would be , and the second derivative tensor .we would like to find out the density of peaks of this field above a certain height .we will follow the procedure outlined in .we will expand the field and its gradient to second order around a peak at the position : where we already use the fact that the gradient of the field at a peak is zero , i.e. .provided that is non - singular at , we can express from the second equation : we can write a dirac delta that picks all extremal points of as .\ ] ] this expression turns a continous random field , defined at all points over our two - dimensional space into a discrete point process , that of the extremal points of the field , .\ ] ] in order to pick the peaks of the gaussian random field we will also need to have a negative definite . 
if we only want peaks of a certain height , we need to calculate the appropriate ensemble average of this density over the constrained range of the variables .we have six random variables , the field , the three components of the symmetric tensor , and the two components of the gradient .the correlations can be computed in a straight - forward manner , given the power spectrum of the field .the gradient is uncorrelated with both the field and the second derivatives , due to the parity of the fourier representation .let us denote the correlation matrix of the field and the hessian by , and that of the gradient as .furthermore , let us define the different -moments of the power spectrum characterizing the field as we can now explicitely write down the correlation matrix of and for , as with these we can write the multivariate gaussian distribution using the inverse of the correlation matrix as a product of two independent distributions before we proceed further , the second derivative tensor can be described more conveniently with the two eigenvalues and a rotation angle , , as follows : for simplicity let us introduce the dimensionless variables , the trace of the second derivative tensor , , and .the jacobian of the transformation from to is let us also introduce the dimensionless and the characteristic scale as the quadratic form containing in the exponent can be written with the new variables as the determinants of and are in these variables , the unconstrained probability distribution for becomes in order to properly handle the symmetries of the problem , we can assume that . then still any pair can be mapped onto itself by a 180 degree rotation , so the valid range of is . sincenone of the terms depend on , we can integrate over , resulting in \frac{\,dx\ dz}{2\pi \sqrt{1-\gamma^2 } } \left ( e^{-y^2}\,2y\ dy\right).\ ] ] the constraint maps onto .if we perform the integration over , and over , we get 1 , as we should , for the unconstrained probability for a general point . as we introduce the peak constraints, we need to first consider the impact on the gradient .the constrained probability distribution is \frac{d^2 { \mathbf{h } } } { 2\pi|h|^{1/2}}.\ ] ] after integrating over we get the extremum weight this will multiply the unconstrained probability for the density of extremal points of the random field , \exp\left[-\frac{x^2 + 2\gamma x z + z^2}{2(1-\gamma^2)}\right ] \frac{\,dx\ dz}{2\pi \sqrt{1-\gamma^2}}.\ ] ] for a peak both eigenvalues of the second derivative tensor must be negative. in the rotated coordinates , this means that , and .we can easily integrate over the allowed range of next , yielding we are left with \frac{\,dx\ dz}{2\pi \sqrt{1-\gamma^2}}.\ ] ] let us introduce the function as .\ ] ] evaluating the integral over in mathematica , we obtain b(s,1 ) + b(s , 3- 2\gamma^2 ) , \bigg ] , \label{eq : npk}\ ] ] with we get the full surface density of noise peaks , , by integrating the conditional surface density in eq .[ eq : npk ] over all peak heights : finally , we need to evaluate the shape parameter .assume that the window function applied to the random field is a gaussian with a scale , its fourier transform is also a gaussian , we model the sky noise as a white noise with a flat spectrum .thus the correlations in the measured random field are determined by the window function , i.e. 
, with this power spectrum it is straightforward to compute the scale and the shape parameters as with this choice of psf and , we get in eq .[ eq : npk ] .now we are in a position to compute the probability that a noise peak is within a radius of our point of interest located at the origin .the spatial distribution of the noise peaks is described by a poisson process with the surface density .the cumulative probability that the peak is within a radius is given by the well - known expression .\ ] ] the differential probability is given by its derivative with respect to to , as \ ] ] both of these probabilities are shown on fig .[ fig : pk - shift ] .the differential probability starts off around the origin scaling with , due to the available area ( phase space for configuration ) .this in turn causes the cumulative function to rise as , resulting in a very small probability ( ) that a noise peak will appear within a psf scale .thus we can safely ignore noise peaks as a major contributor to false detections at a significant level .
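the analytic peak density of the appendix can be checked numerically: smooth white noise with a gaussian window, pick local maxima, and histogram their heights. the grid size and smoothing scale below are arbitrary choices for the sketch:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter

rng = np.random.default_rng(0)
field = gaussian_filter(rng.normal(size=(2048, 2048)), sigma=3.0)
field /= field.std()                       # peak heights in sigma units

peaks = field == maximum_filter(field, size=3)   # 3x3 local maxima
heights = field[peaks]

print(heights.size / field.size)           # peak surface density per pixel
print(np.mean(heights), heights.std())     # skewed toward positive values
```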
observational astronomy in the time-domain era faces several new challenges. one of them is the efficient use of observations obtained at multiple epochs. the work presented here addresses faint object detection with multi-epoch data, and describes an incremental strategy for separating real objects from artifacts in ongoing surveys, in situations where the single-epoch data are summaries of the full image data, such as single-epoch catalogs of flux and direction estimates (with uncertainties) for candidate sources. the basic idea is to produce low-threshold single-epoch catalogs and to use a probabilistic approach to accumulate catalog information across epochs; this is in contrast to more conventional strategies based on co-added or stacked image data across all epochs. we adopt a bayesian approach, addressing object detection by calculating the marginal likelihoods for hypotheses asserting that there is no object, or one object, in a small image patch containing at most one cataloged source at each epoch. the object-present hypothesis interprets the sources in a patch at different epochs as arising from a genuine object; the no-object (noise) hypothesis interprets candidate sources as spurious, arising from noise peaks. we study the detection probability for constant-flux objects in a simplified gaussian noise setting, comparing results based on single exposures and stacked exposures to results based on a series of single-epoch catalog summaries. computing the detection probability based on catalog data amounts to generalized cross-matching: it is the product of a factor accounting for matching of the estimated fluxes of candidate sources, and a factor accounting for matching of their estimated directions (i.e., directional cross-matching across catalogs). we find that probabilistic fusion of multi-epoch catalog information can detect sources with only a modest sacrifice in sensitivity and selectivity compared to stacking. the probabilistic cross-matching framework underlying our approach plays an important role in maintaining detection sensitivity, and points toward generalizations that could accommodate variability and complex object structure.
the human heart does not beat at a constant rate , even for a subject in repose .rather , there is strong variability of the heart rate .the complexity of this heart rate variability ( hrv ) presents a major challenge that has attracted continuing attention .many of the explanations proposed are by analogy with paradigms used in physics to describe complexity , including : deterministic chaos ; the statistical theory of turbulence ; fractal brownian motion ; and critical phenomenon .they have led to new approaches and time - series analysis techniques including a variety of entropies , dimensional analysis , the correlation of local energy fluctuations on different scales , the analysis of long range correlation , spectral scaling , the multiscale time asymmetry index , multifractal cascades .all these measures allow one to describe hrv as a non - stationary , irregular , complex fluctuating process .depending on the technique in use there has been a very wide range of conclusions about the regulatory mechanism of heart rate , ranging from a stochastic feedback configuration to the physical system being in a critical state .hrv can also be considered in terms of the interactions between coupled oscillators of widely differing frequencies .although we now have this huge variety of tools and approaches for the analysis of hrv , only the last - mentioned has enabled us to understand the origins of some of the time - scales embedded in hrv . each time - scale ( frequency ) in the coupled oscillator model is represented by a separate self - oscillator that interacts with the others , and each of the oscillators represents a particular physiological function .the frequency variations in hrv can therefore be attributed to the effects of respiration ( .25hz ) , and myogenic ( .1hz ) , neurogenic ( .03hz ) and endothelial ( .01hz ) activity .hrv also contains a fast ( short time - scale ) noisy component which forms a noise background in the hrv spectrum and can be modelled as a white noise source .its properties are currently an open question , and one that is important for both understanding and modelling hrv .a practical difficulty in experimental investigations is the presence of a strong perturbation , respiration , that occurs continuously and exerts a particularly strong influence in modulating the heart rate .this modulation involves several mechanisms : via mechanical movements of the chest , chemo - reflex , and couplings to neuronal control centres .spontaneous respiration gives rise to a complex non - periodic signal , and this complexity is inevitably reflected in hrv .so , in order to understand the properties of the fast noise , one would ideally remove the respiratory perturbation and consider the residual hrv which would then reflect fluctuations of the intrinsic dynamics of the heart control system .-intervals for ( a ) normal ( spontaneous ) and ( b ) intermittent respirations .respiration signals ( arbitrary units ) are shown by dashed lines . , width=529 ] ( a ) an ecg signal and ( b ) the corresponding hrv ( intervals ) signal . in ( a )the r - peaks are marked by ; the ecg signal is shown in arbitrary units ., width=453 ] ( a ) an ecg signal and ( b ) the corresponding hrv ( intervals ) signal . 
in ( a ) the r - peaks are marked by ; the ecg signal is shown in arbitrary units ., width=453 ] consideration of the intrinsic activity of the heart control system on short - time scales is important for general understanding of the function of the cardio - vascular system , leads potentially to diagnostics of causes of arrhythmia involving problems with neuronal control , and can be a benchmark for modeling hrv . in this paper we present the results of an experimental study of the intrinsic dynamics of the heart regulatory system and discuss these results in the context of modelling the fast noise component .a number of open problems are identified .we analyse the dynamics of the control system in the absence of explicit perturbations by temporarily _ removing _ the continuing perturbations caused by respiration [ figure [ fig1](b ) ] . to do so , we perform experiments involving modest breath - holding ( apna ) intervals . note that during long breath - holding the normal state of the cardiovascular system is significantly modified .the idea of the experiments came from the observation that spontaneous apna occurs during repose .apna intervals of up to 30 sec were used , enabling us to avoid either anoxia or hyper - ventilation .respiration - free intervals were produced by _ intermittent respiration _, involving an alternation between several normal ( non - deep ) breaths and then a breath - hold following the last expiration , as indicated by the dashed line in figure [ fig1](b ) .the respiratory amplitude was kept close to normal to avoid hyper - ventilation , and there were relatively long intervals of apna when the heart dynamics was not perturbed by respiration .it is precisely these intervals that are our main object of analysis .the durations of both respiration and apna intervals were fixed at 30 sec .measurements were carried out for 5 relaxed supine subjects , and they were approved by research ethics committee of lancaster university .note that the measurements presented have been selected from a larger number of measurements to form a set recorded under almost identical conditions of time and duration , with the subjects avoiding either coffee or a meal for at least 2 hours beforehand .they were 4 males and 1 female , aged in the range 2936 years , non - smokers , without any history of heart disease .we stress that the aim of the current investigation was exploratory : to study typical behaviour of the internal regulatory system ; we have not performed a large - scale trial of the kind widely used in medicine when a large number of subjects is necessary because of the need for subsequent statistical analysis of the data .the electrocardiogram ( ecg ) and respiration signals were recorded over 45 - 60 minutes .the ecg signals were transformed to hrv by using the marked events method for extraction of the -intervals which are shown in figure [ fig2 ] .figure [ fig1 ] shows -intervals found for the different types of respiration .it is evident that respiration changes the heart rhythm very significantly .immediately after exhalation ( b ) , there is an apna interval where the heart rhythm fluctuates around some level .these fluctuations correspond to the intrinsic dynamics of the heart control system .it is clear from ( a ) that heart rate is _ continuously _ perturbed during normal respiration , whereas in ( b ) one can distinguish an interval of intrinsic dynamics corresponding to apna .thus , the interval of apna is characterized by the time series ; here labels the 
-interval .finally , we form a set for analyses by considering the set as realizations of a random walk and analyzing their dynamical properties as such . to reveal dynamics additional to -intervals , the differential increments were analyzed .the differences between -intervals and their increments are illustrated in figure [ fig3 ] .each apna time - series exhibits a trend that is describable by the slope of a linear function , where is a heart beat number and marks apna interval .the trend can be characterized by the distribution of slopes shown in figure [ fig4 ] ( a ) . for all measurements the distributions are broad andtheir mean values differ from zero .thus the non - stationary nature of hrv on short time - scales is clearly apparent .note , that the distributions for the increments are significantly narrower [ figure [ fig4 ] ( b ) ] and that they are very well fitted to a normal distribution ; however , the mean values of the slopes differ from zero . ( a )-intervals and ( b ) increments corresponding to apna intervals are shown . for convenience of presentation , the difference between a given value and the first value of each apna interval is drawn in each case : and . , width=529 ] distributions of trend slopes of the sets ( a ) and ( b ) ., width=529 ] because the dynamics of -intervals is evidently non - stationary , we have applied detrended fluctuation analysis ( dfa ) for estimation of the scaling exponents for the apna sets . in doing so ,we adapted the dfa method for short time - series and used non - overlapped windows ( see appendix for details ) . because the time - series were short , time windows of length 415 -intervals were used to calculate . for all measured subjects ,this procedure yielded values of lying within the range , with a mean value of .if -intervals in the sets are replaced by realizations of brown noise ( the integral of white noise ) keeping same lengths of apna intervals , then the calculation gives .additionally , a surrogate analysis was performed for each subject by random shuffling of the time indices of -intervals , to confirm the importance of time - ordering of the -intervals . for each realization ( set ) , 100 surrogate sets were generated , 100 values of were obtained , and the mean value was calculated .values of for the surrogate sets lie in the same limits as those for the original sets , but with a small bias between , calculated using original sets , and ( see the appendix for values of and ) .it means that one can see a correlation between -intervals , but that it is weak . summarizing the dfa results, we can claim that the scaling exponent is similar to that for free diffusion of a brownian particle , but there is nonetheless some correlation between the -intervals .we also applied aggregation analysis in a similar manner and arrived at qualitatively the same conclusion . 
note that in the contrast to the initial idea of the dfa and aggregation analyses , which were used for revealing long - range correlations in time series , we have used these approaches to analyse the diffusion velocity because they can cope with trends .long - range correlations can not be revealed in the described measurements .examples of the autocorrelation function ( a ) with and ( b ) without an oscillatory component .the crosses indicate calculated on the basis of the increments .the solid line corresponds to the approximating curve ., width=529 ] to estimate the strength of the correlation , _ stationary _ time - series of the increments were considered .the autocorrelation function was calculated here ; the brackets denote calculation of the mean value ; and correspond to the heart beat number and apna interval respectively , , is the number of increments in the apna ; is the total number of apna intervals . figure [ fig5 ] presents examples of autocorrelation functions .one of them has pronounced oscillations .an approximation of by the function demonstrates that oscillations occur with frequency near hz , presumably corresponding to myogenic processes or ( perhaps equivalently ) to the mayer wave associated with blood pressure feedback .further investigations via the parametrical spectral analysis for each apna interval show that these oscillations are of an on - off nature , i.e. observed for parts of the apna intervals , and not in all of the measurements as can be seen in figure [ fig5 ] ( b ) .examples of apna intervals with and without oscillations are shown in figure [ fig5add ] .when an oscillatory component is present then its contribution to is much weaker than the contribution of the noisy component .the latter is characterized by a very short memory as demonstrated by fast decay of .examples of apna intervals with ( a ) and without ( b ) oscillation of hrv .the circles correspond to the values of the increments and the solid lines connecting points are guides to the eye .the dashed lines in figure ( a ) are added to reveal oscillations .the middle and upper time - series are shifted by 0.1 and 0.2 ( sec ) accordingly ., width=529 ] the properties of can also be characterized by the probability density function shown in figure [ fig6 ] ( a ) .figure ( b ) shows the probability density function of rr - intervals for comparison .following , the -stable distribution has been widely used to fit the distribution of increments , and _ strongly _ non - gaussian distributions were observed .we perform a similar fitting applying special software . since the distributions are almost symmetrical , our attention was concentrated on the tails , which were characterized by a stability index ] .another possible explanation could be that the control system is tracing the base rhythm but the short - time fluctuations have a non - stationary character .it is natural to expect that there could be other possible explanations , and additional investigations are needed to reach an understanding of the diffusive dynamics on short - time scales . 
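the two stationary-increment diagnostics of this section, the sample autocorrelation and a stable-law fit, can be sketched as follows; scipy's `levy_stable.fit` is slow and the text reports using dedicated stable-fitting software, so the call here is indicative only:

```python
import numpy as np
from scipy.stats import levy_stable

def autocorr(x, max_lag=20):
    """Biased sample autocorrelation of a 1-d series."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    c0 = np.dot(x, x) / len(x)
    return np.array([np.dot(x[:len(x) - k], x[k:]) / (len(x) * c0)
                     for k in range(max_lag + 1)])

# dI = np.diff(rr_apnea)              # increments within one apnea run
# acf = autocorr(dI)                  # fast decay, possible 0.1 Hz ripple
# alpha, beta, loc, scale = levy_stable.fit(dI)   # alpha: stability index
```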
in section 2it is suggested that we should consider the non - stationarity and diffusive dynamics of -intervals within the framework of a stochastic process with independent increments .it allows one to consider -intervals as realizations of the so - called auto - regressive process that is widely used in time - series analysis .it means that the direct spectral estimation of -intervals , currently used as one of the basic techniques , is not applicable here and that one must use the theory of stochastic processes with stationary increments for their spectral decomposition .if in the presence of respiration , the short - time stochastic component of hrv preserves non - stationarity then spectral estimation based on -intervals is not correct , and increments must be used instead .note , that the properties of short - time fluctuations in the presence of respiration are far from being completely understood .the theories of both stochastic processes with stationary increments and of auto - regressive analysis place some limitations on the analysed time - series .the first approach requires the existence of finite second - order momenta , whereas the second approach assumes uncorrelated statistics of increments .formally , however , non - gaussianity of the increments distribution means that the second - order momenta do not exist , but non - gaussianity can still be incorporated into the auto - regressive description . and _ vice versa _ , the presence of correlations in the increments dynamics requires a modification of the standard auto - regressive approach , and it is one that can be incorporated naturally into the general theory . in the current investigationwe ignore these issues .we calculate the auto - correlation function and use model ( [ spsi ] ) , because the finite length of the time - series guarantees the existence of the second - order momenta , and the simplicity of ( [ spsi ] ) means that the inclusion of the correlations is a trivial extension .our consideration has the formal character of time - series analysis because we do not incorporate any preliminary information about the possible dynamics of -intervals .the analysis is based on the use of a set of relatively short time - series , a fact that defines our choice of simple statistical measures .one can not exclude the possibility that the use of other approaches to such data might provide additional insight into hrv dynamics .for example , the fractional brownian motion approach and the theory of discrete non - stationary non - markov random process represent different paradigms , which are based on assumptions about the origin of the data .note , that despite a long history of developing the approaches and their applications , the approaches of fractional brownian motion and of stable random process are not standardized tools , whereas the approach of non - markov random process is not so popular .there is no definite recipe for choosing a set of measures which can uniquely specify ( or provide a good description of ) the properties of a renewal ( discrete time ) stochastic process .another way of attempting to understand the results is to try to reproduce the observed data properties from an appropriate model . in the context of our experiments ,the modelling should consist of a simulation of the electrical activity of sinoatrial node ( san ) where the heart beats are initiated . 
for modelling ,one option is to use a bottom - up approach , which is currently a very popular technique within the framework of the complexity paradigm .in fact , available san cellular models allow one to incorporate many details of physiological processes like the openings and closures of specific ion channels .however , despite the complexity of the models ( 40100 variables ) many important features are still missed .for example , the fundamentally stochastic dynamics of ion channels is represented by equations that are deterministic .heterogeneity of the san cellular locations and intercell communications are among other important open issues .an alternative option is the top - down approach using integrative phenomenological models .in contrast to detailed cell models , a toy model of the heart as a whole unit can be developed .it is known that an isolated heart , and a heart in the case of a brain - dead patient demonstrate nearly periodic behaviour .so , it is reasonable to assume that the observed hrv is induced by the neuronal heart control system , which is a part of the central nervous system .the control system includes a primary site for regulation located in the medulla , consisting of a set of neural networks with connections to the hypothalamus and the cortex .the control is realized via two branches of the nervous system : the parasympathetic ( vagal ) and the sympathetic branches .although many details of the control system are still missing , it is currently accepted that the vagal branch operates on faster time scales than the sympathetic one , and that each branch has a specific co - operative action on the heart rate and the dynamics of san cells .let us consider an integrate - and - fire ( if ) model as a model of a san cell in the leading pacemaker .these cells are responsible for initiating the activity of san cells and , consequently , that of the whole heart .the dynamics of the if model describes the membrane potential of the cell by the following equations here defines a slope of integration , is the threshold potential , is the resting ( hyperpolarization ) potential ; the time corresponds to the cell firing , and it is the difference between two successive firings that determines the instantaneous heart period or -interval , .it is known that increasing sympathetic activity with a combination of decreasing vagal activity leads to an increase in the heart period , and _vice versa_. direct stimulation of the sympathetic branch leads to an increase of the integration slope and a lowering of the threshold potential , whereas vagal activation has the opposite effects , and additionally , lowers the resting potential .thus , the neuronal activities can be taken into account as modulations of the parameters of the if model ( [ ifmodel ] ) . 
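a minimal sketch of this if dynamics follows; all parameter values are illustrative assumptions. the membrane potential integrates linearly with slope s from the resting potential v0 to the threshold vth and is reset on firing, so the instantaneous heart period is t = (vth - v0)/s, and modulating s (or vth, v0) modulates the period.

```python
import numpy as np

def if_periods(slopes, vth=1.0, v0=0.0):
    """heart periods T_i = (vth - v0)/s_i of the IF model, one per beat."""
    return (vth - v0) / np.asarray(slopes, dtype=float)

# constant drive -> strictly periodic beat, as for an isolated heart:
print(if_periods(np.full(5, 1.25)))        # five identical periods of 0.8

# a slowly drifting slope, mimicking modulation by the control system:
rng = np.random.default_rng(0)
s = 1.0 + np.clip(np.cumsum(rng.normal(0.0, 0.01, size=200)), -0.4, 0.4)
rr = if_periods(s)                          # non-stationary RR-like series
```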
for reproducing hrv during apna , therefore ,it is enough to present any of the parameters , or as a stochastic variable of the form ( [ spsi ] ) , for example , , where are random numbers having the stable distribution .however , the use of more realistic ( than if ) models with oscillatory dynamics , for example fitzhugh - nagumo or morris - lecar models , makes the reproduction of the experimental results a more difficult but interesting task .currently it is unclear whether it is possible to obtain a stable distribution of increments by consideration of the gaussian type of fluctuations alone , or whether one should use fluctuations characterizing by a stable distribution .this point demands further investigation .in summary , our experimental modification of the respiration process reveals that the intrinsic dynamics of the heart rate regulatory system exhibits stochastic features and can be viewed as the integrated action of many weakly interacting components . even on a short time scale( less then half a minute ) the heart rate is non - stationary and exhibits diffusive dynamics with superimposed intermittent .1 hz oscillations .the intrinsic dynamics can be described as a stochastic process with independent increments and can be understood within the framework of many - body dynamics as used in statistical physics .the large number of independent regulatory perturbations produce a noisy regulatory background , so that the dynamics of the regulatory rhythm is close to classical brownian motion .however there are indications of non - gaussianity of increments and weak but important correlations on short time - scales .the reproduction of these features , especially the non - gaussianity property , is an open problem even in simple toy models .these results are important both for understanding the general principles of regulation in biological systems , and for modeling cardiovascular dynamics .furthermore , the results presented may possibly lead to a new clinical classification of states of the cardiovascular system by analysing the intrinsic dynamics of the heart control system as suggested in .the research was supported by the engineering and physical sciences research council ( uk ) and by the wellcome trust .some details of the measurements and calculations are summarized in this section .the ecg was measured by standard limb ( einthoven ) leads and the respiration signal was measured by a thoracic strain gauge transducer .the signals were digitized by a 16-bit analog - to - digital converter with a sampling rate of 2 khz .the ecg and respiration signals were recorded over 45 - 60 minutes and time locations of -peaks in the ecg signals were defined and time intervals between two subsequent -peaks ( the so called -intervals ) are used to form hrv signal .respiration - free intervals were produced by the _intermittent _ respiration , involving an alternation between normal breaths and apna intervals .the durations of both normal breaths and apna intervals were fixed at 30 sec .the respiration signal was used to identify apna intervals .finally , the set of time - series of -interval was formed for each subject ; here labels the -intervals , and labels the interval of apna . 
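the preprocessing just described can be sketched as follows; the function and variable names are our assumptions, not the authors' code.

```python
import numpy as np

def rr_segments(r_peak_times, apnoea_windows):
    """one RR-interval series per 30 s apnoea interval."""
    segments = []
    for t0, t1 in apnoea_windows:               # windows found from respiration
        peaks = r_peak_times[(r_peak_times >= t0) & (r_peak_times <= t1)]
        if len(peaks) > 2:
            segments.append(np.diff(peaks))     # T_i = t_{i+1} - t_i
    return segments

def increment_sets(segments):
    """differential increments x_i = T_{i+1} - T_i for each apnoea interval."""
    return [np.diff(rr) for rr in segments]
```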
for each interval of apnoea , time series of the differential increments produced and they also form a set for each subject .the number of -intervals in each apna interval is different , depending on the heart rate of the subject .the total number of apna intervals also differ for each subject .the mean heart rate during apna intervals and the total number of intervals for each measured subject are presented in table [ t1 ] . subject & ( sec ) & & & & & & & + s1 & 1.01 & 45 & 1.39 & 1.47 & 1.86 & 0.81 & 0.17 & 1.83 + s2 & 0.77 & 46 & 1.46 & 1.44 & 1.83 & 0.21 & 0.09 & 1.95 + s3 & 1.10 & 47 & 1.43 & 1.53 & 1.96 & 1.01 & 0.22 & 1.79 + s4 & 0.75 & 47 & 1.58 & 1.60 & 1.91 & 0.15 & 0.08 & 1.90 + s5 & 0.91 & 60 & 1.42 & 1.48 & 1.82 & 0.28 & 0.13 & 1.86 + for the application of the dfa and aggregation analyses we adapted the approaches described in and , respectively , to treat the available sets of short time series .the dfa exponent was calculated in the following way .first , the initial set was transformed to another set by the following expression : where and is the number of -intervals for apnoea interval .for each length of time window a set of linear trends was calculated ( see for details ) , where , , is the floor function of .then a set of scaling function was calculated for each value of by use of the expression ^ 2,\ ] ] where .further the scaling functions were calculated as where is the number of apnoea intervals for the given subject , . finally , the scaling exponent was determined as a slope of the function \propto \beta \log ( n) ] was calculated in the third step ( see figure [ figa1 ] ( b ) ) .the values of for each subject are shown in table [ t1 ] .( a ) the scaling function ( circles ) and its approximation ( dashed line ) by ( ) are shown .( b ) the dependence ( circles ) of the variance on the mean value for and its approximation ( dashed line ) by ( ) are shown.,title="fig:",width=7 ] ( a ) the scaling function ( circles ) and its approximation ( dashed line ) by ( ) are shown .( b ) the dependence ( circles ) of the variance on the mean value for and its approximation ( dashed line ) by ( ) are shown.,title="fig:",width=7 ] to verify the robustness of the calculations of exponents and we have performed calculations with the same number of rr - intervals as well as the same structure of apna intervals but by using realizations of brown noise generated by computer . 
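a compact sketch of the short-series dfa variant described above, including the brown-noise consistency check; the window lengths are assumptions, and the theoretical dfa exponent for brownian data is 3/2.

```python
import numpy as np

def dfa_beta(segments, n_values):
    """pooled DFA exponent for a set of short series."""
    log_f = []
    for n in n_values:
        sq = []                                        # pooled squared residuals
        for series in segments:
            y = np.cumsum(series - np.mean(series))    # profile of one segment
            for j in range(len(y) // n):               # full windows only
                w = y[j * n:(j + 1) * n]
                t = np.arange(n)
                a, b = np.polyfit(t, w, 1)             # local linear trend
                sq.append(np.mean((w - (a * t + b)) ** 2))
        log_f.append(0.5 * np.log(np.mean(sq)))        # log F(n)
    beta, _ = np.polyfit(np.log(n_values), log_f, 1)
    return beta

rng = np.random.default_rng(0)
surrogates = [np.cumsum(rng.normal(size=40)) for _ in range(50)]  # brown noise
print(dfa_beta(surrogates, n_values=[4, 6, 8, 12, 16]))           # near 3/2
```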
in other words, in the procedures described above we replaced the measured rr-interval series by brownian surrogates, whose increments are random numbers having the normal distribution with zero mean and unit variance; the numbers are different for different intervals of apna. we performed 100 calculations of the two exponents for different surrogate sets for each subject. the theoretical values of the two exponents for brown noise are known exactly. the calculations with brown noise gave estimates that are close to, but systematically offset from, these values. here the data were merged for all subjects and are presented in the form of a mean value plus or minus its standard deviation. this means that there is a systematic error related to the length and structure of the data; however, the standard deviations of the calculated values are rather small and, consequently, we can conclude that our calculations of the two exponents are robust.

ott e 1993 _chaos in dynamical systems_ (cambridge, uk: cambridge university press)
dobrzynski h, li j, tellez j, greener i d, nikolski v p, wright s e, parson s h, jones s a, lancaster m k, yamamoto m, honjo h, takagishi y, kodama i, efimov i r, billeter r and boyett m r 2005 _circulation_ *111*(7) 846-854
we discuss open problems related to the stochastic modeling of cardiac function . the work is based on an experimental investigation of the dynamics of heart rate variability ( hrv ) in the absence of respiratory perturbations . we consider first the cardiac control system on short time scales via an analysis of hrv within the framework of a random walk approach . our experiments show that hrv on timescales of less than a minute takes the form of free diffusion , close to brownian motion , which can be described as a non - stationary process with stationary increments . secondly , we consider the inverse problem of modeling the state of the control system so as to reproduce the experimentally observed hrv statistics of . we discuss some simple toy models and identify open problems for the modelling of heart dynamics . _ keywords _ : special issue ; regulatory networks ( experiments ) ; stationary states ; dynamics ( experiments ) ; dynamics ( theory )
many species are structured in space with dispersal and migration connecting local populations into metapopulations .the fundamental dynamics of metapopulations are determined by local extinction , dispersal from the local populations , and colonisation success leading to the establishment of new sub - populations .metapopulation dynamics may determine a range of ecological and evolutionary aspects including population size , persistence , spatial distribution , epidemic spread , gene flow , and local adaptation ( e.g. * ? ? ?* ; * ? ? ?much interest has focused on the effect of the spatial structure of metapopulations and how local populations are connected through dispersal .connectivity among subpopulations is also increasingly emphasized in management and conservation , e.g. to prevent fragmentation of landscapes and in the design of protected areas and nature reserves .early models ( e.g. * ? ? ?* ; * ? ? ?* ) assumed identical dispersal probability among habitat patches .the initial focus of spatially explicit metapopulation theory was to explore processes that generate spatial patterns in homogeneous landscapes .later , spatially explicit models were designed to let dispersal probability be a function of patch size or the distance between local habitat patches .one aspect of dispersal that only has been implicitly included in realistic models but not studied in isolation is when dispersal is asymmetric .asymmetric dispersal is expected for many metapopulations , e.g. where dispersal is dominated by wind transport of pollen and seeds , and for marine species with spores and larvae transported by ocean currents .consequently , it is important to understand how asymmetric dispersal may affect the dynamics and persistence of metapopulations with potential implications for the design of nature reserves .some studies have considered asymmetric dispersal ( e.g. * ? ? ?* ; * ? ? ?* ; * ? ? ?* ) but have not analysed effects on metapopulation viability . in a recent contribution a conceptual model was developed to explore the effects of dispersal asymmetry on metapopulation persistence .the viability of metapopulations was investigated for different dispersal patterns randomly connecting pairs of patches through either unidirectional or bidirectional dispersal routes . concluded , that asymmetric dispersal leads to a distinct increase in the extinction risk of metapopulations . in a similar study investigated correlations between metapopulation viability and statistics of the dispersal network ; they also found that asymmetric dispersal links resulted in higher extinction risk .another very recently published work investigates metapopulation viability for a selection of asymmetric dispersal patterns and supports the findings of previous works .the main objective with this study is to isolate the effect of dispersal asymmetry from other properties of the metapopulation network . when changing the degree of symmetry of dispersal networks this generally may simultaneously influence the number of isolated patches and other aspects of the network such as the balance of dispersal in the individual patches ( see e.g. figure 4 in * ? ? ?* ) . since metapopulations are known to be sensitive in particular to the density of the dispersal network , the existence of closed cycles of dispersal , and the hierarchy of dispersal in directed networks these secondary implications could confound any effect of asymmetric dispersal .we resolve the problem by restricting our main analysis to _ regular _ networks . 
in this paper we in particular analyse the effect of asymmetric dispersal on metapopulation persistence in more detail , with an initial focus on regular dispersal networks .we employ models of synthetic dispersal patterns and demonstrate that asymmetric dispersal per se may not lead to an increase in metapopulation extinction risk .the significance of our results , their consequence for general dispersal patterns and their relations to previous works are addressed in detail in section [ sec : discussion ] .for ease of discussion we focus on the metapopulation model used in previous approaches .this stochastic patch occupancy model connects a number of patches through a complex dispersal matrix ; the model is detailed in section [ sec : metapopulation - model - vuilleumier ] . within the scope of this workthe viability of metapopulations exposed to dispersal patterns with different degree of symmetry is investigated .a consistent definition of the degree of symmetry and details on the dispersal patterns are provided in sections [ sec : symmetry - def ] and [ sec : viab - metap - conn ] .we consider a metapopulation consisting of patches of equal quality , where , at a given time , each patch is either empty ( ) or populated ( ) .interactions of the patches are specified by means of the connectivity matrix , where the elements determine whether patch is connected to patch ( ) or not ( ) . for ease of discussion we require for all implying that patches are not connected with themselves .building on previous works we used a stochastic discrete time model for a metapopulation of patches and tested metapopulation viability with respect to different connectivity matrices .the model , which is attractive in its simplicity , implements dispersal through the connectivity matrix . initially all patches are populated . at each time steptwo events occur in succession : first , populated patches go extinct at per patch probability .subsequently , empty patches can be colonised with probability by each incoming dispersal connection from a populated patch .newly populated patches can not give rise to colonisation of other patches at the same time step they have been colonised . in order to estimate the extinction risk of metapopulationsthe model is iterated times .if any populated patch is left after the iteration , the metapopulation is termed _viable _ and _ extinct _ otherwise .as we chose the parameters and , and discuss the probability of extinction of metapopulations consisting of patches as a function of the colonisation probability .for characterisation of the symmetry properties of dispersal patterns the connectivity matrix is divided into its symmetric and anti - symmetric contributions , and , by defining the matrix elements [ eq : def - a - s ] based on these matrices the degree of symmetry of dispersal patterns is defined as the ratio of symmetric connections among all connections : note that is related to the asymmetry discussed in . 
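for concreteness, a sketch of this decomposition and of the degree of symmetry for a 0/1 connectivity matrix; the function name and the reading of the ratio as "links whose reverse link also exists" are our assumptions.

```python
import numpy as np

def degree_of_symmetry(C):
    C = np.asarray(C)
    S = (C + C.T) / 2                      # symmetric contribution
    A = (C - C.T) / 2                      # anti-symmetric contribution
    assert np.allclose(S + A, C)           # the decomposition is exact
    sym_links = np.count_nonzero(S == 1)   # entries with both directions present
    return sym_links / C.sum()             # fraction of symmetric connections

C = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 0, 0]])                  # 1<->2 bidirectional, 2->3 one-way
print(degree_of_symmetry(C))               # 2 of 3 links symmetric -> 2/3
```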
by means of equation ,the symmetry properties of dispersal patterns are put on a firm footing : dispersal patterns are called _ symmetric _ if and _ anti - symmetric _ if .generally connectivity matrices with intermediate are neither symmetric nor anti - symmetric .we term them _ asymmetric _ if corresponding to dispersal directed at least to some degree .examples of connectivity matrices generated by the algorithm described in section [ sec : viab - metap - conn ] and in [ sec : algor - gener - balanc ] for a reduced number of patches , , and different combinations of and .only non - zero entries are printed explicitly . for reasons of clarity symmetric connectionsare denoted by s and asymmetric connections by a. the colours indicate separated closed cycles of dispersal that can be identified in the matrices .while the connectivity matrices with ( upper row ) are degenerate into ( ) , ( ) , and ( ) clusters respectively , the clusters of all three matrices generated with ( lower row ) already extend to the entire metapopulation . in spite of the fact that the matrices displayed here are only _ examples _ of randomly generated matrices , this trend is representative . for instance all simulations performed for and resulted in dispersal matrices with a single cluster only .note that our results are based on much larger metapopulations consisting of patches . ]previous works demonstrated that changes in the symmetry of dispersal patterns in particular affect the local symmetry of migrant flow , since asymmetry can result in _ donor_- and _ recipient_-dominated patches not present in symmetric networks . in order to isolate the effect of the degree of symmetry from these secondary effects ,we focus on a specific set of dispersal patterns : we restrict our main analysis to dispersal patterns with the number of dispersal connections , , being an integer multiple of randomly distributed on the patches under the constraint , that each patch obtains exactly in- and outgoing connections with defined degree of symmetry .the random patterns considered , hence , are _ regular _ with the connections evenly distributed to all patches available .an algorithm efficiently generating regular random dispersal patterns for small and intermediate and arbitrary degrees of symmetry ( ) is detailed in [ sec : algor - gener - balanc ] .examples of random connectivity matrices generated for and different combinations of and are exhibited in fig.[fig : algorithm - examples ] .please regard that for the simulations metapopulations consisting of are used resulting in connectivity matrices of dimension instead .the regular dispersal patterns we use here restrict our analysis to metapopulations with all patches connected at a fixed density independent of the choice of . for largest cluster extends to the entire metapopulation independent from the degree of dispersal symmetry resulting in irreducible connectivity matrices . for a detailed discussion of the impact of regularity on our resultswe refer to section [ sec : discussion : regul ] .the viability of metapopulations exposed to these dispersal patterns was tested in the following manner : a sample of dispersal patterns connecting the patches was generated for each combination of and different values of . 
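a minimal sketch of a single viability test under this model is given below; all parameter values (number of patches, extinction probability e, colonisation probability c, number of steps T, network density) are illustrative assumptions, since the values used in the study are fixed in the text above.

```python
import numpy as np

def viable(C, e, c, T, rng):
    """True if any patch is still occupied after T steps; C[i, j] = 1: i -> j."""
    n = C.shape[0]
    occ = np.ones(n, dtype=bool)                 # initially all patches populated
    for _ in range(T):
        occ &= rng.random(n) >= e                # extinction substep
        m = C[occ].sum(axis=0)                   # incoming links from occupied patches
        p_col = 1.0 - (1.0 - c) ** m             # at least one successful colonisation
        occ |= (~occ) & (rng.random(n) < p_col)  # newly colonised patches do not
        if not occ.any():                        # send colonisers this step
            return False
    return True

rng = np.random.default_rng(1)
C = (rng.random((25, 25)) < 0.2).astype(int)
np.fill_diagonal(C, 0)
runs = sum(viable(C, e=0.2, c=0.3, T=1000, rng=rng) for _ in range(20))
print(runs, "of 20 replicates viable")
```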
for any of these patterns the viability of independent realisations of metapopulations was tested for different values of the colonisation probability according to the procedure outlined in section [sec:metapopulation-model-vuilleumier], resulting in statistics for a total of simulations on randomly generated connectivity matrices for every choice of parameters. for our main analysis we record the number of viable metapopulations out of the simulations and prepare the results for graphical analysis. the sensitivity of this test procedure and its interpretation with respect to the statistics of extinction times is discussed in section [sec:discussion:interpret].

[figure fig:balanced: results on the viability of metapopulations exposed to regular dispersal patterns randomly generated by the algorithm described in sections [sec:viab-metap-conn] and [sec:algor-gener-balanc]. in the upper row the viability is plotted as a function of the effective number of connections per patch and the colonisation probability. at every parameter combination the viability of different dispersal patterns has been investigated. green and red squares indicate parameters where all patterns were viable or all went extinct, respectively; the intermediate region, where some of the patterns were viable and others were not, is coloured yellow. the three panels present the results for different degrees of symmetry, increasing from anti-symmetric dispersal patterns on the left to symmetric patterns on the right-hand side. in the lower row the simulation results are presented accordingly as a function of the symmetry and the colonisation probability for three different numbers of connections per patch. only a vanishing impact of symmetry is observed except at the smallest numbers of connections per patch.]

for each scenario a total of simulations were performed. for a straightforward statistical evaluation of the viability of metapopulations exposed to the respective conditions, the simulation results were divided into three groups, which are colour coded in the graphical presentation of the results: if all simulated metapopulations either went extinct or were viable, the scenario is coloured red or green, respectively; otherwise, i.e.
if the number of extinct simulations is greater than zero but smaller than the total number of simulations, the scenario was coloured yellow. the results are illustrated in fig. [fig:balanced]. the three panels in the upper row show the viability of the metapopulation as a function of the number of dispersal connections per patch and the colonisation probability for different degrees of symmetry: anti-symmetric dispersal, asymmetric dispersal with intermediate degree of symmetry, and symmetric dispersal. the lower panels of fig. [fig:balanced] contain the same results, but now analysed with respect to the effect of the degree of symmetry for three different numbers of connections per patch. in fact, at intermediate and high densities no statistically significant impact of symmetry is observed.

first of all, the results depicted in fig. [fig:balanced] suggest that the impact of the degree of symmetry on metapopulation viability decreases with an increasing number of connections per patch: already at moderate densities no statistically significant impact of the degree of symmetry, i.e. no systematic differences depending on the degree of symmetry, can be detected on the basis of the scenarios and the statistical evaluation chosen. at a small number of dispersal connections per patch, metapopulation viability is significantly reduced for more symmetric dispersal (fig. [fig:balanced], lower panels). the reason for this effect can straightforwardly be understood from considerations concerning the structure of the underlying dispersal patterns. let us first focus on patterns with a single connection per patch. in this case a metapopulation with a symmetric dispersal pattern necessarily consists of a number of patches only pairwise connected through dispersal (figure [fig:algorithm-examples]). the largest closed dispersal cycle (synonymous with the giant component of the dispersal network), hence, involves only two patches. for the particular metapopulation model applied, a lower bound for the extinction probability of such a pair of patches per time step is the product of the two local extinction probabilities, since simultaneous extinction of both patches cannot be compensated by recolonisation. on the contrary, the mean size of the largest closed dispersal cycle estimated from the dispersal patterns generated for the same conditions but with anti-symmetric dispersal was considerably larger. at the next-higher density the mean size of the largest cycles was still limited for the symmetric dispersal matrices generated, whereas in the asymmetric case all dispersal matrices already extended to the entire metapopulation (i.e. their mean size equalled the number of patches). hence we are faced with a percolation problem on random graphs, where the percolation threshold depends on the symmetry properties of the dispersal pattern. analysis of the eigenvalues of the associated state transition matrices reveals that the mean time to extinction of a set of patches participating in a closed cycle of dispersal increases with the size of the cycle. for this reason, differences in viability at small densities are attributed to hierarchical differences of the generated matrices at only a small number of connections per patch. this density is much smaller than the relevant cases discussed in previous studies, as will be discussed in more detail in section [sec:discussion:prev].

[figure fig:e50-itime: extinction statistics for the metapopulations with different values of the colonisation probability, connected through dispersal matrices of fixed density and degree of symmetry. the individual lines indicate the number of non-extinct simulations (out of the total) as a function of the simulation time. the dashed line corresponds to the upper bound for the expected number of extinct simulations in cases where all simulations went extinct, as derived in the text. after an initial relaxation phase, the number of non-extinct simulations indeed decreases exponentially in time (i.e. linearly in this logarithmic plot), and the boundary line exhibits the border between the cases marked red and yellow in figure [fig:balanced].]

how meaningful is the statistical evaluation of the results with respect to the effect of the symmetry of dispersal patterns on expected extinction times of metapopulations? in order to approach this question we aim to derive lower and upper bounds for extinction times in the red and green regions of the figures, which then help to evaluate the graphical presentation of the results in more detail. if we disregard the initial time period of relaxation of the metapopulation to a quasistationary state, we can assume that the statistics of extinction times is exponentially distributed. this exponential distribution complies with a constant risk r of metapopulation extinction per time step. the chance that a metapopulation has not gone extinct after t time steps then is (1-r)^{t}. for every combination of parameters we perform m simulations (groups of them share the same dispersal pattern; this assumption, however, is not expected to be too restrictive, as the investigation of the replicate statistics at the end of section [sec:discussion:interpret] suggests). it is then straightforward to calculate the probability that all m simulations are viable, p_v = (1-r)^{mt}; accordingly, the chance that a single simulation goes extinct during the t simulation steps is 1-(1-r)^{t}, resulting in the probability p_e = \left(1-(1-r)^{t}\right)^m of observing no viable simulations. more interesting, however, would be the expressions p(r \mid \text{all viable}) and p(r \mid \text{all extinct}), the probability distributions of the metapopulation extinction risk given the fact that either all or none of the simulations are viable. these expressions can straightforwardly be calculated using bayes' theorem. using uniform prior distributions we obtain

p(r \mid \text{all viable}) = (mt+1)\,(1-r)^{mt},
p(r \mid \text{all extinct}) = \left[\int_0^1 \left(1-(1-\rho)^{t}\right)^m d\rho\right]^{-1} \left(1-(1-r)^{t}\right)^m .

using a maximum likelihood approach, confidence intervals for r can be calculated; in particular, an upper bound for r follows for the cases where all simulations are viable. a corresponding lower bound for r can be obtained for cases where all simulations went extinct; since the latter result strongly depends on the prior distribution, we instead use the inflection point of the sigmoid function p_e(r), which is of the form r^* = 1-(\,\cdot\,)^{1/t}, as a more conservative estimate. the inverse of r corresponds to the mean time to extinction. from these considerations we hence expect the mean time to extinction for the scenarios marked by red squares in figure [fig:balanced] to lie well below the simulation horizon, and the respective value for the conditions marked green to be of the order of the inverse upper bound or larger; intermediate values are expected for the conditions marked yellow in the individual plots. figure [fig:e50-itime] demonstrates that the assumptions we needed to make seem to hold and that the estimates indeed reflect the underlying extinction statistics to a great extent. obviously the classification of the conditions by the three scenarios reflects the extinction risks of the metapopulations in a sense that figure [fig:balanced] succeeds in highlighting the main results.
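the two bounds can be evaluated as in the following sketch; the confidence level and the values of m and t are illustrative, and the closed form of the inflection point is our own derivation under the stated assumptions.

```python
# bounds on the per-step extinction risk r, assuming exponentially
# distributed extinction times and m independent runs of t steps each.
import numpy as np

def r_upper_all_viable(m, t, conf=0.95):
    """largest r still compatible (at the given confidence) with observing
    all m runs alive after t steps: solve (1-r)**(m*t) = 1 - conf."""
    return 1.0 - (1.0 - conf) ** (1.0 / (m * t))

def r_star_all_extinct(m, t):
    """inflection point of p_e(r) = (1-(1-r)**t)**m, which for large m and t
    sits at (1-r)**t = 1/m (our derivation under these assumptions)."""
    return 1.0 - (1.0 / m) ** (1.0 / t)

m, t = 50, 1000
print(r_upper_all_viable(m, t))   # green regions: mean time 1/r is very large
print(r_star_all_extinct(m, t))   # red regions: mean time 1/r is short
```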
from the bounds for the mean times to extinction derived above for the respective classes, we can conclude that metapopulations in the red regions almost surely go extinct within a short time, whereas metapopulations in the green regions are likely to be persistent. the yellow region decreases in range with an increasing number of connections per patch; that is, the transition between threatened and persistent metapopulations sharpens with increasing density of the dispersal network.

[figure fig:e50-replicate: extreme example of the variation in the number of viable replicates between the different samples. one sample deviates strongly from the general mean. since we can assume that the main source of variation is the stochastic simulation procedure rather than qualitative differences between the random dispersal patterns relevant for the present study, we do not investigate the within-sample variations further within the scope of this work.]

the replicate simulations performed for each parameter set and each dispersal pattern in addition allow us to investigate and discuss the variability within the sample of dispersal patterns. in the regions marked red and green, by definition, all samples show the same behaviour. detailed analysis of the yellow regions shows only very few cases of large variability in the number of extinct replicates between the samples; one example of rather high variability is depicted in figure [fig:e50-replicate]. overall, the differences between the random dispersal patterns generated for each scenario do not seem to be relevant for the present study, which is probably due to the decision to use regular dispersal patterns.

so far we have focused on regular dispersal patterns. this approach made it possible to investigate the impact of the degree of symmetry of connectivity matrices on metapopulation viability independently from other, possibly confounding, effects, which is important in order to assess the role of dispersal symmetry for metapopulations. our results on regular dispersal patterns show a remarkably low effect of symmetry on the viability of metapopulations at intermediate and high densities of dispersal paths; at low densities, symmetric dispersal even results in a slightly negative effect on viability. how do these results relate to the more general case where the dispersal network is not regular?

[figure fig:nonreg: results on the viability of metapopulations exposed to general dispersal patterns randomly generated by modification of the algorithm described in [sec:algor-gener-balanc]. the analysis and graphical presentation of the simulation results follow the procedure described in the caption of figure [fig:balanced]. note that the density parameter now specifies the mean number of connections per patch, while the actual number of dispersal links can vary between patches.]

in order to follow up this question we repeated the simulations accordingly, but now without the constraint of having regular dispersal networks. technically this was implemented by skipping steps 4c and 4d of the pattern generation algorithm detailed in [sec:algor-gener-balanc], which then controls for the desired degree of symmetry only. the density parameter should now be understood in a statistical sense, such that dispersal connections were randomly distributed between the patches, resulting in a given mean density of connections per patch. the results are depicted in figure [fig:nonreg]. interestingly, the minor effect of symmetry at low density of dispersal connections now shifts to a slight advantage for metapopulations with a symmetric dispersal pattern. beyond the lowest densities, no significant differences with respect to the simulation results based on regular dispersal patterns (figure [fig:balanced]) are observed. in non-regular dispersal patterns, the existence of isolated patches not participating in dispersal has an impact on the effective density of dispersal connections in the metapopulations (see also * ? ? ? *). moreover, in the case of asymmetric dispersal there exist patches that either only receive or only provide migrants, i.e. _sinks_ or _sources_, and that cannot actively take part in the metapopulation dynamics. since both of these effects are most distinct at small densities of the random dispersal networks, we assume that they basically drive the minor differences at low density between our results on regular dispersal and the general case of random dispersal. arguments for _not_ assigning these effects to asymmetry in dispersal, but examining them separately, are made in section [sec:conclusions].

in general, our results suggest essentially no direct negative effect of asymmetric dispersal on metapopulation viability at intermediate and high densities of the dispersal network, at least as far as the stochastic patch occupancy model applied in this work is concerned. this is in contrast to the findings of the earlier study discussed in the introduction, where it was concluded that extinction risk significantly increased when dispersal became asymmetric. that analysis is not restricted to cases with regular dispersal only, although the relaxation of regular dispersal is not sufficient to explain the qualitative differences in the results, as shown in the previous section. the description of the random patterns investigated there does not provide all the information necessary for an in-depth comparison with our results: the number of dispersal connections was chosen randomly for each of the metapopulations, and the additional information provided on two particular patterns suggests that the densities are comparable to or higher than the densities we investigated in our study. from our results we therefore do not expect a significant impact of dispersal asymmetry at this density of connections. the analysis of the results in that study is based on the number of connected patches, in contrast to our analysis using the global mean number of connections. the statistics of the number of connected patches seems to differ significantly between the asymmetric and the symmetric connectivity matrices investigated there, a phenomenon we were not able to reproduce.
in particular the example of a symmetric random pattern with more than connections per patch but only connected patches raises questions , since the largest cycle of closed dispersal in non - regular connectivity matrices we generated always extended to at least patches for densities above connections per patch with a strong trend towards patches with increasing density . for this reason we assume , that the effects described in originate from differences in network topology between the investigated connectivity matrices rather than differences in dispersal asymmetry . investigated the same metapopulation model as in the present work in a slightly different setup ( , , and ) . instead of simulating individual realisations , the probability of metapopulations to go extinct within time stepswas calculated numerically for different dispersal patterns .this method restricts the analysis to rather small metapopulations of patches .extinction probabilities were calculated for metapopulations connected through different dispersal patterns generated by the small world algorithm ( see e.g. * ? ? ?* ; * ? ? ?* ) initiated with a particular symmetric dispersal pattern ( bode , pers.communication ) . concluded from qualitative graphical analysis of their simulation results time steps .] , that asymmetry reduces persistence and exhibits a distinct threat to metapopulations .the discussion of our results in section [ sec : discussion : interpret ] relates our graphical analysis to the extinction probability in a certain number of time steps time units extinctions probabilities below are expected , for the red regions an accordant calculation yields probabilities above almost . ] , which allows for a comparison of the results . from additional simulation data we received from bode it seems , that the negative effect in their approach is larger than what we would expect from our simulation for the general , non - regular case ( section [ sec : discussion : regul ] ) .additional simulations performed for metapopulations likewise subjected to non - regular dispersal patterns but reduced to the size of patches indicated a general increase in the probability of extinction but no significant impact of metapopulation size on the impact of symmetry .we therefore assume , that the differences related to symmetry observed by partly are owed to the fact , that the patterns in their study were generated from a particular symmetric starting configuration of the small world algorithm and that the similarity of patterns to this starting configuration correlates with the symmetry properties . recently another work was devoted to the effect of asymmetry on metapopulation viability .this work aims to cover different aspects of asymmetry simultaneously , which makes it difficult to ascribe the variety of effects observed to certain properties of dispersal matrices .one configuration , however , seems to be equivalent to the simulations we performed for general dispersal matrices in section [ sec : discussion : regul ] for anti - symmetric and symmetric dispersal , respectively ( * ? ? ?* , fig . 2 , right column ) .the results the authors obtain on these patterns are in agreement with our observations , that the degree of symmetry of dispersal matrices has no significant impact on metapopulation viability at intermediate density of dispersal connections ( cp .* , fig . 
6 , difference between the plots in the right column ) .we investigated the consequences of the symmetry of dispersal patterns on the viability of metapopulations .our analyses are based on simulations of a stochastic patch occupancy model .first we define the degree of dispersal symmetry , , which is based on the symmetry of the connectivity matrix ( equation [ eq : def - deg - asymm ] ) . in order to be able to minimise possibly confounding effects we restrict our main analysis to regular dispersal patterns ,where asymmetry does neither affect the homogeneity of dispersal nor the local balances of incoming and outgoing dispersal connections .for these patterns we do not see any negative effect of dispersal asymmetry . for the more general case of non - regular dispersal patterns minor negative effects of asymmetric dispersal on metapopulation viabilityare confirmed , but only at rather weak densities of dispersal ( cp .section [ sec : discussion : regul ] ) . at these densities differences in dispersal symmetrygenerally are accompanied by other hierarchical differences of the dispersal network .this e.g. becomes evident from a neat example of a two patch metapopulation investigated in detail in ( * ? ? ?* , appendix a ) , where dispersal asymmetry by return results in a source - sink problem . from first instanceit is not self - evident whether these accompanying effects are the origin or a consequence of asymmetric dispersal , since their characteristic strongly depends on how the system of study was constructed and chosen . for realistic dispersal patternsthe solution proposed in , namely to investigate dispersal asymmetry independent from the discussion of sources and sinks , however does not seem to work out , since these effects in general are strongly connected to one another .these correlations in the past made the investigation of asymmetric dispersal highly dependent on the system of study , which was the main difficulty in understanding the role of dispersal asymmetry . in order to resolve this problemwe suggest to discuss the symmetry of dispersal patterns at large scales e.g. based on a definition similar to equation and the statistics of sources and sinks , the homogeneity of the dispersal network , and other features characterising the local flow of migrants _ jointly _ instead of in isolation .it was the aim of the present work , to clarify the role of asymmetric dispersal and its impact on metapopulation viability .in contrast to previous studies we see only weak effects of asymmetric connectivity on metapopulation extinction , which suggests that natural populations with asymmetric dispersal may not per se suffer from increased extinction risks .instead effects observed in simulations , real world data , or in the evaluation of management strategies ( see e.g. * ? ? ?* ) might be reflected more significantly by other features of complex dispersal patterns . 
a promising path towards a discussion of potentially important features is taken in the investigations of the viability of metapopulations connected through a variety of different dispersal patterns as provided in .we expect that eventually only a theoretical analysis of the stochastic metapopulation model applied can reveal the features relevant for metapopulation viability .we kindly acknowledge comments by bernt wennberg on an early version of the manuscript and suggestions by kerstin johannesson on a more recent version .we kindly appreciate that michael bode contributed simulation results and shared details on his 2008 work , .furthermore we are deeply indebted to kind and constructive comments of two anonymous reviewers .this work was supported by a linnaeus - grant from the swedish research councils , vr and formas ( http://www.cemeb.science.gu.se ) , by formas through contract 209/2008 - 1115 ( prj ) , and by the swedish research council through contract 275 621 - 2008 - 5456 ( prj ) .34 natexlab#1#1url # 1`#1`urlprefix armsworth , p. r. , 2002 .recruitment limitation , population regulation , and larval connectivity in reef fish metapopulations .ecology 83 ( 4 ) , 10921104 .artzy - randrup , y. , stone , l. , 08 2010 .connectivity , cycles , and persistence thresholds in metapopulation networks .plos comput biol 6 ( 8) , e1000876 .barabsi , a. , oltvai , z. , 2004 .network biology : understanding the cell s functional organization .nature reviews genetics 5 ( 2 ) , 101113 . berchenko , y. , artzy - randrup , y. , teicher , m. , stone , l. , 2009 .emergence and size of the giant component in clustered random graphs with a given degree distribution .physical review letters 102 ( 13 ) , 138701 . bode , m. , bode , l. , armsworth , p. , 2006 .larval dispersal reveals regional sources and sinks in the great barrier reef .marine ecology progress series 308 , 1725 .bode , m. , burrage , k. , possingham , h. , 2008 . using complex network metrics to predict the persistence of metapopulations with asymmetric connectivity patterns . ecological modelling 214 ( 2 - 4 ) , 201209 .brandes , u. , erlebach , t. ( eds . ) , 2005 . network analysis . vol .3418 of lecture notes in computer science .springer berlin .callaway , d. , newman , m. , strogatz , s. , watts , d. , 2000 .network robustness and fragility : percolation on random graphs .physical review letters 85 ( 25 ) , 54685471 .caswell , h. , 2001 .matrix population models : construction , analysis , and interpretation .second edition .sunderland , massachusetts , usa : sinauer associates .crooks , k. , sanjayan , m. , 2006 . connectivity conservation .cambridge univ pr .davis , s. , trapman , p. , leirs , h. , begon , m. , heesterbeek , j. , 2008 . the abundance threshold for plague as a critical percolation phenomenon .nature 454 ( 7204 ) , 634637 .gyllenberg , m. , hanski , i. , 1992 .single - species metapopulation dynamics : a structured model .theoretical population biology(print ) 42 ( 1 ) , 3561 .haight , r. , travis , l. , 2008 .reserve design to maximize species persistence . environmental modeling and assessment 13 ( 2 ) , 243253 .hanski , i. , 1994 . a practical model of metapopulation dynamics .journal of animal ecology 63 ( 1 ) , 151162 .hanski , i. , 1999 .metapopulation ecology .oxford university press .hanski , i. , 2002 .metapopulations of animals in highly fragmented landscapes and population viability analysis .population viability analysis , 86108 .hanski , i. , gilpin , m. 
, 1997 .metapopulation biology : ecology , genetics , and evolution .academic press , san diego .hanski , i. , gilpin , m. , 1998 .metapopulation dynamics .nature 396 ( 6706 ) , 4149 .joshi , j. , schmid , b. , caldeira , m. , dimitrakopoulos , p. , good , j. , harris , r. , hector , a. , huss - danell , k. , jumpponen , a. , minns , a. , mulder , c. , pereira , j. , prinz , a. , scherer - lorenzen , m. , siamantziouras , a. , terry , a. , troumbis , a. , lawton , j. , 2001 . local adaptation enhances performance of common plant species .ecology letters 4 ( 6 ) , 536544 .kawecki , t. , holt , r. , 2002 .evolutionary consequences of asymmetric dispersal rates . the american naturalist 160 ( 3 ) , 333347 .kininmonth , s. , death , g. , possingham , h. , 2009 .graph theoretic topology of the great but small barrier reef world .theoretical ecology .levins , r. , 1969 .some demographic and genetic consequences of environmental heterogeneity for biological control .bulletin of the entomological society of america 15 ( 2 ) , 237240 .malchow , h. , petrovskii , s. v. , venturino , e. , 2008 .spatiotemporal patterns in ecology and epidemiology .boca raton : chapman & hall / crc .mccallum , h. , dobson , a. , 2002 .disease , habitat fragmentation and conservation .proceedings of the royal society of london , series b : biological sciences 269 ( 1504 ) , 20412049 .nathan , r. , safriel , u. , noy - meir , i. , 2001 .field validation and sensitivity analysis of a mechanistic model for tree seed dispersal by wind .ecology 82 ( 2 ) , 374388 . pulliam , h. , danielson , b. , 1991 .sources , sinks , and habitat selection : a landscape perspective on population dynamics .the american naturalist 137 ( s1 ) , 50 .roy , m. , harding , k. , holt , r. , 2008 . generalizing levins metapopulation model in explicit space : models of intermediate complexity. journal of theoretical biology 255 ( 1 ) , 152161 .roy , m. , holt , r. , barfield , m. , 2005 .temporal autocorrelation can enhance the persistence and abundance of metapopulations comprised of coupled sinks .the american naturalist 166 ( 2 ) , 246261 . sultan , s. , spencer , h. , 2002 .metapopulation structure favors plasticity over local adaptation .the american naturalist 160 ( 2 ) , 271283 .van teeffelen , a. , cabeza , m. , moilanen , a. , 2006 .connectivity , probabilities and persistence : comparing reserve selection strategies .biodiversity and conservation 15 ( 3 ) , 899919 .vuilleumier , s. , bolker , b. m. , lvque , o. , 2010 . effect of colonization asymmetries on metapopulation persistence .theoretical population biology 78 , 225238 .vuilleumier , s. , possingham , h. , 2006 .does colonization asymmetry matter in metapopulations ?proceedings of the royal society b 273 ( 1594 ) , 1637 . wares , j. , gaines , s. , cunningham , c. , 2001 . a comparative study of asymmetric migration events across a marine biogeographic boundary .evolution 55 ( 2 ) , 295306 . watts , d. , strogatz , s. , 1998 .collective dynamics of small world networks .nature 393 , 440442 .since we intended to compare cases primarily differing in their symmetry properties , we focused on _ regular _ dispersal patterns with fixed number of in- and out - going dispersal routes for every patch . for the connectivity matrices is equivalent to the constraint that the sums over every column and every row are equal , that is for any and . 
here the total number of activated dispersal routes equals the number of connections per patch times the number of patches. random matrices at arbitrary degree of symmetry complying with this regularity constraint are generated by the following algorithm, which is repeated until a matrix with the desired number of non-zero elements is obtained:

1. generate a random matrix rand of size n x n, whose entries are random numbers drawn independently from an arbitrary distribution; for instance, uniformly distributed random variables are suitable here. ensure that all elements of rand are unique.
2. set the diagonal elements rand(i,i) to 10 for all i.
3. calculate the desired number of symmetric connections from the degree of symmetry.
4. repeat until the smallest element of rand is larger than 1, or all links have been placed:
   (a) identify row and column, loc, of the smallest value of rand;
   (b) set rand(loc(1),loc(2)) to 10 and activate the corresponding link (*);
   (c) if the out-degree of patch loc(1) has reached its maximum, set the whole row rand(loc(1),:) to 10 (*);
   (d) if the in-degree of patch loc(2) has reached its maximum, set the whole column rand(:,loc(2)) to 10 (*);
   (e) switch loc(1) and loc(2);
   (f) if symmetric connections remain (generate a symmetric connection), repeat the steps marked by (*) and reduce the number of remaining symmetric connections; else (generate an asymmetric connection), block the reverse direction by setting the corresponding element of rand to 10.
5. reject the result if the desired number of links was not reached.

note that the value 10, of course, is arbitrary; any number greater than 1 is suitable to ensure that the corresponding elements of rand are not selected by the algorithm. this algorithm randomly orders the elements of rand and activates them step by step. it generates random connectivity matrices with a given degree of symmetry, and it is sufficiently efficient for small and intermediate numbers of connections per patch. the listing below restores the line structure of the original program; the declarations of n, lbyn, gamma, d and max_rejections, and the two loop headers, were lost in extraction and are reconstructed here as marked in the comments.

```fortran
program regular_connectivity
! ======================================================================
! generation of regular random dispersal patterns
! tested with gfortran 4.3.3
! (c) 2010 by david kleinhans, university of gothenburg, sweden
! distributed under the creative commons attribution 3.0 license
! ======================================================================
  implicit none
  integer,parameter::n=100                 ! number of patches (restored, assumed)
  integer,parameter::max_rejections=1000   ! max restarts (restored, assumed)
  integer::lbyn                  ! no of connections per patch
  double precision::gamma        ! degree of symmetry
  integer::d(n,n)                ! connectivity matrix (restored declaration)
  double precision::rand(n,n)    ! random matrix used for ordering of links
  double precision::remaining_sym ! no of remaining symmetric connections
  integer::rejections            ! count number of rejected dispersal patterns
  integer::loc(2)                ! location of the smallest element of rand
  integer::i,j                   ! auxiliary variables, used for loops only
  logical::gridok                ! check if grid complies with constraints

  ! === request parameters ===
  write(*,"(a,i4)")"regular dispersal matrix for metapopulation of size n=",n
  write(*,"(a)")"please enter parameters:"
  write(*,"(a)")"degree of symmetry, gamma (double precision, >=0 and <=1)?"
  read(*,*)gamma
  write(*,"(f8.5)")gamma
  write(*,"(a,i4,a)")"no of connections per patch, lbyn (integer, >0 and <",&
       & (n-1)/2,")?"
  read(*,*)lbyn
  write(*,"(i4)")lbyn

  rejections=0
  gridok=.false.
  do while(.not.gridok)                    ! restored outer loop
     ! == starting configuration: ==
     ! all links inactive
     d=0
     ! calculate number of symmetric links to be generated
     remaining_sym=nint(gamma*lbyn*n)
     ! generate random number matrix for ordering of links
     ! (exclude diagonal elements by assigning value of 10)
     do i=1,n
        do j=1,n
           if(i.ne.j)then
              call random_number(rand(i,j))
           else
              rand(i,j)=10.d0
           endif
        enddo
     enddo
     do while(minval(rand).lt.1.d0)        ! restored inner loop
        loc=minloc(rand)                   ! restored selection of smallest element
        ! set random number of the element to 10 and activate corresponding link
        rand(loc(1),loc(2))=10.d0
        d(loc(1),loc(2))=1
        ! check whether number of desired incoming or outgoing links already
        ! has been reached for the patch of focus, prevent further links if so
        if(count(d(loc(1),:).eq.1).ge.lbyn)rand(loc(1),:)=10.d0
        if(count(d(:,loc(2)).eq.1).ge.lbyn)rand(:,loc(2))=10.d0
        ! if symmetric connections are remaining: make the current a symmetric
        ! one, else ensure that the reverse direction is not activated
        if(remaining_sym.gt.0)then
           d(loc(2),loc(1))=1
           rand(loc(2),loc(1))=10.d0
           if(count(d(loc(2),:).eq.1).ge.lbyn)rand(loc(2),:)=10.d0
           if(count(d(:,loc(1)).eq.1).ge.lbyn)rand(:,loc(1))=10.d0
           remaining_sym=remaining_sym-2
        else
           rand(loc(2),loc(1))=10.d0
        endif
     enddo
     ! check whether the desired no of links has been generated
     ! reject and restart if not, accept the pattern otherwise
     if(count(d.eq.1).eq.lbyn*n)then
        gridok=.true.
     else
        rejections=rejections+1
        if(rejections.lt.max_rejections)then
           write(*,"(a,i4,a)")"pattern ",rejections,&
                & " rejected, restarting grid generation ..."
        else
           write(*,"(a)")"grid generation not successful."
           write(*,"(a)")"please try lower lbyn or increase max_rejections."
           stop
        endif
     endif
  enddo
end program regular_connectivity
```
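a small companion sketch (in python; the file name and output format are assumptions) for checking that a pattern produced by the program above is indeed regular with the requested density and degree of symmetry:

```python
import numpy as np

def check_pattern(C, lbyn, gamma):
    n = C.shape[0]
    assert np.all(np.diag(C) == 0), "self-connections are forbidden"
    assert np.all(C.sum(axis=0) == lbyn), "in-degree must equal lbyn"
    assert np.all(C.sum(axis=1) == lbyn), "out-degree must equal lbyn"
    sym_links = np.count_nonzero((C + C.T) == 2)   # links whose reverse exists
    assert abs(sym_links / C.sum() - gamma) <= 2.0 / (lbyn * n)
    return True

# C = np.loadtxt("pattern.dat", dtype=int)         # assumed output format
# check_pattern(C, lbyn=4, gamma=0.5)
```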
metapopulation theory for a long time has assumed dispersal to be symmetric , i.e. patches are connected through migrants dispersing bi - directionally without a preferred direction . however , for natural populations symmetry is often broken , e.g. for species in the marine environment dispersing through the transport of pelagic larvae with ocean currents . the few recent studies of asymmetric dispersal concluded , that asymmetry has a distinct negative impact on the persistence of metapopulations . detailed analysis however revealed , that these previous studies might have been unable to properly disentangle the effect of symmetry from other potentially confounding properties of dispersal patterns . we resolve this issue by systematically investigating the symmetry of dispersal patterns and its impact on metapopulation persistence . our main analysis based on a metapopulation model equivalent to previous studies but now applied on regular dispersal patterns aims to isolate the effect of dispersal symmetry on metapopulation persistence . our results suggest , that asymmetry in itself does not imply negative effects on metapopulation persistence . for this reason we recommend to investigate it in connection with other properties of dispersal instead of in isolation . connectivity matrix , dispersal network , symmetry , metapopulation viability .
in recent years , entanglement has become an important resource for quantum communications .quantum computation , which is more efficient than classical computation for certain problems , could also potentially owe its efficiency to entanglement .though the precise role of entanglement in quantum computation is not yet well understood , entangled states are certainly generated during the course of certain quantum computations .a quantum computation , when halted at an appropriate point , can be regarded as a method of generating entanglement .typically , a quantum computation is a multiparticle interference experiment with different phases applied to distinct multiparticle states . in general , the phases applied to the multiparticle states during a quantum computation are _ global phases _ as they depend on the total state of a collection of qubits . in this paper, we will investigate the types of entanglement generated by such global phases and the conditions under which such phases do not generate any entanglement .the model of quantum computation which motivates our work is that presented by cleve , ekert , macchiavello and mosca .this model ( with a slight alteration which does not change its principal ingredient ) is illustrated in fig .each of the qubits , initially in the state , is first transformed according to a hadamard transformation .this is shown in the figure by the giant hadamard transformation acting on all the qubits and converts the total state of the qubits to labels the possible states of the type in which each or . is a disentangled state .a state dependent global phase is now applied to each state .this is shown as the second giant transformation in the figure .this converts the total state to where are real and ( is reassigned the value ) .this state , generated as a result of global phases , can be entangled .we propose to halt the quantum computation at this stage and investigate the amount of entanglement generated . a complete quantum computation , of course , consists of one more step in which another giant hadamard transformation is applied to all the qubits as shown in fig . [ fig1 ] .but in this paper we are interested in the entanglement of the state _ prior _ to this last transformation .the entanglement of comes from the global phase factors .first , we study conditions on the phase function for the state to be disentangled .next , we derive the entanglement of three - qubit pure states ( ) for the special case in which only one or two of the global phase parameters are nonzero .we study variation of the entanglement as a function of one global phase parameter for a mixed state of three qubits by numerical calculations . finally , we discuss the implications of this type of entanglement arising in deutsch - jozsa algorithm .in particular we show that for obtaining exponential advantage over its classical counterpart , entangled states must necessarily arise in deutsch - jozsa algorithm .we first derive the conditions on for to be disentangled , i.e. 
In the case of two qubits ($n=2$), the condition reads
$$[f(0)-f(1)]-[f(2)-f(3)]=2\pi n,$$
where $n$ is an arbitrary integer. We now consider the case of three qubits. First, we derive the condition that qubit $A$ is disentangled from qubits $B$ and $C$. The density matrix of qubit $A$ is obtained by tracing out $B$ and $C$; its off-diagonal element is proportional to
$$e^{i[f(0)-f(1)]}+e^{i[f(2)-f(3)]}+e^{i[f(4)-f(5)]}+e^{i[f(6)-f(7)]}.$$
(From now on, when we give a matrix representation of a density operator on a $2^k$-dimensional space, we always take the logical basis.) Qubit $A$ is disentangled from qubits $B$ and $C$ if and only if its reduced state is pure, i.e. if and only if the four phase factors above coincide. Hence we obtain the constraints
$$[f(0)-f(1)]-[f(2)-f(3)]=2\pi n_{1},$$
$$[f(0)-f(1)]-[f(4)-f(5)]=2\pi n_{2},$$
$$[f(0)-f(1)]-[f(6)-f(7)]=2\pi n_{3}.$$
Next, we consider the condition that qubit $B$ is disentangled from qubits $A$ and $C$. From considerations similar to those above, we obtain one further independent constraint,
$$[f(0)-f(2)]-[f(4)-f(6)]=2\pi n_{4}.$$
From these results we obtain four constraints, eqs. ([3-qubit-constraint1])-([3-qubit-constraint4]), where $n_{1},\ldots,n_{4}$ are arbitrary integers, such that $|\psi(f)\rangle$ is perfectly disentangled. Next consider the general case of $n$ qubits. Before deriving the condition for $|\psi(f)\rangle$ to be disentangled, we count how many constraints on $f$ are needed to disentangle it completely. In eq. ([n-qubit-phase-eq]) the number of real parameters is $2^{n}-1$. On the other hand, if $|\psi(f)\rangle$ is disentangled, we can describe it as a product of single-qubit states, and the number of real parameters is then $n$. Therefore, to disentangle $|\psi(f)\rangle$ into an $n$-qubit product state we need $(2^{n}-1)-n$ constraints. Specializing now to three-qubit pure states with a single nonzero phase parameter $\theta$ (eq. ([3qubit-1phase-parameter-pure-state1])), the two-qubit entanglement attains its maximum value at $\theta=\pi$ and its minimum value of zero at $\theta=0$. In fig. [figure1] we show the variation of the entanglement as a function of $\theta$. The physical reason for the entanglement peaking at $\theta=\pi$ can be understood if the reduced density matrix is rewritten in a suggestive manner: the state is essentially a mixture of a state which is maximally entangled for $\theta=\pi$ and a state which is always disentangled. (If we apply a Hadamard transformation to the first qubit of the Bell singlet, we obtain such a maximally entangled state.) Hence it is only to be expected that the entanglement of the mixture will be maximal at $\theta=\pi$. It is also clear that the entanglement can never be maximal in magnitude, because an entangled and a disentangled state are always mixed in equal proportions. Next, we consider pure states with two phase parameters $\theta$ and $\sigma$. For example, consider the state of eq. ([3qubit-2phase-parameter-pure-state1]). Tracing out one of the qubits gives
$$\rho = \frac{1}{8}\begin{pmatrix} 2 & \bar{\zeta}\tau+1 & \tau+1 & \tau+1 \\ \zeta\bar{\tau}+1 & 2 & \zeta+1 & \zeta+1 \\ \bar{\tau}+1 & \bar{\zeta}+1 & 2 & 2 \\ \bar{\tau}+1 & \bar{\zeta}+1 & 2 & 2 \end{pmatrix}, \qquad \text{([2qubit-densitymatrix-2parameter1])}$$
where $\tau=e^{i\theta}$ and $\zeta=e^{i\sigma}$. Writing an eigenvalue of $\rho\tilde{\rho}$ as $\lambda$, we obtain the eigenvalues of eq. ([eigenvalues-rhorho-2para1-pure]) (all $\geq 0$); hence the concurrence follows from eq. ([concurrence-2para1-pure]) and the entanglement can be written in terms of the quantity defined in eq. ([2para-eq-p-theta-sigma-1]). From eqs. ([concurrence-2para1-pure]) and ([2para-eq-p-theta-sigma-1]) we find that the concurrence and this quantity take values in the ranges of eq. ([c-p-ranges]). The entanglement attains its maximum where the concurrence is maximal and its minimum where the concurrence is minimal; in fig. [figure2] we show the variation of the entanglement as a function of $\theta$ and $\sigma$.
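The two-qubit entanglement calculations used above (concurrence from the eigenvalues of $\rho\tilde{\rho}$, then the entanglement of formation) are easy to reproduce numerically. Below is a minimal sketch, assuming the standard Wootters construction and, as an example state, the one-parameter three-qubit state with the phase $\theta$ applied to the last basis ket; the function names are ours, not from the paper.

```python
import numpy as np

def concurrence(rho):
    """Wootters concurrence of a two-qubit density matrix."""
    sy = np.array([[0, -1j], [1j, 0]])
    yy = np.kron(sy, sy)
    rho_tilde = yy @ rho.conj() @ yy
    # eigenvalues of rho * rho_tilde are non-negative up to numerical noise
    lam = np.sqrt(np.abs(np.linalg.eigvals(rho @ rho_tilde).real))
    lam = np.sort(lam)[::-1]
    return max(0.0, lam[0] - lam[1] - lam[2] - lam[3])

def entanglement_of_formation(c):
    """Binary entropy of (1 + sqrt(1 - C^2))/2."""
    if c == 0.0:
        return 0.0
    p = 0.5 * (1.0 + np.sqrt(1.0 - c**2))
    h = lambda q: -q*np.log2(q) - (1-q)*np.log2(1-q) if 0.0 < q < 1.0 else 0.0
    return h(p)

# three-qubit state (1/sqrt(8)) sum_x exp(i f(x)) |x>, with only f(7) = theta nonzero
for theta in (0.0, np.pi/2, np.pi):
    f = np.zeros(8); f[7] = theta
    psi = np.exp(1j*f) / np.sqrt(8)
    rho_abc = np.outer(psi, psi.conj()).reshape(2, 2, 2, 2, 2, 2)
    rho_bc = np.einsum('ijkilm->jklm', rho_abc).reshape(4, 4)  # trace out the first qubit
    c = concurrence(rho_bc)
    print(f"theta={theta:.3f}  C={c:.4f}  E={entanglement_of_formation(c):.4f}")
```

Running the loop shows the entanglement rising from zero at $\theta=0$ to its peak at $\theta=\pi$, consistent with the behaviour described above.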
Again, in this case it is easy to see why the entanglement is minimal there. The whole state can be rewritten as
$$|\psi\rangle \propto |0\rangle_{A}\otimes[\,\cdots\,]_{BC} + |1\rangle_{A}\otimes(|0\rangle+|1\rangle)_{B}(|0\rangle+|1\rangle)_{C},$$
where the bracketed $BC$ part carries the phase factors. This makes it clear that the reduced state is a mixture of the phase-bearing state $[\,\cdots\,]_{BC}$, whose entanglement is zero at the minimum and maximal at the peak, and an always disentangled state. The entanglement between the two remaining qubits depends on the choice of the two kets from the set to which we decide to apply the global phases ($e^{i\theta}$ and $e^{i\sigma}$); this is different from the one-parameter case of eq. ([3qubit-1phase-parameter-pure-state1]). Imagine that we had applied the phases to two other kets; then the reduced density matrix would be that of eq. ([2qubit-densitymatrix-2parameter2]). Because the density matrix of eq. ([2qubit-densitymatrix-2parameter2]) cannot be transformed into that of eq. ([2qubit-densitymatrix-2parameter1]) by local unitary transformations, the entanglement of eq. ([2qubit-densitymatrix-2parameter1]) need not equal that of eq. ([2qubit-densitymatrix-2parameter2]) in general. Writing $\rho\tilde{\rho}$ in compact form and an eigenvalue of it as $\lambda$, we obtain the eigenvalues of eq. ([eigenvalues-rts]), in terms of auxiliary quantities whose definitions accompany that equation. The concurrence follows directly. At one setting of $(\theta,\sigma)$ the entanglement is minimal, and at another it is maximal; in fig. [figure3] we show the variation of the entanglement as a function of $\theta$ and $\sigma$. As in the previous cases, the entanglement is entirely due to the entanglement of the first part of the density matrix. Note that in both the cases of eqs. ([2qubit-densitymatrix-2parameter1]) and ([2qubit-densitymatrix-2parameter2]) maximal entanglement between the two qubits can never be reached by varying $\theta$ and $\sigma$. However, one could get maximal entanglement if one applied the two phase parameters to two different global states. This is equivalent to applying the same sets of phases as before, but examining the entanglement between a different pair of qubits. Let us consider the three-qubit pure state of eq. ([3qubit-2phase-parameter-pure-state1]) again and trace out a different qubit (in contrast to the previous case) to get
$$\rho_{AB} = \frac{1}{8}\begin{pmatrix} 2 & \zeta+\tau & \zeta+\tau & \zeta+\tau \\ \bar{\zeta}+\bar{\tau} & 2 & 2 & 2 \\ \bar{\zeta}+\bar{\tau} & 2 & 2 & 2 \\ \bar{\zeta}+\bar{\tau} & 2 & 2 & 2 \end{pmatrix}. \qquad \text{([2qubit-rho-ab-2parameter-a])}$$
If we write an eigenvalue of $\rho_{AB}\tilde{\rho}_{AB}$ as $\lambda$, we obtain the corresponding expressions with the abbreviations defined as before. In fig. [2paraenta1] we show the variation of the entanglement of $\rho_{AB}$ as a function of $\theta$ and $\sigma$. We now compare the entanglement of $\rho_{AB}$ of eq. ([2qubit-rho-ab-2parameter-a]) with that of eq. ([2qubit-densitymatrix-2parameter1]), with $\theta$ fixed. In fig. [2paraenta2] we show the variation of both entanglements with $\sigma$. From fig. [2paraenta2] we notice the following facts: when the entanglement of one reduced state decreases, that of the other increases, and $\rho_{AB}$ becomes maximally entangled at one point. To understand this, we rewrite the state with $\theta=\pi$ as in eq. ([rho-a-fixed-theta-pi]). Note that the first term there is a maximally entangled state, and the phase parameter $\sigma$ controls the entanglement of the second term in eq. ([rho-a-fixed-theta-pi]).
As $\sigma$ is varied the two qubits approach, and at the appropriate value of $\sigma$ reach, the maximally entangled state of that second term (with the third qubit completely disentangled from them). In the previous sections we studied the entanglement between two qubits in pure states with phase factors. The pure state of eq. ([3qubit-1phase-parameter-pure-state1]) is prepared by taking a three-qubit product state and applying a phase to one ket vector. (In this section we will often use the Hadamard-transformed basis.) Here, instead of the pure state, we take a mixed state,
$$\rho_{0} = [\,\cdots\,]^{\otimes 3}, \qquad \text{([mixedstate-nophasepara])}$$
where the single-qubit factor is a convex mixture controlled by a parameter $\alpha$. We then consider the application of a single phase factor $e^{i\theta}$ as before. Tracing out any one of the three qubits, we obtain a density matrix of the form of eq. ([density-operator-mixed-1para]), with $\tau=e^{i\theta}$. We now proceed to derive the entanglement as a function of $\theta$ and $\alpha$. We already know that the entanglement takes its maximum value at $\theta=\pi$ when we fix $\alpha$ at the pure-state value. The interesting question is whether that peak of entanglement remains in the same place for other values of $\alpha$. Before evaluating explicitly, we show that the entanglement is locally stationary at $\theta=\pi$ for arbitrary fixed $\alpha$; that is, it remains stationary along the $\theta$-axis at any fixed $\alpha$. We first show that an infinitesimal variation of $\theta$ away from $\pi$ does not affect the characteristic equation of $\rho\tilde{\rho}$. Expanding at $\theta=\pi+\delta$,
$$\det|\rho\tilde{\rho}-\lambda\mathbf{1}|\,\big|_{\theta=\pi+\delta} = \det|\rho\tilde{\rho}-\lambda\mathbf{1}|\,\big|_{\theta=\pi} + \delta\,\frac{\partial}{\partial\theta}\big[\det|\rho\tilde{\rho}-\lambda\mathbf{1}|\big]\Big|_{\theta=\pi} + O(\delta^{2}) = 0.$$
Hence, if $\frac{\partial}{\partial\theta}[\det|\rho\tilde{\rho}-\lambda\mathbf{1}|]\big|_{\theta=\pi}=0$, the characteristic equation is unaffected to first order in $\delta$ and the eigenvalues of $\rho\tilde{\rho}$ are stationary in a neighborhood of $\theta=\pi$ for fixed $\alpha$. Writing the determinant in terms of the entries
$$y=-\tfrac{1}{16}(1-\bar{\tau})\alpha(1+4\alpha^{2}), \quad z=-\tfrac{1}{4}(1-\bar{\tau})\alpha^{2}, \quad v=\tfrac{1}{16}(1-\bar{\tau})\alpha[2\alpha^{2}+\tau(1+2\alpha^{2})], \quad w=\tfrac{1}{8}(1-\tau^{2})\alpha^{2}, \quad l=-\lambda+\tfrac{1}{16}(1+2\alpha)^{2}(1-2\alpha)^{2},$$
together with a diagonal entry $x$ of the same type, we obtain with some calculation
$$\frac{\partial}{\partial\theta}\big[\det|\rho\tilde{\rho}-\lambda\mathbf{1}|\big]\Big|_{\theta=\pi} = \left[\begin{vmatrix} \partial_{\theta}x & \partial_{\theta}v & \partial_{\theta}v & \partial_{\theta}w \\ y & -z+l & -z & -v \\ y & -z & -z+l & -v \\ z & -y & -y & x+l \end{vmatrix} + 2\begin{vmatrix} x+l & v & v & w \\ \partial_{\theta}y & -\partial_{\theta}z & -\partial_{\theta}z & -\partial_{\theta}v \\ y & -z & -z+l & -v \\ z & -y & -y & x+l \end{vmatrix} + \begin{vmatrix} x+l & v & v & w \\ y & -z+l & -z & -v \\ y & -z & -z+l & -v \\ \partial_{\theta}z & -\partial_{\theta}y & -\partial_{\theta}y & \partial_{\theta}x \end{vmatrix}\right]\Bigg|_{\theta=\pi} = 0.$$
Therefore $\lambda$ remains stationary at $\theta=\pi$ for any fixed $\alpha$, and we can expect the entanglement to attain its maximum there along the $\theta$-axis.
By numerical calculation we obtain fig. [figure4]. It is clear from this figure that the basic behaviour of the entanglement under variation of a single phase parameter does not change for a mixed initial state: it is still maximal at $\theta=\pi$. Fig. [figure5] shows the variation of the entanglement as a function of $\alpha$ at fixed $\theta$, and illustrates that the entanglement is lost rapidly as $\alpha$ gets larger. This too is an expected result: the more mixed the initial state is, the harder it is to entangle it by global phase functions. We now present an application of our results on entangling by global phases to the question of the necessity of entanglement in quantum computation. In the Deutsch-Jozsa algorithm a state of the form of eq. ([n-qubit-phase-eq]) appears, where the phase function takes the values $0$ or $\pi$. If $f$ is constant, the state is a uniform superposition, and we recover the initial state by applying the quantum Fourier transformation (QFT). On the other hand, if $f$ takes the values $0$ and $\pi$ randomly but in a balanced manner (i.e. with equal occurrences of each), the state is orthogonal to the uniform superposition, and after the QFT we obtain a state orthogonal to the initial one. Therefore we can decide whether $f$ is constant or balanced with a single application of the global phase function using a quantum computer. In contrast, in the worst-case scenario using a classical algorithm, one may have to evaluate the function for at least half the number of possible arguments, which implies exponentially many function evaluations. This is why the Deutsch-Jozsa algorithm is regarded as having an exponential advantage over its classical counterpart. To see that entanglement is necessary for this exponential advantage, consider the following scenario. Suppose the global phase functions, apart from being constant or balanced and taking the values $0$ or $\pi$, are also restricted in such a manner that they never produce an entangled state in the course of the entire computation. This implies (according to the conditions obtained in section [disentanglement]) that $f$ can be written in the product form of eq. ([product-state-condition]); but if we know beforehand that $f$ has this form, we can determine it completely with $O(n)$ steps of a classical algorithm, even in the worst case. We supply the all-zeros string and the $n$ strings in which only one digit is $1$ and the others are $0$ as inputs to $f$, and we read off the coefficients from the outputs. Hence, when we restrict the possible set of functions to those which are _non-entanglement producing_, a polynomial-time classical algorithm exists; in other words, there is then only a polynomial advantage of quantum computation over classical computation. To make the quantum algorithm exponentially faster than its classical counterpart, we must remove the restriction of eq. ([product-state-condition]) on the global phase functions, which implies that entanglement can no longer be prevented from arising during the course of the quantum computation.
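To make the counting argument concrete, here is a minimal sketch of the classical reconstruction, under the assumption (ours, for illustration) that a non-entangling phase function factorizes as $f(x) = c_{0} + \sum_{i} c_{i}x_{i}$; $n+1$ evaluations — the all-zeros string plus the $n$ unit strings — then determine $f$ everywhere.

```python
import numpy as np

n = 6
rng = np.random.default_rng(1)
c0, c = rng.uniform(0, 2*np.pi), rng.uniform(0, 2*np.pi, n)

def f(bits):
    """A non-entangling phase function of the assumed form f(x) = c0 + sum_i c_i x_i."""
    return c0 + np.dot(c, bits)

# n + 1 classical queries: the all-zeros string and the n unit strings
zero = np.zeros(n, dtype=int)
c0_est = f(zero)
c_est = np.array([f(np.eye(n, dtype=int)[i]) - c0_est for i in range(n)])

# any further value of f is now predicted without additional queries
x = rng.integers(0, 2, n)
assert np.isclose(f(x), c0_est + np.dot(c_est, x))
print("reconstructed f with", n + 1, "queries")
```

The point of the sketch is only the query count: once the product form is guaranteed, linearly many evaluations replace the exponentially many needed in the unrestricted worst case.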
As no entanglement implies only a polynomial advantage, entanglement is necessary for an exponential advantage. In this paper we have investigated the generation of entanglement through global phase functions. We have obtained necessary and sufficient conditions for the application of global phases to a pure product state to result in entanglement. We have then investigated the amount of two-qubit entanglement that can be generated in three-qubit pure states when only one or two of the global phase parameters are nonzero. An interesting, though potentially difficult, future direction will be the investigation of the quantity of entanglement when all phase parameters are present for an arbitrary number of qubits. While we have obtained the conditions for the _presence_ or _absence_ of entanglement in the general case, it would be interesting to classify functions according to the _degree_ of entanglement they can generate. We have also examined entanglement generation through a single global phase parameter for mixed initial states. The general problem of finding necessary and sufficient conditions for entanglement by global phases for mixed states remains open; one could expect counterintuitive results in that case, as the same global phase function might entangle one pure component and disentangle another pure component of a mixture of two pure states. Finally, we have applied our conditions to prove the necessity of entanglement in the Deutsch-Jozsa algorithm for the algorithm to have an exponential advantage over its classical counterpart. It would be interesting to apply similar techniques to the investigation of the role of entanglement in other quantum algorithms.

C. H. Bennett and D. P. DiVincenzo, Nature 404, 247 (2000).
D. Deutsch, Proc. Roy. Soc. Lond. A 400, 97 (1985).
D. Deutsch and R. Jozsa, Proc. Roy. Soc. Lond. A 439, 553 (1992).
P. W. Shor, "Algorithms for quantum computation: discrete logarithms and factoring," in Proc. 35th Ann. Symp. on the Foundations of Computer Science (IEEE Computer Society, Los Alamitos, 1994), pp. 124-134; P. W. Shor, SIAM J. Comput. 26, 1484 (1997).
L. K. Grover, "A fast quantum mechanical algorithm for database search," in Proc. ACM Symp. on the Theory of Computing (ACM Press, New York, 1996), pp. 212-219; L. K. Grover, Phys. Rev. Lett. 79, 325 (1997).
R. Jozsa, "Entanglement and quantum computation," in The Geometric Universe: Science, Geometry, and the Work of Roger Penrose, ed. S. A. Huggett et al., pp. 369-379 (Oxford University Press, 1998).
A. Ekert and R. Jozsa, Phil. Trans. Roy. Soc. Lond. A 356, 1769 (1998).
N. Linden and S. Popescu, "Good dynamics versus bad kinematics. Is entanglement needed for quantum computation?", LANL eprint quant-ph/9906008.
R. Cleve, A. Ekert, C. Macchiavello and M. Mosca, Proc. Roy. Soc. Lond. A 454, 339 (1998).
C. H. Bennett, D. P. DiVincenzo, J. A. Smolin and W. K. Wootters, Phys. Rev. A 54, 3824 (1996); C. H. Bennett and P. W. Shor, IEEE Trans. on Information Theory 44, 2724 (1998).
S. Hill and W. K. Wootters, Phys. Rev. Lett. 78, 5022 (1997); W. K. Wootters, Phys. Rev. Lett. 80, 2245 (1998).
S. Barnett, Matrices: Methods and Applications, chapt. 4 (Clarendon Press, Oxford, 1990).
We investigate the creation of entanglement by the application of phases whose values depend on the state of a collection of qubits. First we give the necessary and sufficient conditions for a given set of phases to result in the creation of entanglement in a state comprising an arbitrary number of qubits. Then we analyze the creation of entanglement between any two qubits in three-qubit pure and mixed states. We use our result to prove that entanglement is necessary for the Deutsch-Jozsa algorithm to have an exponential advantage over its classical counterpart.
Understanding the universal critical behavior observed at and near continuous transitions is one of the major achievements of statistical physics; the subject has been studied in depth for many years. It is generally considered, however, that the formalism based on the elegant renormalization group theory (RGT) can only be applied over a narrow temperature range, the "critical region", while outside this region correction terms proliferate, so that attempts to extend the analysis become pointless. In fact this pessimistic conclusion follows largely because the traditional choices of scaling variables and scaling expressions are poorly adapted to the study of wide temperature ranges. The expressions for the critical divergences of observables near a critical temperature, and in the thermodynamic (infinite size) limit, are conventionally written with the scaling variable defined as $t = (T-T_{c})/T_{c}$, together with an infinite set of confluent and analytic correction terms. The critical exponents, the confluent correction exponent, and many critical parameters such as amplitude ratios and finite size scaling functions are universal, i.e. they are identical for all members of a universality class of systems. When the RGT formalism is outlined in textbooks or in authoritative reviews such as those of Privman, Hohenberg and Aharony or Pelissetto and Vicari, the scaling variable is defined as $t$ from the outset. However, because $t$ diverges at infinite temperature, when $t$ is chosen as the scaling variable the correction terms each individually diverge as the temperature is increased. It indeed becomes extremely awkward to use the expressions of eq. ([tcorrnf]) outside a narrow "critical" temperature region. A "critical-to-classical crossover" has been invoked (e.g. refs.), with the effective exponent tending to the mean field value as the high temperature Gaussian fixed point is approached. The crossover appears as a consequence of the definition of the exponent in terms of the thermodynamic susceptibility and the scaling variable $t$; there is no such crossover when the extended scaling analysis described below is used. Although this is rarely stated explicitly, there is nothing sacred about the scaling variable $t$: alternative scaling variables can legitimately be chosen and indeed have been widely used in practice, see e.g. refs. Temperature dependent prefactors can also be introduced in the scaling expressions, on condition that the prefactor does not have a critical temperature dependence at $T_{c}$. An "extended scaling" approach has been introduced which consists in a simple systematic rule for selecting scaling variables and prefactors, inspired by the well established high temperature series expansion (HTSE) method. This approach is a rationalization which leads automatically to well behaved high temperature limits as well as giving the correct critical limit behavior. Here we give a general discussion of this approach and outline its relationship to the RGT scaling field formalism.
As an illustration of the application of the rules, known analytic results on the historically important Ising ferromagnet chain in dimension one (for which the critical temperature is of course $T_{c}=0$) are cited. Simple extended scaling expressions for the reduced susceptibility, the second moment correlation length, and the specific heat are exact over the entire temperature range from zero to infinity, and an exact susceptibility finite size scaling function is exhibited. The nearest neighbor Ising ferromagnet on the simple cubic lattice is then discussed in detail. This model is among the principal canonical examples of a system having a continuous phase transition at a non-zero critical temperature; in contrast to the two-dimensional Ising model, in three dimensions no exact values are known for the critical temperature or the critical exponents. We analyze high quality large scale numerical data covering wide temperature ranges both above and below the critical temperature, up to the largest sizes studied. The numerical technique is outlined. An analysis using the extended scaling approach provides compact critical expressions with a minimum of correction terms, which are accurate (if not formally exact) over the entire temperature range from $T_{c}$ to infinity, and not only within a narrow critical regime. (The Ising, _XY_, and Heisenberg ferromagnets have been discussed in ref.) We study the nearest neighbor interaction ferromagnetic Ising model on the chain and on simple cubic lattices of size $L^{3}$ with periodic boundary conditions. The Hamiltonian with nearest neighbor interactions of strength $J$ is
$$H = -J\sum_{\langle ij\rangle} S_{i}S_{j},$$
with the sum over nearest neighbor bonds. As usual we will use throughout the normalized inverse temperature $\beta = J/k_{B}T$. The observables we have studied are as follows:

(i) the variance of the equilibrium sample moment, which is equal to the non-connected reduced susceptibility, where $m$ is the magnetization per spin;

(ii) the variance of the modulus of the equilibrium sample moment, the "modulus susceptibility" below $T_{c}$, which tends to the connected reduced susceptibility in the thermodynamic limit, while $\langle|m|\rangle$ tends to the thermodynamic limit magnetization at large $L$;

(iii) the specific heat, which is equal to the variance of the energy per spin $U$, with the sum over nearest neighbor bonds.

We note that the susceptibility and the specific heat have consistent statistical definitions in terms of thermal fluctuations; the experimentally observed susceptibility contains an extraneous factor of $\beta$. The thermodynamic limit second moment correlation length $\xi(\beta)$ is defined through the second moment of the correlation function, with the sum over the distance between spins taken to infinity. When the "thermodynamic limit" condition $L \gg \xi(\beta)$ holds, all properties become independent of $L$ and so are identical to the thermodynamic limit properties. For general $L$, the Privman-Fisher finite size scaling _ansatz_ for an observable can be written in the form of eq. ([pffss]); the scaling functions appearing there are universal.
They must tend to constants in the large-$L$ limit, and must recover the bulk critical behavior in the opposite limit; we are aware of no generally accepted explicit expressions valid over the entire crossover range. In the extended scaling approach, a systematic choice of scaling variables and scaling expression prefactors is made in the light of the HTSE. Basically, an ideal HTSE corresponds to the pure power series of eq. ([darboux]), whereas a real physical HTSE has the form of eq. ([htse]), with a general structure similar to, but not strictly equivalent to, that of eq. ([darboux]), and a prefactor which can be temperature dependent. The asymptotic limit is eventually dominated by the closest singularity to the origin (Darboux's first theorem), leading to the critical limit. The appropriate critical scaling variable follows, and deviations of the series in eq. ([htse]) from the pure eq. ([darboux]) form correspond to confluent and analytic critical correction terms. The extended scaling prescription consists in identifying scaling variables and prefactors such that each series is transposed to a form having the same structure as eq. ([htse]), with the prefactor defined so that the first term of the series is equal to $1$. The HTSE expressions for the reduced susceptibility and the second moment of the correlation can be written generically in terms of a normalized expansion variable which tends to $1$ as $\beta \to \beta_{c}$ and to zero at infinite temperature. For ferromagnets, possible natural choices are $\beta/\beta_{c}$ or $\tanh(\beta)/\tanh(\beta_{c})$; the former is standard when $\beta_{c}$ is non-zero, while when $\beta_{c}=\infty$ (as in dimension one) it is convenient to use $\tanh(\beta)$. For the correlation length, the eq. ([htse]) form with the same structure can be retrieved by extracting a temperature dependent prefactor. The critical expressions for the reduced susceptibility and the second moment correlation length can then be written (cf. eq. ([qq])) as eqs. ([chiextdef]) and ([xiextdef]), using the relation eq. ([ximu2def]) between $\xi$ and $\mu_{2}$, with the temperature scaling variable $\tau = 1-\beta/\beta_{c}$ and the standard definitions for the critical amplitudes. The susceptibility expression has been widely used; the correlation length expression, with its prefactor, is specific to the extended scaling approach. The correction functions contain all the confluent and analytic correction-to-scaling terms. It is important that $\tau$ tends to $1$ at infinite temperature (whereas $t$ tends to infinity); the correction functions thus remain well behaved over the entire temperature range, and there are exact closure conditions for the infinite temperature limit. One can define temperature dependent effective exponents,
$$\gamma_{\rm eff}(\tau) = -\frac{d\ln\bar{\chi}(\tau)}{d\ln\tau}, \qquad \nu_{\rm eff}(\tau) = -\frac{d\ln[\xi(\tau)/\beta^{1/2}]}{d\ln\tau},$$
see refs.; for the correlation length this is the extended scaling definition of $\nu_{\rm eff}$. For a spin $S$ Ising ferromagnet on a lattice where each spin has $z$ neighbors, the high temperature limits of the effective exponents defined by eqns. ([gammaeff]) and ([nueff]) are fixed by the leading HTSE coefficients. A comparison between these limiting values and the critical exponents $\gamma$ and $\nu$ gives a good indication of the overall influence of the correction terms; if the leading confluent correction term in eq. ([wegnertau]) dominates, the difference fixes its amplitude. An analysis along these lines for Ising systems with large $z$ was sketched in ref.; the case of general $S$ is discussed in appendix A. For all near neighbor Ising ferromagnets on sc or bcc lattices, covering the entire range of spin values from $S=1/2$ to $S=\infty$ (which are all in the same universality class, see ref.), the limiting effective exponents differ from the critical $\gamma$ and $\nu$ by a few percent at most; for both observables, the total sum of the correction terms is weak over the entire temperature range. It should be noted that traditional and widely used finite size scaling expressions implicitly assume scaling with the scaling variable $t$ (a small numerical illustration of the effective-exponent definition is given below).
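As a numerical illustration of the effective-exponent definition, the sketch below evaluates $\gamma_{\rm eff}(\tau) = -d\ln\bar{\chi}/d\ln\tau$ for an extended-scaling form with one confluent and one analytic Wegner correction; the correction amplitudes `a1`, `a2` are hypothetical placeholders for illustration, not fitted values from this paper.

```python
import numpy as np

gamma, theta_c = 1.2372, 0.50   # consensus 3d Ising gamma and confluent correction exponent
a1, a2 = -0.10, 0.10            # illustrative Wegner correction amplitudes (hypothetical)

def chi_bar(tau):
    """Extended-scaling form chi_bar = tau^(-gamma) * (1 + a1*tau^theta + a2*tau)."""
    return tau**(-gamma) * (1.0 + a1*tau**theta_c + a2*tau)

tau = np.logspace(-3, 0, 200)   # tau = 1 - beta/beta_c, from near criticality out to beta = 0
gamma_eff = -np.gradient(np.log(chi_bar(tau)), np.log(tau))
print(gamma_eff[0], gamma_eff[-1])   # ~gamma as tau -> 0; shifted by the corrections at tau = 1
```

Because $\tau$ stays bounded between $0$ and $1$, $\gamma_{\rm eff}$ varies smoothly and modestly over the whole range, which is the behavior the extended scaling choice is designed to produce.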
As a general rule the traditional expressions should not be used except in the limit of temperatures very close to $T_{c}$; they rapidly become misleading and can suggest incorrect exponent values if global fits are made to data covering a wider range of temperatures. The extended scaling FSS expressions are valid at all temperatures above $T_{c}$ to within the weak corrections to scaling. For spin $1/2$ Ising spins on a bipartite lattice (such as the chain and sc lattices we discuss below) there are only even terms in the HTSE for the specific heat; a natural scaling expression for the specific heat is eq. ([cvextdef]). The constant term is present in standard analyses and plays an important role in ferromagnets because the exponent $\alpha$ is small. The extended scaling expression eq. ([cvextdef]) is not orthodox, as it uses a scaling variable based on $\beta^{2}$, which is not the same as the $\tau$ used as scaling variable for the susceptibility and the correlation length. The original Ising ferromagnet consists of a system of spins with nearest neighbor ferromagnetic interactions on a one dimensional chain. Because analytic results exist for many of the statistical properties of this system, it is often used as a "textbook" model in introductions to critical behavior; we use it here to illustrate the extended scaling approach (see ref.). The model orders only at $T=0$ (ref.); when $T_{c}=0$ the critical exponents depend on the choice of the scaling variable. Baxter states that in one dimension it is more sensible to replace the conventional temperature variable by one based on $\tanh(\beta)$, and with this scaling variable the exponents are well defined. Expressions for the susceptibility and the correlation length in the infinite-size limit are readily calculated following standard HTSE rules (see e.g. ref.). With $v=\tanh\beta$, the reduced susceptibility HTSE can be written as
$$\bar{\chi}(\beta) = 1 + 2\sum_{j\ge 1} v^{j},$$
and the HTSE for the second moment of the correlation is
$$\mu_{2}(\beta) = 2\sum_{j\ge 1} j^{2}v^{j}.$$
The second moment correlation length is then given by eq. ([ximu2def]) with $d=1$. Using the power series sums $\sum_{j\ge 1} v^{j} = v/(1-v)$ and $\sum_{j\ge 1} j^{2}v^{j} = v(1+v)/(1-v)^{3}$, the exact expressions for the reduced susceptibility and correlation length are thus
$$\bar{\chi}(\beta) = \frac{1+v}{1-v}, \qquad \xi(\beta) = \frac{v^{1/2}}{1-v}.$$
(It can be noted that the "true" correlation length is $\xi_{\rm true} = -1/\ln v$; the two correlation lengths are essentially identical at low temperatures but are quite different at higher temperatures.) The internal energy per spin is just $U = -\tanh\beta$, so the specific heat is $C_{v}(\beta) = \beta^{2}(1-v^{2})$. Though not immediately recognizable as such, these can all be rewritten in precisely the form of the extended scaling eqns. ([chiextdef]), ([xiextdef]), ([cvextdef]), with the $\tanh\beta$-based choice of scaling variable, with the corresponding critical exponents and critical amplitudes. There are no analytic corrections to the susceptibility or to the correlation length, and there is only a single simple analytic correction to the specific heat; there are no confluent corrections. Note again that these expressions are valid for the _entire_ temperature range from $\beta=0$ to $\beta=\infty$. The finite size scaling function can also be considered. With periodic boundary conditions the finite size reduced susceptibility for a sample of size $L$ is
$$\bar{\chi}(\beta,L) = \frac{1+v}{1-v}\,\frac{1-v^{L}}{1+v^{L}},$$
so the leading finite size scaling expression takes a simple closed form; the principal expression is exact, and the higher order term in eq. ([fss1d]) is numerically tiny even for small $L$. We have not found an analytic expression for the scaling function written in terms of the second moment correlation length, but it can be fitted rather accurately. For the Ising ferromagnet in the high dimension hypercubic lattice limit, the reduced susceptibility and the correlation length are given exactly by the extended scaling forms over the entire temperature range above $T_{c}$, including the square root prefactor in the correlation length, with no correction terms; the exponents are of course the mean field exponents $\gamma=1$, $\nu=1/2$. In this high dimension limit the specific heat above $T_{c}$ is zero.
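Returning to the $d=1$ chain, the closed forms above are easy to verify by summing the correlation function directly; a minimal check, assuming the second-moment definition $\xi^{2} = \mu_{2}/(2d\bar{\chi})$ with $d=1$ and $\langle s_{0}s_{j}\rangle = v^{|j|}$:

```python
import numpy as np

K = 0.7                       # beta * J
v = np.tanh(K)
j = np.arange(1, 4000)        # truncated chain sum; <s_0 s_j> = v^j decays fast here

chi_sum = 1.0 + 2.0 * np.sum(v**j)        # reduced susceptibility from the series
mu2_sum = 2.0 * np.sum(j**2 * v**j)       # second moment of the correlation
xi_sum  = np.sqrt(mu2_sum / (2.0 * chi_sum))   # xi^2 = mu2 / (2 d chi), d = 1

chi_exact = (1.0 + v) / (1.0 - v)
xi_exact  = np.sqrt(v) / (1.0 - v)

print(chi_sum, chi_exact)     # agree to truncation error
print(xi_sum, xi_exact)
```

The truncated sums reproduce the closed forms to machine-level accuracy at this temperature, which is all the check is meant to show.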
In dimensions above the upper critical dimension, but not in the extreme high dimension limit, the extended scaling approach has been used successfully to identify the main correction terms in the reduced susceptibility. Thus analytic expressions for models in both the low ($d=1$) and high ($d\to\infty$) dimension limits follow the extended scaling forms. This reinforces the argument that these forms can be considered generic and should be used at leading order also for intermediate dimensions, where confluent correction terms and small analytic correction terms must be allowed for. In practice (e.g. refs.), analyses of the susceptibility have long been carried out using $\tau$ as the scaling variable rather than $t$. There are analogous advantages in scaling the correlation length with eq. ([xiextdef]), which contains the generic $\beta^{1/2}$ (or $\tanh(\beta)^{1/2}$) prefactor; we suggest that this form of scaling expression for the correlation length could profitably become equally standard. In the standard RGT finite size scaling formalism the free energy is written as the sum of a singular part, which encodes the critical behavior, and a regular part which is practically independent of $L$. The scaling fields then have temperature dependencies given by analytic series in $t$, where $h$ is the magnetic field. Ignoring for the moment the confluent correction series, expressions for phenomenological couplings and observables follow, and analyses using this formalism are carried out by introducing a series of analytic terms in powers of $t$, adjusting the constants for each particular case and truncating at some power of $t$. Now consider the extended scaling scheme. As a first step, $t$ is replaced in the formalism by $\tau$, just as for instance in ref.; the variable is replaced everywhere. This leaves the generic form of the equations unchanged but modifies the individual factors in the series for the temperature dependencies of the scaling fields. In the extended scaling approach a second step must then be made, due to the prefactor in the correlation length. The extended scaling FSS expressions can be translated into the RGT FSS formalism in terms of explicit built-in leading expressions for the temperature variation of the scaling fields: the extended scaling expressions without correction terms are strictly equivalent to leading expressions for the scaling fields containing specific infinite analytic series of terms in $\tau$. In the extended scaling approach these leading expressions are common to all ferromagnets. The confluent correction contributions will of course still exist, with the confluent correction terms expressed using $\tau$. Finally, fine tuning through minor modifications of the analytic scaling field temperature dependence series will usually be necessary to obtain higher level approximations to the overall temperature variation of the observables. Not only at temperatures well above $T_{c}$ but already at criticality the extended scaling scheme can aid the data analysis. For instance, quite generally the critical size dependence of the ratio of the derivative of the susceptibility to the susceptibility is of the form of eq. ([dchidbeta]); an explicit leading order value of the $L$-independent term can be derived from the leading order extended scaling FSS expression eq. ([chiextscal]) in a ferromagnet. This value will be slightly modified by a correction to scaling term. The extended scaling scheme can thus be translated unambiguously into the standard RGT FSS formalism; it can be considered as providing an _a priori_ rationalization giving explicit leading analytical temperature dependencies of the scaling fields.
At this level the extended scaling scheme provides compact baseline expressions which cover the entire temperature region from $T_{c}$ to infinity, accurate to within confluent correction terms and residual model dependent analytic correction terms. The equilibrium distributions of the energy for finite size samples, from the smallest up to the largest lattices ($L^{3}$ spins), were estimated using a density of states function method. When studying a statistical mechanical model, complete information can in principle be obtained through the density of states function: from complete knowledge of the density of states one can immediately work with the microcanonical (fixed energy) ensemble, and of course also compute the partition function and through it have access to the canonical (fixed temperature) ensemble as well. The main problem is that computing the exact density of states for systems of even modest size is a very hard numerical task. However, several sampling schemes have been devised for obtaining approximate densities of states, of which the best known are the Wang-Landau and Wang-Swendsen methods; the various methods are described in the references, along with an improved histogram scheme. For work in the microcanonical ensemble the sampling methods give all the information needed: using them one can find the density of states in an energy interval around the critical region, and that is all that is required for most investigations of the critical properties of the model. For the present analysis a density of states function technique based upon the same method as in earlier work was used, though with considerable numerical improvements for all sizes studied here (adequate improvements to the largest data set would unfortunately have been too time-consuming). The microcanonical (energy dependent) data were collected as described previously. We use standard Metropolis single spin-flip updates, sweeping through the lattice in type-writer order. Measurements take place when the expected number of spin flips is at least the number of sites; at high temperatures this usually means two sweeps between measurements, and three or four sweeps at the lower temperatures we used. Note that in the immediate vicinity of $T_{c}$ the spin-flip probability is very nearly constant for the simple cubic lattice. We report here data on the simple cubic Ising model with periodic boundary conditions. For the largest lattice studied here we have now amassed between 500 and 3500 measurements on an interval of some 450000 energy levels, with most samplings near the critical energy; for the next size down we have between 5000 and 50000 measurements on some 150000 energy levels, and for smaller sizes the numbers of samplings are of course vastly bigger. Our measurements at each individual energy level include local energy statistics and magnetization moments. The microcanonical data were then converted into canonical (temperature dependent) data according to the technique described previously. This gave us energy distributions from which we obtain energy cumulants (e.g. the specific heat), and together with the fixed-energy magnetization moments we obtain magnetization cumulants (e.g. the susceptibility).
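For orientation, here is a bare-bones single-spin-flip Metropolis sketch for a small periodic cubic lattice, showing how magnetization cumulants of the kind just described are accumulated. It is only an illustration of the elementary update step, not the density-of-states scheme actually used for the production data; the lattice size, sweep counts, and normalizations are illustrative choices of ours.

```python
import numpy as np

def metropolis_sweep(s, beta, rng):
    """One sweep of single-spin-flip Metropolis on a periodic L^3 lattice (J = 1)."""
    L = s.shape[0]
    for _ in range(s.size):
        x, y, z = rng.integers(0, L, 3)
        nb = (s[(x+1) % L, y, z] + s[(x-1) % L, y, z]
              + s[x, (y+1) % L, z] + s[x, (y-1) % L, z]
              + s[x, y, (z+1) % L] + s[x, y, (z-1) % L])
        dE = 2.0 * s[x, y, z] * nb                 # energy cost of flipping this spin
        if dE <= 0.0 or rng.random() < np.exp(-beta * dE):
            s[x, y, z] = -s[x, y, z]

L, beta = 8, 0.2216544                             # beta near the consensus beta_c of the sc lattice
rng = np.random.default_rng(0)
s = rng.choice(np.array([-1, 1], dtype=np.int8), size=(L, L, L))

m = []
for sweep in range(2000):
    metropolis_sweep(s, beta, rng)
    if sweep >= 500:                               # crude equilibration cut
        m.append(s.mean())
m = np.array(m)

N = s.size
chi_nc  = N * np.mean(m**2)                               # non-connected reduced susceptibility
chi_mod = N * (np.mean(m**2) - np.mean(np.abs(m))**2)     # "modulus susceptibility"
print(chi_nc, chi_mod)
```

A direct canonical sampler like this is adequate for small lattices; the point of the density-of-states route used in the paper is precisely to avoid rerunning such simulations at each of the roughly 200 temperatures analyzed below.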
Typically around 200 different temperatures were chosen for computing these quantities, with a somewhat higher concentration near $T_{c}$, particularly for the larger sizes, so that one may use standard interpolation techniques on the data to obtain intermediate temperatures. Below $T_{c}$ the variance of the distribution of $m$ in zero field, eq. ([chidef]), represents the non-connected susceptibility; the physical susceptibility in the thermodynamic limit is the connected susceptibility. For finite $L$ the distribution of $m$ below $T_{c}$ is bimodal but always symmetrical, so $\langle m\rangle = 0$ in zero applied field, which would suggest that supplementary measurements using small applied fields are needed in order to estimate the connected susceptibility. However, under the condition $L \gg \xi(\beta)$, where $\xi(\beta)$ is the second moment correlation length below $T_{c}$, the two peaks in the distribution of $m$ become very well separated, and the variance of the distribution of the absolute value $|m|$ can be taken as essentially equal to the connected susceptibility. The explicit expression for the crossover is complicated, but the onset of thermodynamic limit conditions can be judged by inspection of the finite size data. To estimate the ordering temperature we have used the size dependence of the kurtosis of the distribution of $m$, frequently expressed in terms of the Binder parameter. We have introduced an alternative parameter with the same formal properties as the Binder parameter which involves $\langle|m|\rangle$. The normalized parameter is defined so that, as for the Binder parameter, it tends to $0$ in the high temperature Gaussian limit and to $1$ in the low temperature ferromagnetic limit. As this parameter is also characteristic of the shape of the distribution, it can be considered another "phenomenological coupling". It turns out that, at least for the Ising model, its corrections to scaling are much weaker than those for the Binder parameter, allowing accurate estimates of $\beta_{c}$ and $\nu$ from scaling at criticality. The values estimated for the critical parameters are in good agreement with the most accurate values from RGT, HTSE, and Monte Carlo methods. The Ising ferromagnet in dimension three is a canonical example of a system having a continuous phase transition at a non-zero critical temperature; in three dimensions there are no observables which diverge logarithmically, in contrast to the two- and four-dimensional models. Though there are no exact results for this universality class, rather precise estimates of the critical exponents (and the critical temperatures) have been obtained and improved over the years thanks to extensive analytical, HTSE, and FSS Monte Carlo studies; the essential aim has been to determine as accurately as possible the universal critical parameters. Consider first the finite size scaling results at and very close to the critical temperature. The numerical work provided an estimate of $\beta_{c}$ from intersections between curves for phenomenological couplings at different sizes, using data on the Binder cumulant and on the alternative phenomenological coupling. This value is consistent with the Monte Carlo estimates and with the HTSE estimate. At criticality, the standard FSS expression for the susceptibility applies for the Ising ferromagnet, with a leading confluent correction exponent. The subleading irrelevant exponent is close enough that the two correction terms can be treated together as a single effective term.
In what follows we will assume the consensus exponent values for convenience.

[Fig. 1 caption: normalized critical susceptibility against a power of $1/L$ at $\beta_{c}$; the large black points are measured, the small red points are the fit of eq. ([chicrit2]).]

Fig. 1 shows the critical susceptibility data plotted in this way; the finite size scaling corrections in the present data can be fitted by eq. ([chicrit2]). The analysis is consistent with earlier work; because of the introduction of a next-to-leading term, the fit extends to lower $L$.

[Fig. 2 caption: the ratio $[\partial_{\beta}\chi/\chi]_{\beta_{c}}$ against a power of $1/L$; the extended scaling value for the intercept to leading order is indicated by the red arrow.]

Fig. 2 shows partial data for the ratio
$$\left[\frac{\partial\chi/\partial\beta}{\chi}\right]_{\beta_{c}}$$
plotted against a power of $1/L$. On this scale the data can be well represented by a linear form with a non-zero constant intercept, see the extended scaling expression eq. ([dchidbeta]). This form of plot provides an independent estimate consistent with the values given above; to obtain an accurate value it is important to include the non-zero intercept. Combining the $\beta_{c}$ and exponent estimates from FSS at criticality, the present data are almost consistent with the MC and HTSE estimates, both of which come from meta-analyses of many systems in the same universality class, the latter relying principally on bcc data. A recent very precise study of the 3d Ising universality class gave compatible values. Leaving the pure FSS regime, now consider the overall temperature and size dependence of the susceptibility. Assuming $\beta_{c}$ known, the critical exponent $\gamma$ can be estimated directly and independently from an extrapolation to criticality of the effective exponent in thermodynamic limit conditions, i.e. down to $L$-dependent crossover temperatures above which the data are independent of $L$. The crossover occurs when the correlation length is no longer negligible compared to the sample size; below this crossover the finite size susceptibility tends to a constant for each $L$. There is obviously no "critical-to-classical crossover" as a function of temperature. Such a crossover would appear automatically if the effective exponent were defined (e.g. ref.) in terms of the thermodynamic susceptibility
$$\chi_{\rm th}(\beta) \equiv \left[\frac{\partial M(\beta,h)}{\partial h}\right]_{h\to 0} \equiv \beta\chi(\beta)$$
and the traditional scaling variable $t$, because at high temperatures $\chi_{\rm th} \to \beta$ while $t \to \infty$. The present data are of very high statistical accuracy. Again assuming $\beta_{c}$, the effective exponent values in thermodynamic limit conditions (which are in excellent agreement with HTSE data) can be extrapolated satisfactorily to criticality, fig. 3. The fit provides an estimate of $\gamma$ almost compatible with the HTSE and FSS estimates.

[Fig. 3 caption: effective exponent against $\tau$, fixing $\beta_{c}$, for three sizes from top to bottom (black, blue, green); the thermodynamic limit envelope curve is clearly seen. The red line corresponds to an HTSE data analysis, in full agreement with the present results over the entire temperature range except for a marginal difference near $\beta_{c}$; the red arrow indicates the consensus value of $\gamma$.]

[Fig. 4 caption: enlargement of the region near $\beta_{c}$.]

The fluctuations in the plot in fig. 4 are an indication of how sensitive these plots are to the slightest noise in the original data. The temperature region at the far right of fig. 4 corresponds to a region of energy levels each measured at least 500,000 times.
At the other end the energy levels were measured more than 1,000,000 times. Data for still higher $\beta$ are not shown, as the fluctuations become more marked; unfortunately these higher $\beta$ data cannot be used to refine the estimate of $\gamma$. The estimate obtained with the present method is sensitive to the value assumed for $\beta_{c}$, and would become incompatible with the consensus value if one assumed significantly higher values of $\beta_{c}$ (estimates of $\beta_{c}$ are reviewed in ref.). An advantage of this technique is that it is free from the problem of finite size corrections to scaling, although the Wegner thermal corrections to scaling must be taken into account as above. It can also be noted that this is a direct measurement of $\gamma$, rather than an indirect estimate through a combination of exponent estimates as is the case for FSS.

[Fig. 5 caption: normalized susceptibility against $\tau$, assuming the consensus $\beta_{c}$ and $\gamma$; sizes from top to bottom (black, red, green, blue, olive, orange). The excellent fit (yellow) to the thermodynamic limit envelope data corresponds to eq. ([chiinfnorm]).]

[Fig. 6 caption: normalized second moment correlation length against $\tau$ in the thermodynamic limit, assuming the consensus $\beta_{c}$ and $\nu$; raw HTSE data provided by P. Butera.]

Fig. 5 shows the susceptibility data for all sizes in the form of a normalized plot against $\tau$, assuming the consensus exponents. Again it can be seen by inspection at which point for each $L$ the curves leave the thermodynamic limit envelope curve, which is $L$-independent. With the scaling expression eq. ([chiscal]) and using the data at the various $L$, but only in the thermodynamic limit, the fit gives the value of the critical amplitude and the coefficients of the leading confluent and analytic correction terms, read directly off the plot in fig. 5. These values are fully consistent with, but more precise than, earlier estimates from HTSE. It can be seen that the extended scaling expression with only two leading Wegner correction terms gives a very accurate fit to the data over the whole temperature range above the critical temperature. If exactly the same data were expressed using $t$ as the scaling variable rather than $\tau$, then because $t$ diverges at infinite temperature each of the correction terms in the sums would individually diverge at high temperatures. Manifestly it is considerably more efficient to scale with $\tau$ than with $t$. We have made no correlation length measurements ourselves; however, we have carried out an extended scaling parametrization of HTSE thermodynamic limit second moment correlation length data supplied by P. Butera.

[Fig. 7 caption: effective exponent $\nu_{\rm eff}$ against $\tau$ in the thermodynamic limit, assuming the consensus $\beta_{c}$ and $\nu$; raw HTSE data provided by P. Butera.]

Fig. 6 shows a plot of the normalized correlation length against $\tau$. The data can be fitted well by the extended scaling Wegner expression with two leading terms only (note that here the critical amplitude is that of eq. ([xiextdef])). The same equation provides the temperature dependence of the effective exponent $\nu_{\rm eff}$ defined above; see fig. 7. The effective exponent varies only by a few percent over the whole range from $\beta_{c}$ to $\beta=0$. It is clear that the $\beta^{1/2}$ prefactor is an essential part of the temperature dependence of the correlation length. The compact relation eq. ([xiinfnorm]) is very useful, as it allows finite size scaling analyses of the entire data set for the susceptibility. The extrapolation in fig. 5 concerns only data in the thermodynamic limit condition for each $L$; with eq. ([xiinfnorm]) in hand we can plot all the data, and not just the points in the thermodynamic limit condition, by appealing to the Privman-Fisher relation, eq. ([pffss]).
As a first step we ignore corrections to scaling and draw, fig. 8, the leading order extended scaling FSS plot for the susceptibility; on the scale of the plot the scaling is already reasonable for all temperatures above $\beta_{c}$.

[Fig. 8 caption: leading order extended scaling FSS plot of the normalized susceptibility against $L/\xi(\beta)$.]

The confluent correction can then be introduced: the correction function must tend to a constant at large $L/\xi$ and have the appropriate limiting behavior for small $L/\xi$. An explicit compact _ansatz_ which gives these limits automatically is eq. ([fishercorrchi]), with the normalization fixed by convention in the critical limit. Fig. 9 uses the temperature dependence of the thermodynamic limit correlation length, eq. ([xiinfnorm]), and the thermodynamic limit susceptibility, eq. ([chiinfnorm]), to scale the data for all $\beta$ and all $L$, using eq. ([fishercorrchi]) for the correction.

[Fig. 9 caption: full finite size scaling plot of the susceptibility; sizes (black, red, green, blue, cyan).]

[Fig. 10 caption: the correction scaling function; black squares measured, red circles the fit.]

The principal scaling function and the leading correction scaling function were extracted from the data. With the numerical constant fixed by convention, an accurate effective functional form for the principal scaling function is
$$F_{\chi}(x) = [\,\cdots\,]^{1.262},$$
eq. ([fssfchi]); on the scale of the figure, with these fit values, the expression is indistinguishable from the overall curve in fig. 9. By comparing data at small $L$ with data at large $L$, the correction to scaling function can also be estimated; fig. 10 shows the correction scaling function together with the _ad hoc_ Gaussian fit. These FSS functions are universal to within metric constants. In the critical limit, from the definitions above and in the large $L$ limit, the amplitudes are known from critical and thermodynamic limit measurements respectively, so the scaling form eq. ([fchicrit]) has in principle only one free parameter. Remarkably, when the other parameters are known, the FSS crossover function can be encapsulated in a single parameter. The overall scaling function expression covers all $L$ and all temperatures above $\beta_{c}$. The principal scaling function eq. ([fchiform]) contains only one free parameter; it resembles the finite size scaling form which has been used for the Villain model. Previous expressions for principal finite size scaling functions, in particular for the Ising model, were in the form of infinite series and so contained many fit parameters. It would be of interest to study other members of the same family of models in order to see if the compact form of scaling function eq. ([fchiform]) is generally valid, and how the universality is expressed in its parameters. Even below $T_{c}$ it has been noted that there should be a relationship between the non-connected reduced susceptibility and the non-connected correlation length. The extended scaling approach gives explicit leading order predictions for the asymptotic relations, both above and below $T_{c}$, between the finite size non-connected reduced susceptibility and the finite size non-connected correlation length, with one relation holding in the thermodynamic limit and a different one in the opposite, finite-size-dominated limit. For the case of the square lattice Ising model the data confirm both these relationships; unfortunately, as we have no data here for the finite size correlation length either above or below $T_{c}$, we cannot check the relationship. The ratios of susceptibility amplitudes and of leading correction factors above and below $T_{c}$ are universal. The standard reduced susceptibility for the region above $T_{c}$ has been discussed; for temperatures above and below $T_{c}$ we will plot the modulus susceptibility, eq. ([chimoddef]), suitably normalized, as a function of temperature with the exponent values fixed, fig. 11.
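Before turning to the region below $\beta_{c}$ shown in fig. 11, the crossover form used above is easy to experiment with numerically. Since the fitted constants of eq. ([fssfchi]) are garbled in this copy, the sketch below only illustrates one convenient interpolation with the two required limits — $F \to 1$ for $L/\xi \to \infty$ and $F \sim x^{\gamma/\nu}$ for $x \to 0$ — and the metric constant `x0` is a hypothetical placeholder, not the paper's fitted value.

```python
import numpy as np

gamma, nu = 1.2372, 0.6301   # consensus 3d Ising exponents
x0 = 0.5                     # hypothetical metric constant (the fitted value is not recoverable here)

def F_chi(x):
    """Crossover ansatz: F -> 1 as x -> inf and F ~ (x/x0)^(gamma/nu) as x -> 0."""
    return (1.0 + (x0 / x)**(1.0 / nu))**(-gamma)

x = np.array([1e-3, 1e-2, 1e-1, 1.0, 10.0, 100.0])
print(F_chi(x))                               # rises smoothly from the power-law regime to 1
print(F_chi(1e-4) / (1e-4)**(gamma / nu))     # approaches a constant amplitude as x -> 0
```

The attraction of such a form, as the text notes, is that the whole crossover from $L \ll \xi$ to $L \gg \xi$ is carried by a single free constant once the exponents and bulk amplitudes are fixed.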
[Fig. 11 caption: normalized modulus susceptibility as a function of temperature; the upper set of curves corresponds to one side of the transition and the lower set to the other; in both cases the sizes are (black, red, blue).]

By definition the modulus susceptibility becomes equal to the connected reduced susceptibility below $T_{c}$ in the thermodynamic (large $L$) limit. Extrapolating the data corresponding to this limit to the critical point, we find to leading order the critical amplitude and the leading correction coefficient below $T_{c}$. Taking into account the normalization factor, the present estimates follow for the amplitude ratio and the correction amplitude ratio. The amplitude ratio is consistent with previous Monte Carlo estimates; the present correction amplitude ratio estimate is, however, significantly lower than a field theory value. The specific heat is intrinsically difficult to analyze because of the strong regular term and the small value of the critical exponent $\alpha$ (see eq. ([cvextdef])); it turns out in addition that there are strong and peculiar finite size corrections. On the other hand, the statistical precision of the specific heat data is very high, and data over the full range of sizes were included in this analysis. The general leading form of the envelope data in the thermodynamic limit condition is assumed to be eq. ([3dcv]), with amplitudes $A^{+}$ and $A^{-}$ above and below $T_{c}$ respectively. Here $\alpha$ is fixed at the value expected from the hyperscaling relation $\alpha = 2-d\nu$ with the consensus $\nu$. The regular term $B$ is assumed to be temperature independent; its estimate is obtained from the overall fit discussed below. It should be underlined that the extended scaling variable here is based on $\beta^{2}$ and is not $\tau$. In the high temperature range the data can be compared with data points derived by directly summing the HTSE terms from the literature; the point-by-point agreement is extremely close. As a first step we plot the raw specific heat above $T_{c}$ against the scaling variable, fig. 12. The thermodynamic limit data for the different $L$ can be clearly observed, but the points fall on a curve rather than on a straight line even down to very small values of the scaling variable; this is because no regular term has been allowed for. Next, we plot the data minus a trial constant against the scaling variable, as in fig. 13, for various trial values of the constant. In fig. 13, with the optimal constant, the envelope data now lie on a straight line for the lower range of the scaling variable (and the larger $L$). We then make a Privman-Fisher finite size scaling plot of the normalized specific heat against $L/\xi$, with the thermodynamic limit specific heat taken from the extrapolated envelope at small $\tau$ and from the measured envelope curve at higher $\tau$, fitted to an explicit function, fig. 14. The thermodynamic limit correlation length is taken from eq. ([xiinfnorm]). In the finite size limited region the normalized specific heat shows a strong peak, in contrast to the regular FSS crossover observed for the susceptibility. The quality of the global fit is sensitive to the value chosen for the regular term $B$, as the correct choice of this parameter is essential to obtain an $L$-independent peak height in fig. 14. Once $B$ is fixed, fine adjustments are made to the correction terms so as to obtain an $L$- and $\beta$-independent flat plateau in the left hand side thermodynamic limit region. An excellent global eq. ([pffss]) FSS fit is obtained; the optimal value of $B$ can be compared with previous estimates. The normalized specific heat is shown in fig. 15, where the nearly linear thermodynamic limit envelope is obvious.
[Fig. 12 caption: raw specific heat against the scaling variable; sizes from left to right (black, red, green, blue, cyan).]

[Fig. 13 caption: specific heat minus the trial regular term against the scaling variable; sizes from left to right (black, red, green, blue, cyan, magenta); the dashed line indicates the expected slope.]

[Fig. 14 caption: Privman-Fisher FSS plot of the normalized specific heat against $L/\xi$; sizes (black, red, green, blue).]

[Fig. 15 caption: normalized specific heat with the fitted regular term, for all temperatures; sizes (black, red, green, blue).]

[Fig. 16 caption: normalized specific heat; the lower set of curves corresponds to one side of the transition and the upper set to the other; sizes (black, red, green, blue).]

The thermodynamic limit expression from eq. ([3dcv]), together with the peaked FSS curve (for which we have no explicit algebraic expression), provides an accurate representation of the specific heat at all temperatures above $T_{c}$ and for all sizes. This is in contrast to previous analyses of MC data, which were made in terms of truncated series of terms. The ratios of critical amplitudes and of leading correction amplitudes above and below $T_{c}$ are universal. The data yield the critical amplitudes $A^{+}$ and $A^{-}$ above and below $T_{c}$, fig. 16. With the extended scaling definition the amplitude differs by a known factor from the amplitude using the standard definition. The present result is in very good agreement with the HTSE estimate given in the references. The present estimate for the amplitude ratio $A^{+}/A^{-}$ (which is definition independent) is consistent with $\epsilon$-expansion and field theory values, and with the most recent MC values. For the correction amplitudes the data indicate values (figs. 15 and 16) which can be compared with field theory estimates. (It should be noted that the correction amplitudes in our notation correspond to rescaled quantities in the notation of the field theory references.) We cannot carry out a full FSS analysis below $T_{c}$, as we lack information on the correlation length there. We have applied the extended scaling approach to the analysis of two canonical Ising ferromagnet models: the historic ferromagnet on a chain, and the ferromagnet on the simple cubic lattice.
For the $d=1$ model, with appropriate scaling variables for the susceptibility and the correlation length, all the analytic thermodynamic limit expressions are of precisely the extended scaling form over the entire temperature range from zero to infinity, with no confluent corrections, eqns. ([chiext1d]), ([xiext1d]), ([cvext1d]). An appropriate scaling variable for the reduced susceptibility and second moment correlation length in a ferromagnetic Ising model with a non-zero ordering temperature is $\tau = 1-\beta/\beta_{c}$, not the traditional $t$. An exhaustive analysis of high quality numerical data for the $d=3$ Ising model demonstrates that the reduced susceptibility and the second moment correlation length can be represented satisfactorily over the entire temperature range above $\beta_{c}$ by compact expressions containing two leading Wegner correction terms only, eqs. ([chiinfnorm]) and ([xiinfnorm]). For the specific heat on a bipartite lattice (such as the sc lattice) the appropriate extended scaling variable is based on $\beta^{2}$, and the data from $T_{c}$ to infinite temperature can be fitted accurately by eq. ([3dcv]). We give explicit finite size susceptibility scaling functions for the two models. The $d=1$ principal susceptibility scaling function is exact. The $d=3$ principal susceptibility scaling _ansatz_ of eq. ([fchiform]), of the form $[\,\cdots\,]^{a}$ with fitted constants, fits the data to high precision. This form, in which two parameters encapsulate the finite size scaling crossover from the regime $L \ll \xi$ to the regime $L \gg \xi$, might well be of generic application. The critical parameters can be estimated by combining the data in the thermodynamic limit with the data in the finite size scaling region; the results provide complementary estimates for critical amplitudes and critical amplitude ratios. The aim of this work is, however, not so much to improve on the already very accurate existing estimates for universal critical parameters in the intensively studied ferromagnetic Ising model, but to explain the rationale leading to an optimized choice of scaling variables and scaling expressions covering the whole temperature range up to infinite temperature. Here we spell out in detail, for two canonical examples, the $d=1$ and $d=3$ Ising ferromagnets, an "extended scaling" methodology for studying numerical data taken over the entire temperature range, without restricting the analysis to a narrow "critical" temperature region near $T_{c}$. Scaling variables and scaling expressions are chosen following a simple unambiguous prescription inspired by the well established HTSE approach. Using these, and allowing for small leading Wegner correction terms where necessary, critical scaling expressions for the susceptibility, the correlation length, and the specific heat remain valid to high precision from $T_{c}$ right up to infinite temperature. Residual analytic correction terms are either strictly zero (in $d=1$) or very weak (in $d=3$). The approach can readily be generalized to other, less well understood, systems. Standard expressions for the reduced susceptibility and the correlation length for ferromagnets, as conventionally defined, hold for general spin $S$. The extended scaling prescription consists in transposing each HTSE expression so that it takes the form of a series in a variable having leading term $1$, multiplied by a prefactor. In the case of a finite critical temperature Ising ferromagnet, the critical amplitudes are then defined through the extended scaling forms (cf. eq. ([qq])), and for general Ising spin $S$, dimension $d$, and a lattice with $z$ nearest neighbors, the extended scaling critical amplitudes follow from eqns. ([cchiext]) and ([cxiext]); the definitions of the effective exponents are unaltered. With these normalizations the physical significance of the critical amplitudes becomes much more transparent.
lists the standard critical amplitudes as functions of for sc and bcc lattices . in tableai we compare these values with those obtained using the above definitions .the extended scaling values are close to for all ; the differences which can be read directly from the table are a quantitative indication , model by model , of the amplitude of the -dependent correction terms within ..[table : ai ] values of the critical amplitudes for spin with the standard definitions , eqns .[ cchibutera ] and [ cxibutera ] ref . compared with values using the extended scaling definitions eqns .[ cchiext ] and [ cxiext ] [ cols="^,^,^,^,^,^,^,^ " , ] if the corrections to scaling up to infinite temperature are dominated by the leading ( confluent ) term then and . the universal ratio . from fig .13 which shows the data from the table , we can estimate ( with a small offset corresponding to the next - to - leading correction ) .this compares favorably with the estimates from htse , obtained by the rg in the perturbative fixed - dimension approach at sixth order , and from the expansion to second order .plotted against , where and are spin dependent extended scaling susceptibility and correlation length critical amplitudes .black points sc lattice , red points bcc lattice .see text , ref . , and table ai ., width=336 ] [ fig:17 ]it can be noted that in the case of the ising spin glass the energy scale of the interactions is fixed by not by as in the ferromagnetic case ( is zero in a symmetric interaction distribution spin glass ) . from an obvious dimensional argumentthe normalized spin glass `` temperature '' should be .it has long been recognized that for the spin glass the htse expressions contain even terms only ( i.e. an expansion in or rather than in ) so the appropriate scaling variable is or . the argument presented above for the ferromagnet can be repeated _ mutatis mutandis _ on this basis ; the extended scaling expressions for and in spin glasses are the same as those for the ferromagnet ( eqns .[ chiextdef ] and [ xiextdef ] ) but with substituted for everywhere .unfortunately the great majority of publications on spin glasses have used as the scaling variable which is quite inappropriate except for a very restricted range of temperatures near .one consequence is that many published estimates of the exponent in spin glasses are low by a factor of about ( see the discussion in ) .we would like to thank paolo butera for generously providing us with tabulated data sets and for helpful comments .we thank v. privman for an encouraging comment .this research was conducted using the resources of high performance computing center north ( hpc2n ) .99 f. j. wegner , phys .b * 5 * , 4529 ( 1972 ) .v. privman , p.c .hohenberg and a. aharony , in phase transitions and critical phenomena , edited by c. domb and j.l .lebowitz , vol .14 ( academic press , new york , 1991 ) .a. pelissetto and e. vicari , phys. rep . * 368 * , 549 ( 2002 ) .e. luijten , h. w. j. blte , and k. binder , phys .* 79 * , 561 ( 1997 ) .y. garrabos and c. bervillier , phys .e * 74 * , 021113 ( 2006 ) .m. fhnle and j. souletie , j. phys .c * 17 * l469 ( 1984 ) .s. gartenhaus and w. s. mccullough , phys .b * 38 * , 11688 ( 1988 ) .kim , a. j. f. de souza and d. p. landau , phys .e * 54 * , 2291 ( 1996 ) .p.butera and m. comi , phys .b , * 65 * 144431 ( 2002 ) . y. deng and h. w. j. blte , phys . rev .e * 68 * , 036125 ( 2003 ) .m. caselle , m. hasenbusch , j. phys .a * 30 * , 4963 ( 1997 ) [ hep - lat/9701007].4.75(3 ) .i. a. campbell , k. 
hukushima , and h. takayama , phys .lett . * 97 * , 117202 ( 2006 ) .i. a. campbell , k. hukushima , and h. takayama , phys .b * 76 * , 134421 ( 2007 ) . h. g. katzgraber , i. a. campbell and a. k. hartmann , phys . rev .b * 78 * , 184409 ( 2008 ) . i. a. campbell and p. butera , phys . rev .b * 78 * 024435 , ( 2008 ) .k. hukushima , i.a .campbell and h. takayama , int .c * 20 * , 1 ( 2009 ) .r. hggkvist , a. rosengren , d. andrn , p. kundrotas , p. h. lundow and k. markstrm , j. stat . phys . * 114 * 455 ( 2004 ) .r. hggkvist , a. rosengren , p. h. lundow , k. markstrm , d. andrn and p. kundrotas , adv . phys . * 56 * 653 ( 2007 ) .m. e. fisher and r. j. burford , phys . rev . *156 * , 583 ( 1967 ) .e. brzin , j. phys .( paris ) * 43 * 15 ( 1982 ) .p. calabrese , v. martin - mayor , a. pelissetto , and e. vicari , phys .e * 68 * , 036136 ( 2003 ) .j. g. darboux , j. math .pures appl . * 4 * , 377 ( 1878 ) .p. butera and m. comi , arxiv : hep - lat/0204007 .f. j. wegner , in phase transitions and critical phenomena , vol 6 , ed c domb and m s green ( new york : academic press ) ( 1976 ) .j. kouvel and m. e. fisher , phys .a * 136 * , 1626 ( 1964 ) .g. orkoulas , a. z. panagiotopoulos , and m. e. fisher , phys . rev .e * 61 * , 5930 ( 2000 ) .e. ising , z. der physik * 31 * , 253 ( 1925 ) .r. j. baxter , exactly solved models in statistical mechanics , academic press ( 1982 ) .g. a. baker and j. c. bonner , phys .b * 12 * , 3741 ( 1975 ) .b. berche , c. chatelain , c. dhall , r. kenna , r. low , and j. -c .walter , j. stat .p11010 ( 2008 ) .m. hasenbusch , a. pelissetto , and e. vicari , phys .b * 78 * , 214205 ( 2008 ) .h. g. katzgraber , m. krner and a. p. young , phys . rev .b * 73 * , 224432 ( 2006 ) .f. wang and d. p. landau , phys .lett . * 86 * 2050 ( 2001 ) .wang and r. h. swendsen , j. stat .phys . * 106 * 245 ( 2002 ) .p. h. lundow and k. markstrm , cent .* 7 * 490 ( 2009 ) .k. binder , z. phys .b : condens . matter * 43 * , 119 ( 1981 ) .h. arisue and k. tabata , nucl .b * 435 * , 555 ( 1995 ) .p. h. lundow and i. a. campbell , phys .b * 82 * , 024414 ( 2010 ) .m. hasenbusch , phys .b * 82 * , 174433 ( 2010 ) . j. salas and a. d. sokal , j. stat . phys .* 98*,551 ( 2000 ) .r. guida and j. zinn - justin , j. phys .gen . * 31 * , 8103 ( 1998 ) .k. e. newman and e. k. riedel , phys .b * 30 * , 6615 ( 1984 ) .p. butera , private communication .p. butera and m. comi , phys .b * 58 * , 11 552 ( 1998 ) . v. privman and m. e. fisher , phys . rev .b * 30 * 322 ( 1984 ) . m. e. fisher , in critical phenomena , proceedings of the 51st enrico fermi summer school , edited by m.s .green ( academic press , new york , 1972 ) .s. caracciolo , r. g. edwards , s. j. ferreira , a. pelissetto , and a. d. sokal , phys .* 74 * , 2969 ( 1995 ) .j. engels , t. scheideler , nucl .b * 539 * , 557 ( 1999 ) .m. hasenbusch , phys .b * 82 * , 174433 ( 2010 ) . c. bagnuls and c. bervillier , phys . rev .b * 24 * 1226 ( 1981 ) . c. bagnuls , c. bervillier , d. i. meiron and b. g. nickel , phys .b * 35 * 3585 ( 1987 ) . h. arisue and t. fujiwara , phys . rev .e * 67 * , 066109 ( 2003 ) .m. hasenbusch and k. pinn , j. phys .a * 31 * , 6157 ( 1998 ) .x. feng and h. w. j. blte , phys .e * 81 * , 031103 ( 2010 ) . c. bagnuls and c. bervillier , j. phys .a * 19 * , l85 ( 1986 ) .m. c. chang and j. j. rehr , j. phys .a * 16 * , 3899 ( 1983 ) .r. fisch and a. b. harris , phys .38*,785 ( 1977 ) .r. r. p. singh and s. chakravarty , phys .lett . * 57 * , 245(1986 ) . l. klein , j. adler , a. aharony , a. b. harris and y. 
meir , phys . rev .b * 43 * 11249 ( 1991 ) .d. daboul , i. chang , and a. aharony , eur .. j. b * 41 * , 231 ( 2004 ) .
it is often assumed that for treating numerical ( or experimental ) data on continuous transitions the formal analysis derived from the renormalization group theory can only be applied over a narrow temperature range , the `` critical region '' ; outside this region correction terms proliferate rendering attempts to apply the formalism hopeless . this pessimistic conclusion follows largely from a choice of scaling variables and scaling expressions which is traditional but very inefficient for data covering wide temperature ranges . an alternative `` extended scaling '' approach can be made where the choice of scaling variables and scaling expressions is rationalized in the light of well established high temperature series expansion developments . we present the extended scaling approach in detail , and outline the numerical technique used to study the three - dimensional ising model . after a discussion of the exact expressions for the historic ising spin chain model as an illustration , an exhaustive analysis of high quality numerical data on the canonical simple cubic lattice ising model is given . it is shown that in both models , with appropriate scaling variables and scaling expressions ( in which leading correction terms are taken into account where necessary ) , critical behavior extends from up to infinite temperature .
let us consider the following global optimization problem \},\ ] ] where ] , i.e. , they pass through every point of ] .then it follows from the lipschitz condition that , \,\ , 1 \le i \le k,\ ] ] , \\ c_i^+(x ) = f(m_i ) -l ( x - m_i ) , \hspace{1.5 cm } x\in [ m_i , b_i ] , \end{array } \right.\ ] ] where is ( see fig .[ fig.1 ] , left ) a piece - wise linear discontinuous minorant ( called often also _ support function _ ) for over each subinterval ] we have if a point ] .then , analogously to ( [ sup_lip_1])([m_i ] ) , we obtain that the function , \,\ , 1 \le i \le k,\ ] ] , \\l_i^+(x ) = f(m_i ) -h ( x - m_i)^{1/n } , \hspace{1.5 cm } x\in [ m_i , b_i ] , \end{array } \right.\ ] ] is a discontinuous nonlinear minorant for ( see fig .[ fig.1 ] , right ) .the values , are lower bounds for the function over each interval , and can be calculated as follows if an overestimate of the hlder constant is given } l_i(x ) = f(m_i ) -h_1 |(b_i - a_i)/2|^{1/n}.\ ] ] = 14.5 cm as it was discussed above , the direct algorithm works simultaneously with several estimates of the lipschitz constant at each iteration .one of the key features that allow it to do this is a smart representation of intervals ] , and the vertical coordinate is equal to where is from ( [ m_i ] ) ( see , e.g. , the dot and its coordinates ) . let us consider now the intersection of the vertical coordinate axis with the line having the slope and passing through each dot representing subintervals in the diagram shown in fig . [ direct ] .it is possible to see that this intersection gives us exactly the characteristic from ( [ lowb_lip ] ) , i.e. , the lower bound for over the corresponding subinterval ] by a dot with coordinates , where is the current partition of the one - dimensional search interval during the iteration and coordinates of the point are calculated as follows where is from ( [ m_i ] ) , i.e. , is the central point of the interval ] be represented by a dot with horizontal coordinate and vertical coordinate defined in ( [ xdot ] ) , ( [ dot ] ) .then , intervals that are nondominated in the sense of definition [ def3 ] are located on the lower - right convex hull of the set of dots representing the intervals .* proof . *the proof of the theorem [ th0 ] is analogous to the proof of theorem 2.2 from . in practice , nondominated intervals can be found by applying algorithms for identifying the convex hull of the dots ( see , e.g. , the algorithm called jarvis march , or gift wrapping , see ) .= 14.5 cm we describe now the partition strategy adopted by the new algorithm for dividing subintervals in order to produce new trial points .when , at the generic iteration , we identify the set of nondominated intervals , we proceed with the subdivision of each of these intervals only if a significant improvement on the function values with respect to the current minimal value is expected .once an interval becomes nondominated , it can be subdivided only if the following condition is satisfied : where the lower bound is from ( [ lowb ] ) .this condition prevents the algorithm from subdividing already well - explored small subintervals .let us suppose now that at the current iteration of the new algorithm a subinterval ] and ] inherits the point at which the objective function has been evaluated when the original interval ] given the value .[ th1 ] let be a lower bound along the space - filling curve for a multidimensional function , \subset r^n ] .* proof . *see or the recent monograph . 
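To illustrate how the two ingredients just described fit together, here is a minimal Python sketch (a sketch under stated assumptions, not code from the cited implementations): each subinterval [a_i, b_i] with central sample f(m_i) is represented by the dot (d_i, f(m_i)) with d_i = ((b_i - a_i)/2)^(1/N); the Hölder lower bound for a given constant H is R_i = f(m_i) - H d_i; and the nondominated intervals are exactly those lying on the lower-right convex hull of the dots. The dimension N and the example intervals are hypothetical.
....
N = 3   # dimension of the original problem; Hoelder exponent is 1/N

def d_coord(a, b):
    """Horizontal coordinate of the dot representing [a, b]."""
    return (0.5 * (b - a)) ** (1.0 / N)

def lower_bound(f_mid, a, b, H):
    """Hoelder lower bound over [a, b] from the central sample f(m),
    m = (a+b)/2:  R = f(m) - H * ((b - a)/2)**(1/N)."""
    return f_mid - H * d_coord(a, b)

def nondominated(intervals):
    """intervals: list of (a, b, f_mid).  Returns the intervals lying on
    the lower-right convex hull of the dots (d, f_mid), i.e. those that
    minimize the lower bound for SOME Hoelder constant H in (0, inf)."""
    dots = sorted((d_coord(a, b), fm, (a, b, fm)) for a, b, fm in intervals)
    hull = []
    for d, fm, item in dots:              # monotone-chain lower hull
        while len(hull) >= 2:
            (d1, f1, _), (d2, f2, _) = hull[-2], hull[-1]
            # pop hull[-1] if it lies on or above the chord hull[-2]->dot
            if (f2 - f1) * (d - d1) >= (fm - f1) * (d2 - d1):
                hull.pop()
            else:
                break
        hull.append((d, fm, item))
    # keep only the part from the minimal-f dot rightwards (larger d)
    i_min = min(range(len(hull)), key=lambda i: hull[i][1])
    return [item for _, _, item in hull[i_min:]]

# hypothetical partition of [0, 1] with central function values
ivals = [(0.0, 0.4, 2.1), (0.4, 0.7, 1.3), (0.7, 1.0, 1.8)]
print(nondominated(ivals))      # candidate intervals for subdivision
....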
by using the space - filling curves we are able to work with a one - dimensional function in the interval \subset r^1 ] and that we use the metric of hlder , it happens that the width of the nondominated interval to be partitioned at a generic iteration can become very small .when the dimension increases , the width of the subintervals can reach the computer precision . in order to avoid this situation another condition in addition to ( [ cond ] )is required .namely , when an interval \in \{ d^k\ } ] in three equal parts and set , , and compute the values of the function , , where is the -approximation of the peano curve .set the current partition of the search interval , [ 1/3,2/3 ] , [ 2/3,1 ] \} ] at iteration .( subdivision of nondominated intervals ) set , and perform the following steps 2.1 - 2.3 : ( interval selection ) .select a new interval ] and any there exist an iteration number and a point , , such that ._ proof_. the interval partition scheme ( [ partit1 ] ) , ( [ partit2 ] ) used for each subdivision of intervals produces three new subintervals of the length equal to a third of the length of the subdivided interval .since , to prove the theorem it is sufficient to prove that for a fixed value , after a finite number of iterations , the largest subinterval of the current partition of the domain will have the length smaller than .in such a case , in the -neighborhood of any point of there will exist at least one trial point generated by the algorithm . to see this , let us fix an iteration number and consider the group of the largest intervals of the partition having the horizontal coordinate ( in the diagram of fig .[ fig.4 ] this group consists of two points : the dot and the dot above it ) . as can be seen from the scheme of the algorithm mgas , for any this group is always taken into account when nondominated intervals are looked for .in particular , an interval from this group , having the smallest value , must be partitioned at each iteration of the algorithm .this happens because there always exists a sufficiently large estimate of the hlder constant for the function such that the interval is the nondominated interval with respect to and condition ( [ cond ] ) is satisfied for the lower bound .three new subintervals having the length equal to a third of the length of are then inserted into the group with a horizontal coordinate . since each group contains a finite number of intervals , after a sufficiently large number of iterations all the intervals of the group be divided and the group will become empty . as a consequence , the group of the largest intervals will now be identified by , where the difference is finite .the same procedure will be repeated with this new group of the largest intervals , and the next new group , etc .this means that there exists a finite iteration number such that after performing iterations of the algorithm mgas , the length of the largest interval of the current partition is smaller than and , therefore , in the -neighborhood of any point of the search region there will exist at least one trial point generated by the algorithm . in fig .[ fig.5 ] , an example of convergence of the sequence of trial points generated by the algorithm mgas in dimension using the approximation of the level to the peano curve is given .the zone with the high density of the trial points corresponds to the global minimizer .[ fig.51 ] shows how this problem was solved in the one - dimensional space . 
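Before turning to the details of fig. 51 discussed next, the following minimal sketch shows the subdivision step the convergence proof relies on: the selected interval is split into three equal parts, the central third inherits the evaluation already made at m = (a+b)/2, and two new trials are placed at the midpoints of the outer thirds. The improvement test is shown with a placeholder tolerance (the actual ξ is set via eq. ([csi])), and the minimum-width guard delta is likewise an illustrative stand-in for the extra condition mentioned above.
....
def trisect(a, b, f_m, f):
    """Split [a, b] into three equal parts.  The central third inherits
    the evaluation f_m already made at m = (a+b)/2; the two outer
    thirds receive fresh evaluations at their own midpoints, so each
    subdivision costs exactly two new trials."""
    h = (b - a) / 3.0
    parts = [(a, a + h), (a + h, b - h), (b - h, b)]
    children = []
    for k, (lo, hi) in enumerate(parts):
        fc = f_m if k == 1 else f(0.5 * (lo + hi))
        children.append((lo, hi, fc))
    return children

def worth_dividing(R, f_min, xi=1e-4, width=1.0, delta=1e-12):
    """Divide a nondominated interval only if its lower bound R promises
    a significant improvement on the current record f_min (cf. the
    condition (cond) above) and the interval is not already tiny."""
    return (f_min - R >= xi * abs(f_min)) and (width > delta)

# example: subdivide [0, 1], whose center x = 0.5 was already evaluated
f = lambda x: (x - 0.3) ** 2
print(trisect(0.0, 1.0, f(0.5), f))
....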
in the upper part of fig .[ fig.51 ] , the one - dimensional function corresponding to the curve shown in fig .[ fig.5 ] and the respective trial points produced by the mgas at the interval ] and therefore to the global minimum points of the one - dimensional function .the peano curves used for reduction of dimensionality establish a correspondence between subintervals of the curve and the -dimensional subcubes of the domain \subset r^n ] , i.e. , the points in the -dimensional domain may be approximated differently by the points on the curve in dependence on the mutual disposition between the curve and the point in ] we mean the set of points ( called _ images _ ) on the curve minimizing the euclidean distance from .it was shown in that the number of the images ranges between and .these images can be located on the curve very far from each other despite their proximity in the -dimensional space .thus , by using the space - filling peano curve , the global minimizer in the -dimensional space can have up to images on the curve , i.e. , it is approximated by , , points such that where is defined by the space - filling curve . obviously , in the limiting case , when and the iteration number , all global minimizers will be found .but in practice we work with a finite and , i.e. , with a finite trial sequence , then to obtain an -approximation of the solution it is sufficient to find _ only one _ of the images on the curve .this effect may result in a serious acceleration of the search ( see for a detailed discussion ) .in this section , we present results of numerical experiments performed to compare the new algorithm mgas with the original direct algorithm proposed in and its locally biased modification lbdirect introduced in .these methods have been chosen for comparison because they , just as the mgas method , do not require the knowledge of the objective function gradient and work with several lipschitz constants simultaneously .the fortran implementation of the two methods described in and downloadable from have been used in both methods . to execute numerical experiments with the algorithm mgas , we should define its parameter from ( [ cond ] ) . in direct ( see ) , where a similar parameter is used , the value is related to the current minimal function value and is fixed as follows : the choice of between and has demonstrated good results for direct on a set of test functions ( see ) .since the value has produced the most robust results for direct ( see , e.g. , ) , exactly this value was used in ( [ csi ] ) for direct in our experiments . the same formula ( [ csi ] ) and the same value were used in the new algorithm , too .the series of experiments involves a total of 800 test functions in the dimensions generated by the gkls - generator described in and downloadable from http://wwwinfo.deis.unical.it//gkls.html .more precisely , eight classes of 100 functions have been considered .the generator allows one to construct classes of randomly generated multidimensional and multiextremal test functions with _ known _ values of local and global minima and their locations .each test class contains 100 functions and only the following five parameters should be defined by the user : + problem dimension ; + number of local minima ; + value of the global minima ; + radius of the attraction region of the global minimizer ; + distance from the global minimizer to the vertex of the paraboloid .the generator works by constructing in a convex quadratic function , i.e. 
, a paraboloid, systematically distorted by polynomials. In our numerical experiments we have considered classes of continuously differentiable test functions with the stated number of local minima. The global minimum value has been fixed to the same value for all classes. An example of a function generated by the GKLS can be seen in fig. [fig.6]. By changing the user-defined parameters, classes with different properties can be created. For example, a more difficult test class can be obtained either by decreasing the radius of the attraction region of the global minimizer or by increasing the distance from the global minimizer to the paraboloid vertex. In this paper, for each dimension, two test classes were considered: a simple one and a difficult one; see table [table1], which describes the classes used in the experiments. Since the GKLS generator provides functions with known locations of global minima, the experiments have been carried out using the following stopping criteria.
[table1] Description of the 8 classes of test functions used in the experiments.
Figure [fig.7] shows a comparison of the three methods using the so-called _operating characteristics_ introduced in 1978 (see, e.g., the English-language description cited). These characteristics show very well the performance of the algorithms under comparison for each class of test functions. On the horizontal axis we have the number of function evaluations, and the vertical coordinate of each curve shows how many problems have been solved by one or another method after executing the number of function evaluations corresponding to the horizontal coordinate. For instance, the first graph in the right-hand column (class 2) shows that after 1000 function evaluations LBDIRECT has found the global solution for 33 problems, DIRECT for 47 problems, and MGAS for 84 problems. Thus, the behavior of an algorithm is better if its characteristic lies higher than the characteristics of its competitors. In figure [fig.7], the left-hand column of characteristics shows the behavior of the algorithms MGAS, DIRECT, and LBDIRECT on classes 1, 3, 5, and 7. The right-hand column presents the situation when the more difficult classes 2, 4, 6, and 8 have been used. The global optimization problem of a multi-dimensional, non-differentiable, and multiextremal function has been considered in this paper.
it was supposed that the objective function can be given as a ` black - box ' and the only available information is that it satisfies the lipschitz condition with an unknown lipschitz constant over the search region being a hyperinterval in .a new deterministic global optimization algorithm called mgas has been proposed .it uses the following two ideas : the mgas applies numerical approximations to space - filling curves to reduce the original lipschitz multi - dimensional problem to a univariate one satisfying the hlder condition ; the mgas at each iteration uses a new geometric technique working with a number of possible hlder constants chosen from a set of values varying from zero to infinity evolving so ideas of the popular direct method to the field of hlder global optimization .convergence conditions of the mgas have been established .numerical experiments carried out on 800 of test functions generated randomly have been executed .it can be seen from the numerical experiments that the new algorithm shows quite a promising performance in comparison with its competitors .moreover , the advantage of the new technique becomes more pronounced for harder problems .casado l.g . , garca i. , and sergeyev ya.d . ,_ interval algorithms for finding the minimal root in a set of multiextremal non - differentiable one - dimensional functions _ , siam j. on scientific computing , 24(2 ) , 359376 ( 2002 ) .gablonsky m.j . , _ direct v2.04 fortran code with documentation _ , 2001 , http://www4.ncsu.edu/ ctk / software / directv204.tar.gz .gablonsky m.j . , _ modifications of the direct algorithm _ , ph.d thesis , north carolina state university ,raleigh , nc , 2001 .gablonsky m.j . andkelley c.t . , _ a locally - biased form of the direct algorithm _ , j. of global optimization , 21 , 2737 ( 2001 ) . gaviano m. , kvasov d.e . , lera d. , and sergeyev ya.d .software for generation of classes of test functions with known local and global minima for global optimization _, acm toms , 29(4 ) , 469480 ( 2003 ) .horst r. and pardalos p.m. , _ handbook of global optimization _ , kluwer , doordrecht ( 1995 ) .horst r. and tuy h. , _ global optimization : deterministic approaches _ , springer , berlin , ( 1996 ) .gourdin e. , jaumard b. , and ellaia r. , _ global optimization of hlder functions _ , j. of global optimization , 8 , 323 - 348 ( 1996 ) .jones d.r . , perttunen c.d . , and stuckman b.e . , _lipschitzian optimization without the lipschitz constant _, j. of optimization theory and applications , 79 , 157181 ( 1993 ) .kvasov d.e ., pizzuti c. , and sergeyev ya.d . ,_ local tuning and partition strategies for diagonal go methods _ , numerische mathematik , 94(1 ) , 93106 ( 2003 ) .kvasov d.e . and sergeyev ya.d . , _ a univariate global search working with a set of lipschitz constants for the first derivative _ , optim .letters , 3 , 303318 ( 2009 ) .lera d. and sergeyev ya.d ., _ global minimization algorithms for hlder functions _ , bit , 42(1 ) , 119133 ( 2002 ) .lera d. and sergeyev ya.d ., _ an information global minimization algorithm using the local improvement technique _ , j. of global optimization , 48 , 99112 ( 2010 ) .lera d. and sergeyev ya.d ., _ lipschitz and hlder global optimization using space - filling curves _ ,numer . maths . , 60 , 115129 ( 2010 ) .lera d. and sergeyev ya.d ., _ acceleration of univariate global optimization algorithms working with lipschitz functions and lipschitz first derivatives _ , siam j. optim . , 23(1 ) , 508529 ( 2013 ) .martnez j.a . 
,casado l.g ., garca i. , sergeyev ya.d . , toth b. _ on an efficient use of gradient information for accelerating interval global optimization algorithms _ , numerical algorithms , 37 , 6169 ( 2004 ) .pintr j.d . , _global optimization in action _ , kluwer , dordrecht ( 1996 ) .piyavskii s.a . , _an algorithm for finding the absolute extremum of a function _ , ussr comput . math . and math ., 12 , 5767 ( 1972 ) .sergeyev ya.d . ,daponte p. , grimaldi d. , and molinaro a. , _ two methods for solving optimization problems arising in electronic measurements and electrical engineering _, siam j. optim . , 10(1 ) , 121 ( 1999 ) .sergeyev ya.d . and kvasov d.e ._ global search based on efficient diagonal partitions and a set of lipschitz constants _ , siam j. optim .16(3 ) , 910937 ( 2006 ) .sergeyev ya.d . and kvasov d.e . , _ diagonal global optimization methods_ , fizmatlit , moscow ( 2008 ) , ( in russian ) .sergeyev ya.d ., strongin r.g . , and lera d. , _ introduction to global optimization exploiting space - filling curves _ , springer , new york ( 2013 ) .strongin r.g . , _ numerical methods in multiextremal problems : information - statistical algorithms _ , nauka , moscow ( 1978 ) , ( in russian ) .
In this paper, the global optimization problem with the feasible region being a hyperinterval and the objective function satisfying the Lipschitz condition with an unknown Lipschitz constant is considered. It is supposed that the function can be multiextremal, non-differentiable, and given as a 'black-box'. To attack the problem, a new global optimization algorithm based on the following two ideas is proposed and studied both theoretically and numerically. First, the new algorithm uses numerical approximations to space-filling curves to reduce the original Lipschitz multi-dimensional problem to a univariate one satisfying the Hölder condition. Second, at each iteration the algorithm applies a new geometric technique working with a number of possible Hölder constants chosen from a set of values varying from zero to infinity, so that ideas introduced in the popular DIRECT method can be used in Hölder global optimization. Convergence conditions of the resulting deterministic global optimization method are established. Numerical experiments carried out on several hundreds of test functions show quite a promising performance of the new algorithm in comparison with its direct competitors.
*Key words*: global optimization, Lipschitz functions, space-filling curves, Hölder functions, deterministic numerical algorithms, DIRECT, classes of test functions.
even more than simply continuing the empirical research of the past , we are at the threshold of a new era , with a new leap beyond the current energy frontier . following the excellent presentations at this symposium ,it is perhaps worthwhile to pause a moment and consider our most recent leaps of energy frontiers .what do they suggest ? what happened when the isr and the `` 200 gev '' machine turned on ?available center - of - mass energy jumped from 8 gev to 20 - 50 gev .new energy territory opened to us .we were surprised , even shocked by how different the world seemed .almost immediately , we saw the advent of high- events at both the cern isr and at fermilab .pions were observed with cross sections no longer dropping exponentially with . rather , the drop with was more like a power - law , eventually reaching that for hard point - like scattering ! backgrounds for many planned experiments were orders of magnitude larger than expected .more fundamentally , we observed ( as we now understand it ) the effects of the quark substructure of hadrons .we also started to produce particles essentially undreamed of before - well , dreamed of by only a few foolhardy visionaries .in addition to the pions at high coming from hadronic interactions , a plethora of leptons appeared .their numbers could not be explained by the decay of known strongly - produced particles. eventually , these leptons were seen to come from the semileptonic decays of the previously - unknown heavy quarks .perhaps the excess of leptons reminds you of the apparent excesses of heavy quarks seen in hadronic interactions today ( especially of b mesons and and onia ) .it may be that what mary bishai referred to as a -production excess of 1.2 - 1.9 times theory, will continue to fall as theoretical models of production are refined .the fractional excess does seem to be coming down with time .however , it is possible that we are already seeing the effects of something which we will only understand once we have data from the lhc . 
What happened when the big CERN and Fermilab hadron colliders turned on? Available energy jumped from a few tens of GeV to 630 and 2,000 GeV. Again, new energy territory opened for exploration. We were again surprised - maybe not so much by a new energy scale which was predicted (the W and Z masses), but by the very large mass of the top quark. I remember well how, upon seeing evidence for the bottom quark, we immediately expected to see the top quark at about three times the mass of the bottom quark, just like the factor of three between the bottom quark and the strange quark. The ratio of top to bottom quark masses is more like 40 than 3! We do not understand why the top quark is so heavy to this very day. We have seen no direct evidence of any of the suggested new particles: not sequential W' or Z' bosons, not Higgs, not SUSY, nor techni-particles. We have not seen a break in spectra, nor the onset of a new level in the hierarchy of matter, nor any suggestion of something more fundamental than quarks and leptons. As you have shown at this symposium, you are building detectors, and solving technical and managerial problems. You are also building expanded collaborations and new tools to deal with the new sociology: learning how to live with larger and increasingly internationalized collaborations, learning new techniques and tools for ever larger projects, and beginning to experiment with new computing paradigms like Grid. I have been impressed by the trigger tables shown and the expanding physics goals presented by the experiments. We had talks on heavy-ion collision measurements in the big detectors, ATLAS and CMS, detection of quark jets in ALICE, and the appearance of B physics everywhere. Detectors have had design and engineering updates, and simulations continue to include more complete detector modeling. As usual, the results suggest somewhat less capability, but hopefully more realistic expectations. At the same time, perhaps motivated in part by the new understanding, better algorithms have been developed to compensate for somewhat reduced detector expectations; e.g., in tracking and heavy-quark tagging algorithms. In order to continue this progress, mock data challenge efforts cannot be over-valued, both for improving the physics reach of experiments and for debugging the computing environment of the future. Even more, better motivation will come from the data itself once you have the real thing. As an example of how time with actual physics data helps, let me cite the work reported by Juan Estrada at a seminar at Fermilab just the week before the symposium, and referred to by Jianming Qian. Unlike previous CDF and DZero top-quark mass analyses that used templates, this new DZero analysis uses lepton-plus-jets events and makes a direct calculation of the signal and background probability for each event. That probability depends on all measured momenta of the final-state lepton and jets, and each event's contribution depends on how well it is measured.
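Schematically, such a per-event probability analysis scans a top-mass hypothesis and multiplies the per-event probabilities into one joint likelihood, so that well-measured events automatically carry more weight. The sketch below is a toy stand-in only: the Gaussian 'signal' and flat 'background' densities replace the leading-order matrix-element integrals over detector transfer functions used in the real analysis, and every number is invented for illustration.
....
import numpy as np

def neg_log_likelihood(m_top, events, p_sig, p_bkg, f_sig=0.7):
    """-ln L(m_top) from per-event probabilities.  Sharply measured
    events give sharply peaked p_sig and hence dominate the fit."""
    nll = 0.0
    for x in events:
        nll -= np.log(f_sig * p_sig(x, m_top) + (1.0 - f_sig) * p_bkg(x))
    return nll

# toy densities: x = (mass estimate, per-event resolution) in GeV
p_sig = lambda x, m: np.exp(-0.5 * ((x[0] - m) / x[1]) ** 2) \
                     / (np.sqrt(2.0 * np.pi) * x[1])
p_bkg = lambda x: 1.0 / 200.0          # flat over a 200 GeV window

events = [(172.0, 8.0), (168.0, 15.0), (177.0, 6.0)]   # invented events
masses = np.linspace(150.0, 200.0, 201)
best = min(masses, key=lambda m: neg_log_likelihood(m, events, p_sig, p_bkg))
print("preferred mass hypothesis:", best)
....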
A preliminary value for the top quark mass was quoted. The improvement in statistical error is equivalent to a factor of 2.4 in the size of the data sample. The relative error in this one decay channel alone is 3%, compared to 2.9% from the previous combined CDF and DZero average for all analysed decay channels. You have shown real, substantial progress from the past year at this symposium. It has been very good to see the progress on the LHC itself, and on the detectors, software, and physics planning. We can all be happy that civil construction is now going well, and magnet production is getting better. Roger Cashmore spoke of the "nightmare" of the civil construction problems that are now behind us. We are also happy to see so many detector components getting into construction. We have heard about facing real challenges. Some have been technical; e.g., in military radiation-hard electronics, some electronic noise and yield issues, material budgets, and radiation damage effects. Some challenges have been financial in origin, leading to scope changes and, sometimes, to additional funding. Other challenges have been with schedules, requiring continuous review and adjustments (e.g., lack of test-beam availability). Personally, I am happy to see some full system tests, and indications that planning for commissioning is getting serious attention. We are all happy to see solutions over the last year to these and other problems. Your progress is important to us at Fermilab. First, it is important for our physics program (CMS) and our superconducting magnet program. Mike Witherell, in his Director's welcome to you, noted that only Fermilab's Tevatron collider and neutrino programs are larger here at the lab. Second, your progress is important for the planning of much of the rest of our program as well. For the Tevatron collider, currently the energy-frontier machine, the importance is obvious. However, in fact, your progress is important to all of HEP. Consider the implications for B factories! Nevertheless, even as an LHC outsider, I have concerns. The scale of the industrial technology needed for the machine and for the detectors is still new to our community. It is not obvious that accelerator components will stay ahead of the "just in time" schedule. We have all been invited to look at the CERN LHC "dashboard" on the web. There is no real evidence yet of the rapid change in delivery slopes needed to meet the schedule for beams. Timing can be everything. Staging of detector components may get us to the point where we cannot buy needed components later. This is already a problem. Some commercial technologies may not last long enough for our development and construction schedules. DMILL radiation-hard ASIC technology is going away already; will 0.25 micron ASIC technology be far behind? Even networking and computing components are a worry. Consider the "Objectivity" software suite. At the same time, there are more technology decisions yet to be made than is healthy at this stage. I would mention especially the CMS pixel size, the ATLAS b-layer pixel size, the CMS electromagnetic calorimeter electronics, and the LHCb RICH photon-detection decision. Common computing approaches can save duplication, and help by stressing systems in more than one environment. This can head off problems later, as the environments evolve for all those using a given approach.
Yet this commonality of approach is just starting. I was surprised, in the talks of Lothar Bauerdick and Nick Brook, to see how little integrated into the big experiments the Grid projects are so far. Testing and commissioning times are getting squeezed almost everywhere - already! The main message from the experiments at this symposium has been that discoveries may be made very early, if nature is as expected by most. In my summary, I have chosen to show some of the most frequently referenced transparencies. Since these plots have been shown so often, I won't even have to tell you what they are, even if none of us remembers or cites where the plots were first shown! The famous Higgs sensitivity plot is made to show the early discovery expected over the whole range of likely mass. Recently, including the vector-boson fusion process helps in the previously difficult low-mass region. For SUSY, again we are assured of rapid discovery, the plots showing possible discovery even with only "1 day" of data - OK, it's a good day. In the area of heavy ions, I'd like to mention the particularly good review of heavy-ion physics at RHIC by Gunther Roland. He showed a consistent _description_ of the final state, "but noted that we're missing a picture of [the] dynamical evolution" that gets us there from the initial conditions. Many speakers showed where the LHC sits on the phase diagram of temperature vs. baryonic chemical potential. This plot does not do justice in my eyes to the role of the LHC. The LHC is shown in a tiny corner of the plot. Yet the missing "picture of dynamical evolution" may require:
* more dynamic range in kinematic variables;
* longer time for escaping partons to feel the effects of the quark-gluon plasma;
* larger samples of charm, bottom, and onia.
All these features should be available at the LHC. The table from the talk of Russell Betts shows quantitatively the much higher energy densities, the multiplicities characteristic of more quark-gluon plasma, and the longer times available for the plasma to influence the outgoing states. All these should make the anticipated effects much easier to see at the LHC and to understand.
[table1] Pb+Pb collisions at the SPS, RHIC, and LHC.
Finally, I'd like to focus on two personal-favorite physics topics: compositeness and extra dimensions. Signatures for both of these topics may come to be manifest in the same way that high-pT events came to us at the ISR and at Fermilab. Moreover, such signals can appear quite early, with subsets of working detectors and the simplest of analyses. Our keynote speaker, Scott Willenbrock, gave us a similar message to mine. In many ways, physics has never been more exciting.
* We are about to extend the energy frontier by a factor of 7.
* We have an excellent model of what we have seen already.
* However, we know that our model is incomplete, and we have detailed predictions which soon can be tested definitively.
We are not at "the end of science," but hopefully at the threshold of exciting new science. What will the new science be? I really don't know. However, personally, I expect we will have major surprises. I expect surprises comparable to those when the CERN ISR and Fermilab began.
In the face of the new energy frontier:
* be prepared to read out all working detectors;
* be prepared for analysis of early, imperfect data;
* be prepared for discovery;
* be prepared for surprises in signal and in backgrounds;
* and be prepared to think new thoughts!
Good luck! I would like to end with a word of thanks
This summary talk reviews the LHC 2003 symposium, focusing on expectations as we prepare to leap over the current energy frontier into new territory. We may learn from what happened in the two most recent examples of leaping into new energy territory; quite different scenarios appeared in those two cases. In addition, we review the status of the machine and experiments as reported at the symposium. Finally, I suggest an attitude which may be most appropriate as we look forward to the opportunities anticipated for the first data from the LHC. As we contemplate the three days of excellent talks we have just experienced, we are invited to think about how to convey our science and its goals to the public. In that context, we should understand where the public perceptions are. I am reminded of a recent discussion among knowledgeable people, motivated by the book "The End of Science" by John Horgan. In it, Horgan says: "and now that science - true, pure, empirical science - has ended, what else is there to believe in?" It is too bad that anyone thinking this was not here at this symposium! We are here reaffirming that empirical science is alive and well.
for a number of reasons , the present evaluation of charged - particle thermonuclear reaction rates represents a significant step forward compared to previous work .first , we developed a new method of computing reaction rates , which is based on monte carlo techniques and assigns to each nuclear input quantity a physically motivated probability density function .the method is described in the first paper of this series ( paper i ) and the numerical results for reaction rates and rate probability density functions are presented in the second paper of this series ( paper ii ) .second , a number of years have passed since the last two evaluations of similar scope as the present work have been published .the rapid progress seen in the field of nuclear astrophysics over the past few years clearly warrants a new charged - particle reaction rate evaluation .third , thermonuclear reaction rates are not directly measured quantities , but are derived from a multitude of different nuclear physics input quantities .consequently , the quality and reliability of a reaction rate evaluation hinges directly on the transparency and reproducibility of the input data .the last two aspects will be addressed in the present paper .we present here the input files to the monte carlo code ` ratesmc ` containing the nuclear physics data used to compute our reaction rates . for each reaction we list nuclear properties ; nonresonant s - factors ; recommended resonance energies , strengths and partial widths ; upper limits of resonance contributions ; numerical rate integrations ; and interferences between resonances of same spin and parity .section 2 contains a brief discussion of general procedures , literature sources and the status of data . in sec .3 we focus attention on a number of important issues . a brief summary is given in sec .the meaning of each input row in a sample input file to ` ratesmc ` is explained in app .[ sec : inptfiles ] .the actual nuclear physics input data for each of the reactions evaluated in the present work are listed in app .[ sec : nuclinput ] .we will briefly describe our procedures for nuclear data analysis and evaluation .the details are too numerous to list here and depend on a case - by - case basis , but some general principles can be outlined .all the available sources of literature have been consulted .we focus our attention on refereed journals , but in exceptional circumstances conference proceedings and ph.d .theses are also taken into account .if a particular quantity has been measured more than once , we adopt a weighted average ( except for resonance strengths ; see below ) , unless there was reason to exclude unreliable measurements . in some cases we succeeded in correcting original data for systematic effects , for example ,improved stoichiometries , stopping powers , coincidence summing corrections , and so on .the reader may find the discussions in refs . illuminating .for many resonances we use the evaluated energies of endt . however , for papers published after 1998 we are compelled to perform our own evaluation .resonance energies can be measured directly using thick - target excitation functions or are derived by using measured excitation energies and the reaction q - value ( ; sec .5.1.1 in paper i ) .we adopted in general the method that resulted in the smallest uncertainties . for reaction q - valueswe use the evaluated results of audi and collaborators , unless more recently ( after 2003 ) measured masses have been reported in the literature ( see tab . 
1 of paper ii ) .assignments of nuclear level spins and parities are especially precarious in the recent literature . as pointed out in the introduction to his 1990 evaluation , endt carefully distinguished between strong ( that is , model - independent ) and weak arguments when assigning values .assignments based on weak arguments were placed in parenthesis by refs . . in most papers publishedafter 1998 this important distinction between strong and weak arguments is blurred and it requires now significant efforts by a reviewer to disentangle the arguments for assignments from different reaction and decay studies .we can hardly overstate to our colleagues the importance of strictly following the established rules for assigning spins and parities ( see introduction of ref . and references therein ) . regarding resonance strengths, we did not follow in general the procedure of the nacre reaction rate evaluation , where for a given resonance the strength is found from a weighted mean of values obtained in different measurements . instead ,whenever possible we normalize literature results to a set of carefully measured standard " resonance strengths ( see tab . 4.12 in iliadis ) . note that no standard resonance strengths exist at present , for example , for ( , ) reactions or for ( p, ) reactions on any of the ne isotopes .partial widths can be derived from measured resonance strengths , mean lifetimes or spectroscopic factors . for details .we prefer , whenever feasible , to calculate reaction rates by numerical integration ( see eq .( 1 ) of paper i ) using partial widths as input instead of computing them analytically using the resonance strength ( see eq . (10 ) of paper i ) .the former procedure automatically accounts for the low- and high - energy tails of a resonance and makes any artificial corrections ( for example , for nonresonant " s - factor tails ) obsolete . in most cases involving short - lived targets ,we compute the partial widths by using measured spectroscopic factors and mean lifetimes of the corresponding levels in the mirror nucleus .this method and its justification has been described in detail by iliadis et al . .uncertainties are treated in the following manner . for measured resonance energiesthe reported or derived mean value and the corresponding ( 1 " ) uncertainty is associated with the parameters and , respectively , of a gaussian probability density function ( sec .5.1.1 of paperi ) . for measured resonance strengths or partial widths we associate the mean value and corresponding uncertainty with the expectation value and the square root of the variance , respectively , of a lognormal distribution ( sec .5.1.2 of paper i ) . the lognormal parameters and are then computed from eq .( 27 ) of paper i. if uncertainties are not available from the literature , we use certain global values to the best of our judgement : ( i ) the direct capture reaction rate is usually calculated using the method described in secs . 2.1 and 5.1.3 of paper i. it is sometimes assumed that this procedure represents a purely theoretical approach .however , this assumption is incorrect since the absolute magnitude of the direct capture s - factor is determined by using _ experimental _ spectroscopic factors in eq .( 35 ) of paper i. 
we adopt in most of these cases an uncertainty ( square root of the variance ) of 40% for the direct capture s - factor ( see eq .( 5 ) of paper i ) .this value is based on a systematic comparison of experimental spectroscopic factors from direct capture and from transfer reaction studies ; ( ii ) when particle and -ray partial widths are calculated from measured spectroscopic factors and -ray transition strengths , we assume uncertainties of 40% and 50% , respectively. the choice of these values is supported by a systematic comparison of partial widths and by an uncertainty analysis of measured spectroscopic factors ; ( iii ) in exceptional cases we adopt spectroscopic factors and -ray transition strengths from the nuclear shell model . for the sake of consistency, we assume values of 40% and 50% for the uncertainties of shell model based particle and -ray partial widths , respectively . upper limits of particle partial widths are sampled according to the procedure outlined in sec .5.2.1 of paper i. specifically , the porter - thomas distribution of dimensionless reduced widths is obtained with mean values of for protons ( or neutrons ) and for -particles , and the distribution is truncated at the experimental upper limit of the dimensionless reduced width ( see eq .( 38 ) of paper i ) . if the spin and parity of a particular nuclear level is unknown , we assume formation of the expected ( but yet undetected ) resonance via s - waves ( ) . table 1 of paper ii contains a list of references for each reaction evaluated in the present work .the list is not exhaustive by any means , but provides the reader with an impression on the most recent or relevant work considered here .we hope that making our recommended input data available to the community represents a step forward in terms of the reproducibility and transparency of evaluated reaction rates .the input data presented in app . [ sec : nuclinput ] are based on the most reliable information that we are able to extract from the published literature .they reflect our best current knowledge of these parameters . by no meanscan we exclude the possibility that , for example , a reported resonance strength was derived using the wrong stoichiometry , or that an incorrect value has been reported for a particular nuclear level .this issue should be kept in mind when drawing conclusions from our monte carlo reaction rate uncertainties .many test runs have been performed to ensure proper functioning of our new monte carlo code ` ratesmc ` .it is gratifying to see that the present reaction rates agree with those calculated using the previous code ` rateerrors ` in those simple and restricted circumstances where the latter code provides accurate results ( see discussion in sec .3.3 of paper i ) .however , a number of issues are disregarded by ` ratesmc ` although their implementation is straightforward from a computational point of view : ( i ) when integrating the rate contribution of a resonance numerically , and the resonance can be formed via two orbital angular momenta , and , we only take the dominant contribution into account when scaling the particle partial width with energy ( see eq .( 16 ) of paper i ) ; ( ii ) when integrating the rate contribution of a capture resonance numerically , and the -ray decay strength is fragmented , we only take the strongest primary transition ( and the associated final state excitation energy ) into account when scaling the -ray partial width with energy ( see eq . 
(17 ) of paper i ) ; ( iii ) when upper limits of partial widths are involved , the code samples over a porter - thomas distribution of dimensionless reduced widths , with a mean value of .the mean value is found from a least - squares fit , as discussed in sec .5.2.1 of paper i , but the _ uncertainty of the mean value _ is not taken into account in the present version of the code ; ( iv ) in some cases ( i.e. , for proton - induced reactions on , , mg , , , and ) , where information had to be extracted from the mirror nucleus , the analog assignments are not unambiguous . in principle , one could sample over a discrete distribution representing the different choices of analog assignments , but the present version of the code disregards this option ; ( v ) for pairs of competing reactions , such as ( p, ) and ( p, ) , or ( , ) and ( ,n ) , the partial widths entering in the rate calculations are correlated .therefore , if our reaction rates are used in monte carlo nucleosynthesis studies , or are employed to derive the branching ratio ( for example , ) , then the resulting uncertainties on element abundances or branching ratios are likely overestimated .we did not implement the above options in the code since it is our intention to keep the formalism as simple and transparent as possible .although we doubt that these issues will have major consequences , we may consider them in a future version of ` ratesmc ` .the present monte carlo reaction rates also refine the motivation for future measurements in nuclear astrophysics .for example , if an experiment reduces the upper limit of a spectroscopic factor by an order of magnitude , then the upper limit " of the partial _ classical _ reaction rate is also reduced by that same factor .however , the monte carlo reaction rates , which are more reliable , behave in an entirely different manner , as explained at length in sec .4.4 of paper ii .furthermore , there will be now a new emphasis on precise measurements of resonance energies since the associated uncertainties often give rise to long tails in the reaction rate probability density function ( secs .4.2 and 4.3 of paper ii ) .finally we would like to comment on direct " versus indirect " measurements . the expression direct " refers to the measurement of a reaction cross section of astrophysical interest .an indirect " measurement , on the other hand , refers to a study of some nuclear quantity ( by using a reaction that is not necessarily the same as the one that occurs in the stellar plasma ) from which the cross section of astrophysical interest may be partially inferred .for example , measurements of the reactions (p , p) , (p , t) mg or ( , n) mg in order to study astrophysically important mg states represent indirect studies of the (p,) mg reaction , whereas an experiment using a radioactive beam on a hydrogen target is called a direct measurement of the (p,) mg reaction .the calculation of reaction rates using results from indirect measurements necessarily involves the application of some nuclear model .unfortunately , the systematic uncertainties introduced by a model are frequently difficult to quantify . for this reason it should be obvious that _ a direct measurement is generally preferred over an indirect one , even if the estimated reaction rate uncertainties calculated using results from indirect measurements are relatively small . 
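As a minimal illustration of the sampling prescriptions used throughout this evaluation (a Gaussian for measured resonance energies; a lognormal matched to the quoted mean and uncertainty via eq. (27) of paper I for strengths and partial widths; and a Porter-Thomas distribution truncated at the experimental upper limit for upper-limit channels), consider the Python sketch below. The numerical inputs and the mean reduced width are invented placeholders, not evaluated data.
....
import numpy as np
rng = np.random.default_rng()

def sample_energy(e_r, de_r, n):
    """Resonance energy: Gaussian with the measured mean and 1-sigma."""
    return rng.normal(e_r, de_r, n)

def sample_lognormal(mean, unc, n):
    """Strength or partial width: lognormal whose expectation value and
    square root of variance reproduce the measured value and its
    uncertainty (the mu, sigma relations of eq. (27) of paper I)."""
    sigma2 = np.log(1.0 + (unc / mean) ** 2)
    mu = np.log(mean) - 0.5 * sigma2
    return rng.lognormal(mu, np.sqrt(sigma2), n)

def sample_upper_limit(theta2_ul, theta2_mean, n):
    """Dimensionless reduced width of an upper-limit channel:
    Porter-Thomas (chi-square with 1 d.o.f., mean <theta^2>), truncated
    at the experimental upper limit; sampled here by simple rejection."""
    out = np.empty(n)
    filled = 0
    while filled < n:
        draw = theta2_mean * rng.chisquare(1, n)
        draw = draw[draw <= theta2_ul]
        take = min(n - filled, draw.size)
        out[filled:filled + take] = draw[:take]
        filled += take
    return out

# invented example inputs: E_r = 214.0 +/- 1.2 keV, wg = 1.3e-2 +/- 3e-3 eV
e_samples = sample_energy(214.0, 1.2, 10000)
wg_samples = sample_lognormal(1.3e-2, 3.0e-3, 10000)
th2_samples = sample_upper_limit(theta2_ul=1.0e-2, theta2_mean=5.0e-4, n=10000)
....
Each Monte Carlo rate sample then combines one draw of every input quantity through eq. (1) or eq. (10) of paper I, and the rate probability density function is built from the resulting ensemble.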
_in the present paper of the series ( paper iii ) we publish the nuclear physics input data used to compute the monte carlo reaction rates presented in paper ii .the reaction rates are calculated using the new method discussed in paper i. our input data , listed in the appendix , are based on the most reliable information that we are able to extract from the published literature . by making our recommended input data available to the community we intend to improve the reproducibility and transparency of the evaluated reaction rates .the reaction rate uncertainties given in paper ii are statistical in nature and do not account for unknown systematic errors in the nuclear input data listed here .the survey of literature for this review was concluded in november 2009 .we would like to thank ryan fitzgerald for helpful comments .this work was supported in part by the u.s .department of energy under contract no .de - fg02 - 97er41041 .as an example , we provide below a table with the nuclear data input used to calculate the reaction rates for a hypothetical reaction x(p,)y with the code ` ratesmc ` .none of the entries in this table represent physical values , but are listed here for illustrative purposes only .all kinematic quantities are given in the center of mass reference frame .a row is disregarded as input if it begins with the symbol ` ! ' .the meaning of each input row will be briefly explained .+ + sample nuclear data input : + .... 01 17x(p , a)14y 02 * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * 03 1 !zproj 04 8 !ztarget 05 2 !zexitparticle ( = 0 when only 2 channels open ) 06 1.0078 !aproj 07 16.999 !atarget 08 4.0026 ! aexitparticle ( = 0 when only 2 channels open ) 09 0.5 ! jproj 10 1.5 !jtarget 11 0.0 ! jexitparticle ( = 0 when only 2 channels open ) 12 5261.3 ! projectile separation energy ( kev ) 13 4015.3! exit particle separation energy ( = 0 when only 2 channels open ) 14 1.25 ! radius parameter r0 ( fm ) 15 3 ! gamma - ray channel number ( = 2 if ejectile is a g - ray ; =3 otherwise ) 16 * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * 17 1.0 ! minimum energy for numerical integration ( kev ) 18 5000 ! number of random samples ( > 5000 for better statistics ) 19 0 ! = 0 for rate output at all temperatures ; = nt for rate output at selected temperatures 20 * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * 21 nonresonant contribution 22 s(kevb ) s'(b ) s''(b / kev ) fracerr cutoff energy ( kev ) 23 3.1e2 -2.1e-1 4.5e-6 0.4 1200.0 24 0.0 0.0 0.0 0.0 0.0 25 * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * 26 resonant contribution 27 note : g1 = entrance channel , g2 = exit channel , g3 = spectator channel ! !ecm , exf in ( kev ) ; wg , gx in ( ev ) ! 
Explanation of input:

Row 01: reaction label.
Row 02: separator.
Row 03: projectile charge.
Row 04: target charge.
Row 05: charge of the _exit particle_; it refers to the channel other than the entrance and the γ-ray channel; = 0 if only two channels are open and radiative capture is the only possible reaction.
Row 06: projectile mass in atomic mass units.
Row 07: target mass in atomic mass units.
Row 08: mass of exit particle in atomic mass units.
Row 09: projectile spin.
Row 10: target spin.
Row 11: spin of exit particle.
Row 12: separation energy of incident particle in keV.
Row 13: separation energy of exit particle in keV.
Row 14: radius parameter in fm, used for calculating the penetration factor.
Row 15: label of the γ-ray channel; channel 1 refers to the incident particle, channel 2 to the emitted quantum, and channel 3 to the spectator quantum.
Row 16: separator.
Row 17: minimum energy cutoff (in keV) for numerical integration of rates.
Row 18: number of random samples.
Row 19: flag for temperature output; = 0 outputs results at all temperatures.
Row 20: separator.
Rows 21-22: comments.
Row 23: input for the nonresonant contribution; `s`, `s'`, `s''` are the parameters S(0), S'(0), S''(0) of the astrophysical S-factor (Eq. 5 of Paper I), in units of keV b, b, b/keV, respectively; `fracerr` is the fractional uncertainty, √V[x]/E[x] (i.e., the standard deviation divided by the expectation value), of the effective S-factor (Eq. 8 of Paper I); `cutoff energy` labels the energy (in keV) above which the S-factor is cut off; it is related to the cutoff temperature (see Eq. 9 of Paper I) by an expression in which the cutoff energy is given in MeV and all other quantities have the same meaning as in Sec. 2 of Paper I.
Row 24: input for a second nonresonant contribution, if needed.
Row 25: separator.
Rows 26-29: comments.
Row 30: input for the resonance contribution, one input row for each resonance; `ecm, decm`: resonance energy and its uncertainty; `wg, dwg`: resonance strength, ωγ, and its associated uncertainty; `jr`: resonance spin; `g1, dg1, l1`: incident-particle partial width, uncertainty, and orbital angular momentum quantum number; for a subthreshold resonance (ecm < 0) the dimensionless reduced width (Eq. 14 of Paper I) is listed instead of the entrance-channel partial width; `g2, dg2, l2`: partial width, uncertainty, and angular momentum quantum number (or multipolarity for γ-rays) of the emitted quantum; `g3, dg3, l3`: partial width, uncertainty, and angular momentum quantum number (or multipolarity for γ-rays) of the spectator quantum; `exf`: excitation energy of the level in the residual nucleus that is populated in the primary transition; `int`: = 0 for an analytical rate calculation, = 1 for a numerical rate calculation (see Eq. 1 of Paper I); for subthreshold resonances the rate contribution is always computed numerically; when the resonance strength is entered, the rate contribution is always computed analytically, regardless of the flag value; `ecm, decm, exf` are in units of keV, while resonance strengths and partial widths are in eV.
Row 31: input for the second resonance; in this example, the rate contribution is calculated from partial widths and the rate is chosen to be integrated numerically.
Row 32: input for the third resonance; in this example, the rate contribution is calculated from the resonance strength and the rate is necessarily computed analytically.
Row 33: separator.
Rows 34-37: comments.
Row 38: input for a resonance contribution when only a partial (or reduced) width upper limit is available for at least one reaction channel; the number of upper-limit channels must be less than the number of open channels; the meaning of the input quantities is the same as for row 30, except that (i) resonance strengths are not allowed as input, and (ii) the mean value, `pt`, of the Porter-Thomas distribution of dimensionless reduced widths is entered for each upper-limit channel (Sec. 5.2.1 of Paper I); upper-limit channels are identified by a non-zero value for the partial (or reduced) width and a zero value for the corresponding uncertainty; in this example, an upper limit for the dimensionless reduced proton width is entered (since ecm < 0).
Row 39: input for the second resonance; in this example, an upper limit for the dimensionless reduced α-particle width is entered.
Row 40: separator.
Rows 41-43: comments.
Row 44: flag for interference between resonances of the same spin and parity (Sec. 2.4 of Paper I); `+`: positive interference; `-`: negative interference; `+-`: unknown interference sign (a binary probability density function is then used for the random sampling; Sec. 4.4 of Paper I).
Rows 45-46:
input for two interfering resonances ; the meaning of the input quantities + is the same as for row 38 , except that the rate contribution is always + computed numerically . + row 47 : separator .+ rows 48 - 49 : comments ; references given in this section are summarized again after + each table .the symbols ` gp ` , ` ga ` , ` gn ` , ` gg ` and ` g ` refer to the proton , + -particle , neutron , -ray partial width and total width , respectively ; + ` er ` denotes the resonance energy in the center of mass system . +.... 14c(p , g)15n * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * 1 ! zproj 6 ! ztarget 0 !zexitparticle ( = 0 when only 2 channels open ) 1.0078 ! aproj 14.003242 !atarget 1.009 !aexitparticle ( = 0 when only 2 channels open ) 0.5 !jproj 0.0 !jtarget 0.5 !jexitparticle ( = 0 when only 2 channels open ) 10207.42 ! projectile separation energy ( kev ) 10833.30 !exit particle separation energy ( = 0 when only 2 channels open ) 1.25 ! radius parameter r0 ( fm ) 2 ! gamma - ray channel number ( = 2 if ejectile is a g - ray ; =3 otherwise ) * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * 1.0 ! minimum energy for numerical integration ( kev ) 5000 ! number of random samples ( > 5000 for better statistics ) 0 ! = 0 for rate output at all temperatures ; = nt for rate output at selected temperatures * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * non - resonant contribution s(kevb ) s'(b ) s''(b / kev ) fracerr cutoff energy ( kev ) 5.24 -1.22e-3 5.9e-7 0.4 1000.0 0.0 0.0 0.0 0.0 0.0 * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * resonant contribution note : g1 = entrance channel , g2 = exit channel , g3 = spectator channel ! !ecm , exf in ( kev ) ; wg , gx in ( ev ) ! !note : if er<0 , theta^2=c2s*theta_sp^2 must be entered instead of entrance channel partial width ecm decm wg dwg j g1 dg1 l1 g2 dg2 l2 g3 dg3 l3 exf int 242.3 0.3 2.9e-4 0.5e-4 2.5 0 0 0 0 0 0 0 0 0 0.0 0 325.9 0.5 3.7e-2 0.6e-2 2.5 0 0 0 0 0 0 0 0 0 0.0 0 242.3 0.3 3.1e-3 0.5e-3 4.50 0 0 0 0 0 0 0 0 0.0 0 494.5 0.3 0.84 0.13 2.5 0 0 0 0 0 0 0 0 0 0.0 0 596.6 2.0 0.27 0.04 1.5 0 0 0 0 0 0 0 0 0 0.0 0 1085.4 0.7 0 0 0.5 7.7e3 2.0e3 1 0.29 0.10 1 7.0e2 2.0e2 1 0.0 1 1230.2 0.7 0 0 0.5 6.8e3 0.5e3 0 4.2 0.7 1 34.6e3 0.9e3 0 0.0 1 1408.0 4.0 0 0 0.5 400.9e3 6.3e3 0 21.2 0.8 1 4.0e3 0.2e3 0 0.0 1 2315.0 8.0 0 0 2.5 58.4e3 10.e3 2 0.34 0.15 2 0.0 0.0 2 5300.0 1 3183.0 10.0 0 0 1.5 35.0e3 10.0e3 2 3.0 0.9 1 26.0e3 8.0e3 0 0.0 1 * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * upper limits of resonances note : enter partial width upper limit by chosing non - zero value for pt , where pt=<theta^2 > for particles and ... 
note : ...pt=<b > for g - rays [ enter : " upper_limit 0.0 " ] ; for each resonance : # upper limits < # open channels ! ecm decm jr g1 dg1 l1 pt g2 dg2 l2 pt g3 dg3 l3 pt exf int ! 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 0 * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * interference between resonances [ numerical integration only ] note : + for positive , - for negative interference ; + - if interference sign is unknown ecm decm jr g1 dg1 l1 pt g2 dg2 l2 pt g3 dg3 l3 pt exf !+ - 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * comments : 1 .resonance energies are derived from excitation energies ( ajzenberg - selove 1991 ) and q - value ( audi et al .resonance strengths are taken from goerres et al .1990 for the five low energy resonances .partial widths for the next five resonances are found in bartholomew et al .1955 , ferguson and gove 1959 , french et al . 1961 ,kuan et al . ( 1971 ; 1976 ) , ramirez et al .1972 , and niecke et al . 1977 .4 . as in goerres et al .1990 , direct capture is calculated using spectroscopic factors from bommer et al ..... references : ajzenberg - selove ; audi et al . ; bartholomew et al . ; bommer et al . ; ferguson and gove ; french et al . ; grres et al . ; kuan et al . ; niecke et al . ; ramirez et al . . .... 14c(a ,g)18o * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * 2 ! zproj 6 ! ztarget 0 ! zexitparticle ( = 0 when only 2 channels open ) 4.0026 !aproj 14.003242 !atarget 0 ! aexitparticle ( = 0 when only 2 channels open ) 0.0 ! jproj 0.0 !jtarget 0 !jexitparticle ( = 0 when only 2 channels open ) 6226.3 !projectile separation energy ( kev ) 0 !exit particle separation energy ( = 0 when only 2 channels open ) 1.25 !radius parameter r0 ( fm ) 2 ! gamma - ray channel number ( = 2 if ejectile is a g - ray ; =3 otherwise ) * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * 1.0! minimum energy for numerical integration ( kev ) 5000 !number of random samples ( > 5000 for better statistics ) 0 != 0 for rate output at all temperatures ; = nt for rate output at selected temperatures * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * nonresonant contribution s(kevb ) s'(b ) s''(b / kev ) fracerr cutoff energy ( kev ) 3.98e3 1.6 1.64e-3 1.53 2000.0 0.0 0.0 0.0 0.0 0.0 * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * resonant contribution note : g1 = entrance channel , g2 = exit channel , g3 = spectator channel ! 
!ecm , exf in ( kev ) ; wg , gx in ( ev ) ! !note : if er<0 , theta^2=c2s*theta_sp^2 must be entered instead of entrance channel partial width ecm decm wg dwg j g1 dg1 l1 g2 dg2 l2 g3 dg3 l3 exf int -28.1 0.7 0 0 1.0 0.025 0.020 1 0.178 0.029 1 0 0 0 0.0 1 890.6 1.3 0.45 0.09 3.0 0 0 0 0 0 0 0 0 0 0.0 0 1389.6 0.9 1.23 0.25 4.0 0 0 0 0 0 0 0 0 0 0.0 0 1638.0 5.0 0.47 0.14 1.0 0 0 0 0 0 0 0 0 0 0.0 0 1811.5 0.9 3.22 0.64 1.0 0 0 0 0 0 0 0 0 0 0.0 0 1899.0 2.0 2.92 0.58 5.0 0 0 0 0 0 0 0 0 0 0.0 0 1987.0 4.0 0 0 2.0 1.0e3 0.8e3 2 0.412 0.093 2 0 0 0 1982.0 1 2056.0 3.0 0 0 3.0 8.0e3 1.0e3 3 0.488 0.129 1 0 0 0 3555.0 1 * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * upper limits of resonances note : enter partial width upper limit by chosing non - zero value for pt , where pt=<theta^2 > for particles and ... note : ...pt=<b > for g - rays [ enter : " upper_limit 0.0 " ] ; for each resonance : # upper limits < # open channels ! ecm decmjr g1 dg1 l1 pt g2 dg2 l2 pt g3 dg3 l3 pt exf int 178.1 1.3 3 0.9e-15 0.0 3 0.010 0.022 0.017 1 0 0.0 0.0 0 0 1980.0 1 * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * interference between resonances [ numerical integration only ] note : + for positive , - for negative interference ; + - if interference sign is unknown ecm decm jr g1 dg1 l1 pt g2 dg2 l2 pt g3 dg3 l3 pt exf !+ - 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * comments : 1 . following gai et al .1987 , goerres et al .1992 and lugaro et al .2004 , we consider 9 natural parity levels , including one below threshold .resonance energies are derived from excitation energies ( tilley et al .1995 ) and q - value ( audi et al . 2003 ) .3 . seven resonance strengths and radiative widths are taken from gai et al .alpha - particle widths for the -28 and 178 kev resonances are extracted from the alpha transfer experiment of cunsolo et al .1981 , assuming a factor of two uncertainty .non - resonant contribution is adopted from goerres et al .1992 . .... .... 14n(a , g)18f * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * 2 ! zproj 7 ! ztarget 0! zexitparticle ( = 0 when only 2 channels open ) 4.0026 !aproj 14.003 !atarget 0 ! aexitparticle ( = 0 when only 2 channels open ) 0.0 !jproj 1.0 ! jtarget 0 ! jexitparticle ( = 0 when only 2 channels open ) 4414.6 ! projectile separation energy ( kev ) 0 !exit particle separation energy ( = 0 when only 2 channels open ) 1.25 !radius parameter r0 ( fm ) 2 ! gamma - ray channel number ( = 2 if ejectile is a g - ray ; =3 otherwise ) * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * 2.0 ! 
minimum energy for numerical integration ( kev ) 5000 !number of random samples ( > 5000 for better statistics ) 0 != 0 for rate output at all temperatures ; = nt for rate output at selected temperatures * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * nonresonant contribution s(kevb ) s'(b ) s''(b / kev ) fracerr cutoff energy ( kev ) 8.5e1 0.0 0.0 0.79 2000.0 0.0 0.0 0.0 0.0 0.0 * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * resonant contribution note : g1 = entrance channel , g2 = exit channel , g3 = spectator channel ! !ecm , exf in ( kev ) ; wg , gx in ( ev ) ! !note : if er<0 , theta^2=c2s*theta_sp^2 must be entered instead of entrance channel partial width ecm decm wg dwg jr g1 dg1 l1 g2 dg2 l2 g3 dg3 l3 exf int 445.4 2.1 0 0 1.0 4.5e-5 0.4e-51 0.011 0.003 1 0 0 0 1041.6 1 883.0 1.6 21.0e-3 1.5e-3 0 0 0 0 0 0 0 0 0 0 0.0 0 1087.4 2.1 0.0057 0.0010 0 0 0 0 0 0 0 0 0 0 0.0 0 1190.26 0.57 1.35 0.10 0 0 0 0 0 0 0 0 0 0 0.0 0 1257.0 0.5 0.45 0.02 0 0 0 0 0 0 0 0 0 0 0.0 0 1375.2 0.6 0.016 0.006 0 0 0 0 0 0 0 0 0 0 0.0 0 1681.8 1.2 0.066 0.013 0 0 0 0 0 0 0 0 0 0 0.0 0 1693.4 3.0 0.027 0.010 0 0 0 0 0 0 0 0 0 0 0.0 0 1825.8 0.9 1.18 0.22 0 0 0 0 0 0 0 0 0 0 0.0 0 1827.4 3.0 1.32 0.24 0 0 0 0 0 0 0 0 0 0 0.0 0 1895.9 0.9 0.17 0.04 0 0 0 0 0 0 0 0 0 0 0.0 0 1970.9 1.8 0.53 0.13 0 0 0 0 0 0 0 0 0 0 0.0 0 2070.3 1.6 0.053 0.020 0 0 0 0 0 0 0 0 0 0 0.0 0 2152.4 1.6 0.097 0.020 0 0 0 0 0 0 0 0 0 0 0.0 0 2229.1 0.9 0.90 0.17 0 0 0 0 0 0 0 0 0 0 0.0 0 2362.4 1.5 0.04 0.02 0 0 0 0 0 0 0 0 0 0 0.0 0 * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * upper limits of resonances note : enter partial width upper limit by chosing non - zero value for pt , where pt=<theta^2 > for particles and ... note : ...pt=<b > for g - rays [ enter : " upper_limit 0.0 " ] ; for each resonance : # upper limits < # open channels ! 
ecm decmjr g1 dg1 l1 pt g2 dg2 l2 pt g3 dg3 l3 pt exf int 237.4 2.1 4.0 4.1e-15 0.0 4 0.00001 0.1 0.01 1 0.0 0 0 0 0.0 1121.4 0 * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * interference between resonances [ numerical integration only ] note : + for positive , - for negative interference ; + - if interference sign is unknown ecm decm jr g1 dg1 l1 pt g2 dg2 l2 pt g3 dg3 l3 pt exf !+ - 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * comments : 1 .direct capture s - factor from goerres et al .note that it is not based on experimental spectroscopic factors , but was estimated using an alpha - particle spectroscopic factor of sa=0.1 for all final states ; we assign a factor 2 uncertainty to the s - factor ( yielding a fractional uncertainty of 0.79 ; see the numerical example at the end of sec .5.1.2 of paper i ) . 2 .er=237 kev : ( i ) for estimation of upper limit on alpha - particle partial width , see comments in paper ii ; ( ii ) value of gg=0.1 + -0.01 ev is a guess ( inconsequential since ga<<gg ) . 3 . er=445kev : partial widths computed using measured resonance strength ( goerres et al . 2000 ) and measured lifetime ( rolfs , berka and azuma 1973 ) . 4 .ex=4964 kev ( er=549 kev with jp;t=2+;1 ) disregarded ( isospin - forbidden decay ) .ex=5603/5605 kev ( er=1189/1190 kev ) : doublet ; measured summed strength adopted here .strengths for other resonances adopted from becker et al . 1982 ;goerres et al . 2000 ; kieser et al . 1979 ; and rolfs ,charlesworth and azuma 1973 . 7 .levels with jp=0 + can be disregarded since their population as resonances in 14n+a is forbidden according to angular momentum selection rules .we disregard ex=4848 kev ( jp=5- ; er=434 kev ) since this ( unobserved ) l=5 resonance is presumably negligible compared to observed er=445 kev ( l=1 ) resonance ..... .... 15n(a , g)19f * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * 2 ! zproj 7 !ztarget 0 !zexitparticle ( = 0 when only 2 channels open ) 4.0026 !aproj 15.000 !atarget 0 !aexitparticle ( = 0 when only 2 channels open ) 0.0 !jproj 0.5 !jtarget 0 !jexitparticle ( = 0 when only 2 channels open ) 4013.74 ! projectile separation energy ( kev ) 0 !exit particle separation energy ( = 0 when only 2 channels open ) 1.25 !radius parameter r0 ( fm ) 2 ! gamma - ray channel number ( = 2 if ejectile is a g - ray ; =3 otherwise ) * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * 2.0 ! 
minimum energy for numerical integration ( kev ) 5000 !number of random samples ( > 5000 for better statistics ) 0 != 0 for rate output at all temperatures ; = nt for rate output at selected temperatures * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * nonresonant contribution s(kevb ) s'(b ) s''(b / kev ) fracerr cutoff energy ( kev ) 6.148e3 0.0 0.0 0.4 1000.0 0.0 0.0 0.0 0.0 0.0 * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * resonant contribution note : g1 = entrance channel , g2 = exit channel , g3 = spectator channel ! !ecm , exf in ( kev ) ; wg , gx in ( ev ) ! !note : if er<0 , theta^2=c2s*theta_sp^2 must be entered instead of entrance channel partial width ecm decm wg dwg jr g1 dg1 l1 g2 dg2 l2 g3 dg3 l3 exf int -15.0 0.7 0 0 3.5 0.135 0.05 4 0.035 0.013 1 0 0 0 0.0 1 18.8 1.2 0 0 4.5 1.2e-69 0.4e-69 4 0.0098 0.0022 1 0 0 0 0.0 1 363.96 0.08 6.0e-9 4.7e-9 0 0 0 0 0 0 0 0 0 0 0.0 0 536.2 0.8 95.5e-6 11.7e-6 0 0 0 0 0 0 0 0 0 0 0.0 0 542.4 0.5 6.4e-6 2.5e-6 0 0 0 0 0 0 0 0 0 0 0.0 0 668.8 0.7 5.6e-3 0.6e-3 0 0 0 0 0 0 0 0 0 0 0.0 0 1092.9 0.9 9.7e-3 1.6e-3 0 0 0 0 0 0 0 0 0 0 0.0 0 1323.0 2.0 0 0 0.5 1.3e3 0.5e3 1 1.69 0.14 1 0 0 0 0.0 1 1404.0 1.0 0.380 0.044 0 0 0 0 0 0 0 0 0 0 0.0 0 1449.8 1.5 2.10 0.14 0 0 0 0 0 0 0 0 0 0 0.0 0 1487.0 1.7 0 0 1.5 4.e3 1.e3 1 1.78 0.17 1 0 0 0 0.0 1 1521.0 2.0 0.344 0.040 0 0 0 0 0 0 0 0 0 0 0.0 0 1607.0 1.0 0.323 0.038 0 0 0 0 0 0 0 0 0 0 0.0 0 1924.0 1.0 0.416 0.048 0 0 0 0 0 0 0 0 0 0 0.0 0 2056.0 1.0 0 0 3.5 1.2e3 0.4e3 3 0.525 0.065 1 0 0 0 0.0 1 2074.0 1.0 0 0 1.5 4.7e3 1.6e3 2 2.5 0.3 1 0 0 0 0.0 1 2086.0 2.0 0.440 0.069 0 0 0 0 0 0 0 0 0 0 0.0 0 2146.9 0.9 2.40 0.50 0 0 0 0 0 0 0 0 0 0 0.0 0 2268.0 2.0 0 0 2.5 2.4e3 0.7e3 3 0.33 0.07 1 0 0 0 0.0 1 2316.0 2.0 0 0 3.5 2.4e3 0.7e3 3 0.19 0.04 1 0 0 0 0.0 1 2483.0 1.4 1.7 0.30 0 0 0 0 0 0 0 0 0 0 0.0 0 2486.3 0.9 2.3 0.4 0 0 0 0 0 0 0 0 0 0 0.0 0 2513.8 1.4 0 0 1.5 4.e3 1.5e3 1 1.2 0.2 1 0 0 0 0.0 1 2540.0 2.0 0 0 3.5 1.6e3 0.5e3 3 0.16 0.03 1 0 0 0 0.0 1 2578.0 2.0 1.6 0.2 0 0 0 0 0 0 0 0 0 0 0.0 0 2773.0 2.0 10.9 1.5 0 0 0 0 0 0 0 0 0 0 0.0 0 2824.7 0.9 0 0 2.5 1.2e3 0.4e3 3 0.33 0.07 1 0 0 0 0.0 1 2877.0 4.0 0 0 1.5 28.e3 8.e3 2 3.0 0.7 1 0 0 0 0.0 1 2912.8 1.7 0 0 3.5 2.4e3 0.8e3 4 2.4 0.4 1 0 0 0 0.0 1 3152.5 0.7 1.00 0.12 0 0 0 0 0 0 0 0 0 0 0.0 0 3525.9 0.9 17.5 1.7 0 0 0 0 0 0 0 0 0 0 0.0 0 3646.9 0.9 3.7 0.9 0 0 0 0 0 0 0 0 0 0 0.0 0 3923.0 3.0 3.1 0.5 0 0 0 0 0 0 0 0 0 0 0.0 0 4274.0 2.0 0.55 0.06 0 0 0 0 0 0 0 0 0 0 0.0 0 4296.3 1.2 2.1 0.5 0 0 0 0 0 0 0 0 0 0 0.0 0 4356.0 4.0 0 0 3.5 7.5e3 1.5e3 3 0.13 0.05 1 0 0 0 0.0 1 4569.8 1.6 5.1 1.3 0 0 0 0 0 0 0 0 0 0 0.0 0 4578.2 1.0 0 0 1.5 2.0e3 0.1e3 2 0.8 0.2 1 0 0 0 0.0 1 4615.0 4.0 2.5 0.4 0 0 0 0 0 0 0 0 0 0 0.0 0 4850.0 4.0 0.20 0.05 0 0 0 0 0 0 0 0 0 0 0.0 0 4939.0 3.0 0.89 0.13 0 0 0 0 0 0 0 0 0 0 0.0 0 5016.0 5.0 0 0 2.5 4.2e3 1.0e3 2 0.18 0.09 1 0 0 0 0.0 1 5086.0 0.7 0.48 0.15 0 0 0 0 0 0 0 0 0 0 0.0 0 5087.0 4.0 0.4 0.1 0 0 0 0 0 0 0 0 0 0 0.0 0 5153.0 1.4 0 0 0.5 6.2e3 0.5e3 1 1.4 1.0 1 0 0 0 0.0 1 5190.0 7.0 0 0 1.5 10.2e3 1.5e3 1 0.8 0.3 1 0 0 0 0.0 1 5253.0 4.0 0 0 5.5 2.e3 1.e3 5 0.025 0.007 1 0 0 0 0.0 1 5266.0 5.0 0.38 0.09 0 0 0 0 0 0 0 0 0 0 0.0 0 5307.0 1.1 0 0 0.5 5.0e3 
0.2e3 1 3.4 1.7 1 0 0 0 0.0 1 5495.0 4.0 0.7 0.2 0 0 0 0 0 0 0 0 0 0 0.0 0 5522.7 2.0 0 0 2.5 6.3e3 1.5e3 3 0.17 0.06 1 0 0 0 0.0 1 5572.0 3.0 0 0 3.5 8.9e3 1.2e3 3 1.3 0.8 1 0 0 0 0.0 1 5628.0 6.0 0 0 1.5 8.e3 4.e3 1 0.5 0.25 1 0 0 0 0.0 1 5640.0 6.0 0 0 1.5 6.e3 3.e3 1 1.0 0.5 1 0 0 0 0.0 1 5653.8 1.5 0 0 1.5 3.6e3 0.4e3 1 0.5 0.25 1 0 0 0 0.0 1 5696.0 4.0 4.0 0.7 0 0 0 0 0 0 0 0 0 0 0.0 0 5806.0 1.0 3.5 0.8 0 0 0 0 0 0 0 0 0 0 0.0 0 5820.0 3.0 0.51 0.10 0 0 0 0 0 0 0 0 0 0 0.0 0 5860.0 1.8 3.6 0.6 0 0 0 0 0 0 0 0 0 0 0.0 0 5912.0 3.0 19.3 3.0 0 0 0 0 0 0 0 0 0 0 0.0 0 6074.0 5.0 2.37 0.50 0 0 0 0 0 0 0 0 0 0 0.0 0 6123.3 0.8 0 0 1.5 4.3e3 0.6e3 2 1.6 0.5 1 0 0 0 0.0 1 6351.0 4.0 0 0 3.5 3.0e3 1.5e3 3 0.2 0.1 1 0 0 0 0.0 1 6397.0 3.0 15.0 3.0 0 0 0 0 0 0 0 0 0 0 0.0 0 * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * upper limits of resonances note : enter partial width upper limit by chosing non - zero value for pt , where pt=<theta^2 > for particles and ... note : ...pt=<b > for g - rays [ enter : " upper_limit 0.0 " ] ; for each resonance : # upper limits < # open channels ! ecm decm jr g1 dg1 l1 pt g2 dg2 l2 pt g3 dg3 l3 pt exf int -105.57 0.21 1.5 5.0e-2 0.0 1 0.010 0.073 0.041 1 0 0 0 0 0 0.0 1 * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * interference between resonances [ numerical integration only ] note : + for positive , - for negative interference ; + - if interference sign is unknown ecm decm jr g1 dg1 l1 pt g2 dg2 l2 pt g3 dg3 l3 pt exf !+ - 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * comments : 1 . same input data as in angulo et al. 1999 ( nacre ) , except for the resonances measured by wilmes et al . 2002 .2 . for the important 4.38 mev and near threshold levels , we use transfer data from de oliveira et al . 1996 .3 . direct capture s - factor adopted from de oliveira et al .1996 ( after correction for -105 kev contribution to nonresonant s - factor ) . .... .... 15o(a , g)19ne * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * 2 !zproj 8 !ztarget 0 !zexitparticle ( = 0 when only 2 channels open ) 4.0026 !aproj 15.003 !atarget 0 ! aexitparticle ( = 0 when only 2 channels open ) 0.0 ! jproj 0.5 !jtarget 0 ! jexitparticle ( = 0 when only 2 channels open ) 3529.1 ! projectile separation energy ( kev ) 0 !exit particle separation energy ( = 0 when only 2 channels open ) 1.25 !radius parameter r0 ( fm ) 2 ! gamma - ray channel number ( = 2 if ejectile is a g - ray ; =3 otherwise ) * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * 1.0! 
minimum energy for numerical integration ( kev ) 5000 !number of random samples ( > 5000 for better statistics ) 0 != 0 for rate output at all temperatures ; = nt for rate output at selected temperatures * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * nonresonant contribution s(kevb ) s'(b ) s''(b / kev ) fracerr cutoff energy ( kev ) 2.0e4 0.0 0.0 0.4 1000.0 0.0 0.0 0.0 0.0 0.0 * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * resonant contribution note : g1 = entrance channel , g2 = exit channel , g3 = spectator channel ! !ecm , exf in ( kev ) ; wg , gx in ( ev ) ! !note : if er<0 , theta^2=c2s*theta_sp^2 must be entered instead of entrance channel partial width ecm decm wg dwg j g1 dg1 l1 g2 dg2 l2 g3 dg3 l3 exf int 505.4 1.0 0 0 1.5 1.0e-5 1.5e-5 1 0.082 0.017 1 0 0 0 0.0 1 848.7 0.8 0 0 3.5 1.5e-4 2.3e-4 3 0.458 0.092 1 0 0 0 0.0 1 1018.6 1.2 0 0 0.5 3.5e-3 0.9e-3 0 0.032 0.005 1 0 0 0 0.0 1 1072.7 1.0 0 0 2.5 0.026 0.007 3 0.060 0.016 1 0 0 0 0.0 1 1183.0 10.0 0 0 2.5 0.24 0.09 2 0.043 0.008 1 0 0 0 0.0 1 * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * upper limits of resonances note : enter partial width upper limit by chosing non - zero value for pt , where pt=<theta^2 > for particles and ... note : ...pt=<b > for g - rays [ enter : " upper_limit 0.0 " ] ; for each resonance : # upper limits < # open channels ! ecm decm jr g1 dg1 l1 pt g2 dg2 l2 pt g3 dg3 l3 pt exf int ! 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 0 * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * interference between resonances [ numerical integration only ] note : + for positive , - for negative interference ; + - if interference sign is unknown ecm decm jr g1 dg1 l1 pt g2 dg2 l2 pt g3 dg3 l3 pt exf !+ - 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * comments : 1 .resonance energies are deduced from the excitation energies given by tilley et al .1995 and tan et al . 2005 .total widths are deduced from dsam measurements by tilley et al .1995 , tan et al .2005 , kanungo et al .2006 and mythili et al . 2008 .alpha - particle widths for the first two resonances are deduced from measurements of alpha - particle transfer to the mirror levels , performed by mao et al .1995 and de oliveira et al . 1996 .other alpha - particle widths are obtained from measured alpha - particle branching ratios ( davids et al .2003 ) . 5 .a constant value of 20 mev b is adopted for the direct capture contribution ( langanke et al .1986 ; dufour and descouvemont 2000 ) . 6 . 
above 0.6 gk, the rate is matched using results from the talys statistical model code ( goriely et al .2008 ) . ....references : davids et al . ; dufour and descouvemont ; goriely , hilaire and koning ; kanungo et al. ; langanke et al . ; mao , fortune and lacaze ; mythili et al . ; de oliveira et al . ; tan et al . ; tilley . .... 16o(a ,g)20ne * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * 2 ! zproj 8 ! ztarget 0 ! zexitparticle ( = 0 when only 2 channels open ) 4.0026 !aproj 15.9949 !atarget 0 !aexitparticle ( = 0 when only 2 channels open ) 0.0 !jproj 0.0 !jtarget 0 !jexitparticle ( = 0 when only 2 channels open ) 4729.85 ! projectile separation energy ( kev ) 0 !exit particle separation energy ( = 0 when only 2 channels open ) 1.25 !radius parameter r0 ( fm ) 2 ! gamma - ray channel number ( = 2 if ejectile is a g - ray ; =3 otherwise ) * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * 2.0 ! minimum energy for numerical integration ( kev ) 5000 !number of random samples ( > 5000 for better statistics ) 0 != 0 for rate output at all temperatures ; = nt for rate output at selected temperatures * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * non - resonant contribution s(kevb ) s'(b ) s''(b / kev ) fracerr cutoff energy ( kev ) 3.62e3 -0.6872 0.0 0.4 1500.0 0.0 0.0 0.0 0.0 0.0 * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * resonant contribution note : g1 = entrance channel , g2 = exit channel , g3 = spectator channel ! !ecm , exf in ( kev ) ; wg , gx in ( ev ) ! 
!note : if er<0 , theta^2=c2s*theta_sp^2 must be entered instead of entrance channel partial width ecm decm wg dwg jr g1 dg1 l1 g2 dg2 l2 g3 dg3 l3 exf int 891.6 1.7 0 0 3 2.4e-3 0.7e-3 3 2.7e-4 0.8e-4 1 0 0 0 1633.7 1 1057.9 2.6 0 0 1 28.0 3.0 1 6.7e-3 1.0e-3 1 0 0 0 1633.7 1 1995.2 5.0 0 0 0 1.9e4 0.9e3 0 7.1e-2 1.2e-2 2 0 0 0 1633.7 1 2426.5 0.5 0 0 3 8.2e3 0.3e3 3 1.5e-3 0.2e-3 1 0 0 0 4247.7 1 2461.2 3.0 0 0 0 3.4e3 0.2e3 0 4.4e-3 0.8e-3 2 0 0 0 1633.7 1 2692.1 1.2 0 0 2 8.0e3 1.0e3 2 3.0e-2 4.0e-3 1 0 0 0 1633.7 1 3103.6 1.5 0 0 2 2.0e3 0.5e3 2 6.9e-2 0.7e-2 2 0 0 0 0.0 1 3978.2 7.0 0.21 0.05 0 0 0 0 0 0 0 0 0 0 0.0 0 4047.8 2.2 1.35 0.15 0 0 0 0 0 0 0 0 0 0 0.0 0 4301.2 7.0 3.05 0.38 0 0 0 0 0 0 0 0 0 0 0.0 0 4386.2 3.0 0.18 0.02 0 0 0 0 0 0 0 0 0 0 0.0 0 4753.2 3.0 1.3 0.5 0 0 0 0 0 0 0 0 0 0 0.0 0 5260.2 8.0 8.0 3.0 0 0 0 0 0 0 0 0 0 0 0.0 0 5543.4 1.9 19.5 1.5 0 0 0 0 0 0 0 0 0 0 0.0 0 6360.2 3.0 30.2 3.5 0 0 0 0 0 0 0 0 0 0 0.0 0 6540.2 5.0 2.06 0.25 0 0 0 0 0 0 0 0 0 0 0.0 0 6828.2 4.0 0.41 0.05 0 0 0 0 0 0 0 0 0 0 0.0 0 7198.2 4.0 0.23 0.05 0 0 0 0 0 0 0 0 0 0 0.0 0 7221.2 4.0 0.131 0.002 0 0 0 0 0 0 0 0 0 0 0.0 0 7491.2 4.0 1.41 0.23 0 0 0 0 0 0 0 0 0 0 0.0 0 7526.2 3.0 6.6 0.8 0 0 0 0 0 0 0 0 0 0 0.0 0 7671.2 5.0 1.94 0.15 0 0 0 0 0 0 0 0 0 0 0.0 0 * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * upper limits of resonances note : enter partial width upper limit by chosing non - zero value for pt , where pt=<theta^2 > for particles and ... note : ...pt=<b > for g - rays [ enter : " upper_limit 0.0 " ] ; for each resonance : # upper limits < # open channels ! ecm decm jr g1 dg1 l1 pt g2 dg2 l2 pt g3 dg3 l3 pt exf int ! 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 0 * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * interference between resonances [ numerical integration only ] note : + for positive , - for negative interference ; + - if interference sign is unknown ecm decm jr g1 dg1 l1 pt g2 dg2 l2 pt g3 dg3 l3 pt exf !+ - 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * comments : 1 .resonance strengths from tilley et al .1998 ( see also angulo et al .2 . for er<3.5 mev ( except for lowest resonances ;see below ) , g and wg from tilley et al .1998 , mayer 2001 and angulo et al .1999 used to calculate partial widths .3 . direct capture s - factor estimated from transitions shown in fig . 1 of mohr 2005 , but disregarding the transitions due to er=1058 kev resonance ( they taken into account in the resonance input ) ; we increased the uncertainty slightly ( 40% ) compared to mohr 2005 ( 30% ) .4 . er=1058kev : partial widths calculated from wg ( tilley et al .1998 ) and ga=28 + -3 ev ( macarthur et al . 1980 ) . 
5 .er=892 kev : partial widths calculated from wg ( tilley et al .1998 ) and tau=246 fs ( weighted average of values in ajzenberg - selove 1972 ) ; solution of quadratic equation gives gx=2.4e-3 ev and gy=2.7e-4 ev ; the larger value is identified with ga , in agreement with ( 6li , d ) measurement ( mao et al . 1996 ) ..... .... 17o(p , g)18f * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * 1 !zproj 8 !ztarget 2 ! zexitparticle ( = 0 when only 2 channels open ) 1.0078 ! aproj 16.999 !atarget 4.0026 !aexitparticle ( = 0 when only 2 channels open ) 0.5 !jproj 2.5 !jtarget 0.0 ! jexitparticle ( = 0 when only 2 channels open ) 5606.5 ! projectile separation energy ( kev ) 4414.6 ! exit particle separation energy ( = 0 when only 2 channels open ) 1.25 ! radius parameter r0 ( fm ) 2 ! gamma - ray channel number ( = 2 if ejectile is a g - ray ; =3 otherwise ) * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * 1.0 ! minimum energy for numerical integration ( kev ) 5000 ! number of random samples ( > 5000 for better statistics ) 0 != 0 for rate output at all temperatures ; = nt for rate output at selected temperatures * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * non - resonant contribution s(kevb ) s'(b ) s''(b / kev ) fracerr cutoff energy ( kev ) 4.62 0.0 0.0 0.23 1200 0.0 0.0 0.0 0.0 0.0 * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * resonant contribution note : g1 = entrance channel , g2 = exit channel , g3 = spectator channel ! !ecm , exf in ( kev ) ; wg , gx in ( ev ) ! !note : if er<0 , theta^2=c2s*theta_sp^2 must be entered instead of entrance channel partial width ecm decm wg dwg jr g1 dg1 l1 g2 dg2 l2 g3 dg3 l3 exf int -3.12 0.57 0 0 1 0.054 0.018 2 0.485 0.051 1 42.8 1.6 0 0.0 1 489.9 1.2 1.3e-2 0.16e-2 0 0 0 0 0 0 0 0 0 0 0.0 0 529.9 0.3 1.1e-1 2.5e-2 0 0 0 0 0 0 0 0 0 0 0.0 0 556.7 1.0 0 0 3 14000 500 0 0.57 0.13 1 5.0 0.6 2 0.0 1 633.9 0.9 0.16 0.026 0 0 0 0 0 0 0 0 0 0 0.0 0 704.0 0.9 3.2e-2 7.0e-3 0 0 0 0 0 0 0 0 0 0 0.0 0 878.4 1.6 1.8e-2 0.7e-2 0 0 0 0 0 0 0 0 0 0 0.0 0 1170.5 1.5 1.40e-1 2.8e-2 0 0 0 0 0 0 0 0 0 0 0.0 0 1196.6 1.5 2.7e-2 0.92e-2 0 0 0 0 0 0 0 0 0 0 0.0 0 1270.8 1.7 5.0e-2 1.9e-2 0 0 0 0 0 0 0 0 0 0 0.0 0 * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * upper limits of resonances note : enter partial width upper limit by chosing non - zero value for pt , where pt=<theta^2 > for particles and ... note : ...pt=<b > for g - rays [ enter : " upper_limit 0.0 " ] ; for each resonance : # upper limits < # open channels ! ecm decm jr g1 dg1 l1 pt g2 dg2 l2 pt g3 dg3 l3 pt exf int ! 
0.0 0.0 0 0.0 0.0 0 0.0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 0 * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * interference between resonances [ numerical integration only ] note : + for positive , - for negative interference ; + - if interference sign is unknown ecm decm jr g1 dg1 l1 pt g2 dg2 l2 pt g3 dg3 l3 pt exf + - -1.64 0.57 1 8.2e-3 0.0 1 0.0045 0.894 0.074 1 0.0 32.0 2.1 1 0.0 0.0 65.1 0.5 1 1.9e-8 3.2e-9 1 0.0 0.44 0.02 1 0.0 130.0 5.0 1 0.0 0.0 + - 183.35 0.25 2 0.0040 0.00024 1 0.0 0.0096 0.0036 1 0.0 13.3 5.5 1 0.0 0.0 1037.2 0.9 2 368.0 61.0 1 0.0 0.84 0.20 1 0.0 231.0 40.0 1 0.0 0.0 + - 676.7 1.0 2 10000 500 0 0.0 1.09 0.23 1 0.0 27.0 3.0 2 0.0 0.0 779.0 1.8 2 109.0 11.0 0 0.0 0.261 0.068 1 0.0 286.0 87.0 2 0.0 0.0 * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * comments : 1 . for the 557 and 677 kev resonances , we have wg = wgg since g = gp ..... .... 17o(p , a)14n * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * 1 !zproj 8 !ztarget 2 ! zexitparticle ( = 0 when only 2 channels open ) 1.0078 ! aproj 16.999 !atarget 4.0026 !aexitparticle ( = 0 when only 2 channels open ) 0.5 !jproj 2.5 !jtarget 0.0 ! jexitparticle ( = 0 when only 2 channels open ) 5606.5 ! projectile separation energy ( kev ) 4414.6 ! exit particle separation energy ( = 0 when only 2 channels open ) 1.25 ! radius parameter r0 ( fm ) 3 ! gamma - ray channel number ( = 2 if ejectile is a g - ray ; =3 otherwise ) * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * 1.0 ! minimum energy for numerical integration ( kev ) 5000 ! number of random samples ( > 5000 for better statistics ) 0 != 0 for rate output at all temperatures ; = nt for rate output at selected temperatures * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * nonresonant contribution s(kevb ) s'(b ) s''(b / kev ) fracerr cutoff energy ( kev ) 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * resonant contribution note : g1 = entrance channel , g2 = exit channel , g3 = spectator channel ! !ecm , exf in ( kev ) ; wg , gx in ( ev ) ! 
!note : if er<0 , theta^2=c2s*theta_sp^2 must be entered instead of entrance channel partial width ecm decm wg dwg jr g1 dg1 l1 g2 dg2 l2 g3 dg3 l3 exf int -3.12 0.57 0 0 1 0.054 0.018 2 42.8 1.6 0 0.485 0.051 1 0.0 1 489.9 1.2 0 0 4 138 26 1 106 17 3 0.1 0.001 1 0.0 1 501.5 3.0 0 0 1 0.20 0.02 2 33.6 3.3 0 0.1 0.001 1 0.0 1 556.7 1.0 0 0 3 14000 500 0 5.0 0.6 2 0.1 0.001 1 0.0 1 633.9 0.9 0 0 3 58.2 7.0 1 133 24 3 0.1 0.001 1 0.0 1 635.5 3.0 0 0 3 40.8 3.7 1 137 35 3 0.1 0.001 1 0.0 1 655.5 2.5 0 0 1 27 3 2 575 120 0 0.1 0.001 1 0.0 1 676.7 1.0 0 0 2 10000 500 0 27 3 2 0.1 0.001 1 0.0 1 704.0 0.9 0 0 3 525 117 0 426 82 2 0.1 0.001 1 0.0 1 779.0 1.8 0 0 2 109 11 0 286 87 2 0.1 0.001 1 0.0 1 878.4 1.6 0 0 3 277 91 0 123 25 2 0.1 0.001 1 0.0 1 960.5 1.6 0 0 5 1.2 0.1 2 560 132 4 0.1 0.001 1 0.01 1026.5 10.0 0 0 1 2920 315 1 77090 2000 1 0.1 0.001 1 0.0 1 1037.2 0.9 0 0 2 368 61 1 231 40 1 0.1 0.001 1 0.0 1 1170.5 1.5 0 0 4 9000 1000 2 150 24 4 0.1 0.001 1 0.0 1 1204.5 10.0 0 0 2 2750 450 0 210 67 2 0.1 0.001 1 0.0 1 1250.5 10.0 0 0 3 5000 1000 1 30 7 3 0.1 0.001 1 0.0 1 1594.5 2.1 0 0 4 29400 1000 2 500 58 4 0.1 0.001 1 0.0 1 1640.5 2.1 0 0 1 5000 1000 2 55000 5000 0 0.1 0.001 1 0.0 1 1684.5 2.1 0 0 3 15820 1426 1 44180 15000 3 0.1 0.001 1 0.0 1 * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * upper limits of resonances note : enter partial width upper limit by chosing non - zero value for pt , where pt=<theta^2 > for particles and ... note : ...pt=<b > for g - rays [ enter : " upper_limit 0.0 " ] ; for each resonance : # upper limits < # open channels ! ecm decm jr g1 dg1 l1 pt g2 dg2 l2 pt g3 dg3 l3 pt exf int ! 0.0 0.0 0 0.0 0.0 0 0.0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 0 * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * interference between resonances [ numerical integration only ] note : + for positive , - for negative interference ; + - if interference sign is unknown ecm decm jr g1 dg1 l1 pt g2 dg2 l2 pt g3 dg3 l3 pt exf + - -1.64 0.57 1 8.2e-3 0.0 1 0.0045 32.0 2.1 1 0.0 0.894 0.074 1 0.0 0.0 65.1 0.5 1 1.9e-8 3.2e-9 1 0.0 130.0 5.0 1 0.0 0.44 0.02 1 0.0 0.0 + - 183.35 0.25 2 0.0040 0.00024 1 0.0 13.3 5.5 1 0.0 0.0096 0.0036 1 0.0 0.0 1202.5 5.0 2 16570 1600 1 0.0 71500 2000 1 0.0 0.1 0.001 1 0.0 0.0 * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * comments : 1 .gg=0.1 + -0.001 ev for er=1203 kev and er=490 - 1685 kev is a guess ( inconsequential ) . ........ 18o(p , g)19f * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * 1 ! zproj 8 !ztarget 2 !zexitparticle ( = 0 when only 2 channels open ) 1.0078 !aproj 17.999 !atarget 4.0026 ! aexitparticle ( = 0 when only 2 channels open ) 0.5 ! jproj 0.0 !jtarget 0.0 ! jexitparticle ( = 0 when only 2 channels open ) 7994.8 ! projectile separation energy ( kev ) 4013.74 ! exit particle separation energy ( = 0 when only 2 channels open ) 1.25 ! 
radius parameter r0 ( fm ) 2 ! gamma - ray channel number ( = 2 if ejectile is a g - ray ; =3 otherwise ) * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * 1.0 ! minimum energy for numerical integration ( kev ) 5000 ! number of random samples ( > 5000 for better statistics ) 0 != 0 for rate output at all temperatures ; = nt for rate output at selected temperatures * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * nonresonant contribution s(kevb ) s'(b ) s''(b / kev ) fracerr cutoff energy ( kev ) 1.57e1 0.34e-3 -2.42e-6 0.4 1000.0 0.0 0.0 0.0 0.0 0.0 * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * resonant contribution note : g1 = entrance channel , g2 = exit channel , g3 = spectator channel ! !ecm , exf in ( kev ) ; wg , gx in ( ev ) ! !note : if er<0 , theta^2=c2s*theta_sp^2 must be entered instead of entrance channel partial width ecm decm wg dwg jr g1 dg1 l1 g2 dg2 l2 g3 dg3 l3 exf int 19.3 0.7 0 0 2.5 2.3e-19 0.5e-19 2 2.3 1.0 1 2.5e3 1.0e3 3 0.0 1 142.9 0.1 0 0 0.5 1.67e-1 0.12e-1 0 0.72 0.15 1 1.23e2 0.24e2 1 0.0 1 204.2 1.0 5.0e-6 1.0e-6 0 0 0 0 0 0 0 0 0 0 0.0 0 259.5 2.6 3.7e-5 0.5e-5 0 0 0 0 0 0 0 0 0 0 0.0 0 315.2 1.3 0 0 2.5 1.9e-2 0.3e-2 2 0.78 0.34 1 47.0 19.0 3 0.0 1 588.7 1.7 1.0e-2 0.2e-2 0 0 0 0 0 0 0 0 0 0 0.0 0 597.1 1.2 0 0 1.5 1.4e2 0.7e2 1 0.71 0.39 1 2.0e3 0.1e3 2 0.0 1 798.4 1.6 0 0 0.5 24.6e3 1.4e3 0 2.5 0.4 1 20.e3 1.0e3 1 0.0 1 931.9 2.8 0 0 1.5 76.0 7.0 1 0.34 0.06 1 3.5e3 0.3e3 2 0.0 1 1106.2 4.0 0.29 0.03 0 0 0 0 0 0 0 0 0 0 0.0 0 1172.2 1.5 0 0 0.5 0.38e3 0.03e3 0 1.4 1.0 1 5.4e3 0.38e3 1 0.0 1 1323.2 2.1 0.08 0.01 0 0 0 0 0 0 0 0 0 0 0.0 0 1326.2 1.2 0 0 0.5 0.22e3 0.02e3 0 3.4 1.7 1 4.7e3 0.4e3 1 0.0 1 1541.6 2.1 0.025 0.005 0 0 0 0 0 0 0 0 0 0 0.0 0 1571.0 3.0 0.041 0.010 0 0 0 0 0 0 0 0 0 0 0.0 0 1580.0 4.0 0.06 0.01 0 0 0 0 0 0 0 0 0 0 0.0 0 1591.0 3.0 0.025 0.004 0 0 0 0 0 0 0 0 0 0 0.0 0 1672.7 1.6 0 0 1.5 2.0e3 0.6e3 2 1.0 0.4 1 1.4e3 0.4e3 1 0.0 1 1825.2 1.2 2.8 0.7 0 0 0 0 0 0 0 0 0 0 0.0 0 1879.2 1.9 0.13 0.04 0 0 0 0 0 0 0 0 0 0 0.0 0 1892.0 3.0 0 0 0.5 11.e3 3.0e3 0 0.36 0.20 1 18.0e3 5.4e3 1 0.0 1 * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * upper limits of resonances note : enter partial width upper limit by chosing non - zero value for pt , where pt=<theta^2 > for particles and ... note : ...pt=<b > for g - rays [ enter : " upper_limit 0.0 " ] ; for each resonance : # upper limits < # open channels ! 
ecm decm jr g1 dg1 l1 pt g2 dg2 l2 pt g3 dg3 l3 pt exf int 89.0 3.0 1.5 8.0e-8 2.5e-8 1 0 0.60 0.25 1 0 3.0e3 0.0 2 0.010 0.0 1 * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * interference between resonances [ numerical integration only ] note : + for positive , - for negative interference ; + - if interference sign is unknown ecm decm jr g1 dg1 l1 pt g2 dg2 l2 pt g3 dg3 l3 pt exf !+ - 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * comments : 1 .almost the same input data as in angulo et al .1999 ( nacre ) are adopted here ; however , we obtain the resonant rate contributions by numerical integration , whenever possible .2 . proton width for er=19 kev from champagne and pitt 1986 and la cognata et al .2008 , while total and radiative widths are from a private communication quoted in wiescher et al . 1980 .proton , total and radiative widths for er=89 kev are adopted from lorentz - wirzba et al .1979 , tilley et al . 1995 and a private communication quoted in wiescher et al . 1980 , respectively .4 . direct capture s - factor adopted from wiescher et al .1980 . 5 . above t=5 gkthe rate is extrapolated using the most hauser - feshbach rate ..... .... 18o(p , a)15n * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * 1 !zproj 8 !ztarget 2 !zexitparticle ( = 0 when only 2 channels open ) 1.0078 !aproj 17.999 !atarget 4.0026 ! aexitparticle ( = 0 when only 2 channels open ) 0.5 ! jproj 0.0 !jtarget 0.0 ! jexitparticle ( = 0 when only 2 channels open ) 7994.8 ! projectile separation energy ( kev ) 4013.74 ! exit particle separation energy ( = 0 when only 2 channels open ) 1.25 ! radius parameter r0 ( fm ) 3 ! gamma - ray channel number ( = 2 if ejectile is a g - ray ; =3 otherwise ) * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * 1.0 ! minimum energy for numerical integration ( kev ) 5000 ! number of random samples ( > 5000 for better statistics ) 0 != 0 for rate output at all temperatures ; = nt for rate output at selected temperatures * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * non - resonant contribution s(kevb ) s'(b ) s''(b / kev ) fracerr cutoff energy ( kev ) 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * resonant contribution note : g1 = entrance channel , g2 = exit channel , g3 = spectator channel ! !ecm , exf in ( kev ) ; wg , gx in ( ev ) ! 
!note : if er<0 , theta^2=c2s*theta_sp^2 must be entered instead of entrance channel partial width ecm decm wg dwg jr g1 dg1 l1 g2 dg2 l2 g3 dg3 l3 exf int 19.3 0.7 0 0 2.5 2.3e-19 0.5e-19 2 2.5e3 1.0e3 3 2.3 1.0 1 0.0 1 142.9 0.1 0 0 0.5 1.67e-1 0.12e-1 0 1.23e2 0.24e2 1 0.72 0.15 1 0.0 1 315.2 1.3 0 0 2.5 1.9e-2 0.3e-2 2 4.7e1 1.9e1 3 0.78 0.34 1 0.0 1 597.1 1.2 0 0 1.5 1.4e2 0.7e2 1 2.0e3 0.1e3 2 0.71 0.39 1 0.0 1 931.9 2.8 0 0 1.5 7.6e1 0.7e1 1 3.5e3 0.3e3 2 0.34 0.06 1 0.0 1 1106.2 4.0 0 0 3.5 4.7 0.6 4 5.6e2 0.76e2 3 0 0 0 0.0 1 1172.2 1.5 0 0 0.5 3.8e2 0.3e2 0 5.4e3 0.38e3 1 1.4 1.0 1 0.0 1 1326.2 1.2 0 0 0.5 2.2e2 0.2e2 0 4.7e3 0.4e3 1 3.4 1.7 1 0.0 1 1672.7 1.6 0 0 1.5 2.0e3 0.6e3 2 1.4e3 0.4e3 1 1.0 0.4 1 0.0 1 1825.2 1.2 0 0 2.5 9.0e1 3.0e1 3 7.0e1 2.0e1 2 0 0 0 0.0 1 1892.0 3.0 0 0 0.5 1.1e4 0.3e4 0 1.8e4 0.54e4 1 0.36 0.20 1 0.0 1 2167.0 3.0 0 0 0.5 2.2e3 0.7e3 0 0.9e3 0.3e3 1 0 0 0 0.0 1 2237.0 3.0 0 0 0.5 2.7e3 0.8e3 0 1.6e3 0.5e3 1 0 0 0 0.0 1 2259.0 3.0 0 0 0.5 1.0e4 3.0e3 0 12.0e3 4.0e3 1 0 0 0 0.0 1 2313.0 4.0 0 0 1.5 4.9e3 1.5e3 2 4.3e3 1.3e3 1 0 0 0 0.0 1 2501.5 1.4 0 0 1.5 2.3e3 0.7e3 2 0.95e3 0.3e3 1 0 0 0 0.0 1 2619.5 1.7 0 0 2.5 0.66e3 0.20e3 2 1.0e3 0.3e3 3 0 0 0 0.0 1 2768.5 2.6 0 0 0.5 4.3e3 1.3e3 1 1.1e3 0.3e3 0 0 0 0 0.0 1 2864.9 2.0 0 0 2.5 12.3e3 3.7e3 2 5.4e3 1.6e3 3 0 0 0 0.0 1 2980.2 2.6 0 0 1.5 4.7e3 1.4e3 2 4.3e3 1.3e3 1 0 0 0 0.0 1 3291.0 7.0 0 0 2.5 4.07e3 0.95e3 2 7.7e3 4.8e3 3 0 0 0 0.0 1 3355.0 25.0 0 0 0.5 228.3e3 1.9e3 0 43.0e3 31.0e3 1 0 0 0 0.0 1 3455.0 3.5 0 0 0.5 16.1e3 2.8e3 1 22.0e3 7.0e3 0 0 0 0 0.0 1 3507.0 5.0 0 0 1.5 11.4e3 1.9e3 1 16.0e3 6.0e3 2 0 0 0 0.0 1 3545.0 7.0 0 0 2.5 3.5e3 1.0e3 2 18.3e3 4.8e3 3 0 0 0 0.0 1 3608.0 12.0 0 0 1.5 26.0e3 8.0e3 1 43.0e3 16.0e3 2 0 0 0 0.0 1 3658.0 4.0 0 0 1.5 11.2e3 1.8e3 2 19.0e3 8.0e3 1 0 0 0 0.0 1 4045.0 20.0 0 0 0.5 70.0e3 60.0e3 1 64.0e3 16.0e3 0 0 0 0 0.0 1 4141.0 8.0 0 0 1.5 61.0e3 15.0e3 1 51.0e3 9.0e3 2 0 0 0 0.0 1 4227.0 12 .0 0 1.5 39.0e3 10.0e3 2 36.0e3 9.0e3 1 0 0 0 0.0 1 4527.0 7.0 0 0 0.5 2.6e3 0.9e3 1 13.4e3 4.4e3 0 0 0 0 0.0 1 4582.0 10.0 0 0 2.5 4.3e3 1.6e3 2 44.4e3 7.8e3 3 0 0 0 0.0 1 4585.0 25.0 0 0 0.5 112.0e3 28.0e3 1 226.0e3 33.0e3 0 0 0 0 0.0 1 4785.0 10.0 0 0 2.5 12.3e3 6.2e3 2 82.0e3 33.0e3 3 0 0 0 0.0 1 4865.0 30.0 0 0 1.5 118.0e3 25.0e3 2 161.0e3 24.0e3 1 0 0 0 0.0 1 4945.0 25.0 0 0 2.5 11.0e3 8.0e3 2 76.0e3 14.0e3 3 0 0 0 0.0 1 4985.0 50.0 0 0 0.5 18.0e3 10.0e3 1 105.0e3 33.0e3 0 0 0 0 0.0 1 5095.0 75.0 0 0 1.5 71.0e3 27.0e3 1 213.0e3 56.0e3 2 0 0 0 0.0 1 5322.0 8.0 0 0 3.5 9.1e3 2.1e3 3 22.0e3 10.0e3 4 0 0 0 0.0 1 5365.0 25.0 0 0 1.5 1.9e3 1.2e3 1 36.0e3 18.0e3 2 0 0 0 0.0 1 5737.0 11.0 0 0 3.5 11.6e3 2.2e3 3 43.0e3 9.0e3 4 0 0 0 0.0 1 6045.0 20.0 0 0 2.5 11.4e3 2.8e3 2 129.0e3 29.0e3 3 0 0 0 0.0 1 6105.0 21.0 0 0 1.5 7.6e3 2.8e3 1 76.0e3 29.0e3 2 0 0 0 0.0 1 6335.0 20.0 0 0 1.5 8.5e3 2.8e3 1 67.0e3 29.0e3 2 0 0 0 0.0 1 6705.0 20.0 0 0 1.5 19.9e3 4.7e3 1 103.0e3 38.0e3 2 0 0 0 0.0 1 6745.0 50.0 0 0 0.5 95.0e3 24.0e3 0 265.0e3 70.0e3 1 0 0 0 0.0 1 6785.0 20.0 0 0 2.5 29.9e3 5.7e3 2 179.0e3 48.0e3 3 0 0 0 0.0 1 6925.0 30.0 0 0 3.5 19.0e3 3.8e3 3 178.0e3 29.0e3 4 0 0 0 0.0 1 7365.0 20.0 0 0 0.5 5.7e3 1.9e3 1 61.0e3 10.0e3 0 0 0 0 0.0 1 7405.0 30.0 0 0 2.5 6.6e3 1.9e3 2 73.0e3 24.0e3 3 0 0 0 0.0 1 7775.0 21.0 0 0 1.5 7.6e3 2.8e3 1 89.0e3 24.0e3 2 0 0 0 0.0 1 8205.0 40.0 0 0 1.5 15.2e3 3.8e3 2 155.0e3 29.0e3 1 0 0 0 0.0 1 8235.0 30.0 0 0 3.5 12.3e3 3.8e3 3 209.0e3 38.0e3 4 0 0 0 0.0 1 8285.0 20.0 0 0 1.5 12.3e3 3.8e3 1 154.0e3 29.0e3 2 0 0 0 0.0 1 9055.0 40.0 0 0 1.5 37.0e3 
7.6e3 1 293.0e3 67.0e3 2 0 0 0 0.0 1 9165.0 40.0 0 0 3.5 28.4e3 7.6e3 3 294.0e3 67.0e3 4 0 0 0 0.0 1 9455.0 30.0 0 0 1.5 2.8e3 1.9e3 1 29.0e3 19.0e3 2 0 0 0 0.0 1 9655.0 60.0 0 0 3.5 4.7e3 2.8e3 3 90.0e3 57.0e3 4 0 0 0 0.0 1 9935.0 40.0 0 0 1.5 21.8e3 4.7e3 1 232.0e3 57.0e3 2 0 0 0 0.0 1 10035.0 40.0 0 0 3.5 30.3e3 6.6e3 3 333.0e3 57.0e3 4 0 0 0 0.0 1 11075.0 60.0 0 0 1.5 20.8e3 6.6e3 1 532.0e3 142.0e3 2 0 0 0 0.0 1 11835.0 150.0 0 0 2.5 12.3e3 5.7e3 3 355.0e3 57.0e3 2 0 0 0 0.0 1 11895.0 30.0 0 0 1.5 37.0e3 7.6e3 1 435.0e3 57.0e3 2 0 0 0 0.0 1 12815.0 50.0 0 0 0.5 30.3e3 4.7e3 1 381.0e3 57.0e3 0 0 0 0 0.0 1 12935.0 50.0 0 0 1.5 11.4e3 3.8e3 1 305.0e3 48.0e3 2 0 0 0 0.0 1 13055.0 50.0 0 0 3.5 23.7e3 4.7e3 3 423.0e3 29.0e3 4 0 0 0 0.0 1 * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * upper limits of resonances note : enter partial width upper limit by chosing non - zero value for pt , where pt=<theta^2 > for particles and ... note : ...pt=<b > for g - rays [ enter : " upper_limit 0.0 " ] ; for each resonance : # upper limits < # open channels ! ecm decm jr g1 dg1 l1 pt g2 dg2 l2 pt g3 dg3 l3 pt exf int 89.0 3.0 1.5 8.0e-8 2.5e-8 1 0.0 3.0e3 0.0 2 0.010 0.6 0.25 1 0.0 0.0 1 204.2 1.0 2.5 7.7e-4 2.0e-4 2 0.0 0.8e3 0.0 3 0.010 0.0 0.0 0 0.0 0.0 1 * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * interference between resonances [ numerical integration only ] note : + for positive , - for negative interference ; + - if interference sign is unknown ecm decm jr g1 dg1 l1 pt g2 dg2 l2 pt g3 dg3 l3 pt exf + - 656.0 30.0 0.5 5.6e3 1.0e3 0 0.0 2.0e5 1.1e5 1 0.0 0.0 0 0 0.0 0.0 798.4 1.6 0.5 2.46e4 0.14e4 0 0.0 2.0e4 1.0e3 1 0.0 2.5 0.4 1 0.0 0.0 * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * comments : 12/11/09 1 . up to er=2 mev , input data are taken from the same source as for the 18o(p , g)19f reaction ( see previous table ) ; for higher energies the partial widths are adopted from sellin et al .1969 , orihara et al . 1973 , almanza et al .1975 and murillo et al . 1979 . 2 . for broad 656 kev resonance , parameters are adopted from yagi 1962 , mak et al .1978 and lorentz - wirzba et al . 1979 . interference between the er=656 and 798 kev resonances ( jp=1/2 + ) is included . .... .... 18o(a , g)22ne * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * 2 ! zproj 8 !ztarget 0 !zexitparticle ( = 0 when only 2 channels open ) 4.0026 !aproj 17.9992 !atarget 1.009 ! aexitparticle ( = 0 when only 2 channels open ) 0.0 ! jproj 0.0 !jtarget 0.5 ! jexitparticle ( = 0 when only 2 channels open ) 9668.1 ! projectile separation energy ( kev ) 10364.3 ! exit particle separation energy ( = 0 when only 2 channels open ) 1.25 ! radius parameter r0 ( fm ) 2 !
gamma - ray channel number ( = 2 if ejectile is a g - ray ; =3 otherwise ) * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * 2.0 ! minimum energy for numerical integration ( kev ) 5000 ! number of random samples ( > 5000 for better statistics ) 0 != 0 for rate output at all temperatures ; = nt for rate output at selected temperatures * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * non - resonant contribution s(kevb ) s'(b ) s''(b / kev ) fracerr cutoff energy ( kev ) 6.7e4 -39.6 1.4e-2 0.5 2500.0 0.0 0.0 0.0 0.0 0.0 * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * resonant contribution note : g1 = entrance channel , g2 = exit channel , g3 = spectator channel ! !ecm , exf in ( kev ) ; wg , gx in ( ev ) ! !note : if er<0 , theta^2=c2s*theta_sp^2 must be entered instead of entrance channel partial width ecm decm wg dwg jr g1 dg1 l1 g2 dg2 l2 g3 dg3 l3 exf int 57.0 7.0 0 0 2 2.0e-41 1.6e-41 2 0.1 0.01 1 0 0 0 0.0 0 398.0 8.0 4.8e-7 2.4e-7 0 0 0 0 0 0 0 0 0 0 0.0 0 469.0 7.0 7.1e-7 1.7e-7 0 0 0 0 0 0 0 0 0 0 0.0 0 541.6 0.8 0 0 1 7.63e-5 0.63e-5 1 11.2 3.3 1 0 0 0 0.0 1 613.5 0.8 4.9e-4 0.4e-4 0 0 0 0 0 0 0 0 0 0 0.0 0 628.0 0.8 1.2e-3 0.1e-3 0 0 0 0 0 0 0 0 0 0 0.0 0 947.2 4.1 4.1e-4 1.0e-4 0 0 0 0 0 0 0 0 0 0 0.0 0 1025.8 3.3 1.3e-3 0.2e-3 0 0 0 0 0 0 0 0 0 0 0.0 0 1082.2 3.3 1.0e-3 0.2e-3 0 0 0 0 0 0 0 0 0 0 0.0 0 1190.2 3.3 2.2e-3 0.3e-3 0 0 0 0 0 0 0 0 0 0 0.0 0 1254.8 4.1 6.5e-2 0.8e-2 0 0 0 0 0 0 0 0 0 0 0.0 0 1362.0 5.0 2.0e-4 0.5e-4 0 0 0 0 0 0 0 0 0 0 0.0 0 1460.1 4.1 1.8e-3 0.3e-3 0 0 0 0 0 0 0 0 0 0 0.0 0 1526.4 3.3 7.2e-3 1.1e-3 0 0 0 0 0 0 0 0 0 0 0.0 0 1602.5 4.1 7.0e-3 1.1e-3 0 0 0 0 0 0 0 0 0 0 0.0 0 1799.6 4.1 0.48 0.07 0 0 0 0 0 0 0 0 0 0 0.0 0 2017.9 5.0 0 0 2 500.0 170.0 2 7.7 2.3 1 4500.0 1500.0 0 1275.0 1 2083.9 10.0 0 0 1 3410.0 1023.0 1 0.29 0.10 1 7590.0 2277.0 1 0.0 1 2217.9 10.0 0 0 1 3400.0 1133.0 1 0.71 0.24 1 6600.0 2200.0 1 0.0 1 2611.9 10.0 0 0 1 6600.0 2200.0 1 4.7 1.6 1 5.9e4 2.0e4 1 0.0 1 * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * upper limits of resonances note : enter partial width upper limit by chosing non - zero value for pt , where pt=<theta^2 > for particles and ... note : ...pt=<b > for g - rays [ enter : " upper_limit 0.0 " ] ; for each resonance : # upper limits < # open channels ! 
ecm decm jr g1 dg1 l1 pt g2 dg2 l2 pt g3 dg3 l3 pt exf int 174.0 8.0 0 1.5e-17 0.0 0 0.01 0.1 0.01 1 0 0 0 0 0 0.0 0 * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * interference between resonances [ numerical integration only ] note : + for positive , - for negative interference ; + - if interference sign is unknown ecm decm jr g1 dg1 l1 pt g2 dg2 l2 pt g3 dg3 l3 pt exf !+ - 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * comments : 1 . direct capture s - factor from trautvetter et al .1978 ; for different results , see buchmann et al .1988 or descouvemont 1988 . 2 . for all resonances above and including er=947 kev , energies and strengths are adopted from trautvetter et al . 3 . er=398 - 628 kev : resonance energies from either endt 1998 or vogelaar et al .1990 ; resonance strengths from vogelaar et al .1990 and dababneh et al . 2003 . note that for the very weakly observed er=398 kev resonance we adopt a strength uncertainty that is larger ( 50% ) than the published value ( dababneh et al . kev : ga calculated from measured resonance strength ; gg is equal to the total width ( berg and wiehard 1979 ) . 5 . er=57 kev : we assume jp=2 + , based on the fact that la=2 partial waves describe the measured 18o(6li , d)22ne angular distribution better than the originally suggested jp=3- assignment ( giesen et al .1994 ) ; for the reported alpha - particle spectroscopic factor we assume an uncertainty of a factor of 2 . 6 . er=174 kev : the level is very weakly populated in the 18o(6li , d)22ne work of giesen et al .1994 ; in fact , even an la=0 transfer can not be excluded , as can be seen by comparing the angular distributions for this excitation energy range ( tab . 6 in giesen et al . 1994 ) ; thus we treat the estimated la=0 spectroscopic factor as an upper limit ( this procedure differs from the one in the work of giesen et al .1994 , where an jp=2 + assignment was adopted ) . 7 . er=1255 kev : the total width is known for this level ( g = gn=25 kev ) , but not enough information is available to deduce the gamma - ray and alpha - particle partial widths from the measured ( a , g ) resonance strength ; thus we do not take the tail of this broad resonance into account . 8 . er>2 mev : we believe that angulo et al .1999 ( nacre ) have misinterpreted the " gg " values listed in graff et al . 1968 and chouraqui et al .1970 as resonance strengths ; thus the reported nacre strengths for these high - energy resonances are likely erroneous . we derive alpha - particle , gamma - ray and neutron partial widths from the information provided in graff et al .1968 , chouraqui et al . 1970 and goldberg et al . 2004 ..... references : berg and wiehard ; buchmann , dauria and mccorquodale ; chouraqui et al . ; dababneh et al . ; descouvemont ; endt ; giesen et al . ; goldberg et al . ; graff et al . ; trautvetter et al . ; vogelaar et al . . .... 17f(p , g)18ne * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * 1 !
zproj 9 ! ztarget 0 ! zexitparticle ( = 0 when only 2 channels open ) 1.0078 !aproj 17.002 !atarget 0 ! aexitparticle ( = 0 when only 2 channels open ) 0.5 !jproj 2.5 !jtarget 0.0 ! jexitparticle ( = 0 when only 2 channels open ) 3923.5 ! projectile separation energy ( kev ) 0.0 !exit particle separation energy ( = 0 when only 2 channels open ) 1.25 !radius parameter r0 ( fm ) 2 ! gamma - ray channel number ( = 2 if ejectile is a g - ray ; =3 otherwise ) * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * 1.0! minimum energy for numerical integration ( kev ) 5000 !number of random samples ( > 5000 for better statistics ) 0 != 0 for rate output at all temperatures ; = nt for rate output at selected temperatures * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * nonresonant contribution s(kevb ) s'(b ) s''(b / kev ) fracerr cutoff energy ( kev ) 2.4 0.0 0.0 0.4 2500.0 0.0 0.0 0.0 0.0 0.0 * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * resonant contribution note : g1 = entrance channel , g2 = exit channel , g3 = spectator channel ! !ecm , exf in ( kev ) ; wg , gx in ( ev ) ! !note : if er<0 , theta^2=c2s*theta_sp^2 must be entered instead of entrance channel partial width ecm decm wg dwg jr g1 dg1 l1 g2 dg2 l2 g3 dg3 l3 exf int 596.1 7.0 0 0 1 1.1e2 0.44e2 1 1.5e-2 0.75e-2 1 0 0 0 3576.0 1 599.8 2.5 0 0 3 1.8e4 0.2e4 0 2.5e-2 1.25e-2 1 0 0 0 1887.0 1 665.1 7.0 0 0 0 4.9e1 2.0e1 2 1.1e-3 0.55e-3 2 0 0 0 1887.0 1 1182.1 8.0 0 0 2 4.7e4 0.5e4 0 6.5e-2 3.25e-2 1 0 0 0 1887.0 1 1229.1 8.0 6.4e-3 3.2e-3 0 0 0 0 0 0 0 0 0 0 0.0 0 1537.1 5.0 4.2e-2 2.1e-2 0 0 0 0 0 0 0 0 0 0 0.0 0 2226.1 10.0 0 0 1 1.8e4 0.72e4 1 1.8e-1 0.9e-1 1 0 0 0 0.0 1 * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * upper limits of resonances note : enter partial width upper limit by chosing non - zero value for pt , where pt=<theta^2 > for particles and ... note : ...pt=<b > for g - rays [ enter : " upper_limit 0.0 " ] ; for each resonance : # upper limits < # open channels ! ecm decm jr g1 dg1 l1 pt g2 dg2 l2 pt g3 dg3 l3 pt exf int ! 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 0 * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * interference between resonances [ numerical integration only ] note : + for positive , - for negative interference ; + - if interference sign is unknown ecm decm jr g1 dg1 l1 pt g2 dg2 l2 pt g3 dg3 l3 pt exf !+ - 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * .... 
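the resonance tables in these inputs feed the standard narrow - resonance formalism : the strength is wg = (2jr+1)/[(2jproj+1)(2jtarget+1)] x g1 g2/gtot , and the summed analytic contribution of narrow , isolated resonances to the rate is n_a <sigma v> = 1.5399e11 (mu t9)^(-3/2) sum wg exp(-11.605 ecm/t9 ) cm^3 mol^-1 s^-1 , with wg and ecm in mev and mu the reduced mass in amu . as a minimal illustration ( not the evaluation code itself , which monte carlo samples all listed uncertainties and integrates numerically for the broad resonances flagged in the int column ) , the python sketch below applies these two formulas to two rows of the 17f(p , g)18ne table above ; the function names are ours .
....
import math

def omega_gamma(jr, jproj, jtarget, g1, g2, g3=0.0):
    # resonance strength wg in the same units as the partial widths:
    # wg = (2jr+1)/[(2jproj+1)(2jtarget+1)] * g1*g2/(g1+g2+g3)
    omega = (2.0 * jr + 1.0) / ((2.0 * jproj + 1.0) * (2.0 * jtarget + 1.0))
    return omega * g1 * g2 / (g1 + g2 + g3)

def narrow_resonance_rate(t9, mu, resonances):
    # n_a<sigma v> in cm^3 mol^-1 s^-1 for narrow, isolated resonances;
    # t9 in gk, mu in amu, resonances as (ecm, wg) pairs in mev
    prefactor = 1.5399e11 / (mu * t9) ** 1.5
    return prefactor * sum(wg * math.exp(-11.605 * ecm / t9)
                           for ecm, wg in resonances)

# two rows of the 17f(p,g)18ne table, with ecm converted from kev to mev
# and the partial widths g1 (entrance) and g2 (gamma) from ev to mev
mu = 1.0078 * 17.002 / (1.0078 + 17.002)
resonances = [
    (0.5998, omega_gamma(3.0, 0.5, 2.5, 1.8e4 * 1e-6, 2.5e-2 * 1e-6)),
    (1.1821, omega_gamma(2.0, 0.5, 2.5, 4.7e4 * 1e-6, 6.5e-2 * 1e-6)),
]
print(narrow_resonance_rate(1.0, mu, resonances))  # rate at t = 1 gk
....
for the 599.8 kev resonance this gives wg = (7/12) g1 g2/(g1+g2) , about 1.5e-2 ev , i.e. the strength is set almost entirely by the small gamma - ray width since g1 >> g2 .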
.... 18f(p , g)19ne * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * 1 ! zproj 9 ! ztarget 2 ! zexitparticle ( = 0 when only 2 channels open ) 1.0078 ! aproj 18.001 !atarget 4.0026 ! aexitparticle ( = 0 when only 2 channels open ) 0.5 ! jproj 1.0 !jtarget 0.0 ! jexitparticle ( = 0 when only 2 channels open ) 6411.2 ! projectile separation energy ( kev ) 3529.1 ! exit particle separation energy ( = 0 when only 2 channels open ) 1.25 ! radius parameter r0 ( fm ) 2 ! gamma - ray channel number ( = 2 if ejectile is a g - ray ; =3 otherwise ) * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * 1.0 ! minimum energy for numerical integration ( kev ) 5000 ! number of random samples ( > 5000 for better statistics ) 0 != 0 for rate output at all temperatures ; = nt for rate output at selected temperatures * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * nonresonant contribution s(kevb ) s'(b ) s''(b / kev ) fracerr cutoff energy ( kev ) 34.0 4.1e-4 0.0 14.1 1000.0 0.0 0.0 0.0 0.0 0.0 * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * resonant contribution note : g1 = entrance channel , g2 = exit channel , g3 = spectator channel ! !ecm , exf in ( kev ) ; wg , gx in ( ev ) ! !note : if er<0 , theta^2=c2s*theta_sp^2 must be entered instead of entrance channel partial width ecm decm wg dwg jr g1 dg1 l1 g2 dg2 l2 g3 dg3 l3 exf int -121.0 6.0 0 0 0.5 0.087 0.026 0 1.3 2.1 1 1.2e4 0.4e4 1 0.0 1 8.0 6.0 0 0 1.5 7.2e-36 2.1e-36 1 1.3 2.1 1 4.e3 2.e3 2 0.0 1 330.0 6.0 0 0 1.5 2.22 0.69 1 5.0 2.6 2 5.7e3 0.7e3 2 275.0 1 * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * upper limits of resonances note : enter partial width upper limit by chosing non - zero value for pt , where pt=<theta^2 > for particles and ... note : ...pt=<b > for g - rays [ enter : " upper_limit 0.0 " ] ; for each resonance : # upper limits < # open channels ! 
ecm decm jr g1 dg1 l1 pt g2 dg2 l2 pt g3 dg3 l3 pt exf int 26.0 9.0 0.5 2.2e-17 0.0 1 0.0045 1.3 2.1 1 0 2.16e5 0.19e5 0 0 0.0 1 287.0 6.0 2.5 2.4e-2 0.0 2 0.0045 0.29 0.15 1 0 1.2e3 0.3e3 3 0 1616.0 1 450.0 6.0 3.5 1.0 0.0 3 0.0045 2.3 1.2 1 0 1.2e3 0.3e3 4 0 238.0 1 827.0 6.0 1.5 700.0 0.0 0 0.0045 1.3 2.1 1 0 6.0e3 5.2e3 1 0 0.0 1 842.0 10.0 0.5 1.8e3 0.0 0 0.0045 1.3 2.1 1 0 2.3e4 2.0e4 1 0 0.0 1 * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * interference between resonances [ numerical integration only ] note : + for positive , - for negative interference ; + - if interference sign is unknown ecm decm jr g1 dg1 l1 pt g2 dg2 l2 pt g3 dg3 l3 pt exf + - 38.0 7.0 1.5 1.17e-11 0.35e-11 0 0 1.1 0.6 1 0 1.3e3 0.4e3 1 0 275.0 664.7 1.6 1.5 1.52e4 0.10e4 0 0 1.3 2.1 1 0 2.38e4 0.12e4 1 0 0.0 * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * comments : 1 . radiative widths are scaled from analog levels ( nessaraja et al .2007 ) when possible . otherwise , typical values and uncertainties are adopted from statistics of radiative widths in 19f . 2 . the direct capture contribution is adopted from the calculation of utku et al .1998 using 18o+p spectroscopic factors . 3 . for an interpretation of the large fractional uncertainty of the direct capture s - factor , see the numerical example at the end of sec .5.1.2 in paper i ; the values of e[x]=34.0 kevb and fracerr = sqrt(v[x])/e[x]=14.1 entered here correspond to a median value of 2.4 kevb with a factor of 10 uncertainty . 4 . otherwise , the same input data as for the 18f(p , a ) reaction are used . .... .... 18f(p , a)15o * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * 1 ! zproj 9 ! ztarget 2 !zexitparticle ( = 0 when only 2 channels open ) 1.0078 !aproj 18.001 !atarget 4.0026 ! aexitparticle ( = 0 when only 2 channels open ) 0.5 ! jproj 1.0 !jtarget 0.0 ! jexitparticle ( = 0 when only 2 channels open ) 6411.2 ! projectile separation energy ( kev ) 3529.1 ! exit particle separation energy ( = 0 when only 2 channels open ) 1.25 ! radius parameter r0 ( fm ) 3 ! gamma - ray channel number ( = 2 if ejectile is a g - ray ; =3 otherwise ) * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * 1.0 ! minimum energy for numerical integration ( kev ) 5000 !
number of random samples ( > 5000 for better statistics ) 0 != 0 for rate output at all temperatures ; = nt for rate output at selected temperatures * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * nonresonant contribution s(kevb ) s'(b ) s''(b / kev ) fracerr cutoff energy ( kev ) 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * resonant contribution note : g1 = entrance channel , g2 = exit channel , g3 = spectator channel ! !ecm , exf in ( kev ) ; wg , gx in ( ev ) ! !note : if er<0 , theta^2=c2s*theta_sp^2 must be entered instead of entrance channel partial width ecm decm wg dwg jr g1 dg1 l1 g2 dg2 l2 g3 dg3 l3 exf int -121.0 6.0 0 0 0.5 0.087 0.026 0 1.2e4 0.4e4 1 1.3 2.1 1 0.0 1 8.0 6.0 0 0 1.5 7.2e-36 2.1e-36 1 4.e3 2.e3 2 1.3 2.1 1 0.0 1 330.0 6.0 0 0 1.5 2.22 0.69 1 5.7e3 0.7e3 2 5.0 2.6 2 0.0 1 * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * upper limits of resonances note : enter partial width upper limit by chosing non - zero value for pt , where pt=<theta^2 > for particles and ... note : ...pt=<b > for g - rays [ enter : " upper_limit 0.0 " ] ; for each resonance : # upper limits < # open channels ! ecm decm jr g1 dg1 l1 pt g2 dg2 l2 pt g3 dg3 l3 pt exf int 26.0 9.0 0.5 2.2e-17 0.0 1 0.0045 2.16e5 0.19e5 0 0 1.3 2.1 1 0 0.0 1 287.0 6.0 2.5 2.4e-2 0.0 2 0.0045 1.2e3 0.3e3 3 0 0.29 0.15 1 0 0.0 1 450.0 6.0 3.5 1.0 0.0 3 0.0045 1.2e3 0.3e3 4 0 2.3 1.2 1 0 0.0 1 827.0 6.0 1.5 700.0 0.0 0 0.0045 6.0e3 5.2e3 1 0 1.3 2.1 1 0 0.0 1 842.0 10.0 0.5 1.8e3 0.0 0 0.0045 2.3e4 2.0e4 1 0 1.3 2.1 1 0 0.0 1 * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * interference between resonances [ numerical integration only ] note : + for positive , - for negative interference ; + - if interference sign is unknown ecm decm jr g1 dg1 l1 pt g2 dg2 l2 pt g3 dg3 l3 pt exf + - 38.0 7.0 1.5 1.17e-11 0.35e-11 0 0 1.3e3 0.4e3 1 0 1.1 0.6 1 0 0.0 664.7 1.6 1.5 1.52e4 0.10e4 0 0 2.38e4 0.12e4 1 0 1.3 2.1 1 0 0.0 * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * comments : 1 . summary tables from utku et al .1998 , bardayan et al .2004 , chae et al .2006 and nessaraja et al .2007 were considered , but original data were preferred . 2 . level energies mostly come from utku et al . 1998 . 3 . alpha - particle widths have been measured by utku et al .1998 , bardayan et al . 2001 , 2004 , or are scaled ( bardayan et al . 2005 ) from analog levels . 4 . partial widths or resonance strengths for the 330 and 665 kev resonances have been measured directly by bardayan et al .( 2001 , 2002 ) . 5 .
recent neutron and proton transfer experiments by adekola 2009 have reassigned spins and parities for the 8 kev ( 1/2- , 3/2- ) and 38 kev ( 3/2- , 3/2 + ) resonances previously thought to both have 3/2 + ( utku et al .1998 ) , interfering with the 665 kev resonance . here we assume 3/2- and 3/2 + , respectively , for these two resonances and use the corresponding proton widths extracted by adekola 2009 . we allow for interferences between the 38 and 665 kev resonances . data for the subthreshold level also originate from adekola 's thesis . 6 . reported resonances above 900 kev are not considered because of conflicting experimental data . .... .... 19ne(p , g)20na * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * 1 ! zproj 10 !ztarget 0 !zexitparticle ( = 0 when only 2 channels open ) 1.0078 !aproj 19.002 !atarget 0 !aexitparticle ( = 0 when only 2 channels open ) 0.5 !jproj 0.5 !jtarget 0.0 ! jexitparticle ( = 0 when only 2 channels open ) 2193.0 ! projectile separation energy ( kev ) 0.0 !exit particle separation energy ( = 0 when only 2 channels open ) 1.25 !radius parameter r0 ( fm ) 2 ! gamma - ray channel number ( = 2 if ejectile is a g - ray ; =3 otherwise ) * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * 1.0 ! minimum energy for numerical integration ( kev ) 5000 !number of random samples ( > 5000 for better statistics ) 0 != 0 for rate output at all temperatures ; = nt for rate output at selected temperatures * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * nonresonant contribution s(kevb ) s'(b ) s''(b / kev ) fracerr cutoff energy ( kev ) 1.0025 0.2288e-3 0.1376e-6 0.4 1000.0 0.0 0.0 0.0 0.0 0.0 * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * resonant contribution note : g1 = entrance channel , g2 = exit channel , g3 = spectator channel ! !ecm , exf in ( kev ) ; wg , gx in ( ev ) ! !note : if er<0 , theta^2=c2s*theta_sp^2 must be entered instead of entrance channel partial width ecm decm wg dwg jr g1 dg1 l1 g2 dg2 l2 g3 dg3 l3 exf int 452.0 9.0 9.0e-3 6.0e-3 0 0 0 0 0 0 0 0 0 0 0.0 0 656.0 9.0 8.0e-3 7.0e-3 0 0 0 0 0 0 0 0 0 0 0.0 0 808.0 7.0 0.0 0.0 1 1.98e4 2.0e3 0 0.047 0.023 1 0 0 0 0.0 1 893.0 7.0 0.0 0.0 0 3.59e4 2.0e3 0 0.107 0.053 1 0 0 0 0.0 1 * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * upper limits of resonances note : enter partial width upper limit by chosing non - zero value for pt , where pt=<theta^2 > for particles and ... note : ...pt=<b > for g - rays [ enter : " upper_limit 0.0 " ] ; for each resonance : # upper limits < # open channels ! ecm decm jr g1 dg1 l1 pt g2 dg2 l2 pt g3 dg3 l3 pt exf int !
0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 0 * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * interference between resonances [ numerical integration only ] note : + for positive , - for negative interference ; + - if interference sign is unknown ecm decm jr g1 dg1 l1 pt g2 dg2 l2 pt g3 dg3 l3 pt exf !+ - 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * comments : 1 .level energies from tilley et al . 1998; proton separation energy from audi et al .2 . proton widths from tilley et al .1998 ( coszach et al . 1994 ) .radiative width from shell model with 50% assigned uncertainty plus spin assignment uncertainty for the first two levels .4 . direct capture s - factor adopted from vancraeynest et al .1998 ; note that the energy in their eq .( 10 ) must be in units of mev although their s - factor is in units of kevb ..... .... 20ne(p , g)21na * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * 1 !zproj 10 !ztarget 0 !zexitparticle ( = 0 when only 2 channels open ) 1.008 !aproj 19.992 !atarget 0 ! aexitparticle ( = 0 when only 2 channels open ) 0.5 ! jproj 0.0 !jtarget 0 ! jexitparticle ( = 0 when only 2 channels open ) 2431.69 ! projectile separation energy ( kev ) 0 !exit particle separation energy ( = 0 when only 2 channels open ) 1.25 !radius parameter r0 ( fm ) 2 ! gamma - ray channel number ( = 2 if ejectile is a g - ray ; =3 otherwise ) * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * 1.0! minimum energy for numerical integration ( kev ) 5000 !number of random samples ( > 5000 for better statistics ) 0 != 0 for rate output at all temperatures ; = nt for rate output at selected temperatures * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * nonresonant contribution s(kevb ) s'(b ) s''(b / kev ) fracerr cutoff energy ( kev ) 18.14 -8.93e-3 5.776e-6 0.24 2000.0 19.68 -0.144 7.394e-4 0.45 239.0 * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * resonant contribution note : g1 = entrance channel , g2 = exit channel , g3 = spectator channel ! !ecm , exf in ( kev ) ; wg , gx in ( ev ) ! 
!note : if er<0 , theta^2=c2s*theta_sp^2 must be entered instead of entrance channel partial width ecm decm wg dwg j g1 dg1 l1 g2 dg2 l2 g3 dg3 l3 exf int -6.79 0.42 0 0 0.5 0.445 0.099 0 0.17 0.05 1 0 0 0 0.0 1 366.2 0.5 1.1e-4 2.0e-5 0 0 0 0 0 0 0 0 0 0 0.0 0 397.4 0.7 6.2e-5 1.2e-5 0 0 0 0 0 0 0 0 0 0 0.0 0 1112.6 0.4 1.125 1.8e-2 0 0 0 0 0 0 0 0 0 0 0.0 0 1247.2 0.4 0.035 0.01 0 0 0 0 0 0 0 0 0 0 0.0 0 1430.5 0.5 0.050 0.015 0 0 0 0 0 0 0 0 0 0 0.0 0 1737.9 0.7 0 0 1.5 1.8e5 1.5e4 1 0.50 0.15 1 0 0 0 332.0 1 1862.6 0.6 2.40 0.70 0 0 0 0 0 0 0 0 0 0 0.0 0 2036.2 0.7 0 0 1.5 2.1e4 3.0e3 2 0.80 0.20 1 0 0 0 0.0 1 * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * upper limits of resonances note : enter partial width upper limit by chosing non - zero value for pt , where pt=<theta^2 > for particles and ... note : ...pt=<b > for g - rays [ enter : " upper_limit 0.0 " ] ; for each resonance : # upper limits < # open channels ! ecm decm jr g1 dg1 l1 pt g2 dg2 l2 pt g3 dg3 l3 pt exf int ! 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 0 * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * interference between resonances [ numerical integration only ] note : + for positive , - for negative interference ; + - if interference sign is unknown ecm decm jr g1 dg1 l1 pt g2 dg2 l2 pt g3 dg3 l3 pt exf ! + - 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * comments : 1 .nonresonant contribution from fits to direct capture data of rolfs et al .1975 , renormalized using revised strength for the 1112.6 kev resonance and re - analysis of 16o(p , g ) direct capture ( iliadis et al . 2008 ) .two terms are necessary to cover the full energy range of rolfs et al . 1975 .2 . energy of subthreshold state is based on q=2431.69(14 ) kev , which includes a new measurement of the mass of 21na by mukherjee et al .spectroscopic factor of 0.71(14 ) is a weighted average of fits to direct capture and the resonance tail .the dimensionless reduced width is 0.609(61 ) ( iliadis 1997 ) and the gamma - ray partial width is from the lifetime measurement of anttila et al .3 . information on resonances : adopted energies from ensdf ( firestone 2004 ) , widths and strengths from rolfs et al . 1975 , and endt and van der leun 1978 . .... .... 20ne(a , g)24 mg * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * 2 !zproj 10 !ztarget 1 ! zexitparticle ( = 0 when only 2 channels open ) 4.0026 ! aproj 19.992 !atarget 1.0078 !aexitparticle ( = 0 when only 2 channels open ) 0.0 !jproj 0.0 !jtarget 0.5 ! jexitparticle ( = 0 when only 2 channels open ) 9316.55 ! projectile separation energy ( kev ) 11692.68 ! exit particle separation energy ( = 0 when only 2 channels open ) 1.25 ! radius parameter r0 ( fm ) 2 ! 
gamma - ray channel number ( = 2 if ejectile is a g - ray ; =3 otherwise ) * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * 2.0 ! minimum energy for numerical integration ( kev ) 5000 ! number of random samples ( > 5000 for better statistics ) 0 != 0 for rate output at all temperatures ; = nt for rate output at selected temperatures * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * nonresonant contribution s(kevb ) s'(b ) s''(b / kev ) fracerr cutoff energy ( kev ) 9.61e4 -12.9 0.0 0.79 1500.0 0.0 0.0 0.0 0.0 0.0 * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * resonant contribution note : g1 = entrance channel , g2 = exit channel , g3 = spectator channel ! !ecm , exf in ( kev ) ; wg , gx in ( ev ) ! !note : if er<0 , theta^2=c2s*theta_sp^2 must be entered instead of entrance channel partial width ecm decm wg dwg j g1 dg1 l1 g2 dg2 l2 g3 dg3 l3 exf int 794.4 0.4 2.9e-4 6.0e-5 0 0 0 0 0 0 0 0 0 0 0.0 0 1016.74 0.13 3.0e-4 6.0e-5 0 0 0 0 0 0 0 0 0 0 0.0 0 1043.96 0.13 0.0 0.0 2 9.6e-5 2.0e-5 2 0.44 0.09 2 0 0 0 1369.0 1 1363.2 0.4 0.0 0.0 0 1.9 0.6 0 0.19 0.10 2 0 0 0 1369.0 1 1414.24 0.11 2.6e-3 0.5e-4 0 0 0 0 0 0 0 0 0 0 0.0 0 1600.41 0.17 0.0 0.0 0 7.0 0.8 2 0.47 0.11 2 0 0 0 1369.0 1 1699.3 0.7 1.5 0.2 0 0 0 0 0 0 0 0 0 0 0.0 0 1845.5 0.8 0.23 0.05 0 0 0 0 0 0 0 0 0 0 0.0 0 1891.9 1.6 2.7e-3 1.4e-3 0 0 0 0 0 0 0 0 0 0 0.0 0 1900.24 0.19 1.7 0.2 0 0 0 0 0 0 0 0 0 0 0.0 0 2073.3 1.1 0.0 0.0 0 500.0 125.0 1 0.16 0.03 1 0 0 0 1369.0 1 2135.96 0.13 1.2 0.2 0 0 0 0 0 0 0 0 0 0 0.0 0 2201.7 0.6 0.0 0.0 0 500.0 125.0 2 0.14 0.03 2 0 0 0 1369.0 1 2279.5 0.8 0.041 0.009 0 0 0 0 0 0 0 0 0 0 0.0 0 2379.1 0.6 0.0 0.0 0 1.6 0.6 4 0.10 0.05 2 0 0 0 9301.0 1 2411.6 1.0 0.0 0.0 0 1.0e+4 2.0e+3 0 0.35 0.05 2 0 0 0 1369.0 1 2548.4 1.3 0.0 0.0 0 7.0e+3 3.0e+3 1 0.39 0.06 1 0 0 0 0.0 1 2653.1 1.0 0.0 0.0 0 2.4e+3 500.0 2 0.13 0.02 2 0.046 0.010 0 1369.0 1 2685.8 2.0 0.33 0.04 0 0 0 0 0 0 0 0 0 0 0.0 0 2698.7 0.8 0.0 0.0 0 700.0 200.0 3 0.14 0.07 1 0 0 0 1369.0 1 2732.6 2.0 0.0 0.0 0 7.1 2.1 4 2.4 0.30 2 4.9e-3 1.3e-3 2 4123.0 1 2802.95 0.18 0.0 0.0 0 1.9e+3 300.0 2 0.26 0.04 2 0 0 0 9516.0 1 2844.05 0.16 0.0 0.0 0 5.7e+3 400.0 4 0.092 0.030 2 0 0 0 4123.0 1 3086.8 0.7 0.0 0.0 0 98.3 44.5 2 2.33 1.09 2 13.4 2.5 0 4238.0 1 3124.0 3.0 0.13 0.02 0 0 0 0 0 0 0 0 0 0 0.0 0 3151.0 3.0 0.0 0.0 0 3.8e+3 300.0 2 0.22 0.11 2 0 0 0 0.0 1 3187.0 3.0 0.0 0.0 0 2.3e+3 300.0 4 0.66 0.08 2 0 0 0 1369.0 1 3260.0 3.0 0.0 0.0 0 6.2e+3 300.0 4 0.088 0.044 2 100.0 43.0 2 0.0 1 3320.3 0.6 0.0 0.0 0 22.7 3.1 4 0.87 0.45 2 7.3 3.7 2 4123.0 1 3420.6 0.9 0.0 0.0 0 4.29e+3 860.0 2 0.13 0.07 2 3.2e+3 1.1e+3 0 0.0 1 3489.4 0.8 0.0 0.0 0 420.0 180.0 2 0.78 0.39 2 850.0 660.0 0 0.0 1 3544.0 3.0 0.37 0.05 0 0 0 0 0 0 0 0 0 0 0.0 0 3727.0 3.0 0.0 0.0 0 2.8e+3 600.0 0 0.71 0.09 1 0 0 0 9967.0 1 3733.67 0.14 0.0 0.0 0 20.2 6.2 3 5.45 1.00 1 61.4 1.64 1 4123.0 1 3739.0 3.0 0.60 0.08 0 0 0 0 0 0 0 0 0 0 0.0 0 3770.2 0.7 0.0 0.0 0 498.0 233.0 2 3.6 0.6 1 6.0e+3 3.0e+3 0 5235.0 1 4090.0 5.0 1.4 0.7 0 0 0 0 0 0 0 0 0 0 0.0 0 4119.0 5.0
0.69 0.1 0 0 0 0 0 0 0 0 0 0 0.0 0 4450.0 3.0 0.17 0.03 0 0 0 0 0 0 0 0 0 0 0.0 0 4700.0 3.0 5.9 2.9 0 0 0 0 0 0 0 0 0 0 0.0 0 4759.0 4.0 24.0 5.0 0 0 0 0 0 0 0 0 0 0 0.0 0 4781.0 4.0 1.3 0.2 0 0 0 0 0 0 0 0 0 0 0.0 0 4829.0 4.0 1.5 0.2 0 0 0 0 0 0 0 0 0 0 0.0 0 5008.0 4.0 6.4 0.7 0 0 0 0 0 0 0 0 0 0 0.0 0 * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * upper limits of resonances note : enter partial width upper limit by chosing non - zero value for pt , where pt=<theta^2 > for particles and ... note : ...pt=<b > for g - rays [ enter : " upper_limit 0.0 " ] ; for each resonance : # upper limits < # open channels ! ecm decm jr g1 dg1 l1 pt g2 dg2 l2 pt g3 dg3 l3 pt exf int -15.40 0.08 2.0 0.64 0.0 2 0.01 0.094 0.027 2 0 0.0 0.0 0 0 1369.0 1 -11.16 0.24 0.0 0.81 0.0 0 0.01 3.8e-3 7.7e-4 2 0 0.0 0.0 0 0 1369.0 1 215.93 0.10 2.0 1.08e-20 0.0 2 0.01 0.060 0.027 2 0 0.0 0.0 0 0 1369.0 1 * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * interference between resonances [ numerical integration only ] note : + for positive , - for negative interference ; + - if interference sign is unknown ecm decm jr g1 dg1 l1 pt g2 dg2 l2 pt g3 dg3 l3 pt exf ! + - 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * comments : 1 . nonresonant contribution from ( 6li , d ) relative spectroscopic factors of anantaraman et al .1977 , normalized as described in paper ii . 2 . information on resonances : adopted energies and total widths from ensdf ( firestone 2007 ) ; strengths from smulders 1965 ( corrected by schmalbrock et al . 1983 ) , highland and thwaites 1968 ( renormalized ) , fifield et al .1978 , 1979 ( with updated stopping powers ) , schmalbrock et al . 1983 , and endt and van der leun 1978 . .... .... 21ne(p , g)22na * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * 1 ! zproj 10 ! ztarget 0 !zexitparticle ( = 0 when only 2 channels open ) 1.0078 !aproj 20.99 !atarget 0 ! aexitparticle ( = 0 when only 2 channels open ) 0.5 !jproj 1.5 !jtarget 0.0 ! jexitparticle ( = 0 when only 2 channels open ) 6739.6 ! projectile separation energy ( kev ) 0.0 !exit particle separation energy ( = 0 when only 2 channels open ) 1.25 !radius parameter r0 ( fm ) 2 ! gamma - ray channel number ( = 2 if ejectile is a g - ray ; =3 otherwise ) * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * 1.0 !
minimum energy for numerical integration ( kev ) 10000 !number of random samples ( > 5000 for better statistics ) 0 != 0 for rate output at all temperatures ; = nt for rate output at selected temperatures * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * nonresonant contribution s(kevb ) s'(b ) s''(b / kev ) fracerr cutoff energy ( kev ) 2.0e1 0.0 0.0 0.5 1000.0 0.0 0.0 0.0 0.0 0.0 * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * resonant contribution note : g1 = entrance channel , g2 = exit channel , g3 = spectator channel ! !ecm , exf in ( kev ) ; wg , gx in ( ev ) ! !note : if er<0 , theta^2=c2s*theta_sp^2 must be entered instead of entrance channel partial width ecm decm wg dwg jr g1 dg1 l1 g2 dg2 l2 g3 dg3 l3 exf int 120.86 0.04 3.75e-5 0.75e-5 0 0 0 0 0 0 0 0 0 0 0.0 0 258.22 0.04 2.1e-3 0.37e-3 0 0 0 0 0 0 0 0 0 0 0.0 0 259.07 0.04 8.25e-2 1.25e-2 0 0 0 0 0 0 0 0 0 0 0.0 0 277.14 0.04 2.0e-3 0.4e-3 0 0 0 0 0 0 0 0 0 0 0.0 0 336.00 0.4 8.13e-3 1.4e-3 0 0 0 0 0 0 0 0 0 0 0.0 0 413.3 4.0 1.87e-2 0.37e-2 0 0 0 0 0 0 0 0 0 0 0.0 0 481.5 2.0 0.2 0.038 0 0 0 0 0 0 0 0 0 0 0.0 0 500.9 1.5 0.76 0.15 0 0 0 0 0 0 0 0 0 0 0.0 0 539.3 3.0 0.125 0.025 0 0 0 0 0 0 0 0 0 0 0.0 0 540.3 3.0 2.87e-2 8.73e-3 0 0 0 0 0 0 0 0 0 0 0.0 0 621.4 3.0 3.12e-2 6.24e-3 0 0 0 0 0 0 0 0 0 0 0.0 0 632.9 2.0 3.0e-2 6.25e-3 0 0 0 0 0 0 0 0 0 0 0.0 0 639.5 1.0 5.87e-2 0.0125 0 0 0 0 0 0 0 0 0 0 0.0 0 662.4 2.0 0.138 0.0376 0 0 0 0 0 0 0 0 0 0 0.0 0 669.9 0.5 0.85 0.175 0 0 0 0 0 0 0 0 0 0 0.0 0 684.4 2.0 2.6e-2 6.2e-3 0 0 0 0 0 0 0 0 0 0 0.0 0 733.0 0.5 3.37 0.06 0 0 0 0 0 0 0 0 0 0 0.0 0 776.5 1.0 0.287 0.062 0 0 0 0 0 0 0 0 0 0 0.0 0 808.3 1.0 0.337 0.062 0 0 0 0 0 0 0 0 0 0 0.0 0 834.8 1.0 0.175 0.0375 0 0 0 0 0 0 0 0 0 0 0.0 0 860.0 3.0 0.16 0.037 0 0 0 0 0 0 0 0 0 0 0.0 0 866.3 2.0 0.1 0.02 0 0 0 0 0 0 0 0 0 0 0.0 0 897.3 3.0 0.11 0.024 0 0 0 0 0 0 0 0 0 0 0.0 0 944.0 3.0 3.0e-2 6.25e-3 0 0 0 0 0 0 0 0 0 0 0.0 0 1039.5 1.0 0.36 0.12 0 0 0 0 0 0 0 0 0 0 0.0 0 1061.9 1.0 0.54 0.16 0 0 0 0 0 0 0 0 0 0 0.0 0 1082.4 1.0 0.125 0.037 0 0 0 0 0 0 0 0 0 0 0.0 0 1150.4 1.1 1.0 0.25 0 0 0 0 0 0 0 0 0 0 0.0 0 1180.8 2.1 1.125 0.375 0 0 0 0 0 0 0 0 0 0 0.0 0 1225.9 2.1 0.21 0.062 0 0 0 0 0 0 0 0 0 0 0.0 0 1237.9 2.1 0.46 0.14 0 0 0 0 0 0 0 0 0 0 0.0 0 1279.1 4.0 0.125 0.038 0 0 0 0 0 0 0 0 0 0 0.0 0 1302.1 2.1 0.41 0.12 0 0 0 0 0 0 0 0 0 0 0.0 0 1362.1 4.0 0.26 0.1 0 0 0 0 0 0 0 0 0 0 0.0 0 1369.2 1.9 0.21 0.062 0 0 0 0 0 0 0 0 0 0 0.0 0 1375.5 1.5 0.20 0.063 0 0 0 0 0 0 0 0 0 0 0.0 0 1426.2 1.5 2.6 0.9 0 0 0 0 0 0 0 0 0 0 0.0 0 1458.5 4.0 0.125 0.038 0 0 0 0 0 0 0 0 0 0 0.0 0 1472.7 1.7 0.875 0.25 0 0 0 0 0 0 0 0 0 0 0.0 0 1495.3 1.7 3.25 1.0 0 0 0 0 0 0 0 0 0 0 0.0 0 1697.7 1.6 0.44 0.088 0 0 0 0 0 0 0 0 0 0 0.0 0 1757.1 1.5 3.1 0.62 0 0 0 0 0 0 0 0 0 0 0.0 0 1823.6 1.9 0.45 0.0875 0 0 0 0 0 0 0 0 0 0 0.0 0 1936.7 2.0 0.61 0.12 0 0 0 0 0 0 0 0 0 0 0.0 0 * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * upper limits of resonances note : enter partial width upper limit by chosing non - zero value for pt , where
pt=<theta^2 > for particles and ... note : ...pt=<b > for g - rays [ enter : " upper_limit 0.0 " ] ; for each resonance : # upper limits < # open channels ! ecm decm jr g1 dg1 l1 pt g2 dg2 l2 pt g3 dg3 l3 pt exf int 16.6 6.0 2 1.0e-23 0.0 0 0.0045 0.1 0.001 1 0 0 0 0 0 0.0 0 94.6 7.0 0 5.1e-9 0.0 2 0.0045 0.1 0.001 1 0 0 0 0 0 0.0 0 * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * interference between resonances [ numerical integration only ] note : + for positive , - for negative interference ; + - if interference sign is unknown ecm decm jr g1 dg1 l1 pt g2 dg2 l2 pt g3 dg3 l3 pt exf !+ - 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * comments : 1 . for er=16.6 and 94.6 kev resonances , value of gg=0.1 + -0.001 ev assumed ( not important ) .2 . for er=16.6 kev resonance , value of jp=2+ assumed ( s - wave resonance ) .3 . for er=94.6 kev resonance ,most likely assignment is jp=0 + ( with ex=6235 kev in 22ne as mirror state ) .4 . value of 0.5 assumed for direct capture s - factor fractional uncertainty .broad - resonance tails negligible compared to direct capture contribution ..... .... 22ne(p , g)23na * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * 1 !zproj 10 !ztarget 0 !zexitparticle ( = 0 when only 2 channels open ) 1.0078 !aproj 21.991 !atarget 0 ! aexitparticle ( = 0 when only 2 channels open ) 0.5 ! jproj 0.0 !jtarget 0.0 ! jexitparticle ( = 0 when only 2 channels open ) 8794.11 ! projectile separation energy ( kev ) 0.0 !exit particle separation energy ( = 0 when only 2 channels open ) 1.25 !radius parameter r0 ( fm ) 2 ! gamma - ray channel number ( = 2 if ejectile is a g - ray ; =3 otherwise ) * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * 1.0! minimum energy for numerical integration ( kev ) 5000 !number of random samples ( > 5000 for better statistics ) 0 != 0 for rate output at all temperatures ; = nt for rate output at selected temperatures * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * nonresonant contribution s(kevb ) s'(b ) s''(b / kev ) fracerr cutoff energy ( kev ) 6.2e1 0.0 0.0 0.4 1500.0 0.0 0.0 0.0 0.0 0.0 * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * resonant contribution note : g1 = entrance channel , g2 = exit channel , g3 = spectator channel ! !ecm , exf in ( kev ) ; wg , gx in ( ev ) ! 
!note : if er<0 , theta^2=c2s*theta_sp^2 must be entered instead of entrance channel partial width ecm decm wg dwg jr g1 dg1 l1 g2 dg2 l2 g3 dg3 l3 exf int 35.4 0.7 0 0 0.5 3.1e-15 1.2e-15 0 2.2 1.0 1 0 0 0 0.0 1 150.9 2.0 0 0 3.5 2.3e-9 9.2e-10 3 0.02 0.01 1 0 0 0 0.0 1 417.0 0.8 0.065 0.015 0 0 0 0 0 0 0 0 0 0 0.0 0 458.2 0.8 0.45 0.1 0 0 0 0 0 0 0 0 0 0 0.0 0 601.9 0.3 0.03 0.01 0 0 0 0 0 0 0 0 0 0 0.0 0 610.3 0.3 2.8 0.3 0 0 0 0 0 0 0 0 0 0 0.0 0 631.4 0.4 0.35 0.1 0 0 0 0 0 0 0 0 0 0 0.0 0 693.3 0.7 0.13 0.04 0 0 0 0 0 0 0 0 0 0 0.0 0 813.7 0.2 7.0 2.5 0 0 0 0 0 0 0 0 0 0 0.0 0 857.3 0.5 1.8 0.9 0 0 0 0 0 0 0 0 0 0 0.0 0 861.1 1.0 1.05 0.5 0 0 0 0 0 0 0 0 0 0 0.0 0 879.5 1.0 0.8 0.25 0 0 0 0 0 0 0 0 0 0 0.0 0 888.2 0.3 0.35 0.1 0 0 0 0 0 0 0 0 0 0 0.0 0 906.3 1.0 6.0 2.0 0 0 0 0 0 0 0 0 0 0 0.0 0 937.91 0.07 0.4 0.1 0 0 0 0 0 0 0 0 0 0 0.0 0 960.9 0.5 2.4 0.7 0 0 0 0 0 0 0 0 0 0 0.0 0 1021.0 0.4 0.9 0.2 0 0 0 0 0 0 0 0 0 0 0.0 0 1040.7 1.0 2.15 0.55 0 0 0 0 0 0 0 0 0 0 0.0 0 1055.2 0.5 2.15 0.55 0 0 0 0 0 0 0 0 0 0 0.0 0 1096.1 0.6 1.5 0.4 0 0 0 0 0 0 0 0 0 0 0.0 0 1122.2 0.6 0.6 0.15 0 0 0 0 0 0 0 0 0 0 0.0 0 1208.5 0.6 1.1 0.3 0 0 0 0 0 0 0 0 0 0 0.0 0 1221.9 0.4 10.5 1.0 0 0 0 0 0 0 0 0 0 0 0.0 0 1254.3 0.6 0.2 0.05 0 0 0 0 0 0 0 0 0 0 0.0 0 1275.9 0.6 2.75 0.7 0 0 0 0 0 0 0 0 0 0 0.0 0 1281.1 0.5 1.2 0.3 0 0 0 0 0 0 0 0 0 0 0.0 0 1290.3 0.2 0.8 0.2 0 0 0 0 0 0 0 0 0 0 0.0 0 1319.9 0.5 0.65 0.15 0 0 0 0 0 0 0 0 0 0 0.0 0 1331.0 0.5 1.4 0.35 0 0 0 0 0 0 0 0 0 0 0.0 0 1374.7 0.2 2.7 0.7 0 0 0 0 0 0 0 0 0 0 0.0 0 1436.7 0.3 4.5 1.0 0 0 0 0 0 0 0 0 0 0 0.0 0 1448.8 1.4 1.1 0.3 0 0 0 0 0 0 0 0 0 0 0.0 0 1486.6 0.6 1.4 0.35 0 0 0 0 0 0 0 0 0 0 0.0 0 1523.1 0.6 5.0 1.5 0 0 0 0 0 0 0 0 0 0 0.0 0 1543.7 0.7 1.05 0.25 0 0 0 0 0 0 0 0 0 0 0.0 0 1551.1 0.7 4.5 1.0 0 0 0 0 0 0 0 0 0 0 0.0 0 1558.9 0.7 3.0 0.75 0 0 0 0 0 0 0 0 0 0 0.0 0 1645.6 1.0 6.5 1.5 0 0 0 0 0 0 0 0 0 0 0.0 0 1653.7 1.2 1.75 0.45 0 0 0 0 0 0 0 0 0 0 0.0 0 1683.8 0.7 2.55 0.65 0 0 0 0 0 0 0 0 0 0 0.0 0 1706.8 0.7 3.3 0.85 0 0 0 0 0 0 0 0 0 0 0.0 0 1712.8 0.7 0.5 0.15 0 0 0 0 0 0 0 0 0 0 0.0 0 1724.1 0.7 2.2 0.55 0 0 0 0 0 0 0 0 0 0 0.0 0 1739.1 0.7 0.75 0.2 0 0 0 0 0 0 0 0 0 0 0.0 0 1754.2 0.9 5.5 1.5 0 0 0 0 0 0 0 0 0 0 0.0 0 1779.5 0.8 1.15 0.3 0 0 0 0 0 0 0 0 0 0 0.0 0 1821.8 0.8 3.75 0.95 0 0 0 0 0 0 0 0 0 0 0.0 0 * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * upper limits of resonances note : enter partial width upper limit by chosing non - zero value for pt , where pt=<theta^2 > for particles and ... note : ...pt=<b > for g - rays [ enter : " upper_limit 0.0 " ] ; for each resonance : # upper limits < # open channels ! 
ecm decm jr g1 dg1 l1 pt g2 dg2 l2 pt g3 dg3 l3 pt exf int 27.9 3.0 4.5 5.2e-26 0.0 5 0.0045 0.1 0.01 1 0 0 0 0 0 0.0 0 177.9 2.0 0.5 2.6e-6 0.0 0 0.0045 0.1 0.01 1 0 0 0 0 0 0.0 0 247.9 1.0 3.5 3.3e-8 0.0 4 0.0045 0.04 0.02 1 0 0 0 0 0 0.0 1 277.9 3.0 0.5 2.2e-6 0.0 0 0.0045 0.1 0.01 1 0 0 0 0 0 0.0 0 308.9 3.0 0.5 2.2e-6 0.0 0 0.0045 0.1 0.01 1 0 0 0 0 0 0.0 0 318.9 3.0 0.5 3.0e-6 0.0 0 0.0045 0.1 0.01 1 0 0 0 0 0 0.0 0 352.9 5.0 0.5 6.0e-4 0.0 0 0.0045 0.1 0.01 1 0 0 0 0 0 0.0 0 376.9 3.0 0.5 6.0e-4 0.0 0 0.0045 0.1 0.01 1 0 0 0 0 0 0.0 0 * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * interference between resonances [ numerical integration only ] note : + for positive , - for negative interference ; + - if interference sign is unknown ecm decm jr g1 dg1 l1 pt g2 dg2 l2 pt g3 dg3 l3 pt exf !+ - 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * comments : 1 . information for er>400 kev from endt 1990 ( strengths normalized relative to er=1222 kev ) . 2 . er=178 , 278 - 377 kev : s - wave resonances ( jp=1/2 + ) assumed for upper limit ; value of gg=0.1 + -0.01 ev is a guess ( inconsequential since gp<<gg ) ; gp upper limit values calculated from strength upper limits of goerres et al . 1983 . 3 . er=28 kev : h - wave resonance ( jp=9/2- ) assumed for upper limit ; gg=0.1 + -0.01 ev is a guess ( inconsequential ) . kev : contrary to hale et al .2001 , we adopt c2s=0.0011 ( see hale 's ph.d . thesis ) . er=248 kev : g - wave resonance ( jp=7/2 + ) assumed for upper limit . direct capture s - factor adopted from goerres et al .1983 , with uncertainty estimate from hale et al . 2001 . 7 . levels at ex=8862 , 8894 and 9000 kev ( powers et al .1971 ) have been disregarded . .... .... 22ne(a , g)26 mg * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * 2 !zproj 10 !ztarget 0 !zexitparticle ( = 0 when only 2 channels open ) 4.003 !aproj 21.991 !atarget 1.009 !aexitparticle ( = 0 when only 2 channels open ) 0.0 !jproj 0.0 !jtarget 0.5 ! jexitparticle ( = 0 when only 2 channels open ) 10614.78 ! projectile separation energy ( kev ) 11093.08 ! exit particle separation energy ( = 0 when only 2 channels open ) 1.25 ! radius parameter r0 ( fm ) 2 ! gamma - ray channel number ( = 2 if ejectile is a g - ray ; =3 otherwise ) * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * 1.0 ! minimum energy for numerical integration ( kev ) 5000 !
number of random samples ( > 5000 for better statistics ) 0 != 0 for rate output at all temperatures ; = nt for rate output at selected temperatures * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * nonresonant contribution s(kevb ) s'(b ) s''(b / kev ) fracerr cutoff energy ( kev ) 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * resonant contribution note : g1 = entrance channel , g2 = exit channel , g3 = spectator channel ! !ecm , exf in ( kev ) ; wg , gx in ( ev ) ! !note : if er<0 , theta^2=c2s*theta_sp^2 must be entered instead of entrance channel partial width ecm decm wg dwg j g1 dg1 l1 g2 dg2 l2 g3 dg3 l3 exf int 78.37 1.7 0 0 4 1.5e-46 1.2e-46 4 3.0 1.5 1 0 0 0 0.0 0 703.78 2.11 0 0 2 7.2e-6 4.4e-7 2 3.0 1.5 1 2.5e2 1.7e2 1 0.0 1 826.04 0.19 0 0 4 3.78e-6 4.44e-7 4 3.0 1.5 1 1.47e3 8.0e1 2 0.0 1 850.44 0.21 0 0 5 4.36e-6 9.09e-7 5 3.0 1.5 1 6.55e3 9.0e1 3 0.0 1 893.31 0.90 0 0 1 1.17e-4 2.0e-5 1 3.0 1.5 1 1.27e4 2.5e3 1 0.0 1 911.16 1.69 0 0 1 2.77e-4 2.33e-5 1 3.0 1.5 1 1.80e3 9.0e2 1 0.0 1 1015.22 1.69 0 0 1 2.83e-3 3.33e-4 1 3.0 1.5 1 1.35e4 1.7e3 1 0.0 1 1133.66 8.46 0 0 1 2.0e-2 3.0e-3 1 3.0 1.5 1 6.35e4 8.5e3 1 0.0 1 1171.74 3.38 0 0 1 1.67e-2 2.33e-3 1 3.0 1.5 1 2.45e4 2.4e3 1 0.0 1 1213.0 2.0 0 0 2 1.84e-1 1.03e-1 2 3.0 1.5 1 1.10e3 2.5e2 0 0.0 1 1280.0 4.0 2.0e-3 2.0e-4 1 0 0 0 0 0 0 0 0 1 0.0 0 1297.0 3.0 0 0 1 1.89 7.88e-1 1 3.0 1.5 1 5.0e3 2.0e3 1 0.0 1 1338.0 3.0 0 0 3 6.48e-1 3.33e-1 3 3.0 1.5 1 4.0e3 2.0e3 0 0.0 1 1437.0 3.0 0 0 3 8.58e-1 5.81e-1 3 3.0 1.5 1 3.0e3 2.0e3 0 0.0 1 1525.0 3.0 0 0 1 1.67 4.01e-1 1 3.0 1.5 1 1.5e4 2.0e3 1 0.0 1 1569.0 7.0 0 0 0 1.21e1 2.86 0 3.0 1.5 1 3.3e4 5.0e3 2 0.0 1 1658.0 7.0 0 0 0 1.63e2 3.49e1 0 3.0 1.5 1 5.5e4 1.0e4 2 0.0 1 1728.0 4.0 0 0 0 6.30e2 1.22e2 0 3.0 1.5 1 3.5e4 5.0e3 2 0.0 1 * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * upper limits of resonances note : enter partial width upper limit by chosing non - zero value for pt , where pt=<theta^2 > for particles and ... note : ...pt=<b > for g - rays [ enter : " upper_limit 0.0 " ] ; for each resonance : # upper limits < # open channels ! 
ecm decm jr g1 dg1 l1 pt g2 dg2 l2 pt g3 dg3 l3 pt exf int
191.08 0.15 1 1.25e-23 0 1 0.01 3.0 1.5 1 0 0 0 0 0 0.0 0
334.31 0.1 1 1.20e-9 0 1 0.01 3.0 1.5 1 0 0 0 0 0 0.0 0
328.21 2.0 7 3.70e-23 0 7 0.01 3.0 1.5 1 0 0 0 0 0 0.0 0
497.38 0.08 2 9.28e-12 0 2 0.01 1.73 3.1e-2 1 0 2.58e3 2.40e1 0 0 0.0 1
548.16 0.10 2 8.74e-8 0 2 0.01 4.56 2.9e-1 1 0 4.64e3 1.00e2 1 0 0.0 1
556.28 0.16 2 1.25e-7 0 2 0.01 3.0 1.5 1 0 1.44 1.6e-1 2 0 0.0 0
568.27 0.19 1 2.08e-7 0 1 0.01 3.0 1.5 1 0 5.4e-1 8.8e-2 1 0 0.0 0
628.43 0.10 2 9.46e-7 0 2 0.01 7.42 6.0e-1 1 0 4.51e3 1.07e2 1 0 0.0 1
659.32 0.12 2 9.97e-7 0 2 0.01 3.24 3.5e-1 1 0 5.4e2 5.4e1 0 0 0.0 1
665.11 0.11 4 9.16e-8 0 4 0.01 5.9e-1 2.4e-1 1 0 1.51e3 3.4e1 1 0 0.0 1
670.81 0.13 1 1.69e-6 0 1 0.01 7.9e-1 4.6e-1 1 0 1.26e3 1.0e2 1 0 0.0 1
671.59 0.12 2 1.02e-6 0 2 0.01 4.26 6.0e-1 1 0 1.28e1 6.0 2 0 0.0 1
674.36 0.25 2 1.02e-6 0 2 0.01 3.0 1.5 1 0 1.54 4.6e-1 1 0 0.0 0
681.21 0.13 3 7.37e-7 0 3 0.01 3.31 7.3e-1 1 0 8.06e3 1.2e2 1 0 0.0 1
695.95 0.35 1 1.76e-6 0 1 0.01 3.0 1.5 1 0 1.12 4.0e-1 1 0 0.0 0
711.34 0.54 1 1.80e-6 0 1 0.01 3.0 1.5 1 0 6.0e-1 3.2e-1 1 0 0.0 0
713.40 0.14 1 1.81e-6 0 1 0.01 3.63 4.7e-1 1 0 4.2e2 8.6e1 1 0 0.0 1
714.34 0.55 1 1.81e-6 0 1 0.01 3.0 1.5 1 0 2.8 1.0 1 0 0.0 0
722.12 0.56 1 1.83e-6 0 1 0.01 3.0 1.5 1 0 1.42 5.6e-1 1 0 0.0 0
729.15 0.15 2 1.11e-6 0 2 0.01 1.18 2.7e-1 1 0 1.53e2 4.2e1 1 0 0.0 1
730.03 0.16 4 6.16e-7 0 4 0.01 1.82 3.8e-1 1 0 4.13e3 1.9e2 3 0 0.0 1
777.78 0.16 5 1.51e-7 0 5 0.01 3.0 1.5 1 0 2.9e2 1.9e1 2 0 0.0 1
****************************************
interference between resonances [numerical integration only]
note: + for positive, - for negative interference; +- if interference sign is unknown
ecm decm jr g1 dg1 l1 pt g2 dg2 l2 pt g3 dg3 l3 pt exf int
!+-
0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 0
0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 0
****************************************
comments:
1. including results from longland et al. 2009.
2. the 704 kev resonance is considered to be the same as the one seen in the 22ne(a,n)25mg reaction.
.... ....
22ne(a,n)25mg
****************************************
2 ! zproj
10 ! ztarget
0 ! zexitparticle (=0 when only 2 channels open)
4.003 ! aproj
21.991 ! atarget
1.009 ! aexitparticle (=0 when only 2 channels open)
0.0 ! jproj
0.0 ! jtarget
0.5 ! jexitparticle (=0 when only 2 channels open)
10614.78 ! projectile separation energy (kev)
11093.08 ! exit particle separation energy (=0 when only 2 channels open)
1.25 ! radius parameter r0 (fm)
3 ! gamma-ray channel number (=2 if ejectile is a g-ray; =3 otherwise)
****************************************
1.0 ! minimum energy for numerical integration (kev)
5000 ! number of random samples (>5000 for better statistics)
0 ! =0 for rate output at all temperatures; =nt for rate output at selected temperatures
****************************************
nonresonant contribution
s(kevb) s'(b) s''(b/kev) fracerr cutoff energy (kev)
0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0
****************************************
resonant contribution
note: g1 = entrance channel, g2 = exit channel, g3 = spectator channel!
! ecm, exf in (kev); wg, gx in (ev)!
! note: if er<0, theta^2=c2s*theta_sp^2 must be entered instead of entrance channel partial width
ecm decm wg dwg j g1 dg1 l1 g2 dg2 l2 g3 dg3 l3 exf int
703.78 2.11 0 0 2 2.36e-5 2.2e-6 2 2.5e2 1.7e2 2 3 1.5 1 0.0 1
826.04 0.19 0 0 4 3.78e-6 4.4e-7 4 1.47e3 8.0e1 4 3 1.5 1 0.0 1
850.44 0.21 0 0 5 4.36e-6 9.1e-7 5 6.55e3 9.0e1 5 3 1.5 1 0.0 1
893.31 0.90 0 0 1 1.17e-4 2.0e-5 1 1.27e4 2.5e3 1 3 1.5 1 0.0 1
911.16 1.69 0 0 1 2.77e-4 2.3e-5 1 1.80e3 9.0e2 1 3 1.5 1 0.0 1
1015.22 1.69 0 0 1 2.83e-3 3.3e-4 1 1.35e4 1.7e3 1 3 1.5 1 0.0 1
1133.66 8.46 0 0 1 2.0e-2 3.0e-3 1 6.35e4 8.5e3 1 3 1.5 1 0.0 1
1171.74 3.38 0 0 1 1.67e-2 2.3e-3 1 2.45e4 2.4e3 1 3 1.5 1 0.0 1
1213.19 2.34 0 0 2 2.13e-1 8.4e-3 2 1.10e3 2.5e2 2 3 1.5 1 0.0 1
1247.88 2.54 0 0 1 1.5e-2 1.0e-2 1 2.45e4 3.4e3 1 3 1.5 1 0.0 1
1264.80 2.54 3.9e-1 5.7e-2 1 0 0 1 0 0 1 0 0 1 0.0 0
1275.80 2.54 5.6e-1 6.0e-2 1 0 0 1 0 0 1 0 0 1 0.0 0
1295.25 2.54 1.5 1.6e-1 1 0 0 1 0 0 1 0 0 1 0.0 0
1336.71 2.54 2.9 3.0e-1 1 0 0 1 0 0 1 0 0 1 0.0 0
1437.38 2.54 6.0 7.7e-1 1 0 0 1 0 0 1 0 0 1 0.0 0
1499.99 4.23 1.0 2.4e-1 1 0 0 1 0 0 1 0 0 1 0.0 0
1526.22 2.54 3.0 3.4e-1 1 0 0 1 0 0 1 0 0 1 0.0 0
1569.36 6.77 9.0e-1 2.1e-1 1 0 0 1 0 0 1 0 0 1 0.0 0
1649.74 8.46 3.1e+1 8.5 1 0 0 1 0 0 1 0 0 1 0.0 0
1730.95 6.77 2.0e+2 3.3e+1 1 0 0 1 0 0 1 0 0 1 0.0 0
1820.63 8.46 2.8e+1 7.0 1 0 0 1 0 0 1 0 0 1 0.0 0
1936.54 12.69 1.2e+2 4.5e+1 1 0 0 1 0 0 1 0 0 1 0.0 0
****************************************
upper limits of resonances
note: enter partial width upper limit by choosing non-zero value for pt, where pt=<theta^2> for particles and...
note: ...pt=<b> for g-rays [enter: "upper_limit 0.0"]; for each resonance: # upper limits < # open channels!
ecm decm jr g1 dg1 l1 pt g2 dg2 l2 pt g3 dg3 l3 pt exf int
497.38 0.08 2 9.28e-12 0 2 0.01 2.58e3 2.4e1 0 0 1.73 3e-2 1 0 0.0 1
548.16 0.10 2 3.80e-8 0 2 0.01 4.64e3 1.0e2 1 0 4.56 0.29 1 0 0.0 1
556.28 0.16 2 1.50e-8 0 2 0.01 1.44 1.6e-1 2 0 3.0 1.5 1 0 0.0 0
568.27 0.19 1 2.08e-7 0 1 0.01 5.40e-1 8.8e-2 1 0 3.0 1.5 1 0 0.0 0
628.43 0.10 2 2.40e-8 0 2 0.01 4.51e3 1.1e2 1 0 7.42 0.60 1 0 0.0 1
659.32 0.12 2 2.20e-8 0 2 0.01 5.40e2 5.4e1 0 0 3.24 0.35 1 0 0.0 1
665.11 0.11 4 1.44e-8 0 4 0.01 1.51e3 3.4e1 1 0 5.9e-1 2.4e-1 1 0 0.0 1
670.81 0.13 1 2.57e-8 0 1 0.01 1.26e3 1.0e2 1 0 7.9e-1 4.6e-1 1 0 0.0 1
671.59 0.12 2 1.54e-8 0 2 0.01 1.28e1 6.0 2 0 4.26 0.60 1 0 0.0 1
674.36 0.25 2 1.54e-8 0 2 0.01 1.54 0.46 1 0 3.0 1.5 1 0 0.0 0
681.21 0.13 3 1.43e-8 0 3 0.01 8.06e3 1.2e2 1 0 3.31 0.73 1 0 0.0 1
695.95 0.35 1 5.34e-9 0 1 0.01 1.12 0.40 1 0 3.0 1.5 1 0 0.0 0
711.34 0.54 1 4.11e-8 0 1 0.01 6.0e-1 3.2e-1 1 0 3.0 1.5 1 0 0.0 0
713.40 0.14 1 1.67e-7 0 1 0.01 4.24e2 8.6e1 1 0 3.63 0.47 1 0 0.0 1
714.34 0.55 1 4.12e-8 0 1 0.01 2.8 1.0 1 0 3.0 1.5 1 0 0.0 0
722.12 0.56 1 4.17e-8 0 1 0.01 1.42 0.56 1 0 3.0 1.5 1 0 0.0 0
729.15 0.15 2 4.00e-8 0 2 0.01 1.53e2 4.2e1 1 0 1.18 0.27 1 0 0.0 1
730.03 0.16 4 4.68e-9 0 4 0.01 4.13e3 1.9e2 3 0 1.82 0.38 1 0 0.0 1
777.78 0.16 5 3.34e-9 0 5 0.01 2.90e2 1.9e1 2 0 3.0 1.5 1 0 0.0 1
****************************************
interference between resonances [numerical integration only]
note: + for positive, - for negative interference; +- if interference sign is unknown
ecm decm jr g1 dg1 l1 pt g2 dg2 l2 pt g3 dg3 l3 pt exf int
!+-
0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 0
0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 0
****************************************
comments:
1. including results from longland et al. 2009.
2. the 704 kev resonance is considered to be the same as the one seen in the 22ne(a,g)26mg reaction.
.... ....
21na(p,g)22mg
****************************************
1 ! zproj
11 ! ztarget
0 ! zexitparticle (=0 when only 2 channels open)
1.0078 ! aproj
20.997 ! atarget
0 ! aexitparticle (=0 when only 2 channels open)
0.5 ! jproj
1.5 ! jtarget
0.0 ! jexitparticle (=0 when only 2 channels open)
5504.2 ! projectile separation energy (kev)
0.0 ! exit particle separation energy (=0 when only 2 channels open)
1.25 ! radius parameter r0 (fm)
2 ! gamma-ray channel number (=2 if ejectile is a g-ray; =3 otherwise)
****************************************
1.0 ! minimum energy for numerical integration (kev)
20000 ! number of random samples (>5000 for better statistics)
0 ! =0 for rate output at all temperatures; =nt for rate output at selected temperatures
****************************************
nonresonant contribution
s(kevb) s'(b) s''(b/kev) fracerr cutoff energy (kev)
7.9 -3.39e-3 3.6e-6 0.4 1300.0
0.0 0.0 0.0 0.0 0.0
****************************************
resonant contribution
note: g1 = entrance channel, g2 = exit channel, g3 = spectator channel!
! ecm, exf in (kev); wg, gx in (ev)!
! note: if er<0, theta^2=c2s*theta_sp^2 must be entered instead of entrance channel partial width
ecm decm wg dwg jr g1 dg1 l1 g2 dg2 l2 g3 dg3 l3 exf int
205.7 0.5 1.0e-3 2.0e-4 0 0 0 0 0 0 0 0 0 0 0 0
454.0 5.0 8.6e-4 2.9e-4 0 0 0 0 0 0 0 0 0 0 0 0
541.4 2.9 1.2e-2 1.4e-3 0 0 0 0 0 0 0 0 0 0 0 0
738.4 1.0 2.2e-1 2.5e-2 0 0 0 0 0 0 0 0 0 0 0 0
821.3 0.9 5.6e-1 7.7e-2 0 0 0 0 0 0 0 0 0 0 0 0
1101.1 2.5 3.7e-1 6.2e-2 0 0 0 0 0 0 0 0 0 0 0 0
****************************************
upper limits of resonances
note: enter partial width upper limit by choosing non-zero value for pt, where pt=<theta^2> for particles and...
note: ...pt=<b> for g-rays [enter: "upper_limit 0.0"]; for each resonance: # upper limits < # open channels!
ecm decm jr g1 dg1 l1 pt g2 dg2 l2 pt g3 dg3 l3 pt exf int
!
0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 0
****************************************
interference between resonances [numerical integration only]
note: + for positive, - for negative interference; +- if interference sign is unknown
ecm decm jr g1 dg1 l1 pt g2 dg2 l2 pt g3 dg3 l3 pt exf
!+-
0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 0.0 0 0 0.0
0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 0.0 0 0 0.0
****************************************
.... ....
22na(p,g)23mg
****************************************
1 ! zproj
11 ! ztarget
0 ! zexitparticle (=0 when only 2 channels open)
1.0078 ! aproj
21.9944 ! atarget
0 ! aexitparticle (=0 when only 2 channels open)
0.5 ! jproj
3.0 ! jtarget
0.0 ! jexitparticle (=0 when only 2 channels open)
7580.3 ! projectile separation energy
0.0 ! exit particle separation energy (=0 when only 2 channels open)
1.25 ! radius parameter r0 (fm)
2 ! gamma-ray channel number (=2 if ejectile is a g-ray; =3 otherwise)
****************************************
1.0 ! minimum energy for numerical integration (kev)
5000 ! number of random samples (>5000 for better statistics)
0 ! =0 for rate output at all temperatures; =nt for rate output at selected temperatures
****************************************
nonresonant contribution
s(kevb) s'(b) s''(b/kev) fracerr cutoff energy (kev)
1.8e1 0.0 0.0 0.4 1500.0
0.0 0.0 0.0 0.0 0.0
****************************************
resonant contribution
note: g1 = entrance channel, g2 = exit channel, g3 = spectator channel!
! ecm, exf in (kev); wg, gx in (ev)!
! note: if er<0, theta^2=c2s*theta_sp^2 must be entered instead of entrance channel partial width
ecm decm wg dwg jr g1 dg1 l1 g2 dg2 l2 g3 dg3 l3 exf int
43.1 1.7 0 0 4.5 1.0e-16 0.4e-16 2 1.6e-1 0.8e-1 1 0 0 0 0.0 1
66.6 3.0 0 0 1.5 1.8e-12 0.72e-12 2 2.0e-2 1.0e-2 1 0 0 0 0.0 0
204.3 1.8 1.4e-3 0.3e-3 0 0 0 0 0 0 0 0 0 0 0.0 0
271.2 2.0 1.6e-2 0.3e-2 0 0 0 0 0 0 0 0 0 0 0.0 0
435.0 2.2 6.8e-2 2.0e-2 0 0 0 0 0 0 0 0 0 0 0.0 0
480.7 2.4 3.7e-2 1.2e-2 0 0 0 0 0 0 0 0 0 0 0.0 0
579.4 2.4 2.4e-1 0.3e-1 0 0 0 0 0 0 0 0 0 0 0.0 0
706.7 2.4 3.6e-1 0.6e-1 0 0 0 0 0 0 0 0 0 0 0.0 0
760.7 2.4 9.5e-2 0.3e-2 0 0 0 0 0 0 0 0 0 0 0.0 0
****************************************
upper limits of resonances
note: enter partial width upper limit by choosing non-zero value for pt, where pt=<theta^2> for particles and...
note: ...pt=<b> for g-rays [enter: "upper_limit 0.0"]; for each resonance: # upper limits < # open channels!
ecm decm jr g1 dg1 l1 pt g2 dg2 l2 pt g3 dg3 l3 pt exf int
188.9 1.7 2.5 0.0072 0 0 0.0045 0.33 0.16 1 0 0 0 0 0 2715.0 0
221.7 2.4 2.5 0.0061 0 0 0.0045 0.2 0.1 1 0 0 0 0 0 0.0 0
493.7 6.2 2.5 0.018 0 0 0.0045 0.2 0.1 1 0 0 0 0 0 0.0 0
613.5 8.0 2.5 0.021 0 0 0.0045 0.2 0.1 1 0 0 0 0 0 0.0 0
****************************************
interference between resonances [numerical integration only]
note: + for positive, - for negative interference; +- if interference sign is unknown
ecm decm jr g1 dg1 l1 pt g2 dg2 l2 pt g3 dg3 l3 pt exf
!+-
0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 0.0 0 0 0.0
0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 0.0 0 0 0.0
****************************************
comments:
1. gg=2.0e-2 ev for er=66.6 kev resonance is a guess (not important since gp<<gg).
2. gg=2.0e-1 ev for er=222, 494 and 614 kev resonances is a guess; we assume for these undetected resonances that gp<<gg (assumption most likely inconsequential for total rates).
3. spin and parity of er=494 and 614 kev resonances unknown; for upper limit contributions we assume s-waves (jp=5/2+).
.... ....
23na(p,g)24mg
****************************************
1 ! zproj
11 ! ztarget
2 ! zexitparticle (=0 when only 2 channels open)
1.0078 ! aproj
22.9897 ! atarget
4.0026 ! aexitparticle (=0 when only 2 channels open)
0.5 ! jproj
1.5 ! jtarget
0.0 ! jexitparticle (=0 when only 2 channels open)
11692.68 ! projectile separation energy (kev)
9316.55 ! exit particle separation energy (=0 when only 2 channels open)
1.25 ! radius parameter r0 (fm)
2 ! gamma-ray channel number (=2 if ejectile is a g-ray; =3 otherwise)
****************************************
1.0 ! minimum energy for numerical integration (kev)
8000 ! number of random samples (>5000 for better statistics)
0 ! =0 for rate output at all temperatures; =nt for rate output at selected temperatures
****************************************
nonresonant contribution
s(kevb) s'(b) s''(b/kev) fracerr cutoff energy (kev)
2.48e1 -7.31e-3 6.42e-6 0.4 1100.0
0.0 0.0 0.0 0.0 0.0
****************************************
resonant contribution
note: g1 = entrance channel, g2 = exit channel, g3 = spectator channel!
! ecm, exf in (kev); wg, gx in (ev)!
! note: if er<0, theta^2=c2s*theta_sp^2 must be entered instead of entrance channel partial width
ecm decm wg dwg jr g1 dg1 l1 g2 dg2 l2 g3 dg3 l3 exf int
5.5 1.0 0 0 4 5.7e-58 2.3e-58 2 0.13 0.03 1 1.5 0.6 4 0.0 1
169.7 0.9 0 0 1 6.1e-5 1.1e-5 1 0.33 0.07 1 8.0e3 2.0e3 1 0.0 1
217.0 1.8 2.7e-9 1.4e-9 0 0 0 0 0 0 0 0 0 0 0.0 0
240.4 0.2 5.3e-4 1.8e-4 0 0 0 0 0 0 0 0 0 0 0.0 0
295.9 0.06 0.105 0.019 0 0 0 0 0 0 0 0 0 0 0.0 0
358.7 0.4 1.4e-3 3.8e-4 0 0 0 0 0 0 0 0 0 0 0.0 0
490.8 0.1 9.13e-2 1.25e-2 0 0 0 0 0 0 0 0 0 0 0.0 0
567.3 0.4 0.237 0.037 0 0 0 0 0 0 0 0 0 0 0.0 0
648.5 0.4 0.637 0.112 0 0 0 0 0 0 0 0 0 0 0.0 0
708.1 0.3 0.12 0.02 0 0 0 0 0 0 0 0 0 0 0.0 0
712.8 0.3 0.175 0.037 0 0 0 0 0 0 0 0 0 0 0.0 0
761.9 1.0 2.4e-3 1e-3 0 0 0 0 0 0 0 0 0 0 0.0 0
836.1 0.6 0.912 0.19 0 0 0 0 0 0 0 0 0 0 0.0 0
946.3 0.1 0.237 0.05 0 0 0 0 0 0 0 0 0 0 0.0 0
966.7 0.1 0.0575 0.0212 0 0 0 0 0 0 0 0 0 0 0.0 0
968.5 0.4 4.62e-2 1.9e-2 0 0 0 0 0 0 0 0 0 0 0.0 0
977.7 0.4 1.625 0.375 0 0 0 0 0 0 0 0 0 0 0.0 0
1040.9 0.6 6.25e-3 2.5e-3 0 0 0 0 0 0 0 0 0 0 0.0 0
1046.7 0.7 0.03 6.25e-3 0 0 0 0 0 0 0 0 0 0 0.0 0
1085.8 1.0 0.0413 0.0175 0 0 0 0 0 0 0 0 0 0 0.0 0
1115.5 0.5 0.225 0.05 0 0 0 0 0 0 0 0 0 0 0.0 0
1125.5 0.2 1.12 0.38 0 0 0 0 0 0 0 0 0 0 0.0 0
1154.6 0.5 0.175 0.038 0 0 0 0 0 0 0 0 0 0 0.0 0
1160.0 0.5 0.0125 6.25e-3 0 0 0 0 0 0 0 0 0 0 0.0 0
1202.8 0.5 4.75e-2 1.25e-2 0 0 0 0 0 0 0 0 0 0 0.0 0
1229.4 0.4 1.0 0.25 0 0 0 0 0 0 0 0 0 0 0.0 0
1263.2 0.15 4.25 0.75 0 0 0 0 0 0 0 0 0 0 0.0 0
1271.6 0.5 0.15 0.04 0 0 0 0 0 0 0 0 0 0 0.0 0
1305.4 0.5 0.025 0.01 0 0 0 0 0 0 0 0 0 0 0.0 0
1337.6 0.1 2.0 0.5 0 0 0 0 0 0 0 0 0 0 0.0 0
1357.8 0.07 3.375 0.75 0 0 0 0 0 0 0 0 0 0 0.0 0
1396.6 0.4 1.25 0.375 0 0 0 0 0 0 0 0 0 0 0.0 0
1453.8 1.0 0.01 3.75e-3 0 0 0 0 0 0 0 0 0 0 0.0 0
1492.3 0.8 1.87e-2 7.5e-3 0 0 0 0 0 0 0 0 0 0 0.0 0
1569.8 1.0 0.412 0.15 0 0 0 0 0 0 0 0 0 0 0.0 0
1576.6 0.7 1.37e-2 6.25e-3 0 0 0 0 0 0 0 0 0 0 0.0 0
1583.4 1.0 0.03 0.0125 0 0 0 0 0 0 0 0 0 0 0.0 0
1653.6 0.6 1.0 0.25 0 0 0 0 0 0 0 0 0 0 0.0 0
1674.8 0.8 0.175 0.0625 0 0 0 0 0 0 0 0 0 0 0.0 0
1727.2 0.8 0.175 0.0875 0 0 0 0 0 0 0 0 0 0 0.0 0
1732.6 1.2 1.25e-2 5e-3 0 0 0 0 0 0 0 0 0 0 0.0 0
1754.7 0.8 0.175 0.05 0 0 0 0 0 0 0 0 0 0 0.0 0
1782.8 0.8 0.0437 0.015 0 0 0 0 0 0 0 0 0 0 0.0 0
1850.3 0.8 0.375 0.125 0 0 0 0 0 0 0 0 0 0 0.0 0
2079.6 3.0 6.25e-2 0.025 0 0 0 0 0 0 0 0 0 0 0.0 0
2108.3 3.0 0.3 0.075 0 0 0 0 0 0 0 0 0 0 0.0 0
2130.4 3.0 0.625 0.25 0 0 0 0 0 0 0 0 0 0 0.0 0
2149.5 3.0 6.25e-2 2.5e-2 0 0 0 0 0 0 0 0 0 0 0.0 0
2188.8 3.0 0.3 0.075 0 0 0 0 0 0 0 0 0 0 0.0 0
2201.3 3.0 0.312 0.15 0 0 0 0 0 0 0 0 0 0 0.0 0
2242.5 3.0 0.425 0.15 0 0 0 0 0 0 0 0 0 0 0.0 0
2255.9 3.0 0.7 0.19 0 0 0 0 0 0 0 0 0 0 0.0 0
****************************************
upper limits of resonances
note: enter partial width upper limit by choosing non-zero value for pt, where pt=<theta^2> for particles and...
note: ...pt=<b> for g-rays [enter: "upper_limit 0.0"]; for each resonance: # upper limits < # open channels!
ecm decm jr g1 dg1 l1 pt g2 dg2 l2 pt g3 dg3 l3 pt exf int
37.1 1.7 0 1.1e-19 0.0 2 0.0045 0.37 0.06 1 0 1.0e4 0.2e4 0 0 0.0 1
138.0 1.5 0 1.2e-6 0.0 1 0.0045 0.3 0.15 1 0 0 0 0 0 0.0 0
167.3 3.0 6 2.3e-8 0.0 4 0.0045 7.3e-3 3.6e-3 1 0 0 0 0 0 0.0 0
****************************************
interference between resonances [numerical integration only]
note: + for positive, - for negative interference; +- if interference sign is unknown
ecm decm jr g1 dg1 l1 pt g2 dg2 l2 pt g3 dg3 l3 pt exf
!+-
0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 0.0 0 0 0.0
0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 0.0 0 0 0.0
****************************************
comments:
1. for er=167.3 kev resonance: jp=6+ assumed for upper limit; since gg/g>0.95 we assume gg>>ga and g = gg; also no resonance is observed in 20ne+a (a non-zero value of ga would make the rate contribution even smaller).
2. for er=217.0 kev, jp=1- assumed.
3. state corresponding to er=138.0 kev has been observed in 23na(3he,d) but strength may be caused by non-direct contribution; direct (p,g) search excludes l=0 (s-wave); assume p-wave (jp=0-) for upper limit. since gg/g=0.95+-0.04, we assume g = gg and gg>>ga; value of gg=0.3 ev is a guess (inconsequential).
4. contribution of subthreshold resonances negligible compared to direct capture.
.... ....
23na(p,a)20ne
****************************************
1 ! zproj
11 ! ztarget
2 ! zexitparticle (=0 when only 2 channels open)
1.0078 ! aproj
22.9897 ! atarget
4.0026 ! aexitparticle (=0 when only 2 channels open)
0.5 ! jproj
1.5 ! jtarget
0.0 ! jexitparticle (=0 when only 2 channels open)
11692.68 ! projectile separation energy (kev)
9316.55 ! exit particle separation energy (=0 when only 2 channels open)
1.25 ! radius parameter r0 (fm)
3 ! gamma-ray channel number (=2 if ejectile is a g-ray; =3 otherwise)
****************************************
1.0 ! minimum energy for numerical integration (kev)
8000 ! number of random samples (>5000 for better statistics)
0 ! =0 for rate output at all temperatures; =nt for rate output at selected temperatures
****************************************
nonresonant contribution
s(kevb) s'(b) s''(b/kev) fracerr cutoff energy (kev)
0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0
****************************************
resonant contribution
note: g1 = entrance channel, g2 = exit channel, g3 = spectator channel!
! ecm, exf in (kev); wg, gx in (ev)!
! note: if er<0, theta^2=c2s*theta_sp^2 must be entered instead of entrance channel partial width
ecm decm wg dwg jr g1 dg1 l1 g2 dg2 l2 g3 dg3 l3 exf int
-303.0 3.0 0 0 1 0.058 0.023 1 5.0e2 2.5e2 1 0.1 0.001 1 0.0 1
-174.0 2.0 0 0 2 0.0138 0.0055 0 5.0e2 2.5e2 2 0.16 0.08 1 0.0 1
5.5 1.0 0 0 4 5.7e-58 2.3e-58 2 1.5 0.6 4 0.13 0.03 1 0.0 1
169.7 0.9 0 0 1 6.1e-5 1.1e-5 1 8.0e3 2.0e3 1 0.33 0.07 1 0.0 1
217.0 1.8 5.4e-5 1.3e-5 0 0 0 0 0 0 0 0 0 0 0.0 0
274.0 0.5 0 0 2 5.6e-2 1.1e-2 0 2.3e3 0.5e3 2 0.1 0.001 1 0.0 1
295.9 0.06 1.03e-2 2.6e-3 0 0 0 0 0 0 0 0 0 0 0.0 0
324.5 0.6 7.16e-2 0.29e-2 0 0 0 0 0 0 0 0 0 0 0.0 0
358.7 0.4 4.1e-3 1.0e-3 0 0 0 0 0 0 0 0 0 0 0.0 0
426.5 1.0 5.7e-3 1.4e-3 0 0 0 0 0 0 0 0 0 0 0.0 0
567.0 0.4 38 3 0 0 0 0 0 0 0 0 0 0 0.0 0
708.1 0.3 7.68e-2 1.9e-2 0 0 0 0 0 0 0 0 0 0 0.0 0
712.8 0.3 7.4 1.3 0 0 0 0 0 0 0 0 0 0 0.0 0
761.9 1.0 3.3 0.5 0 0 0 0 0 0 0 0 0 0 0.0 0
779.1 1.0 1.8 0.3 0 0 0 0 0 0 0 0 0 0 0.0 0
809.8 5.0 0.51 0.10 0 0 0 0 0 0 0 0 0 0 0.0 0
880.7 1.0 63 26 0 0 0 0 0 0 0 0 0 0 0.0 0
968.5 0.4 46 14 0 0 0 0 0 0 0 0 0 0 0.0 0
977.7 0.4 0.21 0.051 0 0 0 0 0 0 0 0 0 0 0.0 0
1046.7 0.7 441 51 0 0 0 0 0 0 0 0 0 0 0.0 0
1085.8 1.0 400 41 0 0 0 0 0 0 0 0 0 0 0.0 0
1115.5 0.5 121 10 0 0 0 0 0 0 0 0 0 0 0.0 0
1154.6 0.5 0.15 0.04 0 0 0 0 0 0 0 0 0 0 0.0 0
1160.0 0.5 9.7 1.0 0 0 0 0 0 0 0 0 0 0 0.0 0
1202.8 0.5 9.2 1.0 0 0 0 0 0 0 0 0 0 0 0.0 0
1229.4 0.4 359 41 0 0 0 0 0 0 0 0 0 0 0.0 0
1263.2 0.15 0.51 0.13 0 0 0 0 0 0 0 0 0 0 0.0 0
1271.6 0.5 36 4 0 0 0 0 0 0 0 0 0 0 0.0 0
1337.6 0.1 0.3 0.08 0 0 0 0 0 0 0 0 0 0 0.0 0
1357.8 0.07 12.5 3.1 0 0 0 0 0 0 0 0 0 0 0.0 0
1396.6 0.4 154 21 0 0 0 0 0 0 0 0 0 0 0.0 0
1446.1 1.0 19.5 4.9 0 0 0 0 0 0 0 0 0 0 0.0 0
1468.3 0.7 0.41 0.10 0 0 0 0 0 0 0 0 0 0 0.0 0
1492.3 0.8 29 7 0 0 0 0 0 0 0 0 0 0 0.0 0
1569.8 1.0 1353 338 0 0 0 0 0 0 0 0 0 0 0.0 0
1641.6 3.0 359 90 0 0 0 0 0 0 0 0 0 0 0.0 0
1653.6 0.6 1.02 0.25 0 0 0 0 0 0 0 0 0 0 0.0 0
1662.9 0.8 2.4 0.6 0 0 0 0 0 0 0 0 0 0 0.0 0
1727.2 0.8 25.6 6.4 0 0 0 0 0 0 0 0 0 0 0.0 0
1732.6 1.2 441 41 0 0 0 0 0 0 0 0 0 0 0.0 0
1754.7 0.8 0.61 0.15 0 0 0 0 0 0 0 0 0 0 0.0 0
1760.9 0.8 7.2 1.8 0 0 0 0 0 0 0 0 0 0 0.0 0
1790.8 0.8 0.31 0.08 0 0 0 0 0 0 0 0 0 0 0.0 0
1850.3 0.8 2.2 0.54 0 0 0 0 0 0 0 0 0 0 0.0 0
1940.5 1.1 3.4 0.8 0 0 0 0 0 0 0 0 0 0 0.0 0
1985.4 0.9 308 44 0 0 0 0 0 0 0 0 0 0 0.0 0
2032.6 1.0 174 20 0 0 0 0 0 0 0 0 0 0 0.0 0
2079.6 3.0 502 51 0 0 0 0 0 0 0 0 0 0 0.0 0
2193.6 3.0 4817 1024 0 0 0 0 0 0 0 0 0 0 0.0 0
2255.9 3.0 461 51 0 0 0 0 0 0 0 0 0 0 0.0 0
2327.8 3.0 635 61 0 0 0 0 0 0 0 0 0 0 0.0 0
****************************************
upper limits of resonances
note: enter partial width upper limit by choosing non-zero value for pt, where pt=<theta^2> for particles and...
note: ...pt=<b> for g-rays [enter: "upper_limit 0.0"]; for each resonance: # upper limits < # open channels!
ecm decm jr g1 dg1 l1 pt g2 dg2 l2 pt g3 dg3 l3 pt exf int
37.1 1.7 0 1.1e-19 0.0 2 0.0045 1.0e4 0.2e4 0 0 0.37 0.06 1 0 0.0 1
138.0 1.5 0 1.2e-6 0.0 1 0.0045 0.016 0.008 1 0 0.3 0.15 1 0 0.0 0
****************************************
interference between resonances [numerical integration only]
note: + for positive, - for negative interference; +- if interference sign is unknown
ecm decm jr g1 dg1 l1 pt g2 dg2 l2 pt g3 dg3 l3 pt exf
!+-
0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 0.0 0 0 0.0
0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 0.0 0 0 0.0
****************************************
comments:
1. for er=-303.0 and 274.0 kev resonances, value of gg=0.1+-0.001 ev is assumed (inconsequential).
2. er=167.3 kev resonance is disregarded; see comments under 23na(p,g) input.
3. state corresponding to er=138.0 kev has been observed in 23na(3he,d) but strength may be caused by non-direct contribution; direct (p,g) search excludes l=0 (s-wave); assume p-wave (jp=0-) for upper limit. since gg/g=0.95+-0.04, we find with gg=0.3 ev (approximate average in this ex range) a value of ga=0.016 ev; rough estimate seems appropriate since this is an upper limit resonance.
.... ....
22mg(p,g)23al
****************************************
1 ! zproj
12 ! ztarget
0 ! zexitparticle (=0 when only 2 channels open)
1.0078 ! aproj
21.999 ! atarget
0 ! aexitparticle (=0 when only 2 channels open)
0.5 ! jproj
0.0 ! jtarget
0.0 ! jexitparticle (=0 when only 2 channels open)
122.0 ! projectile separation energy (kev)
0.0 ! exit particle separation energy (=0 when only 2 channels open)
1.25 ! radius parameter r0 (fm)
2 ! gamma-ray channel number (=2 if ejectile is a g-ray; =3 otherwise)
****************************************
1.0 ! minimum energy for numerical integration (kev)
7000 ! number of random samples (>5000 for better statistics)
0 ! =0 for rate output at all temperatures; =nt for rate output at selected temperatures
****************************************
nonresonant contribution
s(kevb) s'(b) s''(b/kev) fracerr cutoff energy (kev)
4.4e-1 2.7e-4 -4.5e-8 0.4 4500.0
0.0 0.0 0.0 0.0 0.0
****************************************
resonant contribution
note: g1 = entrance channel, g2 = exit channel, g3 = spectator channel!
! ecm, exf in (kev); wg, gx in (ev)!
! note: if er<0, theta^2=c2s*theta_sp^2 must be entered instead of entrance channel partial width
ecm decm wg dwg j g1 dg1 l1 g2 dg2 l2 g3 dg3 l3 exf int
405 27 0 0 0.5 74 30 0 7.2e-7 1.4e-7 2 0 0 0 0.0 1
1651 40 0 0 1.5 912 365 2 8.3e-4 4.2e-4 1 0 0 0 0.0 1
****************************************
upper limits of resonances
note: enter partial width upper limit by choosing non-zero value for pt, where pt=<theta^2> for particles and...
note: ...pt=<b> for g-rays [enter: "upper_limit 0.0"]; for each resonance: # upper limits < # open channels!
ecm decm jr g1 dg1 l1 pt g2 dg2 l2 pt g3 dg3 l3 pt exf int
!
0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 0
****************************************
interference between resonances [numerical integration only]
note: + for positive, - for negative interference; +- if interference sign is unknown
ecm decm jr g1 dg1 l1 pt g2 dg2 l2 pt g3 dg3 l3 pt exf
!+-
0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 0.0 0 0 0.0
0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 0.0 0 0 0.0
****************************************
comments:
1. s-factor for direct capture to ground state calculated with c2s=0.22 from 23ne mirror state.
2. several higher-energy resonances are missing between those considered here and those observed in the scattering study of he et al. 2007.
.... ....
23mg(p,g)24al
****************************************
1 ! zproj
12 ! ztarget
0 ! zexitparticle (=0 when only 2 channels open)
1.0078 ! aproj
22.994 ! atarget
0 ! aexitparticle (=0 when only 2 channels open)
0.5 ! jproj
1.5 ! jtarget
0.0 ! jexitparticle (=0 when only 2 channels open)
1872.0 ! projectile separation energy (kev)
0.0 ! exit particle separation energy (=0 when only 2 channels open)
1.25 ! radius parameter r0 (fm)
2 ! gamma-ray channel number (=2 if ejectile is a g-ray; =3 otherwise)
****************************************
1.0 ! minimum energy for numerical integration (kev)
8000 ! number of random samples (>5000 for better statistics)
0 ! =0 for rate output at all temperatures; =nt for rate output at selected temperatures
****************************************
nonresonant contribution
s(kevb) s'(b) s''(b/kev) fracerr cutoff energy (kev)
2.25e1 -0.011 1.38e-5 0.4 1000.0
0.0 0.0 0.0 0.0 0.0
****************************************
resonant contribution
note: g1 = entrance channel, g2 = exit channel, g3 = spectator channel!
! ecm, exf in (kev); wg, gx in (ev)!
! note: if er<0, theta^2=c2s*theta_sp^2 must be entered instead of entrance channel partial width
ecm decm wg dwg j g1 dg1 l1 g2 dg2 l2 g3 dg3 l3 exf int
-764.0 3.0 0 0 2 0.10 0.05 0 7.3e-3 3.7e-3 1 0 0 0 425.8 1
-324.0 3.0 0 0 2 0.045 0.022 0 2.5e-3 1.3e-3 1 0 0 0 1088.2 1
473.0 3.0 0 0 3 0.17 0.07 2 0.033 0.0165 1 0 0 0 500.1 1
651.0 4.0 0 0 4 1.1 0.4 2 0.053 0.0265 1 0 0 0 0 1
919.0 4.0 0 0 2 2.0e3 8.0e2 0 0.083 0.0415 1 0 0 0 500.1 1
1001.0 4.0 0 0 3 2.3e1 9.2 2 0.014 0.007 1 0 0 0 1107.9 1
****************************************
upper limits of resonances
note: enter partial width upper limit by choosing non-zero value for pt, where pt=<theta^2> for particles and...
note: ...pt=<b> for g-rays [enter: "upper_limit 0.0"]; for each resonance: # upper limits < # open channels!
ecm decm jr g1 dg1 l1 pt g2 dg2 l2 pt g3 dg3 l3 pt exf int
!
0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 0
****************************************
interference between resonances [numerical integration only]
note: + for positive, - for negative interference; +- if interference sign is unknown
ecm decm jr g1 dg1 l1 pt g2 dg2 l2 pt g3 dg3 l3 pt exf
!+-
0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 0.0 0 0 0.0
0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 0.0 0 0 0.0
****************************************
comments:
1. resonance energies deduced from ex (visser et al. 2007, visser et al. 2008, lotay et al.).
2. measured spectroscopic factors adopted from 24na mirror levels (tomandl et al.).
3. gamma-ray partial widths from herndl et al. 1998 (shell model) for unbound states, and from 24na mirror state lifetimes (endt 1990) for bound states.
4. information on g-ray branching ratios from tomandl et al. 2004 and lotay et al. 2008.
.... ....
24mg(p,g)25al
****************************************
1 ! zproj
12 ! ztarget
0 ! zexitparticle (=0 when only 2 channels open)
1.0078 ! aproj
23.985 ! atarget
0 ! aexitparticle (=0 when only 2 channels open)
0.5 ! jproj
0.0 ! jtarget
0.0 ! jexitparticle (=0 when only 2 channels open)
2271.6 ! projectile separation energy (kev)
0.0 ! exit particle separation energy (=0 when only 2 channels open)
1.25 ! radius parameter r0 (fm)
2 ! gamma-ray channel number (=2 if ejectile is a g-ray; =3 otherwise)
****************************************
1.0 ! minimum energy for numerical integration (kev)
8000 ! number of random samples (>5000 for better statistics)
0 ! =0 for rate output at all temperatures; =nt for rate output at selected temperatures
****************************************
nonresonant contribution
s(kevb) s'(b) s''(b/kev) fracerr cutoff energy (kev)
2.5e1 0.0 0.0 0.4 1200.0
0.0 0.0 0.0 0.0 0.0
****************************************
resonant contribution
note: g1 = entrance channel, g2 = exit channel, g3 = spectator channel!
! ecm, exf in (kev); wg, gx in (ev)!
! note: if er<0, theta^2=c2s*theta_sp^2 must be entered instead of entrance channel partial width
ecm decm wg dwg jr g1 dg1 l1 g2 dg2 l2 g3 dg3 l3 exf int
213.7 0.9 0 0 0.5 1.4e-2 1.2e-3 0 1.41e-1 6.3e-2 1 0 0 0 452.0 1
401.9 0.8 0 0 1.5 1.7e-1 1.6e-2 2 2.38e-2 1.5e-3 1 0 0 0 0.0 1
790.1 0.8 0.55 0.04 0 0 0 0 0 0 0 0 0 0 0.0 0
1152.7 0.7 2.8e-2 0.6e-2 0 0 0 0 0 0 0 0 0 0 0.0 0
1424.1 0.7 0.16 0.02 0 0 0 0 0 0 0 0 0 0 0.0 0
1551.4 1.7 0.79 0.12 0 0 0 0 0 0 0 0 0 0 0.0 0
1587.2 0.9 0.21 0.04 0 0 0 0 0 0 0 0 0 0 0.0 0
1924.4 3.0 0.58 0.10 0 0 0 0 0 0 0 0 0 0 0.0 0
2311.4 4.0 1.8e-1 0.4e-1 0 0 0 0 0 0 0 0 0 0 0.0 0
****************************************
upper limits of resonances
note: enter partial width upper limit by choosing non-zero value for pt, where pt=<theta^2> for particles and...
note: ...pt=<b> for g-rays [enter: "upper_limit 0.0"]; for each resonance: # upper limits < # open channels!
ecm decm jr g1 dg1 l1 pt g2 dg2 l2 pt g3 dg3 l3 pt exf int
!
0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 0
****************************************
interference between resonances [numerical integration only]
note: + for positive, - for negative interference; +- if interference sign is unknown
ecm decm jr g1 dg1 l1 pt g2 dg2 l2 pt g3 dg3 l3 pt exf
!+-
0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 0.0 0 0 0.0
0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 0.0 0 0 0.0
****************************************
.... ....
24mg(a,g)28si
****************************************
2 ! zproj
12 ! ztarget
1 ! zexitparticle (=0 when only 2 channels open)
4.0026 ! aproj
23.985 ! atarget
1.0078 ! aexitparticle (=0 when only 2 channels open)
0.0 ! jproj
0.0 ! jtarget
0.5 ! jexitparticle (=0 when only 2 channels open)
9984.14 ! projectile separation energy (kev)
11585.11 ! exit particle separation energy (=0 when only 2 channels open)
1.25 ! radius parameter r0 (fm)
2 ! gamma-ray channel number (=2 if ejectile is a g-ray; =3 otherwise)
****************************************
2.0 ! minimum energy for numerical integration (kev)
7000 ! number of random samples (>5000 for better statistics)
0 ! =0 for rate output at all temperatures; =nt for rate output at selected temperatures
****************************************
nonresonant contribution
s(kevb) s'(b) s''(b/kev) fracerr cutoff energy (kev)
1.50e6 -217.1 0.0 0.79 1500.0
0.0 0.0 0.0 0.0 0.0
****************************************
resonant contribution
note: g1 = entrance channel, g2 = exit channel, g3 = spectator channel!
! ecm, exf in (kev); wg, gx in (ev)!
! note: if er<0, theta^2=c2s*theta_sp^2 must be entered instead of entrance channel partial width
ecm decm wg dwg j g1 dg1 l1 g2 dg2 l2 g3 dg3 l3 exf int
1009.9 0.3 2.2e-4 6.0e-5 0 0 0 0 0 0 1 0 0 0 0.0 0
1094.37 0.14 5.8e-4 1.1e-4 0 0 0 0 0 0 1 0 0 0 0.0 0
1157.0 1.0 1.9e-3 3.0e-4 0 0 0 0 0 0 1 0 0 0 0.0 0
1210.93 0.13 2.1e-4 4.0e-5 0 0 0 0 0 0 2 0 0 0 0.0 0
1311.3 0.4 9.6e-2 9.0e-3 0 0 0 0 0 0 1 0 0 0 0.0 0
1531.2 0.4 6.0e-2 8.0e-3 0 0 0 0 0 0 2 0 0 0 0.0 0
1600.47 0.19 4.3e-2 7.0e-3 0 0 0 0 0 0 1 0 0 0 0.0 0
1672.5 0.5 0.0 0.0 2 1.4e-1 7.0e-2 2 3.5e-2 8.0e-3 1 0 0 0 1779.0 1
1685.3 0.4 0.0 0.0 1 1.8e-1 5.0e-2 1 0.28 9.0e-2 1 0 0 1 1779.0 1
1794.5 0.7 3.5e-2 6.0e-3 0 0 0 0 0 0 1 0 0 0 0.0 0
1915.3 0.3 5.4e-2 7.0e-3 0 0 0 0 0 0 1 0 0 0 0.0 0
1991.6 0.3 0.0 0.0 4 1.0e-2 1.4e-3 4 2.3e-2 5.0e-3 1 1.2e-2 5.0e-3 2 4617.9 1
2038.2 0.4 6.0e-2 1.0e-2 0 0 0 0 0 0 2 0 0 0 0.0 0
2087.2 0.3 0.33 0.05 0 0 0 0 0 0 2 0 0 0 0.0 0
2197.5 0.5 7.8e-2 1.1e-2 0 0 0 0 0 0 1 0 0 0 0.0 0
2209.8 0.3 0.19 0.02 0 0 0 0 0 0 1 0 0 0 0.0 0
2256.3 0.5 0.52 0.07 0 0 0 0 0 0 2 0 0 0 0.0 0
2305.4 0.3 6.9e-2 1.0e-2 0 0 0 0 0 0 2 0 0 0 0.0 0
2316.6 0.3 2.3e-2 4.0e-2 0 0 0 0 0 0 1 0 0 0 0.0 0
2456.2 0.3 1.05 0.18 0 0 0 0 0 0 2 0 0 0 0.0 0
2490.1 0.3 1.6 0.3 0 0 0 0 0 0 1 0 0 0 0.0 0
2503.9 0.3 0.0 0.0 3 7.83 0.74 3 0.33 3.0e-2 1 132.0 42.0 0 1779.0 1
2566.2 0.3 0.69 0.14 0 0 0 0 0 0 2 0 0 0 0.0 0
2741.2 0.3 0.0 0.0 2 352.0 84.0 2 1.41 0.28 2 281.0 67.0 0 0.0 1
2820.6 0.5 0.21 5.0e-2 0 0 0 0 0 0 1 0 0 0 0.0 0
2830.7 0.6 0.0 0.0 1 3.5e3 100.0 1 6.7e-2 1.3e-2 1 0 0 0 0.0 1
2870.28 0.14 1.1 0.3 0 0 0 0 0 0 1 0 0 0 0.0 0
2874.3 0.5 0.6 0.1 0 0 0 0 0 0 1 0 0 0 0.0 0
2915.3 0.3 0.0 0.0 2 297.0 51.0 2 2.52 0.24 1 1.26e3 216.0 0 6276.2 1
2917.3 0.3 0.5 0.3 0 0 0 0 0 0 1 0 0 0 0.0 0
2938.9 0.3 0.4 0.2 0 0 0 0 0 0 1 0 0 0 0.0 0
2988.6 0.7 0.0 0.0 1 1.44e3 194.0 1 0.32 8.0e-2 1 270.0 74.0 1 0.0 1
3055.0 0.6 0.0 0.0 0 3.2e3 100.0 0 0.30 7.0e-2 2 0 0 0 1779.0 1
3109.0 0.3 0.9 0.2 0 0 0 0 0 0 1 0 0 0 0.0 0
3121.1 0.5 2.1 0.5 0 0 0 0 0 0 2 0 0 0 0.0 0
3244.7 0.6 1.24 0.27 0 0 0 0 0 0 2 0 0 0 0.0 0
3261.8 0.7 0.0 0.0 3 1.59e3 161.0 3 0.18 3.0e-2 1 8.01e3 213.0 1 6276.2 1
3375.7 0.6 0.23 6.0e-2 0 0 0 0 0 0 2 0 0 0 0.0 0
3432.1 0.6 0.37 9.0e-2 0 0 0 0 0 0 1 0 0 0 0.0 0
3598.2 0.5 0.7 0.2 0 0 0 0 0 0 1 0 0 0 0.0 0
3655.0 1.0 0.0 0.0 2 656.0 117.0 2 0.16 6.0e-2 1 5.04e3 504.0 0 1779.0 1
3693.5 0.8 0.0 0.0 2 820.0 221.0 2 0.35 0.12 2 473.0 155.0 0 0.0 1
3721.6 0.5 0.8 0.2 0 0 0 0 0 0 1 0 0 0 0.0 0
3872.7 0.9 0.3 0.1 0 0 0 0 0 0 1 0 0 0 0.0 0
3888.8 1.2 0.0 0.0 3 2.35e3 868.0 3 5.63 2.21 1 4.75e3 1.45e3 1 1779.0 1
3955.7 1.0 6.0e-2 3.0e-2 0 0 0 0 0 0 2 0 0 0 0.0 0
3987.2 0.7 1.2 0.3 0 0 0 0 0 0 1 0 0 0 0.0 0
4105.0 3.0 0.0 0.0 3 3.15e3 537.0 3 0.16 7.0e-2 1 1.15e3 253.0 1 1779.0 1
4325.0 5.0 2.0 0.6 0 0 0 0 0 0 1 0 0 0 0.0 0
****************************************
upper limits of resonances
note: enter partial width upper limit by choosing non-zero value for pt, where pt=<theta^2> for particles and...
note: ...pt=<b> for g-rays [enter: "upper_limit 0.0"]; for each resonance: # upper limits < # open channels!
ecm decm jr g1 dg1 l1 pt g2 dg2 l2 pt g3 dg3 l3 pt exf int
197.45 0.12 3.0 5.12e-29 0.0 3 0.01 0.13 0.11 1 0 0.0 0.0 0 0 4617.9 1
529.95 0.30 2.0 1.63e-11 0.0 2 0.01 0.31 0.30 1 0 0.0 0.0 0 0 1779.0 1
821.0 1.00 2.0 2.10e-6 0.0 2 0.01 0.37 0.35 2 0 0.0 0.0 0 0 0.0 1
931.6 0.70 3.0 2.29e-6 0.0 3 0.01 0.57 0.51 1 0 0.0 0.0 0 0 1779.0 1
968.7 0.30 2.0 3.20e-4 0.0 2 0.01 0.23 0.21 1 0 0.0 0.0 0 0 1779.0 1
****************************************
interference between resonances [numerical integration only]
note: + for positive, - for negative interference; +- if interference sign is unknown
ecm decm jr g1 dg1 l1 pt g2 dg2 l2 pt g3 dg3 l3 pt exf
!+-
0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 0.0 0 0 0.0
0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 0.0 0 0 0.0
****************************************
comments:
1. nonresonant contribution from (6li,d) relative spectroscopic factors of draayer et al. 1974 and tanabe et al. 1983, normalized as described in paper ii.
2. information on resonances: adopted energies and total widths from ensdf (firestone 2007) and endt 1990; strengths from smulders and endt 1962, lyons et al. 1969, maas et al. 1978, cseh et al. 1982, and strandberg et al. 2008.
3. upper limits on strengths of low-energy resonances assume average gamma-ray partial widths.
4. no gamma-decays are observed for the presumed 821 kev and known 3956 kev resonances; we have assumed e2 decays to the ground state.
.... ....
25mg(p,g)26al^t
****************************************
1 ! zproj
12 ! ztarget
0 ! zexitparticle (=0 when only 2 channels open)
1.0078 ! aproj
24.9858 ! atarget
0 ! aexitparticle (=0 when only 2 channels open)
0.5 ! jproj
2.5 ! jtarget
0.0 ! jexitparticle (=0 when only 2 channels open)
6306.45 ! projectile separation energy (kev)
0.0 ! exit particle separation energy (=0 when only 2 channels open)
1.25 ! radius parameter r0 (fm)
2 ! gamma-ray channel number (=2 if ejectile is a g-ray; =3 otherwise)
****************************************
1.0 ! minimum energy for numerical integration (kev)
50000 ! number of random samples (>5000 for better statistics)
0 ! =0 for rate output at all temperatures; =nt for rate output at selected temperatures
****************************************
nonresonant contribution
s(kevb) s'(b) s''(b/kev) fracerr cutoff energy (kev)
7.3e1 0.0 0.0 0.4 1000.0
0.0 0.0 0.0 0.0 0.0
****************************************
resonant contribution
note: g1 = entrance channel, g2 = exit channel, g3 = spectator channel!
! ecm, exf in (kev); wg, gx in (ev)!
! note: if er<0, theta^2=c2s*theta_sp^2 must be entered instead of entrance channel partial width
ecm decm wg dwg jr g1 dg1 l1 g2 dg2 l2 g3 dg3 l3 exf int
37.01 0.09 0 0 4 6.0e-22 2.4e-22 2 0.1 0.001 1 0 0 0 0.0 0
57.54 0.09 0 0 3 4.8e-13 1.9e-13 0 0.1 0.001 1 0 0 0 0.0 0
92.19 0.22 0 0 2 2.8e-10 1.1e-10 1 0.1 0.001 1 0 0 0 0.0 0
108.01 0.11 0 0 0 2.5e-10 1.0e-10 2 0.1 0.001 1 0 0 0 0.0 0
189.49 0.09 7.2e-7 9.7e-8 0 0 0 0 0 0 0 0 0 0 0.0 0
244.23 0.09 5.5e-6 6.3e-7 0 0 0 0 0 0 0 0 0 0 0.0 0
291.87 0.17 4.5e-5 5.2e-6 0 0 0 0 0 0 0 0 0 0 0.0 0
303.95 0.08 3.0e-2 3.5e-3 0 0 0 0 0 0 0 0 0 0 0.0 0
374.00 0.09 6.6e-2 6.0e-3 0 0 0 0 0 0 0 0 0 0 0.0 0
417.80 0.09 0.107 0.013 0 0 0 0 0 0 0 0 0 0 0.0 0
477.34 0.07 7.3e-2 1.1e-2 0 0 0 0 0 0 0 0 0 0 0.0 0
482.85 0.06 6.0e-2 7.7e-3 0 0 0 0 0 0 0 0 0 0 0.0 0
494.67 0.06 4.8e-2 7.4e-3 0 0 0 0 0 0 0 0 0 0 0.0 0
495.15 0.17 1.9e-2 7.4e-3 0 0 0 0 0 0 0 0 0 0 0.0 0
511.41 0.10 2.2e-2 1.8e-3 0 0 0 0 0 0 0 0 0 0 0.0 0
545.05 0.12 0.31 0.03 0 0 0 0 0 0 0 0 0 0 0.0 0
567.84 0.09 0.228 0.098 0 0 0 0 0 0 0 0 0 0 0.0 0
585.25 0.06 1.4e-3 5.8e-4 0 0 0 0 0 0 0 0 0 0 0.0 0
629.75 0.09 3.4e-2 3.4e-3 0 0 0 0 0 0 0 0 0 0 0.0 0
658.03 0.10 0.43 0.06 0 0 0 0 0 0 0 0 0 0 0.0 0
694.46 0.10 0.20 0.02 0 0 0 0 0 0 0 0 0 0 0.0 0
708.56 0.12 4.2e-2 5.7e-3 0 0 0 0 0 0 0 0 0 0 0.0 0
744.77 0.09 0.14 0.01 0 0 0 0 0 0 0 0 0 0 0.0 0
779.52 0.17 0.20 0.04 0 0 0 0 0 0 0 0 0 0 0.0 0
786.33 0.10 1.3e-2 1.4e-3 0 0 0 0 0 0 0 0 0 0 0.0 0
802.26 0.09 5.5e-2 4.2e-3 0 0 0 0 0 0 0 0 0 0 0.0 0
835.35 0.07 5.8e-2 5.9e-3 0 0 0 0 0 0 0 0 0 0 0.0 0
846.39 0.08 0.30 0.03 0 0 0 0 0 0 0 0 0 0 0.0 0
854.52 0.10 5.0e-2 4.2e-3 0 0 0 0 0 0 0 0 0 0 0.0 0
861.20 0.08 4.9e-2 4.5e-3 0 0 0 0 0 0 0 0 0 0 0.0 0
891.99 0.13 0.33 0.033 0 0 0 0 0 0 0 0 0 0 0.0 0
915.97 0.10 0.67 0.06 0 0 0 0 0 0 0 0 0 0 0.0 0
931.23 0.07 5.1e-2 7.5e-3 0 0 0 0 0 0 0 0 0 0 0.0 0
947.2 0.2 0.68 0.09 0 0 0 0 0 0 0 0 0 0 0.0 0
979.17 0.12 3.4e-3 1.2e-3 0 0 0 0 0 0 0 0 0 0 0.0 0
984.88 0.10 0.22 0.02 0 0 0 0 0 0 0 0 0 0 0.0 0
1001.77 0.07 0.63 0.05 0 0 0 0 0 0 0 0 0 0 0.0 0
1041.44 0.11 0.43 0.05 0 0 0 0 0 0 0 0 0 0 0.0 0
1059.80 0.12 0.23 0.02 0 0 0 0 0 0 0 0 0 0 0.0 0
1090.47 0.07 0.10 0.01 0 0 0 0 0 0 0 0 0 0 0.0 0
1092.25 0.11 0.47 0.05 0 0 0 0 0 0 0 0 0 0 0.0 0
1103.17 0.09 0.33 0.05 0 0 0 0 0 0 0 0 0 0 0.0 0
1118.62 0.09 0.20 0.03 0 0 0 0 0 0 0 0 0 0 0.0 0
1133.05 0.15 4.1e-3 1.6e-3 0 0 0 0 0 0 0 0 0 0 0.0 0
1137.71 0.17 5.0e-2 3.8e-3 0 0 0 0 0 0 0 0 0 0 0.0 0
1148.89 0.20 0.36 0.03 0 0 0 0 0 0 0 0 0 0 0.0 0
1157.99 0.12 0.475 0.063 0 0 0 0 0 0 0 0 0 0 0.0 0
1188.93 0.06 0.75 0.10 0 0 0 0 0 0 0 0 0 0 0.0 0
1190.6 2.0 0.14 0.02 0 0 0 0 0 0 0 0 0 0 0.0 0
1222.81 0.07 8.3e-3 3.3e-3 0 0 0 0 0 0 0 0 0 0 0.0 0
1233.07 0.12 0.40 0.11 0 0 0 0 0 0 0 0 0 0 0.0 0
1241.75 0.10 2.1e-2 5.0e-3 0 0 0 0 0 0 0 0 0 0 0.0 0
1251.11 0.25 0.33 0.06 0 0 0 0 0 0 0 0 0 0 0.0 0
1254.8 0.2 0.94 0.14 0 0 0 0 0 0 0 0 0 0 0.0 0
1285.10 0.11 8.3e-2 2.5e-2 0 0 0 0 0 0 0 0 0 0 0.0 0
1289.61 0.13 0.24 0.02 0 0 0 0 0 0 0 0 0 0 0.0 0
1298.35 0.11 7.5e-2 1.3e-2 0 0 0 0 0 0 0 0 0 0 0.0 0
1316.23 0.11 0.10 0.02 0 0 0 0 0 0 0 0 0 0 0.0 0
1321.07 0.13 1.00 0.08 0 0 0 0 0 0 0 0 0 0 0.0 0
1341.4 0.4 1.0e-3 2.5e-4 0 0 0 0 0 0 0 0 0 0 0.0 0
1455.39 0.11 7.0e-2 2.3e-2 0 0 0 0 0 0 0 0 0 0 0.0 0
1465.80 0.08 0.24 0.03 0 0 0 0 0 0 0 0 0 0 0.0 0
1466.6 2.0 5.8e-3 1.7e-3 0 0 0 0 0 0 0 0 0 0 0.0 0
1507.18 0.19 9.2e-2 2.3e-2 0 0 0 0 0 0 0 0 0 0 0.0 0
1518.21 0.16 6.7e-2 1.8e-2 0 0 0 0 0 0 0 0 0 0 0.0 0
1525.16 0.09 0.62 0.08 0 0 0 0 0 0 0 0 0 0 0.0 0
1558.6 0.3 0.17 0.06 0 0 0 0 0 0 0 0 0 0 0.0 0
1567.84 0.16 0.51 0.04 0 0 0 0 0 0 0 0 0 0 0.0 0
1573.2 0.3 0.10 0.02 0 0 0 0 0 0 0 0 0 0 0.0 0
1584.72 0.10 2.8 0.24 0 0 0 0 0 0 0 0 0 0 0.0 0
1614.82 0.15 3.3e-2 1.7e-2 0 0 0 0 0 0 0 0 0 0 0.0 0
1632.34 0.09 1.9 0.2 0 0 0 0 0 0 0 0 0 0 0.0 0
1646.90 0.08 4.0 0.5 0 0 0 0 0 0 0 0 0 0 0.0 0
1675.6 2.0 1.20 0.12 0 0 0 0 0 0 0 0 0 0 0.0 0
1694.18 0.09 9.0e-2 1.6e-2 0 0 0 0 0 0 0 0 0 0 0.0 0
1701.63 0.10 0.19 0.03 0 0 0 0 0 0 0 0 0 0 0.0 0
1704.73 0.08 0.20 0.07 0 0 0 0 0 0 0 0 0 0 0.0 0
1729.3 0.3 1.5e-2 2.5e-3 0 0 0 0 0 0 0 0 0 0 0.0 0
1740.19 0.11 7.5e-2 1.7e-2 0 0 0 0 0 0 0 0 0 0 0.0 0
1757.6 2.0 0.63 0.12 0 0 0 0 0 0 0 0 0 0 0.0 0
1760.99 0.10 0.19 0.06 0 0 0 0 0 0 0 0 0 0 0.0 0
****************************************
upper limits of resonances
note: enter partial width upper limit by choosing non-zero value for pt, where pt=<theta^2> for particles and...
note: ...pt=<b> for g-rays [enter: "upper_limit 0.0"]; for each resonance: # upper limits < # open channels!
ecm decm jr g1 dg1 l1 pt g2 dg2 l2 pt g3 dg3 l3 pt exf int
129.99 0.12 5 1.7e-10 0.0 2 0.0045 0.1 0.001 1 0 0 0 0 0 0.0 0
****************************************
interference between resonances [numerical integration only]
note: + for positive, - for negative interference; +- if interference sign is unknown
ecm decm jr g1 dg1 l1 pt g2 dg2 l2 pt g3 dg3 l3 pt exf
!+-
0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 0.0 0 0 0.0
0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 0.0 0 0 0.0
****************************************
comments:
1. direct capture s-factor from endt and rolfs 1987.
2. er=189-1761 kev: energies and strengths from endt 1990; the former are calculated from ex and qpg, while the latter are normalized to the values listed in tab. 1 of iliadis et al.
3. er=37-130 kev: proton partial widths from iliadis et al. 1996; values of gg=0.1 ev are guesses (inconsequential).
4. er=37 kev: parity is not known experimentally; we adopt jp=4+, as predicted by the shell model (see iliadis et al.
.... ....
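To make the preceding listings easier to sanity-check, the following is a minimal sketch (plain Python, written for this article and not part of the evaluation's Monte Carlo code) of how a tabulated resonance energy ecm (in kev) and strength wg (in ev) translate into a rate contribution. It uses the standard narrow-resonance expression, N_A<sigma v> = 1.5399e11 (mu T9)^(-3/2) sum_i (wg)_i exp(-11.605 E_i/T9) cm^3 mol^-1 s^-1 with wg and E in MeV, and deliberately ignores everything the Monte Carlo code handles on top of this: sampled uncertainties, upper limits, interferences, broad resonances, and the nonresonant s-factor term. The function name and arguments are ours, chosen for illustration.

	import math

	def narrow_resonance_rate(t9, mu, resonances):
	    """Sum of narrow-resonance contributions to N_A<sigma v>
	    in cm^3 mol^-1 s^-1.

	    t9         : temperature in GK
	    mu         : reduced mass in amu, aproj*atarget/(aproj+atarget)
	    resonances : iterable of (ecm_kev, wg_ev) pairs taken from a
	                 'resonant contribution' table (rows with wg > 0)
	    """
	    rate = 0.0
	    for ecm_kev, wg_ev in resonances:
	        er_mev = ecm_kev * 1e-3   # tables list ecm in keV
	        wg_mev = wg_ev * 1e-6     # tables list wg in eV
	        rate += (1.5399e11 / (mu * t9) ** 1.5
	                 * wg_mev * math.exp(-11.605 * er_mev / t9))
	    return rate

	# example: the two lowest measured 25mg(p,g)26al^t resonances at t9 = 0.1
	mu = 1.0078 * 24.9858 / (1.0078 + 24.9858)
	print(narrow_resonance_rate(0.1, mu, [(189.49, 7.2e-7), (244.23, 5.5e-6)]))

Rows with wg = 0 instead carry partial widths (g1, g2, g3) and, where flagged, are integrated numerically rather than treated as narrow, so they are outside the scope of this sketch.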
25mg(p,g)26al^g
****************************************
1 ! zproj
12 ! ztarget
0 ! zexitparticle (=0 when only 2 channels open)
1.0078 ! aproj
24.9858 ! atarget
0 ! aexitparticle (=0 when only 2 channels open)
0.5 ! jproj
2.5 ! jtarget
0.0 ! jexitparticle (=0 when only 2 channels open)
6306.45 ! projectile separation energy (kev)
0.0 ! exit particle separation energy (=0 when only 2 channels open)
1.25 ! radius parameter r0 (fm)
2 ! gamma-ray channel number (=2 if ejectile is a g-ray; =3 otherwise)
****************************************
1.0 ! minimum energy for numerical integration (kev)
50000 ! number of random samples (>5000 for better statistics)
0 ! =0 for rate output at all temperatures; =nt for rate output at selected temperatures
****************************************
nonresonant contribution
s(kevb) s'(b) s''(b/kev) fracerr cutoff energy (kev)
5.2e1 0.0 0.0 0.4 1000.0
0.0 0.0 0.0 0.0 0.0
****************************************
resonant contribution
note: g1 = entrance channel, g2 = exit channel, g3 = spectator channel!
! ecm, exf in (kev); wg, gx in (ev)!
! note: if er<0, theta^2=c2s*theta_sp^2 must be entered instead of entrance channel partial width
ecm decm wg dwg jr g1 dg1 l1 g2 dg2 l2 g3 dg3 l3 exf int
37.01 0.09 0 0 4 4.7e-22 1.9e-22 2 0.1 0.001 1 0 0 0 0.0 0
57.54 0.09 0 0 3 3.9e-13 1.6e-13 0 0.1 0.001 1 0 0 0 0.0 0
92.19 0.22 0 0 2 2.4e-10 0.9e-10 1 0.1 0.001 1 0 0 0 0.0 0
108.01 0.11 0 0 0 1.8e-10 0.7e-10 2 0.1 0.001 1 0 0 0 0.0 0
189.49 0.09 4.8e-7 6.5e-8 0 0 0 0 0 0 0 0 0 0 0.0 0
244.23 0.09 4.2e-6 4.8e-7 0 0 0 0 0 0 0 0 0 0 0.0 0
291.87 0.17 3.6e-5 4.1e-6 0 0 0 0 0 0 0 0 0 0 0.0 0
303.95 0.08 2.6e-2 3.0e-3 0 0 0 0 0 0 0 0 0 0 0.0 0
374.00 0.09 4.4e-2 4.0e-3 0 0 0 0 0 0 0 0 0 0 0.0 0
417.80 0.09 0.103 0.012 0 0 0 0 0 0 0 0 0 0 0.0 0
477.34 0.07 4.1e-2 6.2e-3 0 0 0 0 0 0 0 0 0 0 0.0 0
482.85 0.06 5.4e-2 6.9e-3 0 0 0 0 0 0 0 0 0 0 0.0 0
494.67 0.06 3.0e-2 4.7e-3 0 0 0 0 0 0 0 0 0 0 0.0 0
495.15 0.17 1.0e-2 3.9e-3 0 0 0 0 0 0 0 0 0 0 0.0 0
511.41 0.10 1.2e-2 1.0e-3 0 0 0 0 0 0 0 0 0 0 0.0 0
545.05 0.12 0.19 0.02 0 0 0 0 0 0 0 0 0 0 0.0 0
567.84 0.09 0.137 0.059 0 0 0 0 0 0 0 0 0 0 0.0 0
585.25 0.06 1.3e-3 5.4e-4 0 0 0 0 0 0 0 0 0 0 0.0 0
629.75 0.09 1.7e-2 1.7e-3 0 0 0 0 0 0 0 0 0 0 0.0 0
658.03 0.10 0.37 0.05 0 0 0 0 0 0 0 0 0 0 0.0 0
694.46 0.10 8.8e-2 8.8e-3 0 0 0 0 0 0 0 0 0 0 0.0 0
708.56 0.12 2.9e-2 3.9e-3 0 0 0 0 0 0 0 0 0 0 0.0 0
744.77 0.09 9.7e-2 6.9e-3 0 0 0 0 0 0 0 0 0 0 0.0 0
779.52 0.17 0.038 0.008 0 0 0 0 0 0 0 0 0 0 0.0 0
786.33 0.10 5.6e-3 6.0e-4 0 0 0 0 0 0 0 0 0 0 0.0 0
802.26 0.09 4.5e-2 3.4e-3 0 0 0 0 0 0 0 0 0 0 0.0 0
835.35 0.07 4.1e-2 4.2e-3 0 0 0 0 0 0 0 0 0 0 0.0 0
846.39 0.08 0.22 0.02 0 0 0 0 0 0 0 0 0 0 0.0 0
854.52 0.10 2.9e-2 2.4e-3 0 0 0 0 0 0 0 0 0 0 0.0 0
861.20 0.08 3.5e-2 3.2e-3 0 0 0 0 0 0 0 0 0 0 0.0 0
891.99 0.13 0.023 0.002 0 0 0 0 0 0 0 0 0 0 0.0 0
915.97 0.10 0.66 0.06 0 0 0 0 0 0 0 0 0 0 0.0 0
931.23 0.07 3.4e-2 5.0e-3 0 0 0 0 0 0 0 0 0 0 0.0 0
947.2 0.2 0.48 0.06 0 0 0 0 0 0 0 0 0 0 0.0 0
979.17 0.12 1.7e-4 6.0e-5 0 0 0 0 0 0 0 0 0 0 0.0 0
984.88 0.10 0.19 0.02 0 0 0 0 0 0 0 0 0 0 0.0 0
1001.77 0.07 0.34 0.03 0 0 0 0 0 0 0 0 0 0 0.0 0
1041.44 0.11 0.35 0.04 0 0 0 0 0 0 0 0 0 0 0.0 0
1059.80 0.12 0.14 0.01 0 0 0 0 0 0 0 0 0 0 0.0 0
1090.47 0.07 0.065 0.007 0 0 0 0 0 0 0 0 0 0 0.0 0
1092.25 0.11 0.38 0.04 0 0 0 0 0 0 0 0 0 0 0.0 0
1103.17 0.09 0.29 0.04 0 0 0 0 0 0 0 0 0 0 0.0 0
1118.62 0.09 0.13 0.02 0 0 0 0 0 0 0 0 0 0 0.0 0
1133.05 0.15 1.6e-4 6.4e-5 0 0 0 0 0 0 0 0 0 0 0.0 0
1137.71 0.17 1.9e-2 1.4e-3 0 0 0 0 0 0 0 0 0 0 0.0 0
1148.89 0.20 0.10 0.009 0 0 0 0 0 0 0 0 0 0 0.0 0
1157.99 0.12 0.318 0.042 0 0 0 0 0 0 0 0 0 0 0.0 0
1188.93 0.06 0.51 0.07 0 0 0 0 0 0 0 0 0 0 0.0 0
1190.6 2.0 0.081 0.011 0 0 0 0 0 0 0 0 0 0 0.0 0
1222.81 0.07 7.8e-3 3.1e-3 0 0 0 0 0 0 0 0 0 0 0.0 0
1233.07 0.12 0.15 0.04 0 0 0 0 0 0 0 0 0 0 0.0 0
1241.75 0.10 0.018 0.004 0 0 0 0 0 0 0 0 0 0 0.0 0
1251.11 0.25 0.18 0.03 0 0 0 0 0 0 0 0 0 0 0.0 0
1254.8 0.2 0.34 0.05 0 0 0 0 0 0 0 0 0 0 0.0 0
1285.10 0.11 6.7e-2 2.0e-2 0 0 0 0 0 0 0 0 0 0 0.0 0
1289.61 0.13 0.17 0.01 0 0 0 0 0 0 0 0 0 0 0.0 0
1298.35 0.11 2.8e-2 4.8e-3 0 0 0 0 0 0 0 0 0 0 0.0 0
1316.23 0.11 0.038 0.008 0 0 0 0 0 0 0 0 0 0 0.0 0
1321.07 0.13 0.88 0.07 0 0 0 0 0 0 0 0 0 0 0.0 0
1341.4 0.4 4.8e-4 1.2e-4 0 0 0 0 0 0 0 0 0 0 0.0 0
1455.39 0.11 5.2e-2 1.7e-2 0 0 0 0 0 0 0 0 0 0 0.0 0
1465.80 0.08 0.18 0.02 0 0 0 0 0 0 0 0 0 0 0.0 0
1466.6 2.0 3.9e-3 1.1e-3 0 0 0 0 0 0 0 0 0 0 0.0 0
1507.18 0.19 2.0e-2 5.1e-3 0 0 0 0 0 0 0 0 0 0 0.0 0
1518.21 0.16 5.8e-2 1.6e-2 0 0 0 0 0 0 0 0 0 0 0.0 0
1525.16 0.09 0.53 0.07 0 0 0 0 0 0 0 0 0 0 0.0 0
1558.6 0.3 0.11 0.04 0 0 0 0 0 0 0 0 0 0 0.0 0
1567.84 0.16 0.23 0.02 0 0 0 0 0 0 0 0 0 0 0.0 0
1573.2 0.3 0.028 0.006 0 0 0 0 0 0 0 0 0 0 0.0 0
1584.72 0.10 2.1 0.18 0 0 0 0 0 0 0 0 0 0 0.0 0
1614.82 0.15 3.1e-2 1.6e-2 0 0 0 0 0 0 0 0 0 0 0.0 0
1632.34 0.09 1.7 0.2 0 0 0 0 0 0 0 0 0 0 0.0 0
1646.90 0.08 3.7 0.5 0 0 0 0 0 0 0 0 0 0 0.0 0
1675.6 2.0 0.76 0.08 0 0 0 0 0 0 0 0 0 0 0.0 0
1694.18 0.09 1.6e-2 2.9e-3 0 0 0 0 0 0 0 0 0 0 0.0 0
1701.63 0.10 0.11 0.02 0 0 0 0 0 0 0 0 0 0 0.0 0
1704.73 0.08 0.19 0.07 0 0 0 0 0 0 0 0 0 0 0.0 0
1729.3 0.3 3.5e-3 5.8e-4 0 0 0 0 0 0 0 0 0 0 0.0 0
1740.19 0.11 5.0e-2 1.1e-2 0 0 0 0 0 0 0 0 0 0 0.0 0
1757.6 2.0 0.46 0.09 0 0 0 0 0 0 0 0 0 0 0.0 0
1760.99 0.10 0.18 0.06 0 0 0 0 0 0 0 0 0 0 0.0 0
****************************************
upper limits of resonances
note: enter partial width upper limit by choosing non-zero value for pt, where pt=<theta^2> for particles and...
note: ...pt=<b> for g-rays [enter: "upper_limit 0.0"]; for each resonance: # upper limits < # open channels!
ecm decm jr g1 dg1 l1 pt g2 dg2 l2 pt g3 dg3 l3 pt exf int
129.99 0.12 5 1.2e-10 0.0 2 0.0045 0.1 0.001 1 0 0 0 0 0 0.0 0
****************************************
interference between resonances [numerical integration only]
note: + for positive, - for negative interference; +- if interference sign is unknown
ecm decm jr g1 dg1 l1 pt g2 dg2 l2 pt g3 dg3 l3 pt exf
!+-
0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 0.0 0 0 0.0
0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 0.0 0 0 0.0
****************************************
comments:
1. see comments for 25mg(p,g)26al^t.
2. ground state branching ratios are adopted from endt and rolfs 1987 (uncertainties are on the order of 1%), except for er=189, 244 and 292 kev for which we adopt the values of iliadis 1989.
.... ....
25mg(p,g)26al^m
****************************************
1 ! zproj
12 ! ztarget
0 ! zexitparticle (=0 when only 2 channels open)
1.0078 ! aproj
24.9858 ! atarget
0 ! aexitparticle (=0 when only 2 channels open)
0.5 ! jproj
2.5 ! jtarget
0.0 ! jexitparticle (=0 when only 2 channels open)
6078.15 ! projectile separation energy (kev)
0.0 ! exit particle separation energy (=0 when only 2 channels open)
1.25 ! radius parameter r0 (fm)
2 ! gamma-ray channel number (=2 if ejectile is a g-ray; =3 otherwise)
****************************************
1.0 ! minimum energy for numerical integration (kev)
50000 ! number of random samples (>5000 for better statistics)
0 ! =0 for rate output at all temperatures; =nt for rate output at selected temperatures
****************************************
nonresonant contribution
s(kevb) s'(b) s''(b/kev) fracerr cutoff energy (kev)
2.1e1 0.0 0.0 0.4 1000.0
0.0 0.0 0.0 0.0 0.0
****************************************
resonant contribution
note: g1 = entrance channel, g2 = exit channel, g3 = spectator channel!
! ecm, exf in (kev); wg, gx in (ev)!
! note: if er<0, theta^2=c2s*theta_sp^2 must be entered instead of entrance channel partial width
ecm decm wg dwg jr g1 dg1 l1 g2 dg2 l2 g3 dg3 l3 exf int
37.01 0.09 0 0 4 1.3e-22 5.0e-23 2 0.1 0.001 1 0 0 0 0.0 0
57.54 0.09 0 0 3 9.1e-14 3.6e-14 0 0.1 0.001 1 0 0 0 0.0 0
92.19 0.22 0 0 2 4.2e-11 1.7e-11 1 0.1 0.001 1 0 0 0 0.0 0
108.01 0.11 0 0 0 7.3e-11 2.9e-11 2 0.1 0.001 1 0 0 0 0.0 0
189.49 0.09 2.5e-7 3.3e-8 0 0 0 0 0 0 0 0 0 0 0.0 0
244.23 0.09 1.3e-6 1.5e-7 0 0 0 0 0 0 0 0 0 0 0.0 0
291.87 0.17 9.5e-6 1.1e-6 0 0 0 0 0 0 0 0 0 0 0.0 0
303.95 0.08 3.9e-3 4.6e-4 0 0 0 0 0 0 0 0 0 0 0.0 0
374.00 0.09 2.2e-2 2.0e-3 0 0 0 0 0 0 0 0 0 0 0.0 0
417.80 0.09 4.3e-3 5.2e-4 0 0 0 0 0 0 0 0 0 0 0.0 0
477.34 0.07 0.032 0.005 0 0 0 0 0 0 0 0 0 0 0.0 0
482.85 0.06 6.0e-3 7.7e-4 0 0 0 0 0 0 0 0 0 0 0.0 0
494.67 0.06 0.018 0.003 0 0 0 0 0 0 0 0 0 0 0.0 0
495.15 0.17 8.9e-3 3.5e-3 0 0 0 0 0 0 0 0 0 0 0.0 0
511.41 0.10 9.7e-3 7.9e-4 0 0 0 0 0 0 0 0 0 0 0.0 0
545.05 0.12 0.12 0.01 0 0 0 0 0 0 0 0 0 0 0.0 0
567.84 0.09 0.091 0.039 0 0 0 0 0 0 0 0 0 0 0.0 0
585.25 0.06 9.8e-5 4.1e-5 0 0 0 0 0 0 0 0 0 0 0.0 0
629.75 0.09 1.7e-2 1.7e-3 0 0 0 0 0 0 0 0 0 0 0.0 0
658.03 0.10 0.056 0.008 0 0 0 0 0 0 0 0 0 0 0.0 0
694.46 0.10 0.11 0.01 0 0 0 0 0 0 0 0 0 0 0.0 0
708.56 0.12 1.3e-2 1.8e-3 0 0 0 0 0 0 0 0 0 0 0.0 0
744.77 0.09 0.043 0.003 0 0 0 0 0 0 0 0 0 0 0.0 0
779.52 0.17 0.16 0.03 0 0 0 0 0 0 0 0 0 0 0.0 0
786.33 0.10 7.4e-3 8.0e-4 0 0 0 0 0 0 0 0 0 0 0.0 0
802.26 0.09 9.9e-3 7.6e-4 0 0 0 0 0 0 0 0 0 0 0.0 0
835.35 0.07 1.7e-2 1.7e-3 0 0 0 0 0 0 0 0 0 0 0.0 0
846.39 0.08 0.078 0.008 0 0 0 0 0 0 0 0 0 0 0.0 0
854.52 0.10 0.021 0.002 0 0 0 0 0 0 0 0 0 0 0.0 0
861.20 0.08 1.4e-2 1.3e-3 0 0 0 0 0 0 0 0 0 0 0.0 0
891.99 0.13 0.31 0.03 0 0 0 0 0 0 0 0 0 0 0.0 0
915.97 0.10 0.013 0.001 0 0 0 0 0 0 0 0 0 0 0.0 0
931.23 0.07 1.7e-2 2.5e-3 0 0 0 0 0 0 0 0 0 0 0.0 0
947.2 0.2 0.20 0.03 0 0 0 0 0 0 0 0 0 0 0.0 0
979.17 0.12 3.2e-3 1.1e-3 0 0 0 0 0 0 0 0 0 0 0.0 0
984.88 0.10 0.029 0.003 0 0 0 0 0 0 0 0 0 0 0.0 0
1001.77 0.07 0.29 0.02 0 0 0 0 0 0 0 0 0 0 0.0 0
1041.44 0.11 0.077 0.009 0 0 0 0 0 0 0 0 0 0 0.0 0
1059.80 0.12 0.094 0.008 0 0 0 0 0 0 0 0 0 0 0.0 0
1090.47 0.07 0.035 0.004 0 0 0 0 0 0 0 0 0 0 0.0 0
1092.25 0.11 0.089 0.010 0 0 0 0 0 0 0 0 0 0 0.0 0
1103.17 0.09 0.040 0.006 0 0 0 0 0 0 0 0 0 0 0.0 0
1118.62 0.09 0.066 0.010 0 0 0 0 0 0 0 0 0 0 0.0 0
1133.05 0.15 3.9e-3 1.5e-3 0 0 0 0 0 0 0 0 0 0 0.0 0
1137.71 0.17 3.1e-2 2.4e-3 0 0 0 0 0 0 0 0 0 0 0.0 0
1148.89 0.20 0.26 0.02 0 0 0 0 0 0 0 0 0 0 0.0 0
1157.99 0.12 0.16 0.02 0 0 0 0 0 0 0 0 0 0 0.0 0
1188.93 0.06 0.24 0.03 0 0 0 0 0 0 0 0 0 0 0.0 0
1190.6 2.0 0.059 0.008 0 0 0 0 0 0 0 0 0 0 0.0 0 1222.81 0.07 5.0e-4 2.0e-4 0 0 0 0 0 0 0 0 0 0 0.0 0 1233.07 0.12 0.25 0.07 0 0 0 0 0 0 0 0 0 0 0.0 0 1241.75 0.10 2.5e-3 6.0e-4 0 0 0 0 0 0 0 0 0 0 0.0 0 1251.11 0.25 0.15 0.03 0 0 0 0 0 0 0 0 0 0 0.0 0 1254.8 0.2 0.60 0.09 0 0 0 0 0 0 0 0 0 0 0.0 0 1285.10 0.11 1.6e-2 4.8e-3 0 0 0 0 0 0 0 0 0 0 0.0 0 1289.61 0.13 0.074 0.006 0 0 0 0 0 0 0 0 0 0 0.0 0 1298.35 0.11 4.7e-2 8.2e-3 0 0 0 0 0 0 0 0 0 0 0.0 0 1316.23 0.11 0.062 0.012 0 0 0 0 0 0 0 0 0 0 0.0 0 1321.07 0.13 0.12 0.01 0 0 0 0 0 0 0 0 0 0 0.0 0 1341.4 0.4 5.2e-4 1.3e-4 0 0 0 0 0 0 0 0 0 0 0.0 0 1455.39 0.11 1.8e-2 6.0e-3 0 0 0 0 0 0 0 0 0 0 0.0 0 1465.80 0.08 0.058 0.007 0 0 0 0 0 0 0 0 0 0 0.0 0 1466.6 2.0 1.9e-3 5.6e-4 0 0 0 0 0 0 0 0 0 0 0.0 0 1507.18 0.19 7.2e-2 1.8e-2 0 0 0 0 0 0 0 0 0 0 0.0 0 1518.21 0.16 8.7e-3 2.3e-3 0 0 0 0 0 0 0 0 0 0 0.0 0 1525.16 0.09 0.087 0.011 0 0 0 0 0 0 0 0 0 0 0.0 0 1558.6 0.3 0.065 0.022 0 0 0 0 0 0 0 0 0 0 0.0 0 1567.84 0.16 0.28 0.02 0 0 0 0 0 0 0 0 0 0 0.0 0 1573.2 0.3 0.072 0.014 0 0 0 0 0 0 0 0 0 0 0.0 0 1584.72 0.10 0.70 0.06 0 0 0 0 0 0 0 0 0 0 0.0 0 1614.82 0.15 2.3e-3 1.2e-3 0 0 0 0 0 0 0 0 0 0 0.0 0 1632.34 0.09 0.25 0.03 0 0 0 0 0 0 0 0 0 0 0.0 0 1646.90 0.08 0.32 0.04 0 0 0 0 0 0 0 0 0 0 0.0 0 1675.6 2.0 0.44 0.04 0 0 0 0 0 0 0 0 0 0 0.0 0 1694.18 0.09 7.4e-2 1.3e-2 0 0 0 0 0 0 0 0 0 0 0.0 0 1701.63 0.10 0.082 0.013 0 0 0 0 0 0 0 0 0 0 0.0 0 1704.73 0.08 0.012 0.004 0 0 0 0 0 0 0 0 0 0 0.0 0 1729.3 0.3 1.2e-2 1.9e-3 0 0 0 0 0 0 0 0 0 0 0.0 0 1740.19 0.11 2.5e-2 5.6e-3 0 0 0 0 0 0 0 0 0 0 0.0 0 1757.6 2.0 0.17 0.03 0 0 0 0 0 0 0 0 0 0 0.0 0 1760.99 0.10 7.6e-3 2.4e-3 0 0 0 0 0 0 0 0 0 0 0.0 0 * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * upper limits of resonances note : enter partial width upper limit by chosing non - zero value for pt , where pt=<theta^2 > for particles and ... note : ...pt=<b > for g - rays [ enter : " upper_limit 0.0 " ] ; for each resonance : # upper limits < # open channels ! ecm decm jr g1 dg1 l1 pt g2 dg2 l2 pt g3 dg3 l3 pt exf int 129.99 0.12 5 4.6e-11 0.0 2 0.0045 0.1 0.001 1 0 0 0 0 0 0.0 0 * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * interference between resonances [ numerical integration only ] note : + for positive , - for negative interference ; + - if interference sign is unknown ecm decm jr g1 dg1 l1 pt g2 dg2 l2 pt g3 dg3 l3 pt exf !+ - 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * comments : 1 .see comments for 25mg(p , g)26al^t .isomeric state branching ratios are adopted from endt and rolfs 1987 ( uncertainties are on the order of 1% ) , except for er=189 , 244 and 292 kev for which we adopt the values of iliadis 1989 . .... .... 
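for orientation , each narrow resonance tabulated above contributes to the total rate through its ecm ( kev ) and wg ( ev ) entries alone . the following sketch is an illustration only , not the monte carlo procedure actually applied to these inputs : it evaluates the standard narrow - resonance expression n_a<sigma v> = 1.5399e11 ( mu t9 )^(-3/2 ) wg exp( -11.605 er / t9 ) cm^3 mol^-1 s^-1 , with er and wg in mev and mu the reduced mass formed from the aproj and atarget header values ( the function name and the example temperature are ours ) .
....
# python sketch ( illustration only ; not part of the input files or of the evaluation code )
import math

def narrow_resonance_rate(er_kev, wg_ev, aproj, atarget, t9):
    # contribution of one narrow resonance to n_a<sigma v> in cm^3 mol^-1 s^-1
    mu = aproj * atarget / (aproj + atarget)   # reduced mass in u , from the header block
    er_mev = er_kev * 1.0e-3                   # ecm column : kev -> mev
    wg_mev = wg_ev * 1.0e-6                    # wg column : ev -> mev
    return 1.5399e11 * (mu * t9) ** -1.5 * wg_mev * math.exp(-11.605 * er_mev / t9)

# er = 189.49 kev , wg = 4.8e-7 ev from the 25mg(p , g)26al^g table , at t9 = 0.05
print(narrow_resonance_rate(189.49, 4.8e-7, 1.0078, 24.9858, 0.05))
....
summing such terms over all entries with a tabulated wg gives the narrow - resonance part of the rate ; entries carrying explicit partial widths and a nonzero int flag are presumably integrated numerically instead .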
26mg(p , g)27al * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * 1 !zproj 12 ! ztarget 0 ! zexitparticle ( = 0 when only 2 channels open ) 1.0078 !aproj 25.98259 !atarget 0 ! aexitparticle ( = 0 when only 2 channels open ) 0.5 ! jproj 0.0 !jtarget 0.0 ! jexitparticle ( = 0 when only 2 channels open ) 8271.05 ! projectile separation energy ( kev ) 0.0 !exit particle separation energy ( = 0 when only 2 channels open ) 1.25 !radius parameter r0 ( fm ) 2 ! gamma - ray channel number ( = 2 if ejectile is a g - ray ; =3 otherwise ) * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * 1.0! minimum energy for numerical integration ( kev ) 5000 !number of random samples ( > 5000 for better statistics ) 0 != 0 for rate output at all temperatures ; = nt for rate output at selected temperatures * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * nonresonant contribution s(kevb ) s'(b ) s''(b / kev ) fracerr cutoff energy ( kev ) 7.45e1 -1.62e-2 0.0 0.4 1000.0 0.0 0.0 0.0 0.0 0.0 * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * resonant contribution note : g1 = entrance channel , g2 = exit channel , g3 = spectator channel ! !ecm , exf in ( kev ) ; wg , gx in ( ev ) ! 
!note : if er<0 , theta^2=c2s*theta_sp^2 must be entered instead of entrance channel partial width ecm decm wg dwg jr g1 dg1 l1 g2 dg2 l2 g3 dg3 l3 exf int 52.7 1.0 0 0 2.5 7.4e-17 3.0e-17 2 0.1 0.01 1 0 0 0 0.0 0 104.7 1.0 0 0 0.5 1.0e-9 0.4e-9 1 0.1 0.01 1 0 0 0 0.0 0 149.7 1.0 8.0e-8 3.0e-8 0 0 0 0 0 0 0 0 0 0 0.0 0 219.0 1.2 5.0e-5 2.0e-5 0 0 0 0 0 0 0 0 0 0 0.0 0 266.0 1.0 1.0e-6 0.8e-6 0 0 0 0 0 0 0 0 0 0 0.0 0 281.25 0.09 7.0e-3 1.2e-3 0 0 0 0 0 0 0 0 0 0 0.0 0 325.9 0.1 0 0 1.5 0.46 0.08 1 0.195 0.018 1 0 0 0 0.0 1 437.0 0.1 0 0 0.5 3.40 0.71 0 0.91 0.07 1 0 0 0 0.0 1 444.9 0.5 0.035 0.011 0 0 0 0 0 0 0 0 0 0 0.0 0 460.5 0.4 0 0 3.5 1.2e-3 0.3e-3 3 0.19 0.03 1 0 0 0 0.0 1 481.9 0.5 0 0 2.5 7.7e-4 3.0e-4 2 1.05 0.13 1 0 0 0 0.0 1 502.5 0.5 0 0 2.5 3.3e-3 1.0e-3 2 3.7 0.3 1 0 0 0 0.0 1 625.6 0.5 0 0 2.5 0.021 0.006 2 0.8 0.2 1 0 0 0 0.0 1 633.4 0.8 0.02 0.006 0 0 0 0 0 0 0 0 0 0 0.0 0 637.5 0.5 0.37 0.09 0 0 0 0 0 0 0 0 0 0 0.0 0 691.79 0.05 0.29 0.05 0 0 0 0 0 0 0 0 0 0 0.0 0 779.30 0.05 0.325 0.043 0 0 0 0 0 0 0 0 0 0 0.0 0 780.00 0.05 0.6 0.1 0 0 0 0 0 0 0 0 0 0 0.0 0 808.77 0.06 1.05 0.16 0 0 0 0 0 0 0 0 0 0 0.0 0 918.62 0.05 0.9 0.2 0 0 0 0 0 0 0 0 0 0 0.0 0 944.72 0.05 0.48 0.14 0 0 0 0 0 0 0 0 0 0 0.0 0 964.4 0.8 1.5 0.45 0 0 0 0 0 0 0 0 0 0 0.0 0 968.0 0.8 0.3 0.1 0 0 0 0 0 0 0 0 0 0 0.0 0 1000.1 0.8 0.095 0.035 0 0 0 0 0 0 0 0 0 0 0.0 0 1002.8 0.8 0.45 0.20 0 0 0 0 0 0 0 0 0 0 0.0 0 1005.4 0.8 0 0 1.5 0.10e3 0.03e3 1 0.35 0.10 1 0 0 0 0.0 1 1036.7 0.9 0.13 0.04 0 0 0 0 0 0 0 0 0 0 0.0 0 1087.8 1.0 0.025 0.008 0 0 0 0 0 0 0 0 0 0 0.0 0 1118.5 0.9 0.215 0.055 0 0 0 0 0 0 0 0 0 0 0.0 0 1129.3 0.9 0 0 0.5 0.11e3 0.05e3 0 0.55 0.15 1 0 0 0 0.0 1 1202.9 0.8 0.55 0.15 0 0 0 0 0 0 0 0 0 0 0.0 0 1230.2 1.0 0.025 0.008 0 0 0 0 0 0 0 0 0 0 0.0 0 1240.1 0.9 1.65 0.40 0 0 0 0 0 0 0 0 0 0 0.0 0 1329.3 0.9 0 0 1.5 1.4 0.3 2 11 2 1 0 0 0 0.0 1 1357.1 0.9 0 0 0.5 2.8e3 0.1e3 1 1.80 0.45 1 0 0 0 844.0 1 1363.1 0.9 1.5 0.4 0 0 0 0 0 0 0 0 0 0 0.0 0 1393.3 0.8 1.15 0.25 0 0 0 0 0 0 0 0 0 0 0.0 0 1444.5 0.8 0.10 0.04 0 0 0 0 0 0 0 0 0 0 0.0 0 1491.4 0.8 2.9 0.8 0 0 0 0 0 0 0 0 0 0 0.0 0 1524.9 0.9 0 0 3.5 0.046 0.011 4 4 3 1 0 0 0 0.0 1 1550.2 0.9 0 0 1.5 1.3 0.3 2 17 5 1 0 0 0 0.0 1 1563.0 1.0 0 0 0.5 3.0e3 0.9e3 1 0.45 0.15 1 0 0 0 0.0 1 1568.3 1.0 0 0 2.5 0.09 0.03 2 0.93 0.18 1 0 0 0 0.0 1 1575.3 1.0 0.8 0.2 0 0 0 0 0 0 0 0 0 0 0.0 0 1650.6 0.9 0 0 1.5 1.8e3 0.5e3 1 0.83 0.25 1 0 0 0 844.0 1 1659.2 0.9 0 0 0.5 1.4e3 0.4e3 1 1.4 0.4 1 0 0 0 0.0 1 1670.0 0.9 0.55 0.15 0 0 0 0 0 0 0 0 0 0 0.0 0 1681.7 1.6 0.035 0.011 0 0 0 0 0 0 0 0 0 0 0.0 0 1684.2 1.0 0.55 0.20 0 0 0 0 0 0 0 0 0 0 0.0 0 1689.0 0.9 1.0 0.5 0 0 0 0 0 0 0 0 0 0 0.0 0 1691.5 0.9 3.1 1.0 0 0 0 0 0 0 0 0 0 0 0.0 0 1705.6 0.9 0.27 0.07 0 0 0 0 0 0 0 0 0 0 0.0 0 1719.5 0.9 2.2 0.6 0 0 0 0 0 0 0 0 0 0 0.0 0 1728.6 1.0 0.35 0.10 0 0 0 0 0 0 0 0 0 0 0.0 0 1753.2 0.9 1.45 0.35 0 0 0 0 0 0 0 0 0 0 0.0 0 1818.3 0.9 0 0 1.5 2.7e3 0.8e3 1 0.50 0.15 1 0 0 0 0.0 1 1821.7 0.9 1.65 0.60 0 0 0 0 0 0 0 0 0 0 0.0 0 1841.2 0.9 4.0 1.5 0 0 0 0 0 0 0 0 0 0 0.0 0 1850.0 1.0 0.32 0.08 0 0 0 0 0 0 0 0 0 0 0.0 0 1863.9 1.1 0.21 0.06 0 0 0 0 0 0 0 0 0 0 0.0 0 1893.5 0.9 5.15 0.45 0 0 0 0 0 0 0 0 0 0 0.0 0 1947.2 3.0 0 0 1.5 4.1e4 1.2e4 1 8.3 2.3 1 0 0 0 844.0 1 1972.2 0.9 10.5 3.2 0 0 0 0 0 0 0 0 0 0 0.0 0 1987.3 1.0 1.45 0.6 0 0 0 0 0 0 0 0 0 0 0.0 0 2015.6 2.0 0.95 0.25 0 0 0 0 0 0 0 0 0 0 0.0 0 2035.8 2.0 0.85 0.20 0 0 0 0 0 0 0 0 0 0 0.0 0 2047.3 2.0 0.26 0.07 0 0 0 0 0 0 0 0 0 0 0.0 0 2061.8 2.0 5.0 1.5 0 0 0 0 0 0 0 0 0 0 0.0 0 2066.6 2.0 2.5 0.9 0 0 
0 0 0 0 0 0 0 0 0.0 0 2068.5 2.0 0.65 0.20 0 0 0 0 0 0 0 0 0 0 0.0 0 2077.2 2.0 1.25 0.30 0 0 0 0 0 0 0 0 0 0 0.0 0 2093.6 2.0 0.60 0.15 0 0 0 0 0 0 0 0 0 0 0.0 0 2101.3 2.0 9.5 2.9 0 0 0 0 0 0 0 0 0 0 0.0 0 2137.9 2.0 4.5 1.4 0 0 0 0 0 0 0 0 0 0 0.0 0 2151.3 2.0 3.15 0.95 0 0 0 0 0 0 0 0 0 0 0.0 0 2187.9 2.0 0.85 0.26 0 0 0 0 0 0 0 0 0 0 0.0 0 2207.2 3.0 0.45 0.14 0 0 0 0 0 0 0 0 0 0 0.0 0 2209.1 3.0 4.5 1.4 0 0 0 0 0 0 0 0 0 0 0.0 0 2238.0 2.0 3.5 1.1 0 0 0 0 0 0 0 0 0 0 0.0 0 2247.6 2.0 0.085 0.026 0 0 0 0 0 0 0 0 0 0 0.0 0 2257.3 2.0 1.15 0.35 0 0 0 0 0 0 0 0 0 0 0.0 0 2284.2 2.0 4.25 1.28 0 0 0 0 0 0 0 0 0 0 0.0 0 2294.8 2.0 5.0 1.5 0 0 0 0 0 0 0 0 0 0 0.0 0 2316.9 2.0 12.0 3.6 0 0 0 0 0 0 0 0 0 0 0.0 0 2321.8 3.0 1.25 0.38 0 0 0 0 0 0 0 0 0 0 0.0 0 2327.6 3.0 1.25 0.38 0 0 0 0 0 0 0 0 0 0 0.0 0 2341.0 3.0 3.5 1.1 0 0 0 0 0 0 0 0 0 0 0.0 0 2354.5 2.0 1.95 0.59 0 0 0 0 0 0 0 0 0 0 0.0 0 2359.4 3.0 0.25 0.08 0 0 0 0 0 0 0 0 0 0 0.0 0 2363.0 3.0 0.15 0.05 0 0 0 0 0 0 0 0 0 0 0.0 0 2376.7 2.0 5.0 1.5 0 0 0 0 0 0 0 0 0 0 0.0 0 2403.6 2.0 7.0 2.1 0 0 0 0 0 0 0 0 0 0 0.0 0 2404.6 2.0 3.5 1.1 0 0 0 0 0 0 0 0 0 0 0.0 0 2420.9 2.0 2.5 0.8 0 0 0 0 0 0 0 0 0 0 0.0 0 2445.1 2.0 7.0 2.1 0 0 0 0 0 0 0 0 0 0 0.0 0 2451.8 2.0 1.1 0.3 0 0 0 0 0 0 0 0 0 0 0.0 0 2466.2 2.0 3.5 1.1 0 0 0 0 0 0 0 0 0 0 0.0 0 2479.7 2.0 2.5 0.8 0 0 0 0 0 0 0 0 0 0 0.0 0 2497.1 3.0 0.065 0.020 0 0 0 0 0 0 0 0 0 0 0.0 0 2506.7 3.0 0.095 0.029 0 0 0 0 0 0 0 0 0 0 0.0 0 2509.6 3.0 0.65 0.20 0 0 0 0 0 0 0 0 0 0 0.0 0 2511.5 3.0 0.35 0.11 0 0 0 0 0 0 0 0 0 0 0.0 0 2520.2 3.0 0.030 0.009 0 0 0 0 0 0 0 0 0 0 0.0 0 2532.7 2.0 1.2 0.4 0 0 0 0 0 0 0 0 0 0 0.0 0 2561.6 2.0 2.85 0.86 0 0 0 0 0 0 0 0 0 0 0.0 0 2565.4 3.0 0.95 0.29 0 0 0 0 0 0 0 0 0 0 0.0 0 2567.4 3.0 2.2 0.7 0 0 0 0 0 0 0 0 0 0 0.0 0 2593.4 2.0 2.2 0.7 0 0 0 0 0 0 0 0 0 0 0.0 0 2600.1 3.0 0.35 0.11 0 0 0 0 0 0 0 0 0 0 0.0 0 2628.9 3.0 0.15 0.05 0 0 0 0 0 0 0 0 0 0 0.0 0 2639.6 3.0 1.0 0.3 0 0 0 0 0 0 0 0 0 0 0.0 0 2651.1 3.0 1.6 0.5 0 0 0 0 0 0 0 0 0 0 0.0 0 2659.8 3.0 14.5 4.4 0 0 0 0 0 0 0 0 0 0 0.0 0 2667.5 3.0 0.55 0.16 0 0 0 0 0 0 0 0 0 0 0.0 0 2699.3 3.0 0.6 0.2 0 0 0 0 0 0 0 0 0 0 0.0 0 2702.2 3.0 3.0 0.9 0 0 0 0 0 0 0 0 0 0 0.0 0 2723.4 3.0 1.05 0.32 0 0 0 0 0 0 0 0 0 0 0.0 0 2732.0 3.0 2.2 0.7 0 0 0 0 0 0 0 0 0 0 0.0 0 2740.7 3.0 0.35 0.11 0 0 0 0 0 0 0 0 0 0 0.0 0 2801.4 3.0 4.0 1.2 0 0 0 0 0 0 0 0 0 0 0.0 0 2804.3 3.0 0.65 0.20 0 0 0 0 0 0 0 0 0 0 0.0 0 2806.2 3.0 0.15 0.05 0 0 0 0 0 0 0 0 0 0 0.0 0 2824.5 3.0 0.75 0.22 0 0 0 0 0 0 0 0 0 0 0.0 0 2830.3 4.0 5.5 1.7 0 0 0 0 0 0 0 0 0 0 0.0 0 2855.3 3.0 0.8 0.2 0 0 0 0 0 0 0 0 0 0 0.0 0 2866.9 3.0 0.25 0.08 0 0 0 0 0 0 0 0 0 0 0.0 0 * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * upper limits of resonances note : enter partial width upper limit by chosing non - zero value for pt , where pt=<theta^2 > for particles and ... note : ...pt=<b > for g - rays [ enter : " upper_limit 0.0 " ] ; for each resonance : # upper limits < # open channels ! 
ecm decm jr g1 dg1 l1 pt g2 dg2 l2 pt g3 dg3 l3 pt exf int
15.7 1.0 4.5 3.4e-40 0.0 5 0.0045 0.1 0.01 1 0 0 0 0 0 0.0 0
89.7 3.0 0.5 2.0e-10 0.0 0 0.0045 0.1 0.01 1 0 0 0 0 0 0.0 0
124.7 1.0 5.5 1.4e-13 0.0 5 0.0045 0.1 0.01 1 0 0 0 0 0 0.0 0
136.6 3.0 0.5 3.0e-8 0.0 0 0.0045 0.1 0.01 1 0 0 0 0 0 0.0 0
170.7 1.0 3.5 1.5e-8 0.0 3 0.0045 1.3 0.3 1 0 0 0 0 0 0.0 1
249.7 2.0 0.5 2.0e-6 0.0 0 0.0045 0.1 0.01 1 0 0 0 0 0 0.0 0
314.7 1.0 3.5 7.5e-6 0.0 3 0.0045 0.1 0.01 1 0 0 0 0 0 0.0 0
* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *
interference between resonances [ numerical integration only ]
note : + for positive , - for negative interference ; + - if interference sign is unknown
ecm decm jr g1 dg1 l1 pt g2 dg2 l2 pt g3 dg3 l3 pt exf
!+ -
0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 0.0 0 0 0.0
0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 0.0 0 0 0.0
* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *
comments : 1 . direct capture s - factor adopted from iliadis et al .
2 . estimate of gg=0.1 + -0.01 ev for some low - energy resonances is a rough guess ( inconsequential ) .
3 . for er=137 and 250 kev , assume s - wave resonances ( implying jp=1/2 + ) for upper limit contributions .
4 . er=53 kev : jp=5/2 + ( l=2 ) is tentative since jp=5/2- ( l=3 ) cannot be excluded based on proton transfer data .
5 . er=90 kev : state is weakly populated in proton transfer ; for upper limit contribution assume l=0 ( jp=1/2 + ) .
6 . er=105 kev : l=1 and l=2 fit proton transfer data almost equally well ; we adopt here tentatively l=1 ( jp=1/2- ) since it fits the data perhaps slightly better .
7 . er=326 , 437 kev : partial widths are calculated from resonance strengths and branching ratios ( gg / g adopted from champagne et al . )
.... ....
23al(p , g)24si
* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *
1 ! zproj
13 ! ztarget
0 ! zexitparticle ( = 0 when only 2 channels open )
1.0078 ! aproj
23.0073 ! atarget
0 ! aexitparticle ( = 0 when only 2 channels open )
0.5 ! jproj
2.5 ! jtarget
0.0 ! jexitparticle ( = 0 when only 2 channels open )
3304.0 ! projectile separation energy ( kev )
0.0 ! exit particle separation energy ( = 0 when only 2 channels open )
1.25 ! radius parameter r0 ( fm )
2 ! gamma - ray channel number ( = 2 if ejectile is a g - ray ; =3 otherwise )
* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *
1.0 !
minimum energy for numerical integration ( kev ) 7000 !number of random samples ( > 5000 for better statistics ) 0 != 0 for rate output at all temperatures ; = nt for rate output at selected temperatures * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * nonresonant contribution s(kevb ) s'(b ) s''(b / kev ) fracerr cutoff energy ( kev ) 4.64e0 -6.24e-4 0.0 0.5 2500.0 0.0 0.0 0.0 0.0 0.0 * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * resonant contribution note : g1 = entrance channel , g2 = exit channel , g3 = spectator channel ! !ecm , exf in ( kev ) ; wg , gx in ( ev ) ! !note : if er<0 , theta^2=c2s*theta_sp^2 must be entered instead of entrance channel partial width ecm decm wg dwg j g1 dg1 l1 g2 dg2 l2 g3 dg3 l3 exf int 137 29 0 0 2 9.3e-6 4.6e-6 0 1.7e-2 0.8e-2 1 0 0 0 0.0 1 746 150 0 0 4 5.8e-1 2.9e-1 2 7.2e-4 3.6e-4 2 0 0 0 0.0 1 866 150 0 0 0 2.9e1 1.4e1 2 3.4e-4 1.7e-4 2 0 0 0 0.0 1 1166 150 0 0 3 2.2e4 1.1e4 0 8.0e-3 4.0e-3 1 0 0 0 0.0 1 * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * upper limits of resonances note : enter partial width upper limit by chosing non - zero value for pt , where pt=<theta^2 > for particles and ... note : ...pt=<b > for g - rays [ enter : " upper_limit 0.0 " ] ; for each resonance : # upper limits < # open channels ! ecm decm jr g1 dg1 l1 pt g2 dg2 l2 pt g3 dg3 l3 pt exf int ! 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 0 * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * interference between resonances [ numerical integration only ] note : + for positive , - for negative interference ; + - if interference sign is unknown ecm decm jr g1 dg1 l1 pt g2 dg2 l2 pt g3 dg3 l3 pt exf !+ - 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * comments : 1 .present direct capture s - factor is calculated using shell - model spectroscopic factors from herndl et al .three higher lying resonances correspond to unobserved levels ; 150 kev energy uncertainty is a rough estimate only .3 . proton and gamma - ray partial widths are based on shell - model results of herndl et al .1995 ; assumed uncertainty amounts here to 50% ..... .... 24al(p , g)25si * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * 1 !zproj 13 !ztarget 0 !zexitparticle ( = 0 when only 2 channels open ) 1.0078 !aproj 23.9999 !atarget 0 ! aexitparticle ( = 0 when only 2 channels open ) 0.5 ! jproj 4.0 !jtarget 0.0 ! 
jexitparticle ( = 0 when only 2 channels open ) 3408.0 ! projectile separation energy ( kev ) 0.0 !exit particle separation energy ( = 0 when only 2 channels open ) 1.25 !radius parameter r0 ( fm ) 2 ! gamma - ray channel number ( = 2 if ejectile is a g - ray ; =3 otherwise ) * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * 1.0! minimum energy for numerical integration ( kev ) 7000 !number of random samples ( > 5000 for better statistics ) 0 != 0 for rate output at all temperatures ; = nt for rate output at selected temperatures * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * nonresonant contribution s(kevb ) s'(b ) s''(b / kev ) fracerr cutoff energy ( kev ) 2.7e1 0.0 0.0 0.5 5000.0 0.0 0.0 0.0 0.0 0.0 * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * resonant contribution note : g1 = entrance channel , g2 = exit channel , g3 = spectator channel ! !ecm , exf in ( kev ) ; wg , gx in ( ev ) ! !note : if er<0 , theta^2=c2s*theta_sp^2 must be entered instead of entrance channel partial width ecm decm wg dwg j g1 dg1 l1 g2 dg2 l2 g3 dg3 l3 exf int 190.0 100.0 0 0 2.5 5.7e-7 2.8e-7 2 4.0e-2 2.0e-2 1 0 0 0 0.0 1 410.0 22.0 0 0 4.5 7.5e-1 3.7e-1 0 1.3e-2 0.7e-2 1 0 0 0 0.0 1 500.0 100.0 0 0 3.5 2.38 1.19 0 1.5e-2 0.8e-2 1 0 0 0 0.0 1 510.0 100.0 0 0 2.5 0.1 0.05 2 7.2e-2 3.6e-2 1 0 0 0 0.0 1 730.0 100.0 0 0 2.5 5.3e-2 2.7e-2 2 1.2e-1 0.6e-1 1 0 0 0 0.0 1 * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * upper limits of resonances note : enter partial width upper limit by chosing non - zero value for pt , where pt=<theta^2 > for particles and ... note : ...pt=<b > for g - rays [ enter : " upper_limit 0.0 " ] ; for each resonance : # upper limits < # open channels ! ecm decm jr g1 dg1 l1 pt g2 dg2 l2 pt g3 dg3 l3 pt exf int ! 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 0 * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * interference between resonances [ numerical integration only ] note : + for positive , - for negative interference ; + - if interference sign is unknown ecm decm jr g1 dg1 l1 pt g2 dg2 l2 pt g3 dg3 l3 pt exf !+ - 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * comments : 1 .energy uncertainty of 100 kev for 190 , 500 , 510 and 730 kev resonances is a rough guess only . .... .... 
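the nonresonant block at the head of each listing parametrizes the direct - capture s - factor . assuming the three columns are the taylor coefficients s(0) [ kevb ] , s'(0) [ b ] and s''(0) [ b / kev ] , so that s(e) = s(0) + s'(0) e + 0.5 s''(0) e^2 up to the quoted cutoff energy , the cross section follows as sigma(e) = s(e) exp( -2 pi eta ) / e , where 2 pi eta = 31.29 zp zt sqrt( mu / e ) is the sommerfeld factor for e in kev . a minimal sketch ( the 0.5 in front of s'' is our reading of the column as a second derivative , and the function names are ours ) :
....
# python sketch ( illustration only ) : nonresonant cross section from the s - factor expansion
import math

def s_factor(e_kev, s0, s1, s2):
    # s(e) in kev b , built from the tabulated s(0) , s'(0) , s''(0)
    return s0 + s1 * e_kev + 0.5 * s2 * e_kev ** 2

def sigma_nonres(e_kev, s0, s1, s2, zp, zt, aproj, atarget):
    mu = aproj * atarget / (aproj + atarget)              # reduced mass in u
    two_pi_eta = 31.29 * zp * zt * math.sqrt(mu / e_kev)  # sommerfeld factor , e in kev
    return s_factor(e_kev, s0, s1, s2) / e_kev * math.exp(-two_pi_eta)   # in barn

# 24al(p , g)25si above : s(0) = 2.7e1 kevb , flat , cutoff 5000 kev ; evaluate at 500 kev
print(sigma_nonres(500.0, 2.7e1, 0.0, 0.0, 1, 13, 1.0078, 23.9999))
....
the fracerr column ( here 0.5 ) is presumably the fractional uncertainty attached to this nonresonant term when the rate samples are drawn .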
25al(p , g)26si * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * 1 ! zproj 13 !ztarget 0 !zexitparticle ( = 0 when only 2 channels open ) 1.0078 !aproj 24.9904 !atarget 0 ! aexitparticle ( = 0 when only 2 channels open ) 0.5 !jproj 2.5 !jtarget 0.0 ! jexitparticle ( = 0 when only 2 channels open ) 5513.7 ! projectile separation energy ( kev ) 0.0 !exit particle separation energy ( = 0 when only 2 channels open ) 1.25 !radius parameter r0 ( fm ) 2 ! gamma - ray channel number ( = 2 if ejectile is a g - ray ; =3 otherwise ) * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * 1.0! minimum energy for numerical integration ( kev ) 5000 !number of random samples ( > 5000 for better statistics ) 0 != 0 for rate output at all temperatures ; = nt for rate output at selected temperatures * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * non - resonant contribution s(kevb ) s'(b ) s''(b / kev ) fracerr cutoff energy ( kev ) 2.7e1 0.0 0.0 0.4 1000.0 0.0 0.0 0.0 0.0 0.0 * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * resonant contribution note : g1 = entrance channel , g2 = exit channel , g3 = spectator channel ! !ecm , exf in ( kev ) ; wg , gx in ( ev ) ! !note : if er<0 , theta^2=c2s*theta_sp^2 must be entered instead of entrance channel partial width ecm decm wg dwg j g1 dg1 l1 g2 dg2 l2 g3 dg3 l3 exf int 163.3 1.8 0 0 1 7.8e-9 3.1e-9 2 0.11 0.055 1 0 0 0 1808.7 1 407.0 2.6 0 0 3 3.9 1.6 0 0.10 0.05 1 0 0 0 4350.0 1 438.3 4.0 0 0 0 2.3e-2 0.9e-2 2 0.0046 0.0023 1 0 0 0 1808.7 1 806.3 4.0 0 0 1 4.6 1.8 2 0.11 0.055 1 0 0 0 2938.4 1 882.3 4.0 0 0 2 292 117 0 0.11 0.055 1 0 0 0 1808.7 1 965.3 4.0 0 0 4 3.2 1.3 2 0.017 0.0085 1 0 0 0 4900.3 1 * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * upper limits of resonances note : enter partial width upper limit by chosing non - zero value for pt , where pt=<theta^2 > for particles and ... note : ...pt=<b > for g - rays [ enter : " upper_limit 0.0 " ] ; for each resonance : # upper limits < # open channels ! ecm decm jr g1 dg1 l1 pt g2 dg2 l2 pt g3 dg3 l3 pt exf int ! 
0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 0 * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * interference between resonances [ numerical integration only ] note : + for positive , - for negative interference ; + - if interference sign is unknown ecm decm jr g1 dg1 l1 pt g2 dg2 l2 pt g3 dg3 l3 pt exf !+ - 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * comments : 1 .excitation energies of first three resonances from wrede 2009 ; for last three resonances from column 1 of tab .i in parpottas et al .2004 , but we added an average value of 8 kev , by which the excitation energies of parpottas et al . seem to be too small ( see comments in wrede 2009 ) .2 . unlike wrede 2009, we prefer to calculate all resonance energies from excitation energies ; in particular , we do not use the proton energy from the beta - delayed proton decay of 26p since it is less precise than the value derived from ex .3 . for the last three resonances the jp assignments are uncertain ; note that the values of 2 + , 2 + , 0 + ( see column 6 of tab .i in parpottas et al .2004 ) are inconsistent both with the known level scheme of the 26 mg mirror and the calculated level scheme from the shell model .all spectroscopic factors and gamma - ray partial widths are adopted from the shell model ( iliadis et al .the final states for primary gamma - ray transitions are adopted from the 26 mg mirror decay ( endt 1990 ) ..... .... 26al^g(p , g)27si * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * 1 !zproj 13 !ztarget 0 !zexitparticle ( = 0 when only 2 channels open ) 1.0078 !aproj 25.986891 !atarget 0 ! aexitparticle ( = 0 when only 2 channels open ) 0.5 ! jproj 5.0 !jtarget 0.0 ! jexitparticle ( = 0 when only 2 channels open ) 7462.96 !projectile separation energy 0.0 !exit particle separation energy ( = 0 when only 2 channels open ) 1.25 !radius parameter r0 ( fm ) 2 ! gamma - ray channel number ( = 2 if ejectile is a g - ray ; =3 otherwise ) * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * 1.0! 
minimum energy for numerical integration ( kev ) 20000 !number of random samples ( > 5000 for better statistics ) 0 != 0 for rate output at all temperatures ; = nt for rate output at selected temperatures * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * non - resonant contribution s(kevb ) s'(b ) s''(b / kev ) fracerr cutoff energy ( kev ) 8.0e1 0.0 0.0 0.4 2000.0 0.0 0.0 0.0 0.0 0.0 * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * resonant contribution note : g1 = entrance channel , g2 = exit channel , g3 = spectator channel ! !ecm , exf in ( kev ) ; wg , gx in ( ev ) ! !note : if er<0 , theta^2=c2s*theta_sp^2 must be entered instead of entrance channel partial width ecm decm wg dwg jr g1 dg1 l1 g2 dg2 l2 g3 dg3 l3 exf int 188.6 0.4 4.7e-5 0.7e-5 0 0 0 0 0 0 0 0 0 0 0 0 241.3 0.3 1.0e-5 0.5e-5 0 0 0 0 0 0 0 0 0 0 0 0 276.3 0.4 2.9e-3 0.3e-3 0 0 0 0 0 0 0 0 0 0 0 0 368.5 0.4 6.9e-2 0.7e-2 0 0 0 0 0 0 0 0 0 0 0 0 693.6 0.4 6.7e-2 0.9e-2 0 0 0 0 0 0 0 0 0 0 0 0 702.1 0.2 3.3e-2 0.5e-2 0 0 0 0 0 0 0 0 0 0 0 0 741.8 0.2 2.1e-1 0.3e-1 0 0 0 0 0 0 0 0 0 0 0 0 763.3 0.4 4.6e-2 0.7e-2 0 0 0 0 0 0 0 0 0 0 0 0 815.7 1.0 3.5e-3 0.6e-3 0 0 0 0 0 0 0 0 0 0 0 0 827.1 0.4 5.2e-2 0.7e-2 0 0 0 0 0 0 0 0 0 0 0 0 843.4 0.4 1.0e-1 0.2e-1 0 0 0 0 0 0 0 0 0 0 0 0 883.1 0.2 3.0e-2 0.5e-2 0 0 0 0 0 0 0 0 0 0 0 0 893.3 0.2 3.8e-2 0.8e-2 0 0 0 0 0 0 0 0 0 0 0 0 894.8 0.2 8.8e-2 1.4e-2 0 0 0 0 0 0 0 0 0 0 0 0 * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * upper limits of resonances note : enter partial width upper limit by chosing non - zero value for pt , where pt=<theta^2 > for particles and ... note : ...pt=<b > for g - rays [ enter : " upper_limit 0.0 " ] ; for each resonance : # upper limits < # open channels ! ecm decmjr g1 dg1 l1 pt g2 dg2 l2 pt g3 dg3 l3 pt exf int 6.0 0.6 4.5 1.31e-61 0.0 0 0.0045 0.2 0.1 1 0 0 0 0 0 0.0 0 68.3 0.7 2.5 6.02e-14 0.0 2 0.0045 0.2 0.1 1 0 0 0 0 0 0.0 0 94.0 3.0 4.5 1.55e-8 0.0 0 0.0045 0.2 0.1 1 0 0 0 0 0 0.0 0 126.7 0.8 4.5 9.41e-9 0.0 0 0.0045 0.2 0.1 1 0 0 0 0 0 0.0 0 230.8 0.9 2.5 1.70e-6 0.0 2 0.0045 0.2 0.1 1 0 0 0 0 0 0.0 0 * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * interference between resonances [ numerical integration only ] note : + for positive , - for negative interference ; + - if interference sign is unknown ecm decm jr g1 dg1 l1 pt g2 dg2 l2 pt g3 dg3 l3 pt exf !+ - 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * comments : 1 .gg=0.2 ev for upper limit resonances is a guess ( not important since gp<<gg ) . 
2 . jp=4.5 for upper limit resonances assumed if no other information was available ( values not uniquely known ; assume s - wave resonances ) .
3 . kev : with few exceptions , resonance energies are calculated from excitation energies of lotay et al . 2009 .
4 . for proton partial widths of upper limit resonances c2s=1 assumed , with two exceptions : ( i ) c2s<0.002 for er=127 kev ( from vogelaar et al . 1996 ) ; ( ii ) for er=231 kev the proton width upper limit is derived from the measured upper limit of the resonance strength ( vogelaar 1989 ) .
5 . er=68 kev : according to lotay et al . 2009 , the smallest possible orbital angular momentum is l=2 .
.... ....
27al(p , g)28si
* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *
1 ! zproj
13 ! ztarget
2 ! zexitparticle ( = 0 when only 2 channels open )
1.0078 ! aproj
26.9815 ! atarget
4.0026 ! aexitparticle ( = 0 when only 2 channels open )
0.5 ! jproj
2.5 ! jtarget
0.0 ! jexitparticle ( = 0 when only 2 channels open )
11585.11 ! projectile separation energy ( kev )
9984.14 ! exit particle separation energy ( = 0 when only 2 channels open )
1.25 ! radius parameter r0 ( fm )
2 ! gamma - ray channel number ( = 2 if ejectile is a g - ray ; =3 otherwise )
* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *
1.0 ! minimum energy for numerical integration ( kev )
8000 ! number of random samples ( > 5000 for better statistics )
0 ! = 0 for rate output at all temperatures ; = nt for rate output at selected temperatures
* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *
nonresonant contribution
s(kevb ) s'(b ) s''(b / kev ) fracerr cutoff energy ( kev )
1.01e2 -3.42e-2 2.74e-5 0.4 1800.0
0.0 0.0 0.0 0.0 0.0
* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *
resonant contribution
note : g1 = entrance channel , g2 = exit channel , g3 = spectator channel !
!ecm , exf in ( kev ) ; wg , gx in ( ev ) !
!note : if er<0 , theta^2=c2s*theta_sp^2 must be entered instead of entrance channel partial width ecm decm wg dwg jr g1 dg1 l1 g2 dg2 l2 g3 dg3 l3 exf int 195.5 0.9 9.2e-6 1.3e-6 0 0 0 0 0 0 0 0 0 0 0.0 0 214.7 0.4 4.2e-5 0.3e-5 0 0 0 0 0 0 0 0 0 0 0.0 0 282.1 0.4 2.33e-4 0.13e-4 0 0 0 0 0 0 0 0 0 0 0.0 0 314.8 0.4 1.8e-3 0.1e-3 0 0 0 0 0 0 0 0 0 0 0.0 0 390.9 0.3 8.63e-3 0.52e-3 0 0 0 0 0 0 0 0 0 0 0.0 0 430.6 0.5 1.50e-3 1.25e-4 0 0 0 0 0 0 0 0 0 0 0.0 0 486.72 0.07 0.0258 0.0033 0 0 0 0 0 0 0 0 0 0 0.0 0 488.15 0.07 0.037 0.005 0 0 0 0 0 0 0 0 0 0 0.0 0 589.44 0.04 5.3e-36.4e-4 0 0 0 0 0 0 0 0 0 0 0.0 0 609.47 0.04 0.264 0.016 0 0 0 0 0 0 0 0 0 0 0.0 0 631.08 0.04 0.110 0.009 0 0 0 0 0 0 0 0 0 0 0.0 0 654.84 0.04 0.0527 0.0055 0 0 0 0 0 0 0 0 0 0 0.0 0 705.06 0.04 0.129 0.007 0 0 0 0 0 0 0 0 0 0 0.0 0 709.97 0.04 0.159 0.014 0 0 0 0 0 0 0 0 0 0 0.0 0 716.21 0.04 0.021 0.002 0 0 0 0 0 0 0 0 0 0 0.0 0 733.01 0.04 0.150 0.010 0 0 0 0 0 0 0 0 0 0 0.0 0 739.59 0.04 0.190 0.015 0 0 0 0 0 0 0 0 0 0 0.0 0 745.81 0.04 0.40 0.04 0 0 0 0 0 0 0 0 0 0 0.0 0 855.85 0.05 0.012 0.001 0 0 0 0 0 0 0 0 0 0 0.0 0 889.77 0.05 0.140 0.013 0 0 0 0 0 0 0 0 0 0 0.0 0 903.55 0.05 0.183 0.017 0 0 0 0 0 0 0 0 0 0 0.0 0 956.08 0.02 1.91 0.11 0 0 0 0 0 0 0 0 0 0 0.0 0 965.89 0.05 3.3e-2 0.8e-2 0 0 0 0 0 0 0 0 0 0 0.0 0 988.41 0.05 0.35 0.03 0 0 0 0 0 0 0 0 0 0 0.0 0 1050.55 0.06 0.080 0.023 0 0 0 0 0 0 0 0 0 0 0.0 0 1057.80 0.06 0.043 0.012 0 0 0 0 0 0 0 0 0 0 0.0 0 1078.40 0.06 0.80 0.06 0 0 0 0 0 0 0 0 0 0 0.0 0 1129.70 0.06 0.093 0.027 0 0 0 0 0 0 0 0 0 0 0.0 0 1140.88 0.08 0.26 0.04 0 0 0 0 0 0 0 0 0 0 0.0 0 1157.2 0.5 1.02 0.14 0 0 0 0 0 0 0 0 0 0 0.0 0 1169.45 0.06 0.49 0.14 0 0 0 0 0 0 0 0 0 0 0.0 0 1217.33 0.07 0.57 0.05 0 0 0 0 0 0 0 0 0 0 0.0 0 1231.40 0.08 0.056 0.016 0 0 0 0 0 0 0 0 0 0 0.0 0 1269.77 0.07 0.71 0.12 0 0 0 0 0 0 0 0 0 0 0.0 0 1281.14 0.07 0.54 0.09 0 0 0 0 0 0 0 0 0 0 0.0 0 1315.02 0.07 0.85 0.08 0 0 0 0 0 0 0 0 0 0 0.0 0 1331.91 0.07 4.6 0.4 0 0 0 0 0 0 0 0 0 0 0.0 0 1338.66 0.3 1.83 0.16 0 0 0 0 0 0 0 0 0 0 0.0 0 1388.8 0.3 1.4e-2 0.5e-2 0 0 0 0 0 0 0 0 0 0 0.0 0 1404.6 0.2 0.166 0.016 0 0 0 0 0 0 0 0 0 0 0.0 0 1448.10 0.08 0.067 0.007 0 0 0 0 0 0 0 0 0 0 0.0 0 1465.0 0.2 1.24 0.13 0 0 0 0 0 0 0 0 0 0 0.0 0 1508.70 0.08 0.035 0.008 0 0 0 0 0 0 0 0 0 0 0.0 0 1520.9 1.0 0.204 0.022 0 0 0 0 0 0 0 0 0 0 0.0 0 1530.39 0.08 0.81 0.13 0 0 0 0 0 0 0 0 0 0 0.0 0 1532.3 1.0 8.3e-2 1.7e-2 0 0 0 0 0 0 0 0 0 0 0.0 0 1587.87 0.08 0.038 0.011 0 0 0 0 0 0 0 0 0 0 0.0 0 1603.2 0.5 1.58 0.23 0 0 0 0 0 0 0 0 0 0 0.0 0 1604.5 0.2 0.071 0.012 0 0 0 0 0 0 0 0 0 0 0.0 0 1619.17 0.10 0.33 0.09 0 0 0 0 0 0 0 0 0 0 0.0 0 1623.01 0.13 0.011 0.003 0 0 0 0 0 0 0 0 0 0 0.0 0 1644.3 0.5 0.057 0.004 0 0 0 0 0 0 0 0 0 0 0.0 0 1661.4 0.6 0.088 0.014 0 0 0 0 0 0 0 0 0 0 0.0 0 1662.2 0.6 1.0 0.3 0 0 0 0 0 0 0 0 0 0 0.0 0 1686.1 0.5 0.28 0.04 0 0 0 0 0 0 0 0 0 0 0.0 0 1732.8 0.3 0.073 0.023 0 0 0 0 0 0 0 0 0 0 0.0 0 1735.02 0.09 1.15 0.16 0 0 0 0 0 0 0 0 0 0 0.0 0 1775.2 0.5 0.153 0.018 0 0 0 0 0 0 0 0 0 0 0.0 0 1829.9 0.5 0.021 0.007 0 0 0 0 0 0 0 0 0 0 0.0 0 1838.8 0.5 0.30 0.03 0 0 0 0 0 0 0 0 0 0 0.0 0 1839.9 0.4 0.47 0.17 0 0 0 0 0 0 0 0 0 0 0.0 0 1893.1 0.5 0.159 0.023 0 0 0 0 0 0 0 0 0 0 0.0 0 1898.2 0.5 0.54 0.10 0 0 0 0 0 0 0 0 0 0 0.0 0 1906.3 0.6 0.52 0.07 0 0 0 0 0 0 0 0 0 0 0.0 0 1961.1 0.6 0.12 0.03 0 0 0 0 0 0 0 0 0 0 0.0 0 1971.5 0.03 1.42 0.25 0 0 0 0 0 0 0 0 0 0 0.0 0 1983.5 0.6 0.017 0.004 0 0 0 0 0 0 0 0 0 0 0.0 0 1997.9 0.6 0.032 0.007 0 0 0 0 0 0 0 0 0 0 0.0 0 2026.0 0.8 0.022 0.005 0 0 0 0 0 0 0 0 0 
0 0.0 0 2030.6 0.8 0.042 0.010 0 0 0 0 0 0 0 0 0 0 0.0 0 2050.7 0.7 0.083 0.021 0 0 0 0 0 0 0 0 0 0 0.0 0 2054.8 1.0 0.013 0.003 0 0 0 0 0 0 0 0 0 0 0.0 0 2077.7 0.7 0.042 0.010 0 0 0 0 0 0 0 0 0 0 0.0 0 2082.5 0.5 0.26 0.05 0 0 0 0 0 0 0 0 0 0 0.0 0 2093.1 0.7 0.053 0.013 0 0 0 0 0 0 0 0 0 0 0.0 0 2100.8 0.5 0.11 0.03 0 0 0 0 0 0 0 0 0 0 0.0 0 2121.2 0.6 0.73 0.16 0 0 0 0 0 0 0 0 0 0 0.0 0 2121.8 0.9 1.92 0.42 0 0 0 0 0 0 0 0 0 0 0.0 0 2125.6 1.0 0.048 0.009 0 0 0 0 0 0 0 0 0 0 0.0 0 2126.3 0.5 0.775 0.194 0 0 0 0 0 0 0 0 0 0 0.0 0 2220.3 0.8 0.13 0.03 0 0 0 0 0 0 0 0 0 0 0.0 0 2228.7 1.0 0.56 0.11 0 0 0 0 0 0 0 0 0 0 0.0 0 2245.1 0.9 0.021 0.005 0 0 0 0 0 0 0 0 0 0 0.0 0 2275.0 1.5 0.45 0.11 0 0 0 0 0 0 0 0 0 0 0.0 0 2288.4 1.2 2.2 0.5 0 0 0 0 0 0 0 0 0 0 0.0 0 2316.1 1.1 0.33 0.08 0 0 0 0 0 0 0 0 0 0 0.0 0 2355.3 1.0 0.25 0.06 0 0 0 0 0 0 0 0 0 0 0.0 0 2382.7 0.8 0.28 0.06 0 0 0 0 0 0 0 0 0 0 0.0 0 2386.8 0.8 0.66 0.13 0 0 0 0 0 0 0 0 0 0 0.0 0 2394.4 0.8 2.25 0.42 0 0 0 0 0 0 0 0 0 0 0.0 0 2398.0 1.0 0.12 0.03 0 0 0 0 0 0 0 0 0 0 0.0 0 2427.2 1.0 1.33 0.25 0 0 0 0 0 0 0 0 0 0 0.0 0 2579.3 1.0 0.60 0.12 0 0 0 0 0 0 0 0 0 0 0.0 0 2614.2 1.0 1.2 0.3 0 0 0 0 0 0 0 0 0 0 0.0 0 2623.2 1.0 0.83 0.17 0 0 0 0 0 0 0 0 0 0 0.0 0 2627.7 1.0 0.45 0.08 0 0 0 0 0 0 0 0 0 0 0.0 0 2642.5 3.0 0.15 0.03 0 0 0 0 0 0 0 0 0 0 0.0 0 2660.8 1.0 0.048 0.009 0 0 0 0 0 0 0 0 0 0 0.0 0 2747.3 1.0 0.108 0.025 0 0 0 0 0 0 0 0 0 0 0.0 0 2966.1 1.0 0.20 0.04 0 0 0 0 0 0 0 0 0 0 0.0 0 2970.2 1.0 0.33 0.07 0 0 0 0 0 0 0 0 0 0 0.0 0 2987.6 1.0 0.56 0.11 0 0 0 0 0 0 0 0 0 0 0.0 0 3048.9 1.0 0.16 0.03 0 0 0 0 0 0 0 0 0 0 0.0 0 3218.4 1.0 0.36 0.08 0 0 0 0 0 0 0 0 0 0 0.0 0 3542.8 1.0 2.92 0.58 0 0 0 0 0 0 0 0 0 0 0.0 0 2655.3 1.0 0.61 0.13 0 0 0 0 0 0 0 0 0 0 0.0 0 3818.4 1.0 0.28 0.06 0 0 0 0 0 0 0 0 0 0 0.0 0 * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * upper limits of resonances note : enter partial width upper limit by chosing non - zero value for pt , where pt=<theta^2 > for particles and ... note : ...pt=<b > for g - rays [ enter : " upper_limit 0.0 " ] ; for each resonance : # upper limits < # open channels ! ecm decmjr g1 dg1 l1 pt g2 dg2 l2 pt g3 dg3 l3 pt exf int 71.5 0.5 2 7.4e-14 0.0 0 0.0045 0.035 0.009 1 0 0.14 0.07 2 0 0.0 1 84.3 0.4 1 2.6e-12 0.0 1 0.0045 0.275 0.090 1 0 0.183 0.050 1 0 0.0 1 * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * interference between resonances [ numerical integration only ] note : + for positive , - for negative interference ; + - if interference sign is unknown ecm decm jr g1 dg1 l1 pt g2 dg2 l2 pt g3 dg3 l3 pt exf !+ - 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * comments : 1 .direct capture s - factor adopted from iliadis et al .2001 ( result agrees with measurement of harissopulos et al .2 . 
resonance energies from endt 1998 .resonance strengths adopted from chronidou et al .1999 and harissopulos et al .2000 , normalized to standard values listed in tab . 1 of iliadis et al . 2001 .4 . for er=71.5 kev: only upper limit can be obtained for weak l=0 component in presence of dominant l=2 proton transfer angular distribution .5 . for er=84.3 kev : partial widths adopted from iliadis et al . 2001 . 6 .the triplet er=193.4 , 193.5 , 195.5 kev ( ex=11778.5 , 11778.6 , 11780.2 kev ) can not be resolved in ( p , g ) , ( p , a ) or ( 3he , d ) work ; however , measured ( p , g ) strength for er=195.5 kev can be regarded as total strength of triplet . .... .... 27al(p ,a)24 mg * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * 1 ! zproj 13 !ztarget 2 ! zexitparticle ( = 0 when only 2 channels open ) 1.0078 ! aproj 26.9815 !atarget 4.0026 !aexitparticle ( = 0 when only 2 channels open ) 0.5 !jproj 2.5 !jtarget 0.0 ! jexitparticle ( = 0 when only 2 channels open ) 11585.11 ! projectile separation energy ( kev ) 9984.14 ! exit particle separation energy ( = 0 when only 2 channels open ) 1.25 ! radius parameter r0 ( fm ) 3 ! gamma - ray channel number ( = 2 if ejectile is a g - ray ; =3 otherwise ) * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * 1.0 ! minimum energy for numerical integration ( kev ) 5000 ! number of random samples ( > 5000 for better statistics ) 0 != 0 for rate output at all temperatures ; = nt for rate output at selected temperatures * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * nonresonant contribution s(kevb ) s'(b ) s''(b / kev ) fracerr cutoff energy ( kev ) 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * resonant contribution note : g1 = entrance channel , g2 = exit channel , g3 = spectator channel ! !ecm , exf in ( kev ) ; wg , gx in ( ev ) ! 
!note : if er<0 , theta^2=c2s*theta_sp^2 must be entered instead of entrance channel partial width
ecm decm wg dwg jr g1 dg1 l1 g2 dg2 l2 g3 dg3 l3 exf int
314.9 0.4 0 0 4 3.1e-3 1.2e-3 2 0.007 0.001 4 10 5 1 0.0 0
390.9 0.3 0 0 4 1.2e-2 0.5e-2 2 0.011 0.002 4 10 5 1 0.0 0
486.74 0.07 0 0 2 0.73 0.18 0 0.50 0.12 2 0.1 0.05 1 0.0 1
609.49 0.04 0.275 0.069 0 0 0 0 0 0 0 0 0 0 0.0 0
705.08 0.04 0.52 0.13 0 0 0 0 0 0 0 0 0 0 0.0 0
855.85 0.05 0.83 0.21 0 0 0 0 0 0 0 0 0 0 0.0 0
889.77 0.05 0.33 0.08 0 0 0 0 0 0 0 0 0 0 0.0 0
903.54 0.05 4.3 0.4 0 0 0 0 0 0 0 0 0 0 0.0 0
965.89 0.05 0.20 0.05 0 0 0 0 0 0 0 0 0 0 0.0 0
1129.70 0.06 0.010 0.003 0 0 0 0 0 0 0 0 0 0 0.0 0
1140.88 0.08 0 0 2 250.0 50.0 0 810.0 160.0 2 1.0 0.1 1 0.0 1
1157.2 0.5 1.08 0.27 0 0 0 0 0 0 0 0 0 0 0.0 0
1169.45 0.06 0.36 0.09 0 0 0 0 0 0 0 0 0 0 0.0 0
1217.33 0.07 0.15 0.04 0 0 0 0 0 0 0 0 0 0 0.0 0
1231.40 0.08 0.56 0.14 0 0 0 0 0 0 0 0 0 0 0.0 0
1269.77 0.07 3.3 0.7 0 0 0 0 0 0 0 0 0 0 0.0 0
1281.14 0.07 8.0e-3 2.1e-3 0 0 0 0 0 0 0 0 0 0 0.0 0
1316.7 0.2 0 0 2 960.0 190.0 0 500.0 100.0 2 1.0 0.1 1 0.0 1
1338.66 0.3 10.0 1.7 0 0 0 0 0 0 0 0 0 0 0.0 0
1388.8 0.3 0 0 1 250 50 1 1.5e3 0.2e3 1 1.0 0.1 1 0.0 1
1404.6 0.2 1.0e-3 2.5e-4 0 0 0 0 0 0 0 0 0 0 0.0 0
1448.10 0.08 3.0e-3 7.5e-4 0 0 0 0 0 0 0 0 0 0 0.0 0
1508.70 0.08 3.8 0.8 0 0 0 0 0 0 0 0 0 0 0.0 0
1519.4 1.0 12.5 2.5 0 0 0 0 0 0 0 0 0 0 0.0 0
1530.39 0.08 7.5e-3 1.9e-3 0 0 0 0 0 0 0 0 0 0 0.0 0
1587.87 0.08 30.0 6.0 0 0 0 0 0 0 0 0 0 0 0.0 0
1603.2 0.5 2.5 0.6 0 0 0 0 0 0 0 0 0 0 0.0 0
1604.5 0.2 0.28 0.07 0 0 0 0 0 0 0 0 0 0 0.0 0
1619.17 0.10 3.2 0.8 0 0 0 0 0 0 0 0 0 0 0.0 0
1623.02 0.13 0.7 0.2 0 0 0 0 0 0 0 0 0 0 0.0 0
1644.3 0.5 4.0 1.0 0 0 0 0 0 0 0 0 0 0 0.0 0
1661.4 0.6 775. 75 0 0 0 0 0 0 0 0 0 0 0.0 0
1686.1 0.5 8.0e-2 2.1e-2 0 0 0 0 0 0 0 0 0 0 0.0 0
1735.02 0.09 0.10 0.03 0 0 0 0 0 0 0 0 0 0 0.0 0
1775.2 0.5 4.5 0.9 0 0 0 0 0 0 0 0 0 0 0.0 0
1829.9 0.5 26. 5.0 0 0 0 0 0 0 0 0 0 0 0.0 0
1838.8 0.5 183. 17.0 0 0 0 0 0 0 0 0 0 0 0.0 0
1898.2 0.5 16.8 3.3 0 0 0 0 0 0 0 0 0 0 0.0 0
1906.3 0.6 592. 58.3 0 0 0 0 0 0 0 0 0 0 0.0 0
1961.1 0.6 62.5 12.5 0 0 0 0 0 0 0 0 0 0 0.0 0
1971.53 0.03 3.3e-2 0.8e-2 0 0 0 0 0 0 0 0 0 0 0.0 0
1974.9 0.6 0.11 0.03 0 0 0 0 0 0 0 0 0 0 0.0 0
1983.5 0.6 6.6e-2 1.7e-2 0 0 0 0 0 0 0 0 0 0 0.0 0
2025.9 0.8 0.23 0.06 0 0 0 0 0 0 0 0 0 0 0.0 0
2030.6 0.8 0.46 0.11 0 0 0 0 0 0 0 0 0 0 0.0 0
2050.7 0.7 0.12 0.03 0 0 0 0 0 0 0 0 0 0 0.0 0
2054.3 1.0 242.0 25.0 0 0 0 0 0 0 0 0 0 0 0.0 0
2054.8 1.0 6.6e-2 1.7e-2 0 0 0 0 0 0 0 0 0 0 0.0 0
2082.5 0.5 40.8 8.3 0 0 0 0 0 0 0 0 0 0 0.0 0
2093.1 0.7 125.0 16.7 0 0 0 0 0 0 0 0 0 0 0.0 0
2100.8 0.5 0.15 0.04 0 0 0 0 0 0 0 0 0 0 0.0 0
2121.2 0.6 23.3 5.0 0 0 0 0 0 0 0 0 0 0 0.0 0
2126.3 0.5 0.19 0.05 0 0 0 0 0 0 0 0 0 0 0.0 0
2220.3 0.8 5.3 1.1 0 0 0 0 0 0 0 0 0 0 0.0 0
2227.3 0.8 233.0 25.0 0 0 0 0 0 0 0 0 0 0 0.0 0
2245.1 0.9 6.6e-2 1.7e-2 0 0 0 0 0 0 0 0 0 0 0.0 0
2275.0 1.5 525. 50.0 0 0 0 0 0 0 0 0 0 0 0.0 0
2288.4 1.2 916.7 167.0 0 0 0 0 0 0 0 0 0 0 0.0 0
2316.1 1.1 4.3 1.1 0 0 0 0 0 0 0 0 0 0 0.0 0
2355.3 1.0 525.0 50.0 0 0 0 0 0 0 0 0 0 0 0.0 0
2382.7 0.8 30.0 5.8 0 0 0 0 0 0 0 0 0 0 0.0 0
2386.8 0.8 250.0 25.0 0 0 0 0 0 0 0 0 0 0 0.0 0
2394.4 0.8 5.4e-2 1.4e-2 0 0 0 0 0 0 0 0 0 0 0.0 0
2397.7 1.1 26.6 5.0 0 0 0 0 0 0 0 0 0 0 0.0 0
2427.2 1.0 6.0 1.5 0 0 0 0 0 0 0 0 0 0 0.0 0
2480.4 3.0 575.0 50.7 0 0 0 0 0 0 0 0 0 0 0.0 0
2504.6 3.0 492.
50.0 0 0 0 0 0 0 0 0 0 0 0.0 0 2510.4 3.0 94.1 18.0 0 0 0 0 0 0 0 0 0 0 0.0 0 2518.5 1.0 30.8 5.8 0 0 0 0 0 0 0 0 0 0 0.0 0 2614.2 1.0 41.7 8.3 0 0 0 0 0 0 0 0 0 0 0.0 0 2642.5 3.0 233.0 25.0 0 0 0 0 0 0 0 0 0 0 0.0 0 2662.6 3.0 358.0 33.3 0 0 0 0 0 0 0 0 0 0 0.0 0 2709.0 3.0 75.0 7.1 0 0 0 0 0 0 0 0 0 0 0.0 0 2713.8 3.0 125.0 25.0 0 0 0 0 0 0 0 0 0 0 0.0 0 2721.5 3.0 13.3 2.5 0 0 0 0 0 0 0 0 0 0 0.0 0 2743.6 3.0 92.5 16.1 0 0 0 0 0 0 0 0 0 0 0.0 0 2772.5 3.0 410.0 41.8 0 0 0 0 0 0 0 0 0 0 0.0 0 2789.9 3.0 150.0 17.0 0 0 0 0 0 0 0 0 0 0 0.0 0 2808.5 1.0 75.0 16.7 0 0 0 0 0 0 0 0 0 0 0.0 0 2817.7 1.0 25.0 5.0 0 0 0 0 0 0 0 0 0 0 0.0 0 2908.5 3.0 1883.0 300.4 0 0 0 0 0 0 0 0 0 0 0.0 0 2908.5 3.0 166.0 16.7 0 0 0 0 0 0 0 0 0 0 0.0 0 2930.7 3.0 82.5 12.9 0 0 0 0 0 0 0 0 0 0 0.0 0 2938.4 3.0 433.0 41.7 0 0 0 0 0 0 0 0 0 0 0.0 0 2966.1 1.0 37.5 9.4 0 0 0 0 0 0 0 0 0 0 0.0 0 * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * upper limits of resonances note : enter partial width upper limit by chosing non - zero value for pt , where pt=<theta^2 > for particles and ... note : ...pt=<b > for g - rays [ enter : " upper_limit 0.0 " ] ; for each resonance : # upper limits < # open channels ! ecm decmjr g1 dg1 l1 pt g2 dg2 l2 pt g3 dg3 l3 pt exf int 71.5 0.5 2 7.4e-14 0.0 0 0.0045 0.14 0.07 2 0 0.035 0.009 1 0 0.0 1 84.3 0.4 1 2.6e-12 0.0 1 0.0045 0.183 0.050 1 0 0.275 0.090 1 0 0.0 1 193.5 0.7 2 7.5e-4 0.0 0 0.0045 0.006 0.002 2 0 5.0 2.5 1 0 0.0 0 214.7 0.4 3 9.7e-5 3.9e-5 1 0 0.001 0.0 3 0.010 0.50 0.25 1 0 0.0 0 282.1 0.4 4 6.4e-5 2.6e-5 2 0 0.0035 0.0 4 0.010 1.0 0.5 1 0 0.0 0 437.2 0.4 5 3.4e-5 0.0 3 0.0045 0.0073 0.0018 5 0 1.0 0.5 1 0 0.0 0 * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * interference between resonances [ numerical integration only ] note : + for positive , - for negative interference ; + - if interference sign is unknown ecm decm jr g1 dg1 l1 pt g2 dg2 l2 pt g3 dg3 l3 pt exf !+ - 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * comments : 1 .resonance energies from endt 1998. 2 . for measured resonance strengths assumed uncertainty of 25% if no value is listed in endt 1998 . 3 . for er=71.5 kev : only upper limit can be obtained for weak l=0 component in presence of dominant l=2 proton transfer angular distribution .4 . for er=84.3 kev : partial widths adopted from iliadis et al . 2001 . 
5 . the triplet er=193.4 , 193.5 , 195.5 kev ( ex=11778.5 , 11778.6 , 11780.2 kev ) cannot be resolved in ( p , g ) , ( p , a ) or ( 3he , d ) work ; here we assume that the total ( p , a ) strength is dominated by the second state since it has been seen in 24mg(a , g)28si work ; from ga / g<1.2e-3 ( champagne et al . 1988 ) we find ( i ) ga from the measured ( a , g ) strength , and ( ii ) gg>5 ev . we assume here as a crude estimate gg=5 ev that should not influence the total rates significantly ( shell model predicts gg=2 ev ; endt and booten 1993 ) .
6 . er=214.7 kev : gg=0.5 ev is a guess ( average for negative parity states in this ex range ) .
7 . er=282.1 kev : upper limit for ga is obtained with gg=1 ev ( rough guess ) and measured branching ratios of champagne et al . 1988 .
8 . er=314.9 kev : gg=10 ev is a rough guess ; however , the measured branching ratios of champagne et al . 1988 and the measured upper limit for the ( p , a ) strength ( timmermann et al . 1988 ) hint at a relatively large gamma - ray partial width .
9 . er=348.4 kev disregarded since shell model ( endt 1998 ) predicts jp=5 + ( unnatural parity ) .
10 . er=390.9 kev : see comments under 8 .
11 . er=399.9 kev disregarded since shell model ( endt 1998 ) predicts jp=1 + ( unnatural parity ) .
12 . er=430.6 kev disregarded since shell model ( endt 1998 ) predicts jp=3 + ( unnatural parity ) .
13 . er=437.2 kev : gg / g=1 assumed ; gg=1 ev is a rough guess .
14 . er=486.7 kev : partial widths calculated from measured ( p , g ) , ( p , a ) and ( a , g ) strengths .
.... ....
26si(p , g)27p
* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *
1 ! zproj
14 ! ztarget
0 ! zexitparticle ( = 0 when only 2 channels open )
1.0078 ! aproj
25.9923 ! atarget
0 ! aexitparticle ( = 0 when only 2 channels open )
0.5 ! jproj
0.0 ! jtarget
0.0 ! jexitparticle ( = 0 when only 2 channels open )
861.0 ! projectile separation energy ( kev )
0.0 ! exit particle separation energy ( = 0 when only 2 channels open )
1.25 ! radius parameter r0 ( fm )
2 ! gamma - ray channel number ( = 2 if ejectile is a g - ray ; =3 otherwise )
* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *
1.0 ! minimum energy for numerical integration ( kev )
7000 ! number of random samples ( > 5000 for better statistics )
0 ! = 0 for rate output at all temperatures ; = nt for rate output at selected temperatures
* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *
nonresonant contribution
s(kevb ) s'(b ) s''(b / kev ) fracerr cutoff energy ( kev )
5.45e1 -1.66e-2 4.00e-6 0.4 4100.0
0.0 0.0 0.0 0.0 0.0
* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *
resonant contribution
note : g1 = entrance channel , g2 = exit channel , g3 = spectator channel !
!ecm , exf in ( kev ) ; wg , gx in ( ev ) !
!note : if er<0 , theta^2=c2s*theta_sp^2 must be entered instead of entrance channel partial width ecm decm wg dwg j g1 dg1 l1 g2 dg2 l2 g3dg3 l3 exf int 259 28 0 0 1.5 1.8e-4 0.72e-4 2 3.4e-3 1.7e-3 1 0 0 0 0.0 1 772 33 0 0 2.5 4.3 1.7 2 3.3e-4 1.7e-4 2 0 0 0 0.0 1 1090 100 0 0 2.5 4.4 1.8 2 6.0e-4 3.0e-4 2 0 0 0 0.0 1 * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * upper limits of resonances note : enter partial width upper limit by chosing non - zero value for pt , where pt=<theta^2 > for particles and ... note : ...pt=<b > for g - rays [ enter : " upper_limit 0.0 " ] ; for each resonance : # upper limits < # open channels ! ecm decm jr g1 dg1 l1 pt g2 dg2 l2 pt g3 dg3 l3 pt exf int ! 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 0 * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * interference between resonances [ numerical integration only ] note : + for positive , - for negative interference ; + - if interference sign is unknown ecm decm jr g1 dg1 l1 pt g2 dg2 l2 pt g3 dg3 l3 pt exf !+ - 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * comments : 1 .s - factor for direct capture to ground state calculated with c2s=0.58 from 27 mg mirror state .the next resonances are expected at energies in excess of er=2 mev ( see moon et al ..... .... 27si(p , g)28p * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * 1 !zproj 14 !ztarget 0 !zexitparticle ( = 0 when only 2 channels open ) 1.0078 !aproj 26.9867 !atarget 0 ! aexitparticle ( = 0 when only 2 channels open ) 0.5 !jproj 2.5 !jtarget 0.0 ! jexitparticle ( = 0 when only 2 channels open ) 2063.0 ! projectile separation energy ( kev ) 0.0 !exit particle separation energy ( = 0 when only 2 channels open ) 1.25 !radius parameter r0 ( fm ) 2 ! gamma - ray channel number ( = 2 if ejectile is a g - ray ; =3 otherwise ) * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * 1.0! 
minimum energy for numerical integration ( kev ) 5000 !number of random samples ( > 5000 for better statistics ) 0 != 0 for rate output at all temperatures ; = nt for rate output at selected temperatures * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * nonresonant contribution s(kevb ) s'(b ) s''(b / kev ) fracerr cutoff energy ( kev ) 7.87e1 -0.0154 1.0e-5 0.4 1000.0 0.0 0.0 0.0 0.0 0.0 * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * resonant contribution note : g1 = entrance channel , g2 = exit channel , g3 = spectator channel ! !ecm , exf in ( kev ) ; wg , gx in ( ev ) ! !note : if er<0 , theta^2=c2s*theta_sp^2 must be entered instead of entrance channel partial width ecm decm wg dwg j g1 dg1 l1 g2 dg2 l2 g3 dg3 l3 exf int 38 3 0 0 2 3.1e-21 1.24e-21 0 6.0e-2 3.0e-2 1 0 0 0 0.0 1 77 6 0 0 1 5.1e-14 2.04e-14 2 1.1e-2 0.55e-2 1 0 0 0 0.0 1 150 6 0 0 4 4.0e-8 1.6e-8 2 2.1e-2 1.05e-2 1 0 0 0 0.0 1 340 6 0 0 2 3.5e-2 1.4e-2 0 6.9e-3 3.45e-3 1 0 0 0 0.0 1 417 6 0 0 5 1.7e-2 0.68e-2 2 1.4e-3 0.7e-3 1 0 0 0 0.0 1 562 6 0 0 4 1.4e0 0.56e0 2 2.1e-2 1.05e-2 1 0 0 0 0.0 1 791 6 0 0 0 7.2e0 2.9e0 2 1.0e-2 0.5e-2 1 0 0 0 0.0 1 830 6 0 0 3 1.4e0 0.56e0 0 7.3e-3 3.65e-3 1 0 0 0 0.0 1 907 6 0 0 1 1.9e1 0.76e1 2 3.1e-2 1.55e-2 1 0 0 0 0.0 1 1098 6 0 0 3 3.1e2 1.24e2 0 6.0e-2 3.0e-2 1 0 0 0 0.0 1 1134 6 0 0 2 4.6e2 1.84e2 0 7.3e-2 3.65e-2 1 0 0 0 0.0 1 1184 6 0 0 4 1.3e3 0.52e3 1 1.0e-2 0.5e-2 1 0 0 0 0.0 1 1446 6 0 0 1 1.2e2 0.48e2 2 8.3e-2 4.15e-2 1 0 0 0 0.0 1 * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * upper limits of resonances note : enter partial width upper limit by chosing non - zero value for pt , where pt=<theta^2 > for particles and ... note : ...pt=<b > for g - rays [ enter : " upper_limit 0.0 " ] ; for each resonance : # upper limits < # open channels ! ecm decm jr g1 dg1 l1 pt g2 dg2 l2 pt g3 dg3 l3 pt exf int ! 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 0 * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * interference between resonances [ numerical integration only ] note : + for positive , - for negative interference ; + - if interference sign is unknown ecm decm jr g1 dg1 l1 pt g2 dg2 l2 pt g3 dg3 l3 pt exf !+ - 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * .... .... 28si(p , g)29p * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * 1 ! zproj 14 ! ztarget 0 ! zexitparticle ( = 0 when only 2 channels open ) 1.0078 ! aproj 27.977 ! atarget 0! 
aexitparticle ( = 0 when only 2 channels open )
0.5 ! jproj
0.0 ! jtarget
0.0 ! jexitparticle ( = 0 when only 2 channels open )
2748.8 ! projectile separation energy ( kev )
0.0 ! exit particle separation energy ( = 0 when only 2 channels open )
1.25 ! radius parameter r0 ( fm )
2 ! gamma - ray channel number ( = 2 if ejectile is a g - ray ; =3 otherwise )
****************************************************************************************************
1.0 ! minimum energy for numerical integration ( kev )
20000 ! number of random samples ( > 5000 for better statistics )
0 ! = 0 for rate output at all temperatures ; = nt for rate output at selected temperatures
****************************************************************************************************
nonresonant contribution
s(kevb ) s'(b ) s''(b / kev ) fracerr cutoff energy ( kev )
4.42e1 7.14e-3 3.5e-5 0.4 1100.0
0.0 0.0 0.0 0.0 0.0
****************************************************************************************************
resonant contribution
note : g1 = entrance channel , g2 = exit channel , g3 = spectator channel !
!ecm , exf in ( kev ) ; wg , gx in ( ev ) !
!note : if er<0 , theta^2=c2s*theta_sp^2 must be entered instead of entrance channel partial width
ecm decm wg dwg j g1 dg1 l1 g2 dg2 l2 g3 dg3 l3 exf int
357.1 0.7 2.0e-3 0.2e-3 0 0 0 0 0 0 0 0 0 0 0.0 0
698.8 0.7 1.7e-4 0.4e-4 0 0 0 0 0 0 0 0 0 0 0.0 0
1331.6 0.7 3.6e-2 0.3e-2 0 0 0 0 0 0 0 0 0 0 0.0 0
1594.2 1.0 3.3 0.3 0 0 0 0 0 0 0 0 0 0 0.0 0
1893.2 0.8 4.5e-4 1.3e-4 0 0 0 0 0 0 0 0 0 0 0.0 0
2009.1 2.0 4.3e-1 0.3e-1 0 0 0 0 0 0 0 0 0 0 0.0 0
2205.3 0.8 1.8e-1 0.5e-1 0 0 0 0 0 0 0 0 0 0 0.0 0
2544.2 0.8 5.0e-2 1.0e-2 0 0 0 0 0 0 0 0 0 0 0.0 0
2778.2 20.0 2.8 0.3 0 0 0 0 0 0 0 0 0 0 0.0 0
2991.2 2.1 4.6e-1 0.7e-1 0 0 0 0 0 0 0 0 0 0 0.0 0
****************************************************************************************************
upper limits of resonances
note : enter partial width upper limit by choosing non - zero value for pt , where pt=<theta^2 > for particles and ...
note : ...pt=<b > for g - rays [ enter : " upper_limit 0.0 " ] ; for each resonance : # upper limits < # open channels !
ecm decm jr g1 dg1 l1 pt g2 dg2 l2 pt g3 dg3 l3 pt exf int
! 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 0
****************************************************************************************************
interference between resonances [ numerical integration only ]
note : + for positive , - for negative interference ; + - if interference sign is unknown
ecm decm jr g1 dg1 l1 pt g2 dg2 l2 pt g3 dg3 l3 pt exf
!+ -
0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 0.0 0 0 0.0
0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 0.0 0 0 0.0
****************************************************************************************************
....
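for orientation , the wg column of the resonant contribution tables above lists the resonance strength , which follows from the resonance spin j , the projectile and target spins ( the jproj and jtarget entries of each block header ) and the partial widths g1 , g2 , g3 . the short python sketch below is a reading aid only , not part of the published input format ; the function name is ours , and the example row is taken from the 27si(p , g)28p table above .

def resonance_strength(j_r, j_p, j_t, g1, g2, g3=0.0):
    # spin statistical factor omega = (2*jr + 1) / ((2*jp + 1) * (2*jt + 1))
    omega = (2.0 * j_r + 1.0) / ((2.0 * j_p + 1.0) * (2.0 * j_t + 1.0))
    # resonance strength omega*gamma = omega * g1 * g2 / (g1 + g2 + g3) , widths in ev
    return omega * g1 * g2 / (g1 + g2 + g3)

# example : 27si(p,g)28p ( jproj=0.5 , jtarget=2.5 ) , row at ecm=38 kev :
# j=2 , g1=3.1e-21 ev ( proton ) , g2=6.0e-2 ev ( gamma ) , g3=0
print(resonance_strength(2.0, 0.5, 2.5, 3.1e-21, 6.0e-2))  # about 1.3e-21 ev

since g1 is many orders of magnitude smaller than g2 for such a low - lying resonance , the strength reduces to omega*g1 , so an upper limit on the entrance channel partial width bounds the strength directly .
....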
29si(p , g)30p * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * 1 ! zproj 14 ! ztarget 0 ! zexitparticle ( = 0 when only 2 channels open ) 1.0078 ! aproj 28.97649 ! atarget 0! aexitparticle ( = 0 when only 2 channels open ) 0.5 !jproj 0.5 !jtarget 0.0 ! jexitparticle ( = 0 when only 2 channels open ) 5594.5 ! projectile separation energy ( kev ) 0.0 !exit particle separation energy ( = 0 when only 2 channels open ) 1.25 !radius parameter r0 ( fm ) 2 ! gamma - ray channel number ( = 2 if ejectile is a g - ray ; =3 otherwise ) * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * 1.0! minimum energy for numerical integration ( kev ) 8000 !number of random samples ( > 5000 for better statistics ) 0 != 0 for rate output at all temperatures ; = nt for rate output at selected temperatures * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * nonresonant contribution s(kevb ) s'(b ) s''(b / kev ) fracerr cutoff energy ( kev ) 1.05e2 -1.1e-2 0.0 0.4 1000.0 0.0 0.0 0.0 0.0 0.0 * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * resonant contribution note : g1 = entrance channel , g2 = exit channel , g3 = spectator channel ! !ecm , exf in ( kev ) ; wg , gx in ( ev ) ! 
!note : if er<0 , theta^2=c2s*theta_sp^2 must be entered instead of entrance channel partial width ecm decm wg dwg j g1dg1 l1 g2 dg2 l2 g3 dg3 l3 exf int 295.5 12.0 0.0 0.0 3 2.3e-5 0.9e-5 2 0.1 0.001 1 0 0 0 0.0 0 313.1 0.7 1.27e-2 4.2e-3 0 0 0 0 0 0 0 0 0 0 0.0 0 402.4 0.7 0.22 0.021 0 0 0 0 0 0 0 0 0 0 0.0 0 674.8 0.7 0.275 0.063 0 0 0 0 0 0 0 0 0 0 0.0 0 704.5 0.4 0.1 0.02 0 0 0 0 0 0 0 0 0 0 0.0 0 886.5 0.4 9.3e-2 0.019 0 0 0 0 0 0 0 0 0 0 0.0 0 924.9 0.4 4.23e-2 8.46e-3 0 0 0 0 0 0 0 0 0 0 0.0 0 1003.2 0.6 6.3e-3 2.5e-3 0 0 0 0 0 0 0 0 0 0 0.0 0 1073.3 0.6 3.8e-2 8.4e-3 0 0 0 0 0 0 0 0 0 0 0.0 0 1258.4 0.6 0.339 0.063 0 0 0 0 0 0 0 0 0 0 0.0 0 1278.9 0.6 0.148 0.063 0 0 0 0 0 0 0 0 0 0 0.0 0 1282.4 1.0 0.508 0.19 0 0 0 0 0 0 0 0 0 0 0.0 0 1326.8 1.0 0.72 0.19 0 0 0 0 0 0 0 0 0 0 0.0 0 1383.8 0.6 6.3e-3 2.5e-3 0 0 0 0 0 0 0 0 0 0 0.0 0 1420.4 0.6 0.19 0.042 0 0 0 0 0 0 0 0 0 0 0.0 0 1450.4 0.6 1.9e-2 6.3e-3 0 0 0 0 0 0 0 0 0 0 0.0 0 1455.0 0.6 0.593 0.106 0 0 0 0 0 0 0 0 0 0 0.0 0 1524.6 0.6 1.27e-2 4.2e-3 0 0 0 0 0 0 0 0 0 0 0.0 0 1583.9 3.0 0.847 0.25 0 0 0 0 0 0 0 0 0 0 0.0 0 1608.4 0.6 0.19 0.042 0 0 0 0 0 0 0 0 0 0 0.0 0 1613.0 0.6 9.95e-2 0.019 0 0 0 0 0 0 0 0 0 0 0.0 0 1628.4 1.0 1.165 0.30 0 0 0 0 0 0 0 0 0 0 0.0 0 1687.5 0.6 0.40 0.15 0 0 0 0 0 0 0 0 0 0 0.0 0 1688.8 0.6 0.995 0.30 0 0 0 0 0 0 0 0 0 0 0.0 0 1710.4 0.6 0.169 0.063 0 0 0 0 0 0 0 0 0 0 0.0 0 1711.7 0.6 0.148 0.063 0 0 0 0 0 0 0 0 0 0 0.0 0 1788.9 0.6 8.0e-2 0.021 0 0 0 0 0 0 0 0 0 0 0.0 0 1898.0 1.0 0.27 0.062 0 0 0 0 0 0 0 0 0 0 0.0 0 1966.0 0.6 0.127 0.032 0 0 0 0 0 0 0 0 0 0 0.0 0 1967.9 0.6 0.38 0.095 0 0 0 0 0 0 0 0 0 0 0.0 0 1985.4 0.6 9.7e-2 0.024 0 0 0 0 0 0 0 0 0 0 0.0 0 2010.6 0.6 0.678 0.17 0 0 0 0 0 0 0 0 0 0 0.0 0 2041.6 0.6 0.36 0.09 0 0 0 0 0 0 0 0 0 0 0.0 0 2049.9 0.6 0.67 0.17 0 0 0 0 0 0 0 0 0 0 0.0 0 2093.8 0.6 5.5e-2 0.014 0 0 0 0 0 0 0 0 0 0 0.0 0 2154.8 0.6 0.23 0.058 0 0 0 0 0 0 0 0 0 0 0.0 0 2158.3 0.6 4.2e-2 0.011 0 0 0 0 0 0 0 0 0 0 0.0 0 2164.6 0.6 0.97 0.24 0 0 0 0 0 0 0 0 0 0 0.0 0 2191.9 0.6 5.1e-2 0.013 0 0 0 0 0 0 0 0 0 0 0.0 0 2231.9 0.6 4.8e-2 0.012 0 0 0 0 0 0 0 0 0 0 0.0 0 2289.4 0.6 0.1 0.025 0 0 0 0 0 0 0 0 0 0 0.0 0 2326.5 0.6 0.29 0.073 0 0 0 0 0 0 0 0 0 0 0.0 0 2327.3 0.6 0.38 0.095 0 0 0 0 0 0 0 0 0 0 0.0 0 2402.2 0.6 0.74 0.185 0 0 0 0 0 0 0 0 0 0 0.0 0 2406.3 1.0 0.11 0.028 0 0 0 0 0 0 0 0 0 0 0.0 0 2412.9 0.6 0.44 0.11 0 0 0 0 0 0 0 0 0 0 0.0 0 2419.8 0.6 7.6e-2 0.019 0 0 0 0 0 0 0 0 0 0 0.0 0 2500.9 2.9 0.63 0.16 0 0 0 0 0 0 0 0 0 0 0.0 0 2511.6 2.9 0.27 0.068 0 0 0 0 0 0 0 0 0 0 0.0 0 2557.1 2.9 4.0e-1 0.1 0 0 0 0 0 0 0 0 0 0 0.0 0 2570.6 2.9 1.5e-1 0.038 0 0 0 0 0 0 0 0 0 0 0.0 0 2592.8 2.9 6.6e-2 0.017 0 0 0 0 0 0 0 0 0 0 0.0 0 2611.2 2.9 4.0e-1 0.1 0 0 0 0 0 0 0 0 0 0 0.0 0 2676.9 2.9 3.3e-1 0.083 0 0 0 0 0 0 0 0 0 0 0.0 0 2681.7 2.9 7.0e-2 0.018 0 0 0 0 0 0 0 0 0 0 0.0 0 2683.7 2.9 2.8 0.7 0 0 0 0 0 0 0 0 0 0 0.0 0 2724.2 2.9 6.5e-1 0.16 0 0 0 0 0 0 0 0 0 0 0.0 0 2755.2 2.9 3.0e-2 0.008 0 0 0 0 0 0 0 0 0 0 0.0 0 2756.1 2.9 6.7e-2 0.017 0 0 0 0 0 0 0 0 0 0 0.0 0 2757.1 2.9 2.5e-1 0.063 0 0 0 0 0 0 0 0 0 0 0.0 0 2791.9 2.9 2.5e-1 0.063 0 0 0 0 0 0 0 0 0 0 0.0 0 2803.5 2.9 8.4e-2 0.021 0 0 0 0 0 0 0 0 0 0 0.0 0 2814.1 2.9 1.3e-1 0.033 0 0 0 0 0 0 0 0 0 0 0.0 0 2831.5 2.9 2.8e-2 0.007 0 0 0 0 0 0 0 0 0 0 0.0 0 2837.3 2.9 8.2e-1 0.21 0 0 0 0 0 0 0 0 0 0 0.0 0 2856.6 2.9 4.7e-1 0.12 0 0 0 0 0 0 0 0 0 0 0.0 0 2889.5 2.9 1.5e-1 0.038 0 0 0 0 0 0 0 0 0 0 0.0 0 2931.1 2.9 5.1e-1 0.13 0 0 0 0 0 0 0 0 0 0 0.0 0 2935.9 2.9 7.8e-2 0.02 0 0 0 0 0 0 0 0 0 0 0.0 0 2987.1 2.9 
1.0e-1 0.03 0 0 0 0 0 0 0 0 0 0 0.0 0 3024.8 2.9 4.2e-1 0.11 0 0 0 0 0 0 0 0 0 0 0.0 0 3037.4 2.9 1.6e-1 0.04 0 0 0 0 0 0 0 0 0 0 0.0 0 3048.0 2.9 5.1e-1 0.13 0 0 0 0 0 0 0 0 0 0 0.0 0 3052.8 2.9 3.2e-1 0.08 0 0 0 0 0 0 0 0 0 0 0.0 0 3067.3 2.9 9.1e-1 0.23 0 0 0 0 0 0 0 0 0 0 0.0 0 3075.0 2.9 5.7e-1 0.14 0 0 0 0 0 0 0 0 0 0 0.0 0 * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * upper limits of resonances note : enter partial width upper limit by chosing non - zero value for pt , where pt=<theta^2 > for particles and ... note : ...pt=<b > for g - rays [ enter : " upper_limit 0.0 " ] ; for each resonance : # upper limits < # open channels ! ecm decmjr g1 dg1 l1 pt g2 dg2 l2 pt g3 dg3 l3 pt exf int 107.2 4.0 1 5.1e-11 0.0 0 0.0045 4.1e-2 2.1e-2 1 0 0.0 0.0 0 0 0.0 1 119.5 3.0 5 3.6e-15 0.0 4 0.0045 0.1 0.0001 1 0 0.0 0.0 0 0 0.0 0 213.5 3.0 3 1.8e-7 0.0 2 0.0045 0.1 0.0001 1 0 0.0 0.0 0 0 0.0 0 * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * interference between resonances [ numerical integration only ] note : + for positive , - for negative interference ; + - if interference sign is unknown ecm decm jr g1 dg1 l1 pt g2 dg2 l2 pt g3 dg3 l3 pt exf !+ - 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * comments : 1. value of jp=3 + assumed for er=295.5 kev ; value of gg=0.1 + -0.001 ev is a guess ( inconsequential ) .2 . for er=119.5 and 213.5 kev , values of jp=5 + and 3 + ,respectively , are assumed ; values of gg=0.1 + -0.0001 ev are guesses ( inconsequential ) .strengths of resonances above er=300 kev are adopted from endt 1998 , but renormalized to standard value for er=402 kev ..... .... 30si(p , g)31p * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * 1 !zproj 14 !ztarget 0 !zexitparticle ( = 0 when only 2 channels open ) 1.0078 !aproj 29.97377 !atarget 0 ! aexitparticle ( = 0 when only 2 channels open ) 0.5 ! jproj 0.0 !jtarget 0.0 ! jexitparticle ( = 0 when only 2 channels open ) 7296.93 ! projectile separation energy ( kev ) 0.0 !exit particle separation energy ( = 0 when only 2 channels open ) 1.25 !radius parameter r0 ( fm ) 2 ! gamma - ray channel number ( = 2 if ejectile is a g - ray ; =3 otherwise ) * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * 1.0! 
minimum energy for numerical integration ( kev ) 20000 !number of random samples ( > 5000 for better statistics ) 0 != 0 for rate output at all temperatures ; = nt for rate output at selected temperatures * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * nonresonant contribution s(kevb ) s'(b ) s''(b / kev ) fracerr cutoff energy ( kev ) 2.2e2 -4.0e-2 0.0 0.4 1000.0 0.0 0.0 0.0 0.0 0.0 * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * resonant contribution note : g1 = entrance channel , g2 = exit channel , g3 = spectator channel ! !ecm , exf in ( kev ) ; wg , gx in ( ev ) ! !note : if er<0 , theta^2=c2s*theta_sp^2 must be entered instead of entrance channel partial width ecm decm wg dwg j g1dg1 l1 g2 dg2 l2 g3 dg3 l3 exf int 17.1 4.0 0 0 3.5 1.3e-42 0.5e-42 3 0.1 0.001 1 0 0 0 0.0 0 439.1 4.0 0 0 3.5 9.2e-5 3.7e-5 3 0.1 0.001 1 0 0 0 0.0 0 482.7 1.0 0.106 0.027 0 0 0 0 0 0 0 0 0 0 0.0 0 600.2 1.2 1.949 0.48 0 0 0 0 0 0 0 0 0 0 0.0 0 648.9 1.0 7.53e-2 0.019 0 0 0 0 0 0 0 0 0 0 0.0 0 735.6 0.9 8.86e-2 0.022 0 0 0 0 0 0 0 0 0 0 0.0 0 752.1 1.0 0.443 0.11 0 0 0 0 0 0 0 0 0 0 0.0 0 808.1 1.3 0.195 0.049 0 0 0 0 0 0 0 0 0 0 0.0 0 911.4 0.6 0.886 0.22 0 0 0 0 0 0 0 0 0 0 0.0 0 928.1 0.6 0.142 0.036 0 0 0 0 0 0 0 0 0 0 0.0 0 946.4 0.6 0.71 0.18 0 0 0 0 0 0 0 0 0 0 0.0 0 950.5 0.6 0.797 0.2 0 0 0 0 0 0 0 0 0 0 0.0 0 1059.0 0.6 0.106 0.027 0 0 0 0 0 0 0 0 0 0 0.0 0 1137.2 0.7 0.182 0.046 0 0 0 0 0 0 0 0 0 0 0.0 0 1164.3 0.7 0.71 0.18 0 0 0 0 0 0 0 0 0 0 0.0 0 1246.9 0.8 0.142 0.036 0 0 0 0 0 0 0 0 0 0 0.0 0 1255.5 0.8 0.576 0.144 0 0 0 0 0 0 0 0 0 0 0.0 0 1258.7 0.8 0.443 0.11 0 0 0 0 0 0 0 0 0 0 0.0 0 1278.9 0.8 1.019 0.25 0 0 0 0 0 0 0 0 0 0 0.0 0 1287.4 0.8 0.137 0.03 0 0 0 0 0 0 0 0 0 0 0.0 0 1344.5 0.8 1.55 0.38 0 0 0 0 0 0 0 0 0 0 0.0 0 1352.7 0.8 1.949 0.48 0 0 0 0 0 0 0 0 0 0 0.0 0 1432.3 0.8 2.126 0.53 0 0 0 0 0 0 0 0 0 0 0.0 0 1433.8 0.8 1.417 0.35 0 0 0 0 0 0 0 0 0 0 0.0 0 1441.2 0.8 0.886 0.22 0 0 0 0 0 0 0 0 0 0 0.0 0 1460.6 0.8 1.55 0.38 0 0 0 0 0 0 0 0 0 0 0.0 0 1466.6 0.9 0.35 0.087 0 0 0 0 0 0 0 0 0 0 0.0 0 1543.2 0.9 0.177 0.044 0 0 0 0 0 0 0 0 0 0 0.0 0 1606.2 1.0 0.35 0.087 0 0 0 0 0 0 0 0 0 0 0.0 0 1613.0 1.0 0.35 0.087 0 0 0 0 0 0 0 0 0 0 0.0 0 1639.1 1.0 1.06 0.26 0 0 0 0 0 0 0 0 0 0 0.0 0 1697.8 1.0 0.177 0.044 0 0 0 0 0 0 0 0 0 0 0.0 0 1712.3 1.1 1.24 0.31 0 0 0 0 0 0 0 0 0 0 0.0 0 1749.4 1.1 1.95 0.48 0 0 0 0 0 0 0 0 0 0 0.0 0 1755.9 1.1 0.443 0.11 0 0 0 0 0 0 0 0 0 0 0.0 0 1770.4 1.1 1.5 0.37 0 0 0 0 0 0 0 0 0 0 0.0 0 1816.8 0.6 0.389 0.097 0 0 0 0 0 0 0 0 0 0 0.0 0 1818.9 0.6 2.126 0.53 0 0 0 0 0 0 0 0 0 0 0.0 0 1831.9 0.6 0.261 0.065 0 0 0 0 0 0 0 0 0 0 0.0 0 1834.3 0.6 0.62 0.15 0 0 0 0 0 0 0 0 0 0 0.0 0 1857.6 0.6 0.319 0.08 0 0 0 0 0 0 0 0 0 0 0.0 0 1859.7 0.6 0.75 0.18 0 0 0 0 0 0 0 0 0 0 0.0 0 1879.4 0.6 0.57 0.14 0 0 0 0 0 0 0 0 0 0 0.0 0 1909.5 0.6 0.297 0.074 0 0 0 0 0 0 0 0 0 0 0.0 0 1929.7 0.6 0.407 0.10 0 0 0 0 0 0 0 0 0 0 0.0 0 1944.1 0.7 0.89 0.22 0 0 0 0 0 0 0 0 0 0 0.0 0 1956.3 0.7 0.443 0.11 0 0 0 0 0 0 0 0 0 0 0.0 0 1959.3 0.7 0.443 0.11 0 0 0 0 0 0 0 0 0 0 0.0 0 1994.2 0.7 0.66 0.16 0 0 0 0 0 0 0 0 0 0 0.0 0 2023.0 0.8 3.2e-1 0.08 0 0 0 0 0 0 0 0 0 0 0.0 0 2061.6 0.8 2.4e-1 0.06 0 0 0 0 0 0 0 0 0 0 0.0 0 2064.3 
0.8 6.5e-1 0.16 0 0 0 0 0 0 0 0 0 0 0.0 0 2103.2 0.8 2.2e-1 0.06 0 0 0 0 0 0 0 0 0 0 0.0 0 2115.9 0.8 5.8 1.5 0 0 0 0 0 0 0 0 0 0 0.0 0 2144.2 0.9 6.6e-1 0.16 0 0 0 0 0 0 0 0 0 0 0.0 0 2152.3 0.9 1.2 0.3 0 0 0 0 0 0 0 0 0 0 0.0 0 2180.4 0.9 1.6e-1 0.04 0 0 0 0 0 0 0 0 0 0 0.0 0 2228.1 0.9 7.5e-1 0.19 0 0 0 0 0 0 0 0 0 0 0.0 0 2239.9 1.0 6.6e-1 0.17 0 0 0 0 0 0 0 0 0 0 0.0 0 2273.9 1.0 1.4 0.35 0 0 0 0 0 0 0 0 0 0 0.0 0 2281.2 1.0 6.2e-1 0.16 0 0 0 0 0 0 0 0 0 0 0.0 0 2288.4 1.0 1.8 0.45 0 0 0 0 0 0 0 0 0 0 0.0 0 2297.3 1.0 6.6e-1 0.16 0 0 0 0 0 0 0 0 0 0 0.0 0 2301.9 1.0 1.1e-1 0.03 0 0 0 0 0 0 0 0 0 0 0.0 0 2315.3 1.0 4.8e-1 0.12 0 0 0 0 0 0 0 0 0 0 0.0 0 2423.9 1.0 2.8 0.7 0 0 0 0 0 0 0 0 0 0 0.0 0 2459.3 1.9 2.0 0.5 0 0 0 0 0 0 0 0 0 0 0.0 0 2463.2 1.9 5.3 1.3 0 0 0 0 0 0 0 0 0 0 0.0 0 2468.0 1.9 1.1 0.28 0 0 0 0 0 0 0 0 0 0 0.0 0 2519.3 1.9 3.5e-1 0.09 0 0 0 0 0 0 0 0 0 0 0.0 0 2522.2 1.9 9.7e-1 0.24 0 0 0 0 0 0 0 0 0 0 0.0 0 2543.5 1.9 3.1e-1 0.08 0 0 0 0 0 0 0 0 0 0 0.0 0 2546.4 1.9 3.8 0.95 0 0 0 0 0 0 0 0 0 0 0.0 0 2555.1 1.9 6.2e-1 0.16 0 0 0 0 0 0 0 0 0 0 0.0 0 2568.6 1.9 1.2 0.3 0 0 0 0 0 0 0 0 0 0 0.0 0 2570.6 1.9 1.2 0.3 0 0 0 0 0 0 0 0 0 0 0.0 0 2610.2 1.9 4.8 1.2 0 0 0 0 0 0 0 0 0 0 0.0 0 2628.6 1.9 7.5e-1 0.19 0 0 0 0 0 0 0 0 0 0 0.0 0 2644.1 1.9 6.2e-1 0.16 0 0 0 0 0 0 0 0 0 0 0.0 0 2648.9 1.9 2.6e-1 0.07 0 0 0 0 0 0 0 0 0 0 0.0 0 2666.3 1.9 6.2e-1 0.16 0 0 0 0 0 0 0 0 0 0 0.0 0 2702.1 1.9 1.3 0.33 0 0 0 0 0 0 0 0 0 0 0.0 0 2722.5 1.9 3.8 0.95 0 0 0 0 0 0 0 0 0 0 0.0 0 2749.5 1.9 4.4 1.1 0 0 0 0 0 0 0 0 0 0 0.0 0 2778.6 1.9 4.4e-1 0.11 0 0 0 0 0 0 0 0 0 0 0.0 0 2792.1 1.9 2.1 0.53 0 0 0 0 0 0 0 0 0 0 0.0 0 2795.0 1.9 1.5 0.38 0 0 0 0 0 0 0 0 0 0 0.0 0 2801.8 1.9 2.3 0.58 0 0 0 0 0 0 0 0 0 0 0.0 0 2819.2 1.9 5.7e-1 0.14 0 0 0 0 0 0 0 0 0 0 0.0 0 2847.3 1.9 1.3 0.33 0 0 0 0 0 0 0 0 0 0 0.0 0 2856.0 1.9 4.4e-1 0.11 0 0 0 0 0 0 0 0 0 0 0.0 0 2895.6 1.9 3.3 0.83 0 0 0 0 0 0 0 0 0 0 0.0 0 2928.5 1.9 1.6 0.4 0 0 0 0 0 0 0 0 0 0 0.0 0 * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * upper limits of resonances note : enter partial width upper limit by chosing non - zero value for pt , where pt=<theta^2 > for particles and ... note : ...pt=<b > for g - rays [ enter : " upper_limit 0.0 " ] ; for each resonance : # upper limits < # open channels ! ecm decmjr g1 dg1 l1 pt g2 dg2 l2 pt g3 dg3 l3 pt exf int 52.1 5.0 1.5 5.0e-17 0.0 1 0.0045 0.1 0.001 1 0 0 0 0 0 0.0 0 144.3 0.7 5.5 1.4e-16 0.0 6 0.0045 0.1 0.001 1 0 0 0 0 0 0.0 0 169.1 2.0 3.5 2.7e-11 0.0 3 0.0045 0.1 0.001 1 0 0 0 0 0 0.0 0 418.1 5.0 0.5 0.23 0.0 0 0.0045 1.0 0.001 1 0 0 0 0 0 0.0 0 * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * interference between resonances [ numerical integration only ] note : + for positive , - for negative interference ; + - if interference sign is unknown ecm decm jr g1 dg1 l1 pt g2 dg2 l2 pt g3 dg3 l3 pt exf !+ - 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * comments : 1. 
value of jp=7/2- assumed for er=17.1 and 439.1 kev ; value of gg=0.1 +- 0.001 ev is a guess ( inconsequential ) .
2 . values of jp=3/2- and 7/2- assumed for unobserved er=52.1 and 169.1 kev resonances , respectively ; value of gg=0.1 +- 0.001 ev is a guess ( inconsequential ) .
3 . value of gg=0.1 +- 0.001 ev for unobserved er=144.3 kev resonance is a guess ( inconsequential ) .
4 . value of jp=1/2 + ( s - wave ) assumed for unobserved er=418.1 kev resonance ; value of gg=1.0 +- 0.001 ev is a guess , implying gp < gg .
5 . strengths of resonances above er=480 kev adopted from endt 1990 , but have been renormalized to standard strength of er=600 kev .
....
....
27p(p , g)28s
****************************************************************************************************
1 ! zproj
15 ! ztarget
0 ! zexitparticle ( = 0 when only 2 channels open )
1.0078 ! aproj
26.999 ! atarget
0 ! aexitparticle ( = 0 when only 2 channels open )
0.5 ! jproj
0.5 ! jtarget
0.0 ! jexitparticle ( = 0 when only 2 channels open )
2460.0 ! projectile separation energy ( kev )
0.0 ! exit particle separation energy ( = 0 when only 2 channels open )
1.25 ! radius parameter r0 ( fm )
2 ! gamma - ray channel number ( = 2 if ejectile is a g - ray ; =3 otherwise )
****************************************************************************************************
1.0 ! minimum energy for numerical integration ( kev )
7000 ! number of random samples ( > 5000 for better statistics )
0 ! = 0 for rate output at all temperatures ; = nt for rate output at selected temperatures
****************************************************************************************************
nonresonant contribution
s(kevb ) s'(b ) s''(b / kev ) fracerr cutoff energy ( kev )
2.00e1 0.0 0.0 0.5 10000.0
0.0 0.0 0.0 0.0 0.0
****************************************************************************************************
resonant contribution
note : g1 = entrance channel , g2 = exit channel , g3 = spectator channel !
!ecm , exf in ( kev ) ; wg , gx in ( ev ) !
!note : if er<0 , theta^2=c2s*theta_sp^2 must be entered instead of entrance channel partial width
ecm decm wg dwg j g1 dg1 l1 g2 dg2 l2 g3 dg3 l3 exf int
1100 100 0 0 0 8.8e3 4.4e3 0 3.4e-4 1.7e-4 2 0 0 0 1518.0 1
****************************************************************************************************
upper limits of resonances
note : enter partial width upper limit by choosing non - zero value for pt , where pt=<theta^2 > for particles and ...
note : ...pt=<b > for g - rays [ enter : " upper_limit 0.0 " ] ; for each resonance : # upper limits < # open channels !
ecm decm jr g1 dg1 l1 pt g2 dg2 l2 pt g3 dg3 l3 pt exf int
!
0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 0 * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * interference between resonances [ numerical integration only ] note : + for positive , - for negative interference ; + - if interference sign is unknown ecm decm jr g1 dg1 l1 pt g2 dg2 l2 pt g3 dg3 l3 pt exf !+ - 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * comments : 1 .uncertainty of 100 kev for er=1100 kev is a rough estimate only . .... .... 29p(p , g)30s * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * 1 ! zproj 15 ! ztarget 0 ! zexitparticle ( = 0 when only 2 channels open ) 1.0078 !aproj 28.982 !atarget 0 ! aexitparticle ( = 0 when only 2 channels open ) 0.5 ! jproj 0.5 !jtarget 0.0 ! jexitparticle ( = 0 when only 2 channels open ) 4399.0 ! projectile separation energy ( kev ) 0.0 !exit particle separation energy ( = 0 when only 2 channels open ) 1.25 !radius parameter r0 ( fm ) 2 ! gamma - ray channel number ( = 2 if ejectile is a g - ray ; =3 otherwise ) * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * 1.0! minimum energy for numerical integration ( kev ) 5000 !number of random samples ( > 5000 for better statistics ) 0 != 0 for rate output at all temperatures ; = nt for rate output at selected temperatures * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * nonresonant contribution s(kevb ) s'(b ) s''(b / kev ) fracerr cutoff energy ( kev ) 7.3e1 -1.25e-2 0.0 0.4 1000.0 0.0 0.0 0.0 0.0 0.0 * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * resonant contribution note : g1 = entrance channel , g2 = exit channel , g3 = spectator channel ! !ecm , exf in ( kev ) ; wg , gx in ( ev ) ! 
!note : if er<0 , theta^2=c2s*theta_sp^2 must be entered instead of entrance channel partial width
ecm decm wg dwg j g1 dg1 l1 g2 dg2 l2 g3 dg3 l3 exf int
305.0 6.0 0 0 3 2.7e-5 1.1e-5 2 4.9e-3 2.5e-3 1 0 0 0 0.0 1
489.0 40.0 0 0 2 2.2e-2 0.9e-2 2 4.2e-3 2.1e-3 1 0 0 0 0.0 1
737.0 4.0 0 0 3 2.2e-1 0.9e-1 2 8.2e-3 4.1e-3 1 0 0 0 0.0 1
818.4 3.1 0 0 3 3.6e-1 1.4e-1 3 9.4e-3 4.7e-3 1 0 0 0 0.0 1
990.0 4.0 0 0 2 6.3e0 2.5e0 2 0.01 5.0e-3 1 0 0 0 0.0 0
****************************************************************************************************
upper limits of resonances
note : enter partial width upper limit by choosing non - zero value for pt , where pt=<theta^2 > for particles and ...
note : ...pt=<b > for g - rays [ enter : " upper_limit 0.0 " ] ; for each resonance : # upper limits < # open channels !
ecm decm jr g1 dg1 l1 pt g2 dg2 l2 pt g3 dg3 l3 pt exf int
769.0 7.0 4 3.5e-4 0.0 4 0.0045 5.5e-3 2.8e-3 1 0 0 0 0 0 0.0 1
769.0 7.0 0 1.1e1 0.0 0 0.0045 7.7e-3 3.9e-3 1 0 0 0 0 0 0.0 1
1443.0 5.0 4 1.0e-1 0.0 4 0.0045 2.7e-2 1.4e-2 1 0 0 0 0 0 0.0 1
****************************************************************************************************
interference between resonances [ numerical integration only ]
note : + for positive , - for negative interference ; + - if interference sign is unknown
ecm decm jr g1 dg1 l1 pt g2 dg2 l2 pt g3 dg3 l3 pt exf
!+ -
0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 0.0 0 0 0.0
0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 0.0 0 0 0.0
****************************************************************************************************
comments :
1 . resonance energies calculated from excitation energies ( endt 1990 and tab . i of bardayan et al . 2007 ) and q - value , except for er=489 kev , which is based on the imme ( see iliadis et al . ) .
2 . the spin - parity assignments are not unambiguous ; they are based on experimental jp restrictions ( endt 1990 , bardayan et al . 2007 ) and application of the imme ( iliadis et al . ) .
3 . proton partial widths are computed using c2s of mirror states from ( d , p ) transfer experiment ( mackh et al . ) .
4 . gamma - ray partial widths are computed from measured lifetimes of 30si mirror states , except for er=990 kev for which gg=0.012 ev is a rough estimate ( application of ruls to the known g - ray branchings of the mirror state yields gg<0.1 ev for this level ) .
....
....
31p(p , g)32s
****************************************************************************************************
1 ! zproj
15 ! ztarget
2 ! zexitparticle ( = 0 when only 2 channels open )
1.0078 ! aproj
30.9737 ! atarget
4.0026 ! aexitparticle ( = 0 when only 2 channels open )
0.5 ! jproj
0.5 ! jtarget
0.0 ! jexitparticle ( = 0 when only 2 channels open )
8863.78 ! projectile separation energy ( kev )
6947.82 ! exit particle separation energy ( = 0 when only 2 channels open )
1.25 ! radius parameter r0 ( fm )
2 !
gamma - ray channel number ( = 2 if ejectile is a g - ray ; =3 otherwise ) * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * 1.0 ! minimum energy for numerical integration ( kev ) 5000 ! number of random samples ( > 5000 for better statistics ) 0 != 0 for rate output at all temperatures ; = nt for rate output at selected temperatures * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * nonresonant contribution s(kevb ) s'(b ) s''(b / kev ) fracerr cutoff energy ( kev ) 1.86e2 -2.85e-2 0.0 0.4 1100.0 0.0 0.0 0.0 0.0 0.0 * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * resonant contribution note : g1 = entrance channel , g2 = exit channel , g3 = spectator channel ! !ecm , exf in ( kev ) ; wg , gx in ( ev ) ! !note : if er<0 , theta^2=c2s*theta_sp^2 must be entered instead of entrance channel partial width ecm decm wg dwg jr g1 dg1 l1 g2 dg2 l2 g3 dg3 l3 exf int 159.2 2.0 0 0 3 1.4e-11 0.6e-11 3 0.024 0.009 1 0.012 0.003 3 0.0 1 195.2 2.0 4.8e-7 1.6e-7 0 0 0 0 0 0 0 0 0 0 0.0 0 344.3 0.7 0 0 1 5.6e-3 0.9e-3 0 0.36 0.11 1 0 0 0 0.0 1 391.2 2.0 4.5e-4 0.7e-4 0 0 0 0 0 0 0 0 0 0 0.0 0 425.6 0.5 2.5e-2 0.4e-2 0 0 0 0 0 0 0 0 0 0 0.0 0 524.3 0.6 0.12 0.02 0 0 0 0 0 0 0 0 0 0 0.0 0 599.4 1.0 1.1e-3 0.2e-3 0 0 0 0 0 0 0 0 0 0 0.0 0 622.2 0.7 5.8e-2 9.6e-3 0 0 0 0 0 0 0 0 0 0 0.0 0 785.7 0.5 0.24 0.04 0 0 0 0 0 0 0 0 0 0 0.0 0 795.1 1.0 5.0e-2 1.3e-2 0 0 0 0 0 0 0 0 0 0 0.0 0 846.7 0.5 2.3e-2 0.6e-2 0 0 0 0 0 0 0 0 0 0 0.0 0 859.8 0.5 1.4e-2 0.4e-2 0 0 0 0 0 0 0 0 0 0 0.0 0 866.3 0.5 7.4e-2 1.9e-2 0 0 0 0 0 0 0 0 0 0 0.0 0 952.8 1.0 2.0e-2 0.5e-2 0 0 0 0 0 0 0 0 0 0 0.0 0 984.0 3.0 7.1e-3 1.8e-3 0 0 0 0 0 0 0 0 0 0 0.0 0 1023.2 0.6 0.13 0.03 0 0 0 0 0 0 0 0 0 0 0.0 0 1055.3 0.6 4.3e-2 1.1e-2 0 0 0 0 0 0 0 0 0 0 0.0 0 1085.4 0.6 0.29 0.07 0 0 0 0 0 0 0 0 0 0 0.0 0 1113.8 0.7 0.42 0.08 0 0 0 0 0 0 0 0 0 0 0.0 0 1118.7 0.6 0.15 0.04 0 0 0 0 0 0 0 0 0 0 0.0 0 1211.9 0.6 1.1 0.13 0 0 0 0 0 0 0 0 0 0 0.0 0 1356.0 0.6 0.15 0.04 0 0 0 0 0 0 0 0 0 0 0.0 0 1358.7 0.8 0.51 0.13 0 0 0 0 0 0 0 0 0 0 0.0 0 1366.9 0.6 0.16 0.04 0 0 0 0 0 0 0 0 0 0 0.0 0 1393.0 0.7 1.2 0.3 0 0 0 0 0 0 0 0 0 0 0.0 0 1426.7 0.6 0.27 0.07 0 0 0 0 0 0 0 0 0 0 0.0 0 1507.6 0.6 0.98 0.25 0 0 0 0 0 0 0 0 0 0 0.0 0 1533.0 0.6 0.97 0.24 0 0 0 0 0 0 0 0 0 0 0.0 0 1645.4 1.0 0.13 0.03 0 0 0 0 0 0 0 0 0 0 0.0 0 1708.6 1.0 0.10 0.025 0 0 0 0 0 0 0 0 0 0 0.0 0 1739.5 1.0 0.12 0.03 0 0 0 0 0 0 0 0 0 0 0.0 0 1831.9 1.0 0.24 0.06 0 0 0 0 0 0 0 0 0 0 0.0 0 1836.3 1.0 0.28 0.07 0 0 0 0 0 0 0 0 0 0 0.0 0 1892.4 1.0 0.77 0.19 0 0 0 0 0 0 0 0 0 0 0.0 0 1914.8 1.0 0.43 0.11 0 0 0 0 0 0 0 0 0 0 0.0 0 1921.1 1.0 0.88 0.22 0 0 0 0 0 0 0 0 0 0 0.0 0 1928.2 1.0 0.58 0.15 0 0 0 0 0 0 0 0 0 0 0.0 0 1962.8 1.0 1.65 0.41 0 0 0 0 0 0 0 0 0 0 0.0 0 * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * upper limits of resonances note : enter partial width upper limit by chosing non - zero 
value for pt , where pt=<theta^2 > for particles and ... note : ...pt=<b > for g - rays [ enter : " upper_limit 0.0 " ] ; for each resonance : # upper limits < # open channels ! ecm decmjr g1 dg1 l1 pt g2 dg2 l2 pt g3 dg3 l3 pt exf int 306.2 2.0 2 3.2e-6 0.0 2 0.0045 0.5 0.25 1 0 0 0 0 0 0.0 0 * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * interference between resonances [ numerical integration only ] note : + for positive , - for negative interference ; + - if interference sign is unknown ecm decm jr g1 dg1 l1 pt g2 dg2 l2 pt g3 dg3 l3 pt exf + - 372.2 2.0 1 3.6e-3 1.0e-3 1 0 0.17 0.03 1 0 7.8 3.0 1 0 0.0 1468.1 1.5 1 3800.0 600.0 1 0 0.39 0.11 1 0 2375.0 705.0 1 0 0.0 * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * comments : 1 .direct capture s - factor from iliadis et al .2 . resonance energies from endt 1998 .3 . assumed that doublet ex=9023 ( 3- ) + 9024 [ 6-(4- ) ] kev is dominated by former , natural parity , state .level at ex=9138 kev is disregarded since it originates from an unpublished study ( see endt 1998 ) ; it has not been observed in proton transfer work .5 . for er=306 kev assignments jp=2+ ( for upper limit contribution ) and gg=0.5 ev are guesses .er=372 kev ( ex=9236 kev ; 1- ) : partial widths are found from measured ( p , g ) , ( p , a ) and ( a , g ) strengths ; in particular , we disregard the other component in the doublet [ ex=9235 kev ; ( 2 + to 5 + ) ] for which the shell model assigns jp=5 + ( endt 1998 ) . 7 .er=1468 kev ( ex=10333 kev ; jp=1- ) : partial widths calculated from measured ( p , g ) and ( p , a ) strengths , together with measured gp ( kalifa et al .1978 ) . .... ....31p(p , a)28si * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * 1 ! zproj 15 !ztarget 2 ! zexitparticle ( = 0 when only 2 channels open ) 1.0078 ! aproj 30.9737 !atarget 4.0026 !aexitparticle ( = 0 when only 2 channels open ) 0.5 !jproj 0.5 !jtarget 0.0 ! jexitparticle ( = 0 when only 2 channels open ) 8863.78 ! projectile separation energy ( kev ) 6947.82 ! exit particle separation energy ( = 0 when only 2 channels open ) 1.25 ! radius parameter r0 ( fm ) 3 ! gamma - ray channel number ( = 2 if ejectile is a g - ray ; =3 otherwise ) * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * 1.0 ! minimum energy for numerical integration ( kev ) 5000 ! 
number of random samples ( > 5000 for better statistics ) 0 != 0 for rate output at all temperatures ; = nt for rate output at selected temperatures * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * nonresonant contribution s(kevb ) s'(b ) s''(b / kev ) fracerr cutoff energy ( kev ) 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * resonant contribution note : g1 = entrance channel , g2 = exit channel , g3 = spectator channel ! !ecm , exf in ( kev ) ; wg , gx in ( ev ) ! !note : if er<0 , theta^2=c2s*theta_sp^2 must be entered instead of entrance channel partial width ecm decm wg dwg jr g1 dg1 l1 g2 dg2 l2 g3 dg3 l3 exf int 159.2 2.0 0 0 3 1.4e-11 0.6e-11 3 0.0120.003 3 0.024 0.009 1 0.0 1 599.4 1.0 2.5e-2 6.3e-3 0 0 0 0 0 0 0 0 0 0 0.0 0 622.2 0.7 1.35 0.33 0 0 0 0 0 0 0 0 0 0 0.0 0 846.8 0.5 1.40 0.35 0 0 0 0 0 0 0 0 0 0 0.0 0 952.8 1.0 0.55 0.14 0 0 0 0 0 0 0 0 0 0 0.0 0 984.0 3.0 14.0 1.5 0 0 0 0 0 0 0 0 0 0 0.0 0 1118.7 0.6 4.75 0.50 0 0 0 0 0 0 0 0 0 0 0.0 0 1358.7 0.8 20.0 2.0 0 0 0 0 0 0 0 0 0 0 0.0 0 1423.7 1.5 8.5 1.0 0 0 0 0 0 0 0 0 0 0 0.0 0 1428.8 1.5 13.3 1.5 0 0 0 0 0 0 0 0 0 0 0.0 0 1591.2 3.0 87.5 10.0 0 0 0 0 0 0 0 0 0 0 0.0 0 1662.9 3.0 23.2 2.5 0 0 0 0 0 0 0 0 0 0 0.0 0 1760.7 3.0 22.5 8.8 0 0 0 0 0 0 0 0 0 0 0.0 0 1836.3 1.0 2200.0 225.0 0 0 0 0 0 0 0 0 0 0 0.0 0 1914.8 1.0 100.0 10.0 0 0 0 0 0 0 0 0 0 0 0.0 0 1922.7 1.0 11.5 1.3 0 0 0 0 0 0 0 0 0 0 0.0 0 1928.2 1.0 52.5 5.0 0 0 0 0 0 0 0 0 0 0 0.0 0 1961.5 1.0 1650.0 175.0 0 0 0 0 0 0 0 0 0 0 0.0 0 1962.8 1.0 97.5 10.0 0 0 0 0 0 0 0 0 0 0 0.0 0 2051.3 3.0 275.0 28.0 0 0 0 0 0 0 0 0 0 0 0.0 0 2186.9 3.0 205.0 22.5 0 0 0 0 0 0 0 0 0 0 0.0 0 2360.2 3.0 21.8 2.3 0 0 0 0 0 0 0 0 0 0 0.0 0 2370.0 3.0 525.0 50.0 0 0 0 0 0 0 0 0 0 0 0.0 0 2582.0 3.0 135.0 15.0 0 0 0 0 0 0 0 0 0 0 0.0 0 2621.7 3.0 72.5 7.5 0 0 0 0 0 0 0 0 0 0 0.0 0 2719.5 3.0 160.0 17.5 0 0 0 0 0 0 0 0 0 0 0.0 0 2722.5 3.0 82.5 10.0 0 0 0 0 0 0 0 0 0 0 0.0 0 2739.9 3.0 13.0 1.3 0 0 0 0 0 0 0 0 0 0 0.0 0 2742.8 3.0 2.5 0.3 0 0 0 0 0 0 0 0 0 0 0.0 0 2744.7 3.0 925.0 175.0 0 0 0 0 0 0 0 0 0 0 0.0 0 2765.1 3.0 1250.0 250.0 0 0 0 0 0 0 0 0 0 0 0.0 0 2814.5 3.0 142.5 15.0 0 0 0 0 0 0 0 0 0 0 0.0 0 2857.1 3.0 62.5 7.5 0 0 0 0 0 0 0 0 0 0 0.0 0 2870.6 3.0 950.0 100.0 0 0 0 0 0 0 0 0 0 0 0.0 0 2940.4 3.0 35.0 7.5 0 0 0 0 0 0 0 0 0 0 0.0 0 2942.3 3.0 2500.0 500.0 0 0 0 0 0 0 0 0 0 0 0.0 0 * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * upper limits of resonances note : enter partial width upper limit by chosing non - zero value for pt , where pt=<theta^2 > for particles and ... note : ...pt=<b > for g - rays [ enter : " upper_limit 0.0 " ] ; for each resonance : # upper limits < # open channels ! 
ecm decmjr g1 dg1 l1 pt g2 dg2 l2 pt g3 dg3 l3 pt exf int 195.2 2.0 1 6.4e-7 2.1e-7 1 0 1.2e-3 0.0 1 0.010 0.5 0.25 1 0 0.0 0 201.2 2.0 4 1.0e-9 0.0 4 0.0045 0.1 0.05 4 0 6.3e-3 1.3e-3 1 0 0.0 1 391.2 2.0 2 3.6e-4 0.6e-4 2 0 0.08 0.0 2 0.010 1.0 0.5 1 0 0.0 0 * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * interference between resonances [ numerical integration only ] note : + for positive , - for negative interference ; + - if interference sign is unknown ecm decm jr g1 dg1 l1 pt g2 dg2 l2 pt g3 dg3 l3 pt exf + - 372.2 2.0 1 3.6e-3 1.0e-3 1 0 7.8 3.0 1 0 0.17 0.03 1 0 0.0 1468.1 1.5 1 3800.0 600.0 1 0 2375.0 705.0 1 0 0.39 0.11 1 0 0.0 * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * comments : 1 .resonance energies from endt 1998. 2 . for measured resonance strengths assumed uncertainty of 25% if no value is listed in endt 1998 . 3 . assumed that doublet ex=9023 ( 3- ) + 9024 [ 6-(4- ) ] kev is dominated by former , natural parity , state .gg=0.5 ev for er=195 kev [ ex=9059 kev ; jp=(1,2)- ] is a guess ; furthermore , jp=1- assumed since it better reproduces measured ( p , g ) resonance strength .5 . for er=201 kev ( 4 + ) : gg<<ga according to macarthur et al .1985 ; thus gg=6.3e-3 ev is given by measured ( a , g ) resonance strength , while ga is in the range of 0.03 - 0.2 ev .level at ex=9138 kev is disregarded since it originates from an unpublished study ( see endt 1998 ) ; it has not been observed in proton transfer work .no indication that ex=9170 kev level ( er=306 kev ) decays by emission of alpha - particles ( ross et al .1995 ) ; disregarded the level here in agreement with unnatural parity assignment from the shell model ( endt 1998 ) .8 . for er=391 kev ,gg=1 ev is a rough estimate ; ga / gg<0.08 ( ross et al .1995 ) yields then ga<0.08 ev . 9 .er=372 kev ( ex=9236 kev ; 1- ) : partial widths are found from measured ( p , g ) , ( p , a ) and ( a , g ) strengths ; in particular , we disregard the other component in the doublet [ ex=9235 kev ; ( 2 + to 5 + ) ] for which the shell model assigns jp=5 + ( endt 1998 ) . 10 .er=1468 kev ( ex=10333 kev ; jp=1- ) : partial widths calculated from measured ( p , g ) and ( p , a ) strengths , together with measured gp ( kalifa et al .1978 ) . .... ....30s(p , g)31cl * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * 1 ! zproj 16 ! ztarget 0 ! zexitparticle ( = 0 when only 2 channels open ) 1.0078 !aproj 29.985 !atarget 0 ! aexitparticle ( = 0 when only 2 channels open ) 0.5 ! jproj 0.0 !jtarget 0.0 ! jexitparticle ( = 0 when only 2 channels open ) 290.0 ! projectile separation energy ( kev ) 0.0 !exit particle separation energy ( = 0 when only 2 channels open ) 1.25 !radius parameter r0 ( fm ) 2 ! gamma - ray channel number ( = 2 if ejectile is a g - ray ; =3 otherwise ) * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * 1.0! 
minimum energy for numerical integration ( kev ) 7000 !number of random samples ( > 5000 for better statistics ) 0 != 0 for rate output at all temperatures ; = nt for rate output at selected temperatures * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * non - resonant contribution s(kevb ) s'(b ) s''(b / kev ) fracerr cutoff energy ( kev ) 5.14e0 0.0 0.0 0.5 2000.0 0.0 0.0 0.0 0.0 0.0 * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * resonant contribution note : g1 = entrance channel , g2 = exit channel , g3 = spectator channel ! !ecm , exf in ( kev ) ; wg , gx in ( ev ) ! !note : if er<0 , theta^2=c2s*theta_sp^2 must be entered instead of entrance channel partial width ecm decm wg dwg j g1 dg1 l1 g2 dg2 l2 g3 dg3 l3 exf int 461.0 15.0 0 0 0.5 7.7e-1 3.8e-1 0 8.6e-4 4.3e-4 1 0 0 0 0.0 1 1463.0 2.0 0 0 2.5 2.70e1 1.35e1 2 1.0e-3 0.5e-3 1 0 0 0 0.0 1 * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * upper limits of resonances note : enter partial width upper limit by chosing non - zero value for pt , where pt=<theta^2 > for particles and ... note : ...pt=<b > for g - rays [ enter : " upper_limit 0.0 " ] ; for each resonance : # upper limits < # open channels ! ecm decm jr g1 dg1 l1 pt g2 dg2 l2 pt g3 dg3 l3 pt exf int ! 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 0 * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * interference between resonances [ numerical integration only ] note : + for positive , - for negative interference ; + - if interference sign is unknown ecm decm jr g1 dg1 l1 pt g2 dg2 l2 pt g3 dg3 l3 pt exf !+ - 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * comments : 1 .er=461(15 ) kev energy from beta - delayed proton decay work of axelsson et al .er=1463(2 ) kev energy from beta - delayed proton decay work of fynbo et al .3 . see also wrede et al ..... .... 31s(p , g)32cl * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * 1 !zproj 16 !ztarget 0 !zexitparticle ( = 0 when only 2 channels open ) 1.0078 !aproj 30.9795 !atarget 0 ! aexitparticle ( = 0 when only 2 channels open ) 0.5 ! jproj 0.5 !jtarget 0.0 ! jexitparticle ( = 0 when only 2 channels open ) 1574.0 ! projectile separation energy ( kev ) 0.0 !exit particle separation energy ( = 0 when only 2 channels open ) 1.25 !radius parameter r0 ( fm ) 2 ! 
gamma - ray channel number ( = 2 if ejectile is a g - ray ; =3 otherwise ) * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * 1.0! minimum energy for numerical integration ( kev ) 5000 !number of random samples ( > 5000 for better statistics ) 0 != 0 for rate output at all temperatures ; = nt for rate output at selected temperatures * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * nonresonant contribution s(kevb ) s'(b ) s''(b / kev ) fracerr cutoff energy ( kev ) 7.51e1 -0.0346 3.1e-5 0.4 1000.0 0.0 0.0 0.0 0.0 0.0 * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * resonant contribution note : g1 = entrance channel , g2 = exit channel , g3 = spectator channel ! !ecm , exf in ( kev ) ; wg , gx in ( ev ) ! !note : if er<0 , theta^2=c2s*theta_sp^2 must be entered instead of entrance channel partial width ecm decm wg dwg jr g1 dg1 l1 g2 dg2 l2 g3 dg3 l3 exf int 158 7 0 0 3 2.6e-11 1.0e-11 2 1.1e-3 0.55e-3 1 0 0 0 0.0 1 555 8 0 0 3 8.1e-4 3.2e-4 2 8.6e-3 4.3e-3 1 0 0 0 0.0 1 637 8 0 0 1 4.1e0 1.6e0 0 1.6e-2 0.8e-2 1 0 0 0 0.0 1 706 8 0 0 2 1.9e-1 0.8e-1 2 2.7e-3 1.35e-3 1 0 0 0 0.0 1 1101 12 0 0 2 4.9e0 2.0e0 2 5.5e-2 2.75e-2 1 0 0 0 0.0 1 1294 9 0 0 3 1.2e1 0.5e1 2 6.5e-3 3.25e-3 1 0 0 0 0.0 1 1377 9 0 0 2 2.7e3 1.1e3 1 4.0e-3 2.0e-3 1 0 0 0 0.0 1 1492 9 0 0 4 5.4e1 2.2e1 3 1.7e-3 0.85e-3 1 0 0 0 0.0 1 1602 9 0 0 3 5.6e1 2.2e1 3 2.4e-3 1.2e-3 1 0 0 0 0.0 1 * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * upper limits of resonances note : enter partial width upper limit by chosing non - zero value for pt , where pt=<theta^2 > for particles and ... note : ...pt=<b > for g - rays [ enter : " upper_limit 0.0 " ] ; for each resonance : # upper limits < # open channels ! ecm decm jr g1 dg1 l1 pt g2 dg2 l2 pt g3 dg3 l3 pt exf int ! 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 0 * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * interference between resonances [ numerical integration only ] note : + for positive , - for negative interference ; + - if interference sign is unknown ecm decm jr g1 dg1 l1 pt g2 dg2 l2 pt g3 dg3 l3 pt exf !+ - 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * .... .... 32s(p , g)33cl * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * 1 ! zproj 16 ! ztarget 0 ! 
zexitparticle ( = 0 when only 2 channels open ) 1.0078 ! aproj 31.972 ! atarget 0! aexitparticle ( = 0 when only 2 channels open ) 0.5 !jproj 0.0 !jtarget 0.0 ! jexitparticle ( = 0 when only 2 channels open ) 2276.7 ! projectile separation energy ( kev ) 0.0 !exit particle separation energy ( = 0 when only 2 channels open ) 1.25 !radius parameter r0 ( fm ) 2 ! gamma - ray channel number ( = 2 if ejectile is a g - ray ; =3 otherwise ) * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * 1.0! minimum energy for numerical integration ( kev ) 8000 !number of random samples ( > 5000 for better statistics ) 0 != 0 for rate output at all temperatures ; = nt for rate output at selected temperatures * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * nonresonant contribution s(kevb ) s'(b ) s''(b / kev ) fracerr cutoff energy ( kev ) 1.06e2 -1.94e-2 0.0 0.4 1000.0 0.0 0.0 0.0 0.0 0.0 * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * resonant contribution note : g1 = entrance channel , g2 = exit channel , g3 = spectator channel ! !ecm , exf in ( kev ) ; wg , gx in ( ev ) ! !note : if er<0 , theta^2=c2s*theta_sp^2 must be entered instead of entrance channel partial width ecm decm wg dwg j g1 dg1 l1 g2 dg2 l2 g3dg3 l3 exf int 75.3 0.5 0 0 1.5 4.9e-18 2.0e-18 2 6.6e-3 3.3e-3 1 0 0 0 0.0 1 409.0 0.6 3.7e-5 0.8e-5 0 0 0 0 0 0 0 0 0 0 0.0 0 562.5 0.5 4.0e-2 0.5e-2 0 0 0 0 0 0 0 0 0 0 0.0 0 569.8 0.5 1.2e-1 0.2e-1 0 0 0 0 0 0 0 0 0 0 0.0 0 698.9 0.5 0.7e-4 0.3e-4 0 0 0 0 0 0 0 0 0 0 0.0 0 1539.6 0.5 2.6e-2 0.3e-2 0 0 0 0 0 0 0 0 0 0 0.0 0 1695.4 1.3 4.5e-2 0.9e-2 0 0 0 0 0 0 0 0 0 0 0.0 0 1703.9 1.2 1.9e-1 0.2e-1 0 0 0 0 0 0 0 0 0 0 0.0 0 1822.7 1.4 9.5e-3 4.0e-3 0 0 0 0 0 0 0 0 0 0 0.0 0 1836.3 1.4 3.5e-2 1.0e-2 0 0 0 0 0 0 0 0 0 0 0.0 0 1840.5 2.0 9.2e-2 3.5e-2 0 0 0 0 0 0 0 0 0 0 0.0 0 2161.7 1.6 1.5e-1 0.2e-1 0 0 0 0 0 0 0 0 0 0 0.0 0 2186.9 1.6 7.0e-2 1.0e-2 0 0 0 0 0 0 0 0 0 0 0.0 0 2469.8 1.7 7.0e-1 1.0e-1 0 0 0 0 0 0 0 0 0 0 0.0 0 * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * upper limits of resonances note : enter partial width upper limit by chosing non - zero value for pt , where pt=<theta^2 > for particles and ... note : ...pt=<b > for g - rays [ enter : " upper_limit 0.0 " ] ; for each resonance : # upper limits < # open channels ! ecm decm jr g1 dg1 l1 pt g2 dg2 l2 pt g3 dg3 l3 pt exf int ! 
....
31cl(p,g)32ar
****************************************
1 ! zproj
17 ! ztarget
0 ! zexitparticle (=0 when only 2 channels open)
1.0078 ! aproj
30.9924 ! atarget
0 ! aexitparticle (=0 when only 2 channels open)
0.5 ! jproj
1.5 ! jtarget
0.0 ! jexitparticle (=0 when only 2 channels open)
2420.0 ! projectile separation energy (kev)
0.0 ! exit particle separation energy (=0 when only 2 channels open)
1.25 ! radius parameter r0 (fm)
2 ! gamma-ray channel number (=2 if ejectile is a g-ray; =3 otherwise)
****************************************
1.0 ! minimum energy for numerical integration (kev)
7000 ! number of random samples (>5000 for better statistics)
0 ! =0 for rate output at all temperatures; =nt for rate output at selected temperatures
****************************************
nonresonant contribution
s(kevb) s'(b) s''(b/kev) fracerr cutoff energy (kev)
5.4e1 0.0 0.0 0.5 10000.0
0.0 0.0 0.0 0.0 0.0
****************************************
resonant contribution
note: g1 = entrance channel, g2 = exit channel, g3 = spectator channel
!! ecm, exf in (kev); wg, gx in (ev) !!
note: if er<0, theta^2=c2s*theta_sp^2 must be entered instead of entrance channel partial width
ecm decm wg dwg j g1 dg1 l1 g2 dg2 l2 g3 dg3 l3 exf int
1600 100 0 0 2 4.2e3 2.1e3 0 1.1e-4 0.55e-4 2 0 0 0 0.0 1
****************************************
upper limits of resonances
note: enter partial width upper limit by choosing non-zero value for pt, where pt=<theta^2> for particles and...
note: ...pt=<b> for g-rays [enter: "upper_limit 0.0"]; for each resonance: # upper limits < # open channels!
ecm decm jr g1 dg1 l1 pt g2 dg2 l2 pt g3 dg3 l3 pt exf int
! 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 0
****************************************
interference between resonances [numerical integration only]
note: + for positive, - for negative interference; +- if interference sign is unknown
ecm decm jr g1 dg1 l1 pt g2 dg2 l2 pt g3 dg3 l3 pt exf
!+- 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 0.0 0 0 0.0
0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 0.0 0 0 0.0
****************************************
comments:
1. uncertainty of 100 kev for er=1600 kev is a rough estimate only.
2. dominant transition (e2) to ground state assumed for resonance, according to data on 32si mirror (endt and van der leun 1978).
....

....
32cl(p,g)33ar
****************************************
1 ! zproj
17 ! ztarget
0 ! zexitparticle (=0 when only 2 channels open)
1.0078 ! aproj
31.9857 ! atarget
0 ! aexitparticle (=0 when only 2 channels open)
0.5 ! jproj
1.0 ! jtarget
0.0 ! jexitparticle (=0 when only 2 channels open)
3343.0 ! projectile separation energy (kev)
0.0 ! exit particle separation energy (=0 when only 2 channels open)
1.25 ! radius parameter r0 (fm)
2 ! gamma-ray channel number (=2 if ejectile is a g-ray; =3 otherwise)
****************************************
1.0 ! minimum energy for numerical integration (kev)
5000 ! number of random samples (>5000 for better statistics)
0 ! =0 for rate output at all temperatures; =nt for rate output at selected temperatures
****************************************
nonresonant contribution
s(kevb) s'(b) s''(b/kev) fracerr cutoff energy (kev)
3.25e1 0.0 0.0 0.5 2000.0
0.0 0.0 0.0 0.0 0.0
****************************************
resonant contribution
note: g1 = entrance channel, g2 = exit channel, g3 = spectator channel
!! ecm, exf in (kev); wg, gx in (ev) !!
note: if er<0, theta^2=c2s*theta_sp^2 must be entered instead of entrance channel partial width
ecm decm wg dwg j g1 dg1 l1 g2 dg2 l2 g3 dg3 l3 exf int
21 9 0 0 2.5 2.2e-43 1.1e-43 2 1.8e-2 0.9e-2 1 0 0 0 0 1
113 9 0 0 3.5 5.2e-16 2.6e-16 2 1.9e-3 0.95e-3 1 0 0 0 0 1
476 8 0 0 2.5 8.7e-4 4.35e-4 2 1.5e-2 0.75e-2 1 0 0 0 0 1
847 100 0 0 0.5 3.8e1 1.9e1 0 1.5e-1 0.75e-1 1 0 0 0 0 1
1387 100 0 0 1.5 8.7e1 4.35e1 0 8.5e-2 4.25e-2 1 0 0 0 0 1
****************************************
upper limits of resonances
note: enter partial width upper limit by choosing non-zero value for pt, where pt=<theta^2> for particles and...
note: ...pt=<b> for g-rays [enter: "upper_limit 0.0"]; for each resonance: # upper limits < # open channels!
ecm decm jr g1 dg1 l1 pt g2 dg2 l2 pt g3 dg3 l3 pt exf int
! 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 0
****************************************
interference between resonances [numerical integration only]
note: + for positive, - for negative interference; +- if interference sign is unknown
ecm decm jr g1 dg1 l1 pt g2 dg2 l2 pt g3 dg3 l3 pt exf
!+- 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 0.0 0 0 0.0
0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 0.0 0 0 0.0
****************************************
comments:
1. direct capture s-factor calculated using shell-model spectroscopic factors (schatz et al. 2005).
2. for er=21, 113 and 476 kev the spin-parity assignments are not known experimentally (they are based on coulomb displacement energy calculations of herndl et al. 1995).
3. the levels corresponding to er=847 and 1387 kev are not known experimentally; the energies listed here are based on coulomb displacement energy calculations of herndl et al. 1995; the uncertainty of 100 kev is a rough guess only.
....
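the nonresonant block of each file encodes a quadratic s-factor expansion, s(e) = s(0) + s'(0) e + s''(0) e^2/2, which is folded with the gamow and boltzmann factors up to the cutoff energy. a minimal quadrature sketch, assuming the standard formula n_a<sigma v> = 3.7318e10 mu^(-1/2) t9^(-3/2) int s(e) exp(-2 pi eta - 11.605 e/t9) de (e and s in mev units; see, e.g., iliadis 2007), with 2 pi eta = 31.29 zp zt sqrt(mu/e[kev]); the function name and step count are our own choices:

import math

def nonresonant_rate(t9, s0_kevb, s1_b, s2_bkev, zp, zt, ap, at, ecut_kev, n=20000):
    # midpoint-rule integration of the nonresonant s-factor expansion
    mu = ap * at / (ap + at)  # reduced mass (amu)
    kt_kev = 86.17 * t9       # kt in kev
    de = ecut_kev / n
    total = 0.0
    for i in range(n):
        e = (i + 0.5) * de    # e in kev
        s_mevb = (s0_kevb + s1_b * e + 0.5 * s2_bkev * e * e) * 1e-3
        eta2pi = 31.29 * zp * zt * math.sqrt(mu / e)
        total += s_mevb * math.exp(-eta2pi - e / kt_kev) * de * 1e-3
    return 3.7318e10 / math.sqrt(mu) * t9 ** -1.5 * total

# 32cl(p,g)33ar block above: s(0) = 3.25e1 kev b, cutoff = 2000 kev
print(nonresonant_rate(1.0, 3.25e1, 0.0, 0.0, 1, 17, 1.0078, 31.9857, 2000.0))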
....
35cl(p,g)36ar
****************************************
1 ! zproj
17 ! ztarget
2 ! zexitparticle (=0 when only 2 channels open)
1.0078 ! aproj
34.9688 ! atarget
4.0026 ! aexitparticle (=0 when only 2 channels open)
0.5 ! jproj
1.5 ! jtarget
0.0 ! jexitparticle (=0 when only 2 channels open)
8506.97 ! projectile separation energy (kev)
6640.76 ! exit particle separation energy (=0 when only 2 channels open)
1.25 ! radius parameter r0 (fm)
2 ! gamma-ray channel number (=2 if ejectile is a g-ray; =3 otherwise)
****************************************
1.0 ! minimum energy for numerical integration (kev)
10000 ! number of random samples (>5000 for better statistics)
0 ! =0 for rate output at all temperatures; =nt for rate output at selected temperatures
****************************************
nonresonant contribution
s(kevb) s'(b) s''(b/kev) fracerr cutoff energy (kev)
4.25e2 -8.57e-2 0.0 0.4 1200.0
0.0 0.0 0.0 0.0 0.0
****************************************
resonant contribution
note: g1 = entrance channel, g2 = exit channel, g3 = spectator channel
!! ecm, exf in (kev); wg, gx in (ev) !!
note: if er<0, theta^2=c2s*theta_sp^2 must be entered instead of entrance channel partial width
ecm decm wg dwg jr g1 dg1 l1 g2 dg2 l2 g3 dg3 l3 exf int
48.5 0.6 2.1e-24 8.3e-25 0 0 0 0 0 0 0 0 0 0 0.0 0
165.0 3.0 3.2e-10 1.3e-10 0 0 0 0 0 0 0 0 0 0 0.0 0
299.4 1.8 1.5e-5 0.5e-5 0 0 0 0 0 0 0 0 0 0 0.0 0
402.0 1.0 6.6e-5 3.3e-5 0 0 0 0 0 0 0 0 0 0 0.0 0
414.0 3.0 1.2e-5 0.5e-5 0 0 0 0 0 0 0 0 0 0 0.0 0
431.0 0.7 1.3e-2 0.3e-2 0 0 0 0 0 0 0 0 0 0 0.0 0
506.9 0.9 1.2e-3 0.3e-3 0 0 0 0 0 0 0 0 0 0 0.0 0
517.3 0.6 3.2e-2 0.6e-2 0 0 0 0 0 0 0 0 0 0 0.0 0
558.6 0.9 3.1e-2 0.6e-2 0 0 0 0 0 0 0 0 0 0 0.0 0
610.3 1.1 7.6e-4 1.8e-4 0 0 0 0 0 0 0 0 0 0 0.0 0
625.2 0.6 2.8e-2 0.6e-2 0 0 0 0 0 0 0 0 0 0 0.0 0
637.4 0.7 1.5e-2 0.3e-2 0 0 0 0 0 0 0 0 0 0 0.0 0
684.8 1.1 7.1e-3 1.4e-3 0 0 0 0 0 0 0 0 0 0 0.0 0
712.5 0.7 9.8e-2 1.9e-2 0 0 0 0 0 0 0 0 0 0 0.0 0
733.2 1.1 1.9e-2 0.4e-2 0 0 0 0 0 0 0 0 0 0 0.0 0
741.1 1.1 1.9e-2 0.4e-2 0 0 0 0 0 0 0 0 0 0 0.0 0
751.0 1.2 1.7e-2 0.3e-2 0 0 0 0 0 0 0 0 0 0 0.0 0
793.2 0.3 8.6e-2 1.7e-2 0 0 0 0 0 0 0 0 0 0 0.0 0
835.6 0.3 0.70 0.10 0 0 0 0 0 0 0 0 0 0 0.0 0
849.0 0.8 4.3e-2 0.9e-2 0 0 0 0 0 0 0 0 0 0 0.0 0
858.9 0.8 0.10 0.02 0 0 0 0 0 0 0 0 0 0 0.0 0
867.1 1.3 0.13 0.03 0 0 0 0 0 0 0 0 0 0 0.0 0
872.9 1.3 0.21 0.04 0 0 0 0 0 0 0 0 0 0 0.0 0
886.4 1.0 4.3e-2 0.9e-2 0 0 0 0 0 0 0 0 0 0 0.0 0
932.2 1.4 1.4e-3 0.3e-3 0 0 0 0 0 0 0 0 0 0 0.0 0
941.1 0.9 2.0e-2 0.4e-2 0 0 0 0 0 0 0 0 0 0 0.0 0
958.9 0.5 8.6e-2 1.7e-2 0 0 0 0 0 0 0 0 0 0 0.0 0
967.0 0.8 5.7e-3 1.1e-3 0 0 0 0 0 0 0 0 0 0 0.0 0
995.8 0.5 4.3e-2 0.9e-2 0 0 0 0 0 0 0 0 0 0 0.0 0
1002.6 0.6 1.4e-2 0.3e-2 0 0 0 0 0 0 0 0 0 0 0.0 0
1035.0 1.1 2.0e-2 0.4e-2 0 0 0 0 0 0 0 0 0 0 0.0 0
1043.3 0.5 0.128 0.026 0 0 0 0 0 0 0 0 0 0 0.0 0
1067.3 0.4 0.34 0.07 0 0 0 0 0 0 0 0 0 0 0.0 0
1088.4 0.7 5.7e-2 1.1e-2 0 0 0 0 0 0 0 0 0 0 0.0 0
1099.8 0.5 2.9e-2 0.6e-2 0 0 0 0 0 0 0 0 0 0 0.0 0
1160.1 1.0 4.3e-2 0.9e-2 0 0 0 0 0 0 0 0 0 0 0.0 0
1174.9 0.5 1.7e-2 0.3e-2 0 0 0 0 0 0 0 0 0 0 0.0 0
1230.5 0.8 0.27 0.05 0 0 0 0 0 0 0 0 0 0 0.0 0
1257.5 0.5 1.3e-2 0.3e-2 0 0 0 0 0 0 0 0 0 0 0.0 0
1305.2 0.5 4.3e-2 0.9e-2 0 0 0 0 0 0 0 0 0 0 0.0 0
1355.6 0.5 7.1e-2 1.4e-2 0 0 0 0 0 0 0 0 0 0 0.0 0
1371.6 0.5 2.8e-2 0.6e-2 0 0 0 0 0 0 0 0 0 0 0.0 0
1382.3 0.5 7.1e-2 1.4e-2 0 0 0 0 0 0 0 0 0 0 0.0 0
1395.1 0.5 5.7e-2 1.1e-2 0 0 0 0 0 0 0 0 0 0 0.0 0
1420.4 0.5 4.3e-2 0.9e-2 0 0 0 0 0 0 0 0 0 0 0.0 0
1435.5 0.5 8.6e-2 1.7e-2 0 0 0 0 0 0 0 0 0 0 0.0 0
1449.9 0.5 0.46 0.09 0 0 0 0 0 0 0 0 0 0 0.0 0
1476.2 0.5 2.57 0.51 0 0 0 0 0 0 0 0 0 0 0.0 0
1485.4 2.0 7.1e-2 1.4e-2 0 0 0 0 0 0 0 0 0 0 0.0 0
1495.4 1.0 2.3e-2 0.5e-2 0 0 0 0 0 0 0 0 0 0 0.0 0
1537.4 1.2 1.2 0.2 0 0 0 0 0 0 0 0 0 0 0.0 0
1543.6 1.5 0.11 0.02 0 0 0 0 0 0 0 0 0 0 0.0 0
1569.7 0.5 0.13 0.02 0 0 0 0 0 0 0 0 0 0 0.0 0
1587.9 1.5 0.043 0.009 0 0 0 0 0 0 0 0 0 0 0.0 0
1592.4 0.6 0.23 0.05 0 0 0 0 0 0 0 0 0 0 0.0 0
1632.5 0.9 0.057 0.011 0 0 0 0 0 0 0 0 0 0 0.0 0
1636.0 0.6 0.20 0.04 0 0 0 0 0 0 0 0 0 0 0.0 0
1642.6 0.5 0.10 0.02 0 0 0 0 0 0 0 0 0 0 0.0 0
1660.4 0.5 0.428 0.086 0 0 0 0 0 0 0 0 0 0 0.0 0
1666.4 0.5 0.71 0.14 0 0 0 0 0 0 0 0 0 0 0.0 0
1686.6 1.0 0.043 0.009 0 0 0 0 0 0 0 0 0 0 0.0 0
1713.3 0.5 0.41 0.08 0 0 0 0 0 0 0 0 0 0 0.0 0
1749.0 1.0 0.11 0.02 0 0 0 0 0 0 0 0 0 0 0.0 0
1750.5 1.0 0.17 0.03 0 0 0 0 0 0 0 0 0 0 0.0 0
1760.3 0.5 0.17 0.03 0 0 0 0 0 0 0 0 0 0 0.0 0
1764.7 0.6 0.26 0.05 0 0 0 0 0 0 0 0 0 0 0.0 0
1774.1 1.0 0.043 0.009 0 0 0 0 0 0 0 0 0 0 0.0 0
1794.5 0.9 0.27 0.05 0 0 0 0 0 0 0 0 0 0 0.0 0
1801.7 0.8 0.10 0.02 0 0 0 0 0 0 0 0 0 0 0.0 0
1812.5 1.5 0.086 0.017 0 0 0 0 0 0 0 0 0 0 0.0 0
1822.0 1.5 0.043 0.009 0 0 0 0 0 0 0 0 0 0 0.0 0
1913.8 1.0 1.86 0.37 0 0 0 0 0 0 0 0 0 0 0.0 0
1928.0 1.4 0.87 0.17 0 0 0 0 0 0 0 0 0 0 0.0 0
1993.2 0.5 0.18 0.04 0 0 0 0 0 0 0 0 0 0 0.0 0
2032.6 1.2 1.00 0.20 0 0 0 0 0 0 0 0 0 0 0.0 0
2055.0 0.9 0.30 0.06 0 0 0 0 0 0 0 0 0 0 0.0 0
2075.9 0.6 0.38 0.08 0 0 0 0 0 0 0 0 0 0 0.0 0
2108.6 0.7 1.43 0.29 0 0 0 0 0 0 0 0 0 0 0.0 0
2128.7 0.5 0.76 0.15 0 0 0 0 0 0 0 0 0 0 0.0 0
2168.9 1.0 1.38 0.28 0 0 0 0 0 0 0 0 0 0 0.0 0
2193.4 1.5 0.77 0.15 0 0 0 0 0 0 0 0 0 0 0.0 0
2301.9 1.2 0.57 0.11 0 0 0 0 0 0 0 0 0 0 0.0 0
2398.9 1.0 0.44 0.09 0 0 0 0 0 0 0 0 0 0 0.0 0
2448.7 1.2 1.0 0.2 0 0 0 0 0 0 0 0 0 0 0.0 0
2461.1 1.5 2.0 0.4 0 0 0 0 0 0 0 0 0 0 0.0 0
2520.7 1.5 4.3 0.9 0 0 0 0 0 0 0 0 0 0 0.0 0
2642.6 1.5 3.6 0.7 0 0 0 0 0 0 0 0 0 0 0.0 0
2674.6 1.5 0.75 0.15 0 0 0 0 0 0 0 0 0 0 0.0 0
****************************************
upper limits of resonances
note: enter partial width upper limit by choosing non-zero value for pt, where pt=<theta^2> for particles and...
note: ...pt=<b> for g-rays [enter: "upper_limit 0.0"]; for each resonance: # upper limits < # open channels!
ecm decm jr g1 dg1 l1 pt g2 dg2 l2 pt g3 dg3 l3 pt exf int
86.0 4.0 3.0 4.2e-17 0.0 1 0.0045 0.5 0.25 1 0 1.4e-2 0.0 3 0.010 0.0 0
232.0 4.0 3.0 1.8e-7 0.0 1 0.0045 0.5 0.25 1 0 6.0e-2 0.0 3 0.010 0.0 0
340.0 4.0 2.0 1.5e-4 0.0 0 0.0045 0.5 0.25 1 0 6.0e-1 0.0 2 0.010 0.0 0
379.9 4.0 2.0 5.0e-5 0.0 0 0.0045 0.5 0.25 1 0 4.0e-3 0.0 2 0.010 0.0 0
****************************************
interference between resonances [numerical integration only]
note: + for positive, - for negative interference; +- if interference sign is unknown
ecm decm jr g1 dg1 l1 pt g2 dg2 l2 pt g3 dg3 l3 pt exf
!+- 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 0.0 0 0 0.0
0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 0.0 0 0 0.0
****************************************
comments:
1. direct capture s-factor from iliadis et al.
2. measured resonance energies and strengths from johnson et al. 1974, endt and van der leun 1978 and iliadis et al. 1994, where the strengths have been normalized to value in tab. 1 of iliadis et al. 2001.
3. er=165 kev: spin/parity not known unambiguously, but jp=1-, 2+, 3- most likely (from info obtained in (3he,d) and (6li,d) studies). here we assume jp=1-; jp=2+ or 3- does not change gp much since the quantity (2j+1)c2s was measured in (3he,d) work [iliadis et al. 1994].
4. er=340 kev: spin/parity ambiguous; assumed jp=2+ (s-wave) for upper limit; gp obtained with c2s<0.0067 from iliadis et al. 1994; upper limit on ga is the single-particle value; gg=0.5 ev is a rough guess.
5. er=380 kev: upper limits for gp and ga are obtained from upper limits of (p,g) and (a,g) strengths (iliadis et al. 1994), together with gg/g=1 from ross et al. 1995; gg=0.5 ev is a rough guess. spin/parity of 2+ assumed for s-wave resonance.
6. er=414 kev: assume here that 8919 kev level (roepke et al. 2002) is the same as 8923 kev state (iliadis et al. 1994); spin/parity can then be restricted to jp=(3-5)-. stripping data are best described by l=3, although a small l=1 component can not be excluded; the l=1 component from fit yields a (p,g) resonance strength in excess of experimental upper limit; thus we assume pure l=3 transfer.
7. er=86 and 232 kev: levels at ex=8593 and 8739 kev have been observed by roepke et al. 2002; from gamma-ray decay it can be concluded that jp=(3-,4-,5-,...); for upper limit contribution we assume jp=3-; levels are not observed in (3he,d) study of iliadis et al. 1994, implying upper limits of c2s<0.01; values of ga represent single-particle estimates, while gg=0.5 ev is a rough guess.
....
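in the upper-limit blocks, a non-zero pt is the assumed mean dimensionless reduced width <theta^2> of a channel whose width is only bounded from above. one way to propagate such limits, and we believe the intent of this format, is to draw theta^2 from a porter-thomas density (a chi-squared distribution with one degree of freedom) truncated at the experimental bound; conversion to a partial width then goes through the coulomb penetration factor, which is omitted here. a hedged numpy sketch with partly illustrative numbers:

import numpy as np

rng = np.random.default_rng(42)

def sample_reduced_widths(mean_theta2, theta2_limit, n):
    # porter-thomas (chi^2, 1 dof) draws with mean <theta^2>,
    # rejecting samples above the experimental upper limit
    out = np.empty(n)
    k = 0
    while k < n:
        draw = mean_theta2 * rng.chisquare(1, n)
        keep = draw[draw < theta2_limit]
        m = min(keep.size, n - k)
        out[k:k + m] = keep[:m]
        k += m
    return out

# er = 86 kev entry above: proton channel pt = <theta^2> = 0.0045; the 0.01
# truncation used here is illustrative only, not a value from the table
theta2 = sample_reduced_widths(0.0045, 0.01, 10000)
# gp would follow as 2 * (hbar^2 / (mu r^2)) * p_l(er) * theta^2, with
# r = r0 (ap^(1/3) + at^(1/3)) built from the "radius parameter" header entry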
....
35cl(p,a)32s
****************************************
1 ! zproj
17 ! ztarget
2 ! zexitparticle (=0 when only 2 channels open)
1.0078 ! aproj
34.9688 ! atarget
4.0026 ! aexitparticle (=0 when only 2 channels open)
0.5 ! jproj
1.5 ! jtarget
0.0 ! jexitparticle (=0 when only 2 channels open)
8506.97 ! projectile separation energy (kev)
6640.76 ! exit particle separation energy (=0 when only 2 channels open)
1.25 ! radius parameter r0 (fm)
3 ! gamma-ray channel number (=2 if ejectile is a g-ray; =3 otherwise)
****************************************
1.0 ! minimum energy for numerical integration (kev)
10000 ! number of random samples (>5000 for better statistics)
0 ! =0 for rate output at all temperatures; =nt for rate output at selected temperatures
****************************************
nonresonant contribution
s(kevb) s'(b) s''(b/kev) fracerr cutoff energy (kev)
2.8e-4 -2.5e-6 1.9e-7 0.4 850.0
0.0 0.0 0.0 0.0 0.0
****************************************
resonant contribution
note: g1 = entrance channel, g2 = exit channel, g3 = spectator channel
!! ecm, exf in (kev); wg, gx in (ev) !!
note: if er<0, theta^2=c2s*theta_sp^2 must be entered instead of entrance channel partial width
ecm decm wg dwg jr g1 dg1 l1 g2 dg2 l2 g3 dg3 l3 exf int
402.0 1.0 0 0 2.0 1.1e-4 0.6e-4 0 9.8e-3 3.6e-3 2 0.5 0.25 1 0.0 0
849.0 0.8 0.15 0.04 0 0 0 0 0 0 0 0 0 0 0.0 0
858.9 0.8 1.00 0.25 0 0 0 0 0 0 0 0 0 0 0.0 0
941.1 0.9 0.23 0.06 0 0 0 0 0 0 0 0 0 0 0.0 0
958.9 0.5 0.063 0.016 0 0 0 0 0 0 0 0 0 0 0.0 0
987.3 1.2 0.024 0.006 0 0 0 0 0 0 0 0 0 0 0.0 0
1088.4 0.7 1.5 0.4 0 0 0 0 0 0 0 0 0 0 0.0 0
1160.0 1.0 0.88 0.22 0 0 0 0 0 0 0 0 0 0 0.0 0
1196.2 1.4 0.25 0.06 0 0 0 0 0 0 0 0 0 0 0.0 0
1230.5 0.8 0.39 0.10 0 0 0 0 0 0 0 0 0 0 0.0 0
1382.3 0.5 1.8 0.4 0 0 0 0 0 0 0 0 0 0 0.0 0
1395.1 0.5 0.35 0.09 0 0 0 0 0 0 0 0 0 0 0.0 0
1475.6 1.6 2.75 0.69 0 0 0 0 0 0 0 0 0 0 0.0 0
1484.9 1.6 7.5 1.9 0 0 0 0 0 0 0 0 0 0 0.0 0
1537.4 1.2 3.0 0.8 0 0 0 0 0 0 0 0 0 0 0.0 0
1543.6 1.5 30.0 7.5 0 0 0 0 0 0 0 0 0 0 0.0 0
1587.9 1.5 0.75 0.19 0 0 0 0 0 0 0 0 0 0 0.0 0
1592.4 0.6 22.5 5.6 0 0 0 0 0 0 0 0 0 0 0.0 0
1660.4 0.5 1.9 0.5 0 0 0 0 0 0 0 0 0 0 0.0 0
1666.4 0.5 18.8 4.7 0 0 0 0 0 0 0 0 0 0 0.0 0
1694.2 2.0 10.0 2.5 0 0 0 0 0 0 0 0 0 0 0.0 0
1753.5 2.0 1.00 0.25 0 0 0 0 0 0 0 0 0 0 0.0 0
1760.3 0.5 1.6 0.4 0 0 0 0 0 0 0 0 0 0 0.0 0
1774.1 1.0 0.33 0.08 0 0 0 0 0 0 0 0 0 0 0.0 0
1794.5 0.9 1.6 0.4 0 0 0 0 0 0 0 0 0 0 0.0 0
1812.5 1.5 10.0 2.5 0 0 0 0 0 0 0 0 0 0 0.0 0
1870.1 2.0 0.75 0.19 0 0 0 0 0 0 0 0 0 0 0.0 0
1932.3 2.0 31.3 7.8 0 0 0 0 0 0 0 0 0 0 0.0 0
1968.3 2.0 0.59 0.15 0 0 0 0 0 0 0 0 0 0 0.0 0
1980.9 2.0 7.5 1.9 0 0 0 0 0 0 0 0 0 0 0.0 0
2016.9 2.0 6.3 1.6 0 0 0 0 0 0 0 0 0 0 0.0 0
2051.8 2.0 17.5 4.4 0 0 0 0 0 0 0 0 0 0 0.0 0
2061.6 2.0 0.43 0.11 0 0 0 0 0 0 0 0 0 0 0.0 0
2085.9 2.0 27.5 6.9 0 0 0 0 0 0 0 0 0 0 0.0 0
2109.2 2.0 7.5 1.9 0 0 0 0 0 0 0 0 0 0 0.0 0
2143.6 1.1 125.0 31.3 0 0 0 0 0 0 0 0 0 0 0.0 0
2156.8 2.0 5.0 1.3 0 0 0 0 0 0 0 0 0 0 0.0 0
2167.5 2.0 2.4 0.6 0 0 0 0 0 0 0 0 0 0 0.0 0
2193.4 1.5 10.0 2.5 0 0 0 0 0 0 0 0 0 0 0.0 0
2194.7 1.2 23.8 6.0 0 0 0 0 0 0 0 0 0 0 0.0 0
2231.7 10.0 42.5 10.6 0 0 0 0 0 0 0 0 0 0 0.0 0
2252.1 2.0 20.0 5.0 0 0 0 0 0 0 0 0 0 0 0.0 0
2256.9 2.0 3.3 0.8 0 0 0 0 0 0 0 0 0 0 0.0 0
2273.5 2.0 7.5 1.9 0 0 0 0 0 0 0 0 0 0 0.0 0
2282.2 3.0 1.6 0.4 0 0 0 0 0 0 0 0 0 0 0.0 0
2309.4 3.0 0.46 0.11 0 0 0 0 0 0 0 0 0 0 0.0 0
2325.3 1.5 3.9 1.0 0 0 0 0 0 0 0 0 0 0 0.0 0
2345.0 1.5 36.3 9.1 0 0 0 0 0 0 0 0 0 0 0.0 0
2358.0 7.0 23.8 5.9 0 0 0 0 0 0 0 0 0 0 0.0 0
2395.3 3.0 48.8 12.2 0 0 0 0 0 0 0 0 0 0 0.0 0
2410.3 2.7 0.88 0.22 0 0 0 0 0 0 0 0 0 0 0.0 0
2427.1 2.7 0.88 0.22 0 0 0 0 0 0 0 0 0 0 0.0 0
2431.6 2.7 0.63 0.15 0 0 0 0 0 0 0 0 0 0 0.0 0
2453.3 2.5 103.8 25.9 0 0 0 0 0 0 0 0 0 0 0.0 0
2469.2 2.5 18.8 4.7 0 0 0 0 0 0 0 0 0 0 0.0 0
2486.5 2.5 8.8 2.2 0 0 0 0 0 0 0 0 0 0 0.0 0
2522.7 3.0 100.0 25.0 0 0 0 0 0 0 0 0 0 0 0.0 0
2536.7 2.6 20.0 5.0 0 0 0 0 0 0 0 0 0 0 0.0 0
2543.1 3.0 62.5 15.6 0 0 0 0 0 0 0 0 0 0 0.0 0
2556.1 3.0 51.3 12.8 0 0 0 0 0 0 0 0 0 0 0.0 0
2583.6 2.8 3.0 0.8 0 0 0 0 0 0 0 0 0 0 0.0 0
2602.6 2.8 3.1 0.8 0 0 0 0 0 0 0 0 0 0 0.0 0
2613.0 3.5 13.8 3.4 0 0 0 0 0 0 0 0 0 0 0.0 0
2616.2 2.6 32.5 8.1 0 0 0 0 0 0 0 0 0 0 0.0 0
2628.0 2.7 53.8 13.4 0 0 0 0 0 0 0 0 0 0 0.0 0
2652.9 3.0 52.5 13.1 0 0 0 0 0 0 0 0 0 0 0.0 0
2699.2 2.9 1.6 0.4 0 0 0 0 0 0 0 0 0 0 0.0 0
2703.0 2.7 113.0 28.1 0 0 0 0 0 0 0 0 0 0 0.0 0
2717.0 3.0 57.5 14.4 0 0 0 0 0 0 0 0 0 0 0.0 0
2735.5 3.0 500.0 125.0 0 0 0 0 0 0 0 0 0 0 0.0 0
2741.2 3.0 36.3 9.1 0 0 0 0 0 0 0 0 0 0 0.0 0
2770.6 2.8 27.5 6.9 0 0 0 0 0 0 0 0 0 0 0.0 0
2796.4 3.0 1.4 0.3 0 0 0 0 0 0 0 0 0 0 0.0 0
2804.9 2.8 18.8 4.7 0 0 0 0 0 0 0 0 0 0 0.0 0
2831.8 3.0 11.3 2.8 0 0 0 0 0 0 0 0 0 0 0.0 0
2837.3 3.0 17.5 4.4 0 0 0 0 0 0 0 0 0 0 0.0 0
****************************************
upper limits of resonances
note: enter partial width upper limit by choosing non-zero value for pt, where pt=<theta^2> for particles and...
note: ...pt=<b> for g-rays [enter: "upper_limit 0.0"]; for each resonance: # upper limits < # open channels!
ecm decm jr g1 dg1 l1 pt g2 dg2 l2 pt g3 dg3 l3 pt exf int
48.5 0.6 2.0 3.3e-24 1.3e-24 0 0 3.7e-2 0.0 2 0.010 0.5 0.25 1 0 0.0 0
86.0 4.0 3.0 4.2e-17 0.0 1 0.0045 1.4e-2 0.0 3 0.010 0.5 0.25 1 0 0.0 0
165.0 3.0 1.0 8.6e-10 3.4e-10 1 0 2.0e-3 0.0 1 0.010 0.5 0.25 1 0 0.0 0
232.0 4.0 3.0 1.8e-7 0.0 1 0.0045 6.0e-2 0.0 3 0.010 0.5 0.25 1 0 0.0 0
299.4 1.8 1.0 4.0e-5 1.3e-5 1 0 4.0e-2 0.0 1 0.010 0.5 0.25 1 0 0.0 0
340.0 4.0 2.0 1.5e-4 0.0 0 0.0045 6.0e-1 0.0 2 0.010 0.5 0.25 1 0 0.0 0
379.9 4.0 2.0 5.0e-5 0.0 0 0.0045 4.0e-3 0.0 2 0.010 0.5 0.25 1 0 0.0 0
414.0 3.0 3.0 1.4e-5 5.6e-6 3 0 3.0e-2 0.0 3 0.010 0.5 0.25 1 0 0.0 0
431.0 0.7 3.0 1.5e-2 0.3e-2 1 0 0.01 0.0 3 0.010 0.5 0.25 1 0 0.0 0
506.9 0.9 5.0 8.7e-4 2.2e-4 3 0 0.015 0.0 5 0.010 0.5 0.25 1 0 0.0 0
517.3 0.6 2.0 5.1e-2 1.0e-2 0 0 2.5 0.0 2 0.010 0.5 0.25 1 0 0.0 0
558.6 0.9 3.0 5.1e-2 2.0e-2 1 0 7.1e-3 0.0 3 0.010 0.035 0.007 1 0 0.0 0
610.3 1.1 1.0 9.1 0.0 1 0.0045 12.0 0.0 1 0.010 0.5 0.25 1 0 0.0 0
625.2 0.6 3.0 12.0 0.0 1 0.0045 1.5 0.0 3 0.010 0.5 0.25 1 0 0.0 0
637.4 0.7 2.0 42.0 0.0 0 0.0045 5.9 0.0 2 0.010 0.5 0.25 1 0 0.0 0
684.8 1.1 3.0 31.0 0.0 1 0.0045 2.3 0.0 3 0.010 0.5 0.25 1 0 0.0 0
733.2 1.1 1.0 59.0 0.0 1 0.0045 27.0 0.0 1 0.010 0.5 0.25 1 0 0.0 0
741.1 1.1 1.0 65.0 0.0 1 0.0045 28.0 0.0 1 0.010 0.5 0.25 1 0 0.0 0
751.0 1.2 3.0 78.0 0.0 1 0.0045 3.5 0.0 3 0.010 0.5 0.25 1 0 0.0 0
****************************************
interference between resonances [numerical integration only]
note: + for positive, - for negative interference; +- if interference sign is unknown
ecm decm jr g1 dg1 l1 pt g2 dg2 l2 pt g3 dg3 l3 pt exf
!+- 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 0.0 0 0 0.0
0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 0.0 0 0 0.0
****************************************
comments:
1. measured resonance energies and strengths from bosnjakovic et al. 1968 and endt and van der leun 1978; note that no carefully measured (p,a) strength exists for this reaction.
2. er=49 kev: adopted upper limit of ga equal the single-particle estimate; gg=0.5 ev is a rough estimate.
3. er=165 kev: spin/parity not known unambiguously, but jp=1-, 2+, 3- most likely (from info obtained in (3he,d) and (6li,d) studies). here we assume jp=1- (jp=2+ or 3- does not change gp much since the quantity (2j+1)c2s was measured in (3he,d) work [iliadis et al. 1994]); upper limit for ga is derived from strength measured in 32s(a,g)36ar reaction, while gg=0.5 ev is a rough guess.
4. er=299 kev: gp from measured (p,g) strength; gg=0.5 ev is a rough estimate; rough upper limit for ga is then obtained using ga/gg<0.08 from ross et al. 1995.
5. er=340 kev: spin/parity ambiguous; assumed jp=2+ (s-wave) for upper limit; gp obtained with c2s<0.0067 from iliadis et al. 1994; upper limit on ga is the single-particle value; gg=0.5 ev is a rough guess.
6. er=380 kev: upper limits for gp and ga are obtained from upper limits of (p,g) and (a,g) strengths (iliadis et al. 1994), together with gg/g=1 from ross et al. 1995; gg=0.5 ev is a rough guess. spin/parity of 2+ assumed for s-wave resonance.
7. er=402 kev: only (p,g) and (a,g) strengths have been measured; state not observed in (3he,d); we derived gp and gg assuming that g=gg, but the assumption g=ga is certainly not excluded; gg=0.5 ev is a rough estimate. better estimates can not be derived at present.
8. er=414 kev: assume here that 8919 kev level (roepke et al. 2002) is the same as 8923 kev state (iliadis et al. 1994); spin/parity can then be restricted to jp=(3-5)-. stripping data are best described by l=3, although a small l=1 component can not be excluded; the l=1 component from fit yields a (p,g) resonance strength in excess of experimental upper limit; thus we assume pure l=3 transfer. value of gg=0.5 ev is a rough guess, while ga<0.03 ev is then obtained from ga/g (ross et al. 1995).
9. er=431 kev: from analog assignments, endt 1998 suggests jp=3-, implying l=1+3; ga is obtained with ga/gg<0.02 (ross et al. 1995) and gg=0.5 ev (rough guess).
10. er=507 kev: from energy arguments it can be concluded that most of the transfer strength arises from 9014 kev component of 9014+9024 kev doublet; furthermore, from analog assignments endt 1998 suggests jp=5-, implying l=3; value for ga represents single-particle estimate, while gg=0.5 ev is a rough guess.
11. er=517 kev: from analog assignments, endt 1998 suggests jp=2+, implying l=0+2; we assume g=gg; value of ga represents single-particle estimate, while gg=0.5 ev is a guess.
12. er=559 kev: value for gg is obtained from measured (p,g) strength since g=gp (ross et al. 1995); upper limit for ga is calculated from gp (stripping) and measured upper limit of ga/g (ross et al. 1995).
13. for upper limit resonances at er=610-751 kev, we adopted single-particle estimates as upper limits for gp and ga; gg=0.5 ev is a rough guess.
14. er=86 and 232 kev: levels at ex=8593 and 8739 kev have been observed by roepke et al. 2002; from gamma-ray decay it can be concluded that jp=(3-,4-,5-,...); for upper limit contribution we assume jp=3-; levels are not observed in (3he,d) study of iliadis et al. 1994, implying upper limits of c2s<0.01; values of ga represent single-particle estimates, while gg=0.5 ev is a rough guess.
15. nonresonant s-factor describes low-energy tails of broad resonances in the er=1231-2194 kev region (info from endt and van der leun 1978).
....
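the "number of random samples" entry in each header fixes how many monte carlo draws are used to turn the 1-sigma columns (decm, dwg, dg1, ...) into low/median/high rates. a sketch of that idea for narrow resonances, assuming, as is common for strictly positive quantities, a lognormal density whose median is the tabulated strength; the helper name and the lognormal reading are our own:

import numpy as np

rng = np.random.default_rng(0)

def mc_rate(t9, rows, aproj, atarget, n=10000):
    # rows = [(er_kev, wg_ev, dwg_ev), ...]; each strength is drawn from a
    # lognormal with median wg and log-width ln(1 + dwg/wg)
    mu = aproj * atarget / (aproj + atarget)
    total = np.zeros(n)
    for er_kev, wg_ev, dwg_ev in rows:
        sigma = np.log(1.0 + dwg_ev / wg_ev)
        wg = wg_ev * np.exp(sigma * rng.standard_normal(n))  # ev
        total += (1.5399e11 / (mu * t9) ** 1.5 * wg * 1e-6
                  * np.exp(-11.605 * er_kev * 1e-3 / t9))
    return np.percentile(total, [16, 50, 84])

# two of the measured 35cl(p,a)32s strengths from the block above
low, med, high = mc_rate(1.0, [(849.0, 0.15, 0.04), (858.9, 1.00, 0.25)],
                         1.0078, 34.9688)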
....
34ar(p,g)35k
****************************************
1 ! zproj
18 ! ztarget
0 ! zexitparticle (=0 when only 2 channels open)
1.0078 ! aproj
33.9803 ! atarget
0 ! aexitparticle (=0 when only 2 channels open)
0.5 ! jproj
0.0 ! jtarget
0.0 ! jexitparticle (=0 when only 2 channels open)
84.5 ! projectile separation energy (kev)
0.0 ! exit particle separation energy (=0 when only 2 channels open)
1.25 ! radius parameter r0 (fm)
2 ! gamma-ray channel number (=2 if ejectile is a g-ray; =3 otherwise)
****************************************
1.0 ! minimum energy for numerical integration (kev)
7000 ! number of random samples (>5000 for better statistics)
0 ! =0 for rate output at all temperatures; =nt for rate output at selected temperatures
****************************************
nonresonant contribution
s(kevb) s'(b) s''(b/kev) fracerr cutoff energy (kev)
1.12e1 0.0 0.0 0.5 10000.0
0.0 0.0 0.0 0.0 0.0
****************************************
resonant contribution
note: g1 = entrance channel, g2 = exit channel, g3 = spectator channel
!! ecm, exf in (kev); wg, gx in (ev) !!
note: if er<0, theta^2=c2s*theta_sp^2 must be entered instead of entrance channel partial width
ecm decm wg dwg j g1 dg1 l1 g2 dg2 l2 g3 dg3 l3 exf int
1469.0 5.0 0 0 0.5 2.2e3 1.1e3 0 2.1e-4 1.05e-4 1 0 0 0 0.0 1
****************************************
upper limits of resonances
note: enter partial width upper limit by choosing non-zero value for pt, where pt=<theta^2> for particles and...
note: ...pt=<b> for g-rays [enter: "upper_limit 0.0"]; for each resonance: # upper limits < # open channels!
ecm decm jr g1 dg1 l1 pt g2 dg2 l2 pt g3 dg3 l3 pt exf int
! 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 0
****************************************
interference between resonances [numerical integration only]
note: + for positive, - for negative interference; +- if interference sign is unknown
ecm decm jr g1 dg1 l1 pt g2 dg2 l2 pt g3 dg3 l3 pt exf
!+- 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 0.0 0 0 0.0
0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 0.0 0 0 0.0
****************************************
comments:
1. q-value from new 35k mass excess of yazidjian et al.
2. resonance energy from beta-delayed proton decay of 35ca (trinder et al. 1999).
....
....
35ar(p,g)36k
****************************************
1 ! zproj
18 ! ztarget
0 ! zexitparticle (=0 when only 2 channels open)
1.0078 ! aproj
34.975 ! atarget
0 ! aexitparticle (=0 when only 2 channels open)
0.5 ! jproj
1.5 ! jtarget
0.0 ! jexitparticle (=0 when only 2 channels open)
1668.0 ! projectile separation energy (kev)
0.0 ! exit particle separation energy (=0 when only 2 channels open)
1.25 ! radius parameter r0 (fm)
2 ! gamma-ray channel number (=2 if ejectile is a g-ray; =3 otherwise)
****************************************
1.0 ! minimum energy for numerical integration (kev)
5000 ! number of random samples (>5000 for better statistics)
0 ! =0 for rate output at all temperatures; =nt for rate output at selected temperatures
****************************************
nonresonant contribution
s(kevb) s'(b) s''(b/kev) fracerr cutoff energy (kev)
1.24e2 -8.66e-2 9.96e-5 0.4 1000.0
0.0 0.0 0.0 0.0 0.0
****************************************
resonant contribution
note: g1 = entrance channel, g2 = exit channel, g3 = spectator channel
!! ecm, exf in (kev); wg, gx in (ev) !!
note: if er<0, theta^2=c2s*theta_sp^2 must be entered instead of entrance channel partial width
ecm decm wg dwg j g1 dg1 l1 g2 dg2 l2 g3 dg3 l3 exf int
-0.1 21 0 0 2 0.0372 0.0149 1 2.7e-4 1.35e-4 1 0 0 0 0.0 1
224 21 0 0 2 5.7e-7 2.28e-7 0 1.0e-2 0.5e-2 1 0 0 0 0.0 1
604 31 0 0 3 4.2e-1 1.68e-1 1 4.7e-4 2.35e-4 1 0 0 0 0.0 1
744 31 0 0 2 2.5e0 1.0e0 0 1.1e-2 0.55e-2 1 0 0 0 0.0 1
****************************************
upper limits of resonances
note: enter partial width upper limit by choosing non-zero value for pt, where pt=<theta^2> for particles and...
note: ...pt=<b> for g-rays [enter: "upper_limit 0.0"]; for each resonance: # upper limits < # open channels!
ecm decm jr g1 dg1 l1 pt g2 dg2 l2 pt g3 dg3 l3 pt exf int
! 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 0
****************************************
interference between resonances [numerical integration only]
note: + for positive, - for negative interference; +- if interference sign is unknown
ecm decm jr g1 dg1 l1 pt g2 dg2 l2 pt g3 dg3 l3 pt exf
!+- 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 0.0 0 0 0.0
0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 0.0 0 0 0.0
****************************************
comments:
1. fictitious "er=-0.1 kev" resonance is actually located at er=2+-21 kev [using the latest q-value]. since this low energy causes problems in the calculation of the coulomb wave functions, the state is shifted just below threshold.
....
....
36ar(p,g)37k
****************************************
1 ! zproj
18 ! ztarget
0 ! zexitparticle (=0 when only 2 channels open)
1.0078 ! aproj
35.967 ! atarget
0 ! aexitparticle (=0 when only 2 channels open)
0.5 ! jproj
0.0 ! jtarget
0.0 ! jexitparticle (=0 when only 2 channels open)
1857.6 ! projectile separation energy (kev)
0.0 ! exit particle separation energy (=0 when only 2 channels open)
1.25 ! radius parameter r0 (fm)
2 ! gamma-ray channel number (=2 if ejectile is a g-ray; =3 otherwise)
****************************************
1.0 ! minimum energy for numerical integration (kev)
20000 ! number of random samples (>5000 for better statistics)
0 ! =0 for rate output at all temperatures; =nt for rate output at selected temperatures
****************************************
nonresonant contribution
s(kevb) s'(b) s''(b/kev) fracerr cutoff energy (kev)
1.2e2 -4.63e-2 3.0e-5 0.4 1500.0
0.0 0.0 0.0 0.0 0.0
****************************************
resonant contribution
note: g1 = entrance channel, g2 = exit channel, g3 = spectator channel
!! ecm, exf in (kev); wg, gx in (ev) !!
note: if er<0, theta^2=c2s*theta_sp^2 must be entered instead of entrance channel partial width
ecm decm wg dwg j g1 dg1 l1 g2 dg2 l2 g3 dg3 l3 exf int
312.4 0.2 7.0e-4 1.0e-4 0 0 0 0 0 0 0 0 0 0 0.0 0
892.5 0.1 2.4e-1 0.2e-1 0 0 0 0 0 0 0 0 0 0 0.0 0
1224.2 0.1 1.5e-2 0.2e-2 0 0 0 0 0 0 0 0 0 0 0.0 0
1381.5 0.2 6.9e-4 1.0e-4 0 0 0 0 0 0 0 0 0 0 0.0 0
1456.2 2.0 3.5e-2 0.4e-2 0 0 0 0 0 0 0 0 0 0 0.0 0
1764.2 3.0 1.6e-2 0.3e-2 0 0 0 0 0 0 0 0 0 0 0.0 0
2132.2 20.0 8.0e-2 2.2e-2 0 0 0 0 0 0 0 0 0 0 0.0 0
2420.2 5.0 2.6e-2 1.4e-2 0 0 0 0 0 0 0 0 0 0 0.0 0
2555.2 0.4 1.9e-1 0.5e-1 0 0 0 0 0 0 0 0 0 0 0.0 0
2574.6 0.3 1.3e-1 0.4e-1 0 0 0 0 0 0 0 0 0 0 0.0 0
****************************************
upper limits of resonances
note: enter partial width upper limit by choosing non-zero value for pt, where pt=<theta^2> for particles and...
note: ...pt=<b> for g-rays [enter: "upper_limit 0.0"]; for each resonance: # upper limits < # open channels!
ecm decm jr g1 dg1 l1 pt g2 dg2 l2 pt g3 dg3 l3 pt exf int
! 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 0
****************************************
interference between resonances [numerical integration only]
note: + for positive, - for negative interference; +- if interference sign is unknown
ecm decm jr g1 dg1 l1 pt g2 dg2 l2 pt g3 dg3 l3 pt exf
!+- 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 0.0 0 0 0.0
0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 0.0 0 0 0.0
****************************************
....
....
35k(p,g)36ca
****************************************
1 ! zproj
19 ! ztarget
0 ! zexitparticle (=0 when only 2 channels open)
1.0078 ! aproj
34.988 ! atarget
0 ! aexitparticle (=0 when only 2 channels open)
0.5 ! jproj
1.5 ! jtarget
0.0 ! jexitparticle (=0 when only 2 channels open)
2556.0 ! projectile separation energy (kev)
0.0 ! exit particle separation energy (=0 when only 2 channels open)
1.25 ! radius parameter r0 (fm)
2 ! gamma-ray channel number (=2 if ejectile is a g-ray; =3 otherwise)
****************************************
1.0 ! minimum energy for numerical integration (kev)
7000 ! number of random samples (>5000 for better statistics)
0 ! =0 for rate output at all temperatures; =nt for rate output at selected temperatures
****************************************
nonresonant contribution
s(kevb) s'(b) s''(b/kev) fracerr cutoff energy (kev)
2.72e1 0.0 0.0 0.5 10000.0
0.0 0.0 0.0 0.0 0.0
****************************************
resonant contribution
note: g1 = entrance channel, g2 = exit channel, g3 = spectator channel
!! ecm, exf in (kev); wg, gx in (ev) !!
note: if er<0, theta^2=c2s*theta_sp^2 must be entered instead of entrance channel partial width
ecm decm wg dwg j g1 dg1 l1 g2 dg2 l2 g3 dg3 l3 exf int
459 43 0 0 2 1.1e-3 0.55e-3 0 3.8e-4 1.9e-4 2 0 0 0 0 1
****************************************
upper limits of resonances
note: enter partial width upper limit by choosing non-zero value for pt, where pt=<theta^2> for particles and...
note: ...pt=<b> for g-rays [enter: "upper_limit 0.0"]; for each resonance: # upper limits < # open channels!
ecm decm jr g1 dg1 l1 pt g2 dg2 l2 pt g3 dg3 l3 pt exf int
! 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 0
****************************************
interference between resonances [numerical integration only]
note: + for positive, - for negative interference; +- if interference sign is unknown
ecm decm jr g1 dg1 l1 pt g2 dg2 l2 pt g3 dg3 l3 pt exf
!+- 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 0.0 0 0 0.0
0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 0.0 0 0 0.0
****************************************
....
....
39ca(p,g)40sc
****************************************
1 ! zproj
20 ! ztarget
0 ! zexitparticle (=0 when only 2 channels open)
1.0078 ! aproj
38.971 ! atarget
0 ! aexitparticle (=0 when only 2 channels open)
0.5 ! jproj
1.5 ! jtarget
0.0 ! jexitparticle (=0 when only 2 channels open)
538.0 ! projectile separation energy (kev)
0.0 ! exit particle separation energy (=0 when only 2 channels open)
1.25 ! radius parameter r0 (fm)
2 ! gamma-ray channel number (=2 if ejectile is a g-ray; =3 otherwise)
****************************************
1.0 ! minimum energy for numerical integration (kev)
8000 ! number of random samples (>5000 for better statistics)
0 ! =0 for rate output at all temperatures; =nt for rate output at selected temperatures
****************************************
nonresonant contribution
s(kevb) s'(b) s''(b/kev) fracerr cutoff energy (kev)
3.75e1 0.0158 0.0 0.4 1200.0
0.0 0.0 0.0 0.0 0.0
****************************************
resonant contribution
note: g1 = entrance channel, g2 = exit channel, g3 = spectator channel
!! ecm, exf in (kev); wg, gx in (ev) !!
note: if er<0, theta^2=c2s*theta_sp^2 must be entered instead of entrance channel partial width
ecm decm wg dwg j g1 dg1 l1 g2 dg2 l2 g3 dg3 l3 exf int
234.1 3.2 0 0 2 2e-9 0.8e-9 1 1.64e-3 0.8e-3 1 0 0 0 0.0 1
355.5 3.3 0 0 5 3.2e-7 1.3e-7 3 5.3e-4 2.65e-4 1 0 0 0 0.0 1
1132.7 3.4 0 0 2 219 88 1 1.34e-3 0.67e-3 1 0 0 0 0.0 1
1165.2 3.7 0 0 1 394 158 1 8.77e-4 4.39e-4 1 0 0 0 0.0 1
1259.0 3.6 0 0 3 0.11 0.044 1 2.92e-3 1.46e-3 1 0 0 0 0.0 1
****************************************
upper limits of resonances
note: enter partial width upper limit by choosing non-zero value for pt, where pt=<theta^2> for particles and...
note: ...pt=<b> for g-rays [enter: "upper_limit 0.0"]; for each resonance: # upper limits < # open channels!
ecm decm jr g1 dg1 l1 pt g2 dg2 l2 pt g3 dg3 l3 pt exf int
! 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 0
****************************************
interference between resonances [numerical integration only]
note: + for positive, - for negative interference; +- if interference sign is unknown
ecm decm jr g1 dg1 l1 pt g2 dg2 l2 pt g3 dg3 l3 pt exf
!+- 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 0.0 0 0 0.0
0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 0.0 0 0 0.0
****************************************
....
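for a block like 39ca(p,g)40sc above, the recommended rate is (schematically) the sum of the narrow-resonance terms and the integrated nonresonant term, whose fracerr column supplies its relative 1-sigma uncertainty; the 1-sigma columns of the resonance table would be propagated by monte carlo as sketched earlier. a short assembly sketch reusing the earlier helpers:

# reuses omega_gamma(), narrow_resonance_rate() and nonresonant_rate() from
# the earlier sketches; jp = 0.5 and jt = 1.5 are the jproj/jtarget entries
rows = [(234.1, 2, 2e-9, 1.64e-3), (355.5, 5, 3.2e-7, 5.3e-4),
        (1132.7, 2, 219.0, 1.34e-3), (1165.2, 1, 394.0, 8.77e-4),
        (1259.0, 3, 0.11, 2.92e-3)]
t9 = 1.0
resonant = sum(narrow_resonance_rate(t9, er, omega_gamma(j, 0.5, 1.5, g1, g2),
                                     1.0078, 38.971) for er, j, g1, g2 in rows)
total = resonant + nonresonant_rate(t9, 3.75e1, 0.0158, 0.0, 1, 20,
                                    1.0078, 38.971, 1200.0)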
....
40ca(p,g)41sc
****************************************
1 ! zproj
20 ! ztarget
0 ! zexitparticle (=0 when only 2 channels open)
1.0078 ! aproj
39.962 ! atarget
0 ! aexitparticle (=0 when only 2 channels open)
0.5 ! jproj
0.0 ! jtarget
0.0 ! jexitparticle (=0 when only 2 channels open)
1085.1 ! projectile separation energy (kev)
0.0 ! exit particle separation energy (=0 when only 2 channels open)
1.25 ! radius parameter r0 (fm)
2 ! gamma-ray channel number (=2 if ejectile is a g-ray; =3 otherwise)
****************************************
1.0 ! minimum energy for numerical integration (kev)
8000 ! number of random samples (>5000 for better statistics)
0 ! =0 for rate output at all temperatures; =nt for rate output at selected temperatures
****************************************
nonresonant contribution
s(kevb) s'(b) s''(b/kev) fracerr cutoff energy (kev)
1.92e1 0.011 0.0 0.4 1200.0
0.0 0.0 0.0 0.0 0.0
****************************************
resonant contribution
note: g1 = entrance channel, g2 = exit channel, g3 = spectator channel
!! ecm, exf in (kev); wg, gx in (ev) !!
note: if er<0, theta^2=c2s*theta_sp^2 must be entered instead of entrance channel partial width
ecm decm wg dwg j g1 dg1 l1 g2 dg2 l2 g3 dg3 l3 exf int
631.4 0.1 1.8e-3 0.2e-3 0 0 0 0 0 0 0 0 0 0 0.0 0
1010.8 0.5 0 0 1.5 7.6e-1 3.0e-1 2 9.0e-7 4.5e-7 1 0 0 0 0.0 1
1329.6 0.5 0 0 1.5 2.6e2 1.0e2 1 1.2e-4 0.6e-4 1 0 0 0 0.0 1
1503.0 0.1 1.0e-2 0.15e-2 0 0 0 0 0 0 0 0 0 0 0.0 0
1581.5 0.1 9.0e-3 1.0e-3 0 0 0 0 0 0 0 0 0 0 0.0 0
1634.1 0.1 2.0e-3 0.5e-3 0 0 0 0 0 0 0 0 0 0 0.0 0
1797.2 0.1 0.14 0.015 0 0 0 0 0 0 0 0 0 0 0.0 0
1886.9 0.2 3.1e-2 0.4e-2 0 0 0 0 0 0 0 0 0 0 0.0 0
****************************************
upper limits of resonances
note: enter partial width upper limit by choosing non-zero value for pt, where pt=<theta^2> for particles and...
note: ...pt=<b> for g-rays [enter: "upper_limit 0.0"]; for each resonance: # upper limits < # open channels!
ecm decm jr g1 dg1 l1 pt g2 dg2 l2 pt g3 dg3 l3 pt exf int
! 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 0
****************************************
interference between resonances [numerical integration only]
note: + for positive, - for negative interference; +- if interference sign is unknown
ecm decm jr g1 dg1 l1 pt g2 dg2 l2 pt g3 dg3 l3 pt exf
!+- 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 0.0 0 0 0.0
0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0 0 0.0 0.0 0 0 0.0
****************************************
....
a. s. adekola, ph.d. thesis, ohio university (2009).
j. h. aitken et al., can. j. phys. 48 (1970) 1617.
f. ajzenberg-selove, nucl. phys. a 190 (1972) 1.
f. ajzenberg-selove, nucl. phys. a 523 (1991) 1.
r. almanza et al., nucl. phys. a 248 (1975) 214.
n. anantaraman et al., nucl. phys. a 279 (1977) 474.
c. angulo et al., nucl. phys. a 656 (1999) 3.
a. anttila, j. keinonen and m. bister, j. phys. g 3 (1977) 1241.
a. arazi et al., phys. rev. c 74 (2006) 025802.
m. arnould and s. goriely, nucl. phys. a 777 (2006) 157.
g. audi, a. h. wapstra and c. thibault, nucl. phys. a 729 (2003) 337.
l. axelsson et al., nucl. phys. a 634 (1998) 475.
d. w. bardayan et al., phys. rev. c 62 (2000) 055804.
d. w. bardayan et al., phys. rev. c 63 (2001) 065802.
d. w. bardayan et al. (2002) 262501.
d. w. bardayan et al., phys. rev. c 65 (2002) 032801(r).
d. w. bardayan et al., phys. rev. c 70 (2004) 015804.
d. w. bardayan, r. l. kozub and m. s. smith, phys. rev. c 71 (2005) 018801.
d. w. bardayan et al., phys. rev. c 74 (2006) 045804.
d. w. bardayan et al., phys. rev. c 76 (2007) 045803.
g. a. bartholomew et al., can. j. phys. 33 (1955) 441.
n. bateman et al., phys. rev. c 63 (2001) 035803.
a. m. baxter and s. hinds, nucl. phys. a 211 (1973) 7.
h. w. becker et al., z. phys. a 305 (1982) 319.
h. w. becker et al., z. phys. a 351 (1995) 453.
w. benenson et al., phys. rev. c 13 (1976) 1479.
w. benenson et al., phys. rev. c 15 (1977) 1187.
u. e. p. berg and k. wiehard, nucl. phys. a 318 (1979) 453.
j. bommer et al., nucl. phys. a 251 (1975) 246.
b. bosnjakovic et al., nucl. phys. a 110 (1968).
l. buchmann, j. m. dauria and p. mccorquodale, astrophys. j. 324 (1988) 953.
j. a. caggiano et al., phys. rev. c 64 (2001) 025802.
j. a. caggiano et al., phys. rev. c 65 (2002) 055801.
k. y. chae et al., phys. rev. c 74 (2006) 012801(r).
a. chafa et al., phys. rev. c 75 (2007) 035810.
a. e. champagne and m. pitt, nucl. phys. a 457 (1986) 367.
a. e. champagne et al., nucl. phys. a 487 (1988) 433.
a. e. champagne, p. v. magnus and m. s. smith, nucl. phys. a 512 (1990) 317.
a. e. champagne, b. a. brown and r. sherr, nucl. phys. a 556 (1993) 123.
r. chatterjee, a. okolowicz and m. ploszajczak, nucl. phys. a 764 (2006) 528.
g. chouraqui et al., j. de phys. 31 (1970) 249.
c. chronidou et al., eur. phys. j. a 6 (1999) 303.
r. r. c. clement et al., phys. rev. lett. 92 (2004) 172502.
h. comisel et al., phys. rev. c 75 (2007) 045807.
r. coszach et al., phys. rev. c 50 (1994) 1695.
m. couder et al., phys. rev. c 69 (2004) 022801.
g. cowan, statistical data analysis (oxford university press, new york, 1998).
j. cseh et al., nucl. phys. a 385 (1982) 43.
a. cunsolo et al., phys. rev. c 24 (1981) 476.
dababneh et al., phys. rev. c 68 (2003) 025801.
j. c. dalouzy et al., phys. rev. c, in print (2009).
j. m. dauria et al., phys. rev. c 69 (2004) 065804.
b. davids et al., phys. rev. c 67 (2003) 065808.
f. de oliveira et al., nucl. phys. a 587 (1996) 231; f. de oliveira, thesis, université paris sud, unpublished (1995).
f. de oliveira et al., phys. rev. c 55 (1997) 3149.
c. m. deibel, ph.d. thesis, yale university (2008).
p. descouvemont, phys. rev. c 38 (1988) 2397.
p. descouvemont, astrophys. j. 543 (2000) 425.
p. descouvemont et al., at. data nucl. data tab. 88 (2004) 203.
w. r. dixon and r. s. storey, can. j. phys. 49 (1971) 1714.
w. r. dixon et al. (1971) 1460.
w. r. dixon and r. s. storey, nucl. phys. a 284 (1977) 97.
p. doornenbal et al., phys. lett. b 647 (2007) 237.
j. p. draayer et al., phys. lett. b 53 (1974) 250.
m. dufour and p. descouvemont, nucl. phys. a 672 (2000) 153.
m. dufour and p. descouvemont, nucl. phys. a 730 (2004) 316.
m. dufour and p. descouvemont, nucl. phys. a 785 (2007) 381.
p. m. endt, at. data nucl. data tab. 19 (1977) 23.
p. m. endt, nucl. phys. a 521 (1990) 1.
p. m. endt, nucl. phys. a 633 (1998) 1.
p. m. endt and j. g. l. booten, nucl. phys. a 555 (1993) 499.
p. m. endt and c. van der leun, nucl. phys. a 310 (1978) 1.
p. m. endt and c. rolfs, nucl. phys. a 467 (1987) 261.
s. engel et al., nucl. instr. meth. a 553 (2005) 491.
a. j. ferguson and h. e. gove, can. j. phys. 37 (1959) 660.
c. l. fink and j. p. schiffer, nucl. phys. a 225 (1974) 93.
r. b. firestone, nucl. data sheets 103 (2004) 269.
r. b. firestone, nucl. data sheets 108 (2007) 2319.
l. k. fifield et al., nucl. phys. a 309 (1978) 77; nucl. phys. a 322 (1979) 1.
a. formicola et al., j. phys. g 35 (2008) 014013.
h. t. fortune, r. sherr and b. a. brown, phys. rev. c 61 (2000) 057303.
c. fox et al., phys. rev. c 71 (2005) 055801.
j. b. french, s. iwao and e. vogt, phys. rev. 122 (1961) 1248.
h. o. u. fynbo et al., nucl. phys. a 677 (2000) 38.
a. gade et al., phys. rev. c 77 (2008) 044306.
m. gai et al., phys. rev. c 36 (1987) 1256.
u. giesen et al., nucl. phys. a 567 (1994) 146.
j. görres et al., phys. rev. c 62 (2000) 055801.
v. z. goldberg et al., phys. rev. c 69 (2004) 024602.
t. gomi et al., j. phys. g 31 (2005) s1517.
j. görres et al., nucl. phys. a 408 (1983) 372.
j. görres et al., nucl. phys. a 517 (1990) 329.
j. görres et al., nucl. phys. a 548 (1992) 414.
s. goriely, s. hilaire and a. j. koning, astron. astrophys. 487 (2008) 767, and private communication.
p. j. graff et al., j. de phys. 29 (1968) 141.
s. graff et al., nucl. phys. a 510 (1990) 346.
m. b. greenfield et al., nucl. phys. a 524 (1991) 228.
b. guo et al., phys. rev. c 73 (2006) 048801.
k. i. hahn et al., phys. rev. c 54 (1996) 1999.
s. e. hale et al., phys. rev. c 65 (2001) 015801.
s. e. hale et al., phys. rev. c 70 (2004) 045802.
v. y. hansper et al., phys. rev. c 61 (2000) 028801.
s. harissopulos et al., eur. phys. j. a 9 (2000) 479.
j. j. he et al., phys. rev. c 76 (2007) 055802.
h. herndl et al., phys. rev. c 52 (1995) 1078.
h. herndl et al., phys. rev. c 58 (1998) 1798.
g. j. highland and t. t. thwaites, nucl. phys. a 109 (1968) 163.
r. d. hoffman et al., astrophys. j. 521 (1999) 735.
a. j. howard et al., nucl. phys. a 152 (1970) 317.
c. iliadis, diplom thesis (univ. of münster, 1989).
c. iliadis et al., nucl. phys. a 512 (1990) 509.
c. iliadis et al., nucl. phys. a 559 (1993) 83.
c. iliadis et al., nucl. phys. a 571 (1994) 132.
c. iliadis et al., phys. rev. c 53 (1996) 475.
c. iliadis, nucl. phys. a 618 (1997) 166.
c. iliadis et al., astrophys. j. 524 (1999) 434.
c. iliadis et al., astrophys. j. suppl. 134 (2001) 151.
c. iliadis and m. wiescher, phys. rev. c 69 (2004) 064305.
c. iliadis, nuclear physics of stars (wiley-vch, weinheim, 2007).
c. iliadis et al., phys. rev. c 77 (2008) 045802.
d. g. jenkins et al., phys. rev. lett. 92 (2004) 031101.
d. g. jenkins et al., phys. rev. c 73 (2006) 065802.
p. m. johnson, m. a. meyer and d. reitmann, nucl. phys. a 218 (1974) 333.
j. kalifa et al., phys. rev. c 17 (1978) 1961.
r. kanungo et al., phys. rev. c 74 (2006) 045803.
j. keinonen, m. riihonen and a. anttila, phys. rev. c 15 (1977) 579.
w. e. kieser, r. e. azuma and k. p. jackson, nucl. phys. a 331 (1979) 155.
h. m. kuan, c. j. umbarger and d. g. shirk, nucl. phys. a 160 (1971) 211; erratum: nucl. phys. a 196 (1972) 634.
h. m. kuan and d. g. shirk, phys. rev. c 13 (1976) 883.
s. kubono et al., nucl. phys. a 537 (1992) 153.
s. kubono, t. kajino and s. kato, nucl. phys. a 588 (1995) 521.
m. la cognata et al., phys. rev. lett. (2008) 152501.
k. h. langanke et al., astrophys. j. 301 (1986) 629.
t. k. li et al., phys. rev. c 13 (1976) 55.
r. longland et al., in print (2009).
h. lorentz-wirzba et al., nucl. phys. a 313 (1979) 346.
g. lotay et al., phys. rev. c 77 (2008) 042802(r).
g. lotay et al., phys. rev. lett. 102 (2009) 162502.
m. lugaro et al., astrophys. j. 615 (2004) 934.
b. lyons et al., nucl. phys. a 130 (1969) 25.
z. ma et al., phys. rev. c 76 (2007) 015803.
j. w. maas et al., nucl. phys. a 301 (1978) 213.
h. mackh et al., nucl. phys. a 202 (1973) 497.
j. d. macarthur et al., phys. rev. c 22 (1980) 356.
j. d. macarthur et al., phys. rev. c 32 (1985) 314.
p. v. magnus et al., nucl. phys. a 470 (1987) 206.
h. b. mak et al., nucl. phys. a 304 (1978) 210.
z. q. mao, h. t. fortune and a. g. lacaze, phys. rev. lett. 74 (1995) 3760.
z. q. mao, h. t. fortune and a. g. lacaze, phys. rev. c 53 (1996) 1197.
a. mayer, ph.d. thesis (universität stuttgart, 2001).
b. h. moazen et al., phys. rev. c 75 (2007) 065801.
p. mohr, phys. rev. c 72 (2005) 035803.
j. y. moon et al., nucl. phys. a 758 (2005) 158c.
m. mukherjee et al., phys. rev. lett. 93 (2004) 150801.
m. mukherjee et al., eur. phys. j. a 35 (2008) 37.
g. murillo et al., nucl. phys. a 318 (1979) 352.
a. st. j. murphy et al., phys. rev. c 73 (2006) 034320.
s. mythili et al., phys. rev. c 77 (2008) 035803.
c. d. nesaraja et al., phys. rev. c 75 (2007) 055809.
j. r. newton et al., phys. rev. c 75 (2007) 055808.
j. r. newton, r. longland and c. iliadis, phys. rev. c 78 (2008) 025805.
m. niecke et al., nucl. phys. a 289 (1977) 408.
h. orihara, g. rudolf and ph. gorodetzky, nucl. phys. a 203 (1973) 78.
a. parikh et al., phys. rev. c 71 (2005) 055804.
s. h. park et al., phys. rev. c 59 (1999) 1182.
y. parpottas et al., phys. rev. c 70 (2004) 065805; and phys. rev. c 73 (2006) 049907(e).
j. r. powers et al., phys. rev. c 4 (1971) 2030.
d. c. powell et al., nucl. phys. a 660 (1999) 349.
w. h. press, s. a. teukolsky, w. t. vetterling and b. p. flannery, numerical recipes (cambridge university press, cambridge, 1992).
d. m. pringle and w. j. vermeer, nucl. phys. a 499 (1989) 117.
j. j. ramirez, r. a. blue and h. r. weller, phys. rev. c 5 (1972) 17.
t. rauscher and f.-k. thielemann, at. data nucl. data tab. 75 (2000) 1.
h. röpke, j. brenneisen and m. lickert, eur. phys. j. a 14 (2002) 159.
d. w. o. rogers, j. h. aitken and a. e. litherland, can. j. phys. 50 (1972) 268.
d. w. o. rogers, r. p. beukens and w. t. diamond, can. j. phys. 50 (1972) 2428.
d. w. o. rogers et al., can. j. phys. 54 (1976) 938.
c. rolfs et al., nucl. phys. a 241 (1975) 460.
c. rolfs, i. berka and r. e. azuma, nucl. phys. a 199 (1973) 306.
c. rolfs, a. m. charlesworth and r. e. azuma, nucl. phys. a 199 (1973) 257.
j. g. ross et al., phys. rev. c 52 (1995) 1681.
c. rowland et al., phys. rev. c 65 (2002) 064609.
c. rowland et al., astrophys. j. 615 (2004) l37.
c. ruiz et al., phys. rev. c 71 (2005) 025802.
c. ruiz et al., phys. rev. lett. 96 (2006) 252501.
p. schmalbrock et al., nucl. phys. a 398 (1983) 279.
s. schmidt et al., nucl. phys. a 591 (1995) 227.
h. schatz et al., phys. rev. lett. 79 (1997) 3845.
h. schatz et al., phys. rev. c 72 (2005) 065804.
d. l. sellin, h. w. newson and e. g. bilpuch, ann. of phys. 51 (1969) 461.
s. seuthe et al., nucl. phys. a 514 (1990) 471.
d. seweryniak et al., phys. rev. lett. 94 (2005) 032501.
d. seweryniak et al., phys. rev. c 75 (2007) 062801(r).
p. j. m. smulders, physica 31 (1965) 973.
p. j. m. smulders and p. m. endt, physica 28 (1962) 1093.
m. a. stephens, j. amer. stat. assoc. 69 (1974) 730.
f. stegmüller et al., nucl. phys. a 601 (1996) 168.
e. strandberg et al., phys. rev. c 77 (2008) 055801.
t. j. symons et al., j. phys. g 4 (1978) 411.
w. p. tan et al., phys. rev. c 72 (2005) 041302.
t. tanabe et al., phys. rev. c 6 (1981) 2556.
t. tanabe et al., nucl. phys. a 399 (1983) 241.
w. j. thompson and c. iliadis, nucl. phys. a 647 (1999) 259.
d. r. tilley et al., nucl. phys. a 564 (1993) 1.
d. r. tilley et al., nucl. phys. a 595 (1995) 1.
d. r. tilley et al., nucl. phys. a 636 (1998) 249.
r. timmermann et al., nucl. phys. a 477 (1988) 105.
i. tomandl et al., phys. rev. c 69 (2004) 014312.
h. p. trautvetter, nucl. phys. a 243 (1975) 37.
h. p. trautvetter et al., nucl. phys. a 297 (1978) 489.
w. trinder et al., phys. lett. b 459 (1999) 67.
b. y. underwood et al., nucl. phys. a 225 (1974) 253.
s. utku et al., phys. rev. c 57 (1998) 2731.
g. vancraeynest et al., phys. rev. c 57 (1998) 2711.
d. w. visser et al., phys. rev. c 76 (2007) 065803.
d. w. visser et al., phys. rev. c 78 (2008) 028802.
r. b. vogelaar, ph.d. thesis (caltech, 1989).
r. b. vogelaar et al., phys. rev. c 42 (1990) 753.
r. b. vogelaar et al., phys. rev. c 53 (1996) 1945.
m. wiescher et al., nucl. phys. a 349 (1980) 165.
s. wilmes et al., phys. rev. c 66 (2002) 065802.
c. wrede et al., phys. rev. c 76 (2007) 052802(r).
c. wrede et al., phys. rev. c 79 (2009) 045808.
c. wrede, phys. rev. c 79 (2009) 035803.
k. yagi, j. phys. japan 17 (1962) 604.
c. yazidjian et al., phys. rev. c 76 (2007) 024308.
h. yokota et al., nucl. phys. a 383 (1982) 298.
k. yoneda et al., phys. rev. c 74 (2006) 021303(r).
The nuclear physics input used to compute the Monte Carlo reaction rates and probability density functions that are tabulated in the second paper of this series (Paper II) is presented. Specifically, we publish the input files to the Monte Carlo reaction rate code `ratesmc`, which is based on the formalism presented in the first paper of this series (Paper I). This database contains overwhelmingly experimental nuclear physics information. The survey of literature for this review was concluded in November 2009.
Although the notion of spherical symmetry is an ideal one, it is known that many systems such as globular clusters, galactic bulges and dark matter haloes can be modelled as being roughly spherically symmetric. In order to model the distribution of matter in such systems in a self-consistent manner, one solves the Vlasov-Poisson system, and the equilibrium solutions are known to be functions of the integrals of motion via Jeans's theorem. The usual method in the literature involves the `rho-to-f' approach: here, one starts off with a given potential-density pair and certain assumptions about the velocity structure, and proceeds to find the distribution function via an appropriate integral transform. A less common method is the `f-to-rho' approach, which postulates a certain functional form for the distribution function and solves the Poisson equation. In general, solving this non-linear differential equation for analytical solutions is very difficult. The self-consistent approach continues to be widely used, especially in the context of dark matter haloes.

The equilibrium distribution functions in a spherically symmetric setting depend on the two integrals of motion, namely the magnitude of the angular momentum and the energy. Methods for generating two-integral distribution functions have been widely studied in the literature and originated with Fricke, who noticed that whenever the density profile could be written as a sum of monomials of the cylindrical radius and the potential in the axisymmetric setting, it was possible to construct a distribution function expressible as a sum of monomials in angular momentum and energy. This procedure was generalized by different authors to spherical and axial symmetry. Relativistic treatments along the same lines have also been undertaken by several authors. The approach that we make use of in this paper is akin to the ones mentioned above. However, it must be mentioned that the powers of the energy and angular momentum thus obtained in spherical symmetry differ from those obtained in an axisymmetric setting. Moreover, the decomposition into powers of energy and angular momentum in the spherical case is non-unique, as opposed to the axisymmetric one. Owing to this non-uniqueness, one can construct a wide range of dynamical properties such as the velocity dispersions and anisotropy parameters. In addition, finding such a decomposition for an arbitrary potential-density pair is rather difficult, with no well-defined algorithm. The method of double-power distribution functions has been employed to generate two-term double-power distribution functions, and it was shown that a wide class of potential-density pairs could thereby be obtained. The double-power approach introduced in this paper is an example of a hybrid approach since it starts off with a known functional form of the distribution function, which is a sum of anisotropic polytropes, but also assumes a priori knowledge of the potential-density pair. Once the augmented density has been found, one could also obtain the distribution function through a different approach which would involve performing a fractional derivative inversion. The two approaches are equivalent, and the method that we use in this paper is exactly the same as the use of fractional derivatives.
The standard method of constructing distribution functions from a given density profile is via integral transforms. The simplest such transform is given by Eddington's formula, which is used for constructing isotropic distribution functions:
$$ f(\mathcal{E}) = \frac{1}{\sqrt{8}\,\pi^{2}}\,\frac{\mathrm{d}}{\mathrm{d}\mathcal{E}}\int_{0}^{\mathcal{E}}\frac{\mathrm{d}\rho}{\mathrm{d}\psi}\,\frac{\mathrm{d}\psi}{\sqrt{\mathcal{E}-\psi}}, $$
where $\psi$ is the negative of the Newtonian potential, $\mathcal{E} = \psi - v^{2}/2$ is the binding energy, and $\rho(\psi)$ is found by substituting $r = r(\psi)$ into $\rho(r)$. Even in this case, the integral is often non-trivial. As a result, the distribution functions and their properties, such as the velocity dispersions and anisotropy parameters, cannot be easily reduced to analytical expressions. The situation is worsened for the case of anisotropic distribution functions, which depend on both energy and angular momentum. In this case, the fundamental integral relation yields a function $\rho(\psi, r)$ (for the isotropic case, it is only a function of $\psi$) which we will refer to as the augmented density, and an analogue of Eddington's formula exists but is substantially more complicated. First, one performs a double-integral transform to construct an intermediate function; then, one analytically continues its second argument to the complex domain, and the distribution function is found by taking the imaginary part of the result. In addition, a further disadvantage of this approach stems from the fact that the above inversion scheme, as has been pointed out in the literature, is numerically unstable.

An elegant family of models possessing hyperviriality has been derived, i.e. models that satisfy the virial theorem locally. This aspect has important (positive) consequences for stability, because it constrains the number of high-velocity stars that can escape and give rise to instability. The importance of the local virial theorem, i.e. hyperviriality, in self-gravitating systems has also been studied by other authors. Subsequently, a two-component family of hypervirial distribution functions was derived and presented as candidates for modelling dark matter haloes. The models described by the two hypervirial families include a wide range of commonly used models such as the Plummer model (Eddington in 1916 discovered the hypervirial nature of its distribution function) and the Hernquist model, but there are several other commonly used potential-density pairs which do not have a known hypervirial distribution function, such as the Hénon isochrone, Jaffe, Dehnen and NFW models.

A very general theorem relates two global properties, the cusp slope $\gamma_{0}$ and the central anisotropy $\beta_{0}$, through the inequality $\gamma_{0} \geq 2\beta_{0}$. Other work connects the cusp slope to global quantities such as the Tsallis entropic index. This inequality was first proved for distribution functions with constant anisotropy, and was later generalized to systems with non-constant anisotropy via a series expansion. The theorem was further generalized to the global equivalent, termed the global density-slope anisotropy inequality (GDSAI), for separable augmented densities, and the conditions under which it holds true were established. We emphasize that some of the augmented densities constructed through the double-power approach in this paper are not separable, although they can be expressed as a sum of separable augmented densities.
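To make the transform route above concrete, the following is a minimal numerical sketch of Eddington's inversion, tested on the Plummer sphere (units $G = M = a = 1$, where $\rho = (3/4\pi)\psi^{5}$ and the exact answer $f \propto \mathcal{E}^{7/2}$ is known). All function names are illustrative and not taken from the paper itself.

```python
# Minimal sketch of Eddington's inversion; the substitution psi = E - u^2
# removes the inverse-square-root endpoint singularity of the inner integral.
import numpy as np
from scipy.integrate import quad

def drho_dpsi(psi):
    # Plummer sphere in units G = M = a = 1: rho(psi) = (3/4pi) psi^5
    return (15.0 / (4.0 * np.pi)) * psi**4

def eddington_f(E, h=1e-5):
    # f(E) = (1/(sqrt(8) pi^2)) d/dE  int_0^E (drho/dpsi) dpsi / sqrt(E - psi)
    inner = lambda e: quad(lambda u: 2.0 * drho_dpsi(e - u * u), 0.0, np.sqrt(e))[0]
    return (inner(E + h) - inner(E - h)) / (2.0 * h) / (np.sqrt(8.0) * np.pi**2)

print(eddington_f(0.4) / eddington_f(0.2))   # ~ 11.31
print(2.0 ** 3.5)                            # exact ratio for f ~ E^{7/2}
```

The anisotropic analogue replaces this one-dimensional transform with the complex-contour construction described above, which is precisely why the series route pursued in this paper is attractive.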
The outline of the paper is as follows. In section [secti], we introduce the double-power approach, which involves constructing an augmented density that is expressible as a sum of monomials in $\psi$ and $r$. By using this augmented density, we construct the corresponding distribution functions and associated properties, such as velocity dispersions, and perform an asymptotic analysis to deduce some of their general features. This procedure is explicitly carried out for the Veltmann model, and some of the simpler distribution functions are calculated. The procedure is also extended to the Plummer and Hernquist models in appendix [appa]. The second part of the paper is concerned with investigating the behaviour of hypervirial distribution functions, which satisfy the virial theorem locally. In section [sectii], we present a generic method of constructing augmented densities that are hypervirial for a wide range of physical models from a basic ansatz. We make use of this procedure to find the corresponding hypervirial augmented densities and the distribution functions for well-known models such as the Hénon isochrone, NFW, Jaffe and Dehnen models. In section [sectiii], we take the generic hypervirial augmented density and derive universal properties, which include an algebraic relation between the density slope and the anisotropy parameter in the inner regime.

In this section, we develop the formalism and illustrate it by deriving various families of distribution functions for the degenerate Veltmann isochrone models. Even though most distribution functions discussed in this section are found in the literature, a few have never been explicitly written down to the authors' knowledge. The degenerate Veltmann isochrone models comprise the following one-parameter family of potential-density pairs:
$$ \psi(r) = \phi_{0}\left[1+\left(\frac{r}{a}\right)^{m}\right]^{-1/m}, \qquad \rho(r) = \frac{(m+1)\,\phi_{0}}{4\pi G a^{2}}\left(\frac{r}{a}\right)^{m-2}\left[1+\left(\frac{r}{a}\right)^{m}\right]^{-(1/m)-2}, $$
where the parameter $m$ is positive and $\phi_{0} = GM/a$. These models constitute the limiting case of the more general family described by Veltmann, and we shall henceforth abbreviate them as the Veltmann models. Two of the best known spherical models of astrophysics are included in the family: the Plummer model (for $m = 2$) and the Hernquist model (for $m = 1$), and the case $m = 1/2$ is a good approximation to the NFW profile commonly used to model dark matter haloes. We will find it convenient to work with the dimensionless radius $r/a$ and the dimensionless binding energy $\mathcal{E}/\phi_{0}$. A distribution function of the anisotropic polytropic form for this potential-density pair has been derived previously,
$$ f(\mathcal{E}, L) = N\,\mathcal{E}^{(3m+1)/2}\,L^{m-2}, $$
where $N$ is an unimportant normalization factor, and it is implicitly understood that the distribution function vanishes whenever $\mathcal{E} < 0$. As discussed in the literature, this distribution function possesses the remarkable property that the virial theorem holds locally (i.e. hyperviriality).
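As a quick sanity check on the pair written above (a sketch, assuming the reconstructed formulas and setting $\phi_{0} = G = a = 1$), the radial Laplacian of $\psi$ should equal $-4\pi\rho$ for every $m$:

```python
# Finite-difference check of the Veltmann potential-density pair:
# laplacian(psi) = (1/r) d^2(r psi)/dr^2 should equal -4*pi*rho.
import numpy as np

m = 0.5                                   # the NFW-like member of the family
psi = lambda r: (1.0 + r**m) ** (-1.0 / m)
rho = lambda r: (m + 1.0) / (4.0 * np.pi) * r**(m - 2.0) * (1.0 + r**m) ** (-1.0 / m - 2.0)

r, h = 2.0, 1e-4
lap = ((r + h) * psi(r + h) - 2.0 * r * psi(r) + (r - h) * psi(r - h)) / (r * h * h)
print(lap, -4.0 * np.pi * rho(r))         # the two numbers agree
```

For $m = 2$ and $m = 1$ the same two lines reproduce the familiar Plummer and Hernquist pairs, which is a useful cross-check on the family as a whole.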
In this section, we proceed to find distribution functions of the more general form
$$ f(\mathcal{E}, L) = \sum_{k} c_{k}\,\mathcal{E}^{p_{k}}\,L^{q_{k}}, $$
where we restrict the $c_{k}$ to be positive constants, so that the distribution function is manifestly nonnegative. We will variously refer to a distribution function of the above form as a multi-component (anisotropic) polytrope or, following earlier work, as a double-power distribution function. Obviously, the distribution function ([hypervirialdf]) is a special case consisting of one component. The distribution function ([doublepowerdf]) gives rise to the Poisson equation
$$ \frac{1}{r^{2}}\frac{\mathrm{d}}{\mathrm{d}r}\left(r^{2}\frac{\mathrm{d}\psi}{\mathrm{d}r}\right) = -4\pi G\,\rho(\psi, r) $$
through the evaluation of integrals over velocity space; for a single component $\mathcal{E}^{p}L^{q}$, the angular integral converges for $q > -2$ and the speed integral converges for $p > -1$. It follows that the density is finite provided $p_{k} > -1$ and $q_{k} > -2$ for all $k$. Our task now will be to come up with some augmented density for the Veltmann potential-density pair, expand the augmented density as a series in $\psi$ and $r$ to recast it in the form of equation ([mastereq]), and deduce the powers and coefficients appearing in equation ([doublepowerdf]). It is well known that solving for the distribution function for a given potential-density pair is a degenerate problem, and there are infinitely many ways to come up with an augmented density.

Once an augmented density is known, it is also possible to calculate the velocity dispersions without having to resort to finding the explicit form of the distribution function. Such a procedure is discussed in the literature; the radial dispersion, for instance, follows from $\rho\langle v_{r}^{2}\rangle = \int_{0}^{\psi}\tilde{\rho}(\psi', r)\,\mathrm{d}\psi'$, and it must be noted that $\langle v_{\theta}^{2}\rangle = \langle v_{\phi}^{2}\rangle$. This emerges from the fact that, when the expectation values of $v_{\theta}^{2}$ and $v_{\phi}^{2}$ are computed, they result in integrals that are exactly equal to each other. The expectation values of the mixed moments $v_{r}v_{\theta}$, $v_{r}v_{\phi}$ and $v_{\theta}v_{\phi}$ are all zero as well, which arises from the fact that the distribution function depends on the tangential velocities only through $v_{t}^{2} = v_{\theta}^{2} + v_{\phi}^{2}$. Hence, these relations hold true for any distribution function of the form $f(\mathcal{E}, L)$. The anisotropy parameter is
$$ \beta(r) = 1 - \frac{\langle v_{\theta}^{2}\rangle + \langle v_{\phi}^{2}\rangle}{2\,\langle v_{r}^{2}\rangle} = 1 - \frac{\langle v_{\theta}^{2}\rangle}{\langle v_{r}^{2}\rangle}. $$

In this subsection, we will introduce an augmented density ansatz comprising a finite number of components. We perform an asymptotic analysis to obtain the limiting values of the anisotropy parameter. In addition, we shall explicitly set down some expressions for the distribution functions and velocity structure profiles for the NFW-like profile ($m = 1/2$) discussed above. We first focus on the case where the augmented density consists of a finite number of components. From equation ([phiveltmann]), we obtain the relation $(r/a)^{m} = (\psi/\phi_{0})^{-m} - 1$. This immediately leads to the one-component augmented density
$$ \tilde{\rho}_{1}(\psi, r) \propto \left(\frac{r}{a}\right)^{m-2}\left(\frac{\psi}{\phi_{0}}\right)^{2m+1}, $$
which corresponds to the distribution function ([hypervirialdf]). Next, we multiply and divide the right-hand side of equation ([rho1term]) by $[1+(r/a)^{m}]$ raised to some positive integer power; since $[1+(r/a)^{m}]^{-1/m} = \psi/\phi_{0}$, the compensating factor is a pure power of $\psi$, and we are left with factors of the form $[1+(r/a)^{m}]^{n_{k}}$ with positive integer $n_{k}$. By using the binomial theorem again, the augmented density becomes a finite sum of monomials in $r$ and $\psi$ (a symbolic sketch of this expansion is given below). We will perform an asymptotic analysis of the above augmented density ansatz to obtain the limiting values of the anisotropy parameter, which is important in investigating the GDSAI. Most of the work involving the GDSAI was undertaken for separable distribution functions, but it must be noted that this ansatz is not separable. One can use equations ([sigmar2])-([betafromv]) to compute the velocity dispersion profiles and the anisotropy parameter for all values of $r$. The resulting function depends on $r$ and $\psi$, of which the latter itself is a function of $r$. Now, in the limit $r \to 0$ it is worth noting that only the lowest powers of $r$ dominate the other terms. In this limit, $\psi \to \phi_{0}$, which is a constant, which simplifies matters a great deal.
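For reference, the velocity-space reduction underlying these convergence conditions can be sketched as follows (standard manipulations, using $v_{r} = v\cos\eta$, $v_{t} = v\sin\eta$ and the conventions reconstructed above):
$$ \rho(\psi, r) = \int f\,\mathrm{d}^{3}v = 2\pi\, r^{q}\int_{0}^{\pi}\sin^{q+1}\eta\,\mathrm{d}\eta\int_{0}^{\sqrt{2\psi}}\Bigl(\psi - \tfrac{v^{2}}{2}\Bigr)^{p}\,v^{\,q+2}\,\mathrm{d}v \;\propto\; r^{q}\,\psi^{\,p + q/2 + 3/2}, $$
with the angular integral finite for $q > -2$ and the speed integral finite for $p > -1$. In particular, a single component $\mathcal{E}^{p}L^{q}$ contributes an augmented-density monomial $r^{q}\psi^{p+q/2+3/2}$, which is the dictionary used repeatedly in what follows.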
By considering the _smallest_ powers of $r$ in the numerator and the denominator of the velocity structure profiles, it is found that the central anisotropy approaches a constant, $\beta_{0} = -q_{\min}/2$, where $q_{\min}$ is the smallest angular-momentum power present. The remarkable property of this expression is that the central anisotropy is completely independent of the choice of the coefficients and of the remaining powers in the limit $r \to 0$. In fact, for any distribution function of the form $\sum_{k} c_{k}\,\mathcal{E}^{p_{k}}L^{q_{k}}$, it is relatively straightforward to argue that as $r \to 0$, $\beta \to -q_{\min}/2$. This implies that the total number of terms in our distribution function is irrelevant when it comes to determining the central value of the anisotropy parameter. On the other hand, the leading power for the density profile at small $r$ is $r^{q_{\min}}$; thus, for $q_{\min} < 0$, we have an inner cusp. From this, we can conclude that the cusp slope-central anisotropy inequality is saturated, and one has $\gamma_{0} = 2\beta_{0}$.

We can perform a similar asymptotic analysis in the limit $r \to \infty$. However, in this limit, one must note that $\psi$ falls off as $1/r$. Thus, one must be more careful in evaluating the anisotropy parameter in this region, because $\psi$ also has an $r$-dependence. As we are interested in $r \to \infty$, we must take into consideration the highest powers of $1/r$ (inclusive of the contribution from $\psi$) in the denominator and the numerator of the velocity structure profiles, since they dominate the expression in this limit. The resulting limiting value is again a fairly simple expression that is independent of most of the free parameters inherent in the ansatz. As before, for a distribution function of the form $\sum_{k} c_{k}\,\mathcal{E}^{p_{k}}L^{q_{k}}$, it is straightforward to argue that as $r \to \infty$, $\beta$ tends to a constant $\beta_{\infty}$ set by the component that dominates in this regime. The corresponding expression can also be rewritten in a form from which, since the exponents involved are positive, it immediately follows that $\beta(r) < 1$ for all values of $r$. This can also be established in the following manner: we have earlier stated that $p_{k} > -1$ and that $q_{k} > -2$. From these two conditions, it is evident that $\beta < 1$.

The cases of the Plummer and Hernquist models have been extensively analysed in the literature. We will supplement these analyses by studying in some detail the distribution functions for the NFW-like profile ($m = 1/2$) discussed above.
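The multi-component construction just described can be mimicked symbolically. The sketch below (with $\phi_{0} = a = 1$ and an illustrative exponent $n_{k} = 2$; the variable names are ours, not the paper's) splits the one-component augmented density into monomials, each of which maps back to a distribution-function term:

```python
# Sketch of the binomial trick: insert 1 = [1 + r^m]^n * psi^(m n), which is
# valid because psi = [1 + r^m]^(-1/m), then expand the bracket.
import sympy as sp

r, psi = sp.symbols('r psi', positive=True)
m, n = sp.Rational(1, 2), 2
rho1 = r**(m - 2) * psi**(2*m + 1)              # one-component augmented density
rho = sp.expand(rho1 * (1 + r**m)**n * psi**(m*n))
print(rho)   # r**(-3/2)*psi**3 + 2*psi**3/r + psi**3/sqrt(r)
```

Each monomial $r^{q_{k}}\psi^{P_{k}}$ corresponds to a component $\mathcal{E}^{P_{k} - q_{k}/2 - 3/2}\,L^{q_{k}}$ by the reduction sketched in the previous section; this is exactly how the explicit models listed next arise.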
For $m = 1/2$, we have the one-component augmented density
$$ \tilde{\rho}_{1} \propto \left(\frac{r}{a}\right)^{-3/2}\left(\frac{\psi}{\phi_{0}}\right)^{2}, $$
which gives rise to the hypervirial distribution function ([hypervirialdfnfw]). Next, consider the two-component model; the augmented density is
$$ \tilde{\rho} \propto \left(\frac{\psi}{\phi_{0}}\right)^{5/2}\left[\left(\frac{r}{a}\right)^{-3/2} + \left(\frac{r}{a}\right)^{-1}\right], $$
and the corresponding distribution function is $f = N_{1}\,\mathcal{E}^{7/4}L^{-3/2} + N_{2}\,\mathcal{E}^{3/2}L^{-1}$, with positive normalizations $N_{1,2}$. There are three ways to construct three-component distribution functions for the NFW-like model. The first is found from the augmented density
$$ \tilde{\rho} \propto \left(\frac{\psi}{\phi_{0}}\right)^{3}\left[\left(\frac{r}{a}\right)^{-3/2} + 2\left(\frac{r}{a}\right)^{-1} + \left(\frac{r}{a}\right)^{-1/2}\right], $$
and the distribution function is $f = N_{1}\,\mathcal{E}^{9/4}L^{-3/2} + N_{2}\,\mathcal{E}^{2}L^{-1} + N_{3}\,\mathcal{E}^{7/4}L^{-1/2}$. The second distribution function is obtained by a different choice of the free parameters, and the third possibility corresponds to yet another such choice; the associated augmented densities and distribution functions follow in exactly the same manner. In fig. [contournfwlike], the contour plots for some of these distribution functions have been plotted as functions of the dimensionless binding energy and the dimensionless angular momentum.

[Fig. [contournfwlike]: contour plots of the NFW-like distribution functions in the plane of dimensionless binding energy and angular momentum (panels contour29, contour31, contour33 and contour35).]

We will now proceed to compute the velocity dispersion profiles and the anisotropy parameter from the distribution functions obtained above for the Veltmann model.

[Figure: (a) radial velocity dispersion, (b) tangential velocity dispersion and (c) anisotropy parameter for the NFW-like models (panels nradveldisp, ntanveldisp and nbeta).]

The simplest of all the models is the one with the hypervirial distribution function. It takes on the form given by equation ([nfw1]), and the velocity dispersion profiles are
$$ \langle v_{r}^{2}\rangle = \frac{\psi}{3}, \qquad \langle v_{t}^{2}\rangle = \frac{\psi}{6}, \qquad \beta = \frac{3}{4}. $$
The two-component distribution function corresponds to a density that has the form ([nfw2]); for this model, the velocity dispersions and anisotropy parameter follow in the same way. We move on to the three-term distribution functions: the first of these, represented by the relation ([nfw3a]), has its own velocity dispersion profiles and anisotropy parameter; the next is given by equation ([nfw3b]), with corresponding values; and for the last of the three-term NFW-like models, the density can be expressed as in equation ([nfw3c]), for which one proceeds to calculate the velocity dispersions and the anisotropy parameter in the same manner. The radial and tangential velocity dispersions, as well as the anisotropy parameter, for the distribution functions of the NFW-like model discussed in this section have been plotted in the figure above. This class of models has in common the fact that the models are tangentially anisotropic in the outer regions. Also, we note that the models constructed here are stable to radial perturbations by the Doremus-Baumann theorem, since all of them are decreasing functions of the energy.
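These dispersion profiles can be cross-checked without any augmented-density machinery by integrating the distribution function directly over velocity space. Below is a sketch for the $m = 1/2$ model ($\phi_{0} = a = 1$), using the two-component exponents derived above with illustrative (unnormalized) coefficients:

```python
# Sketch: recover the velocity moments and beta(r) of a double-power DF by
# direct quadrature over the velocity sphere E = psi - (vr^2 + vt^2)/2 > 0.
import numpy as np
from scipy.integrate import dblquad

psi = lambda r: (1.0 + np.sqrt(r)) ** (-2.0)
comps = [(1.0, 1.75, -1.5), (1.0, 1.5, -1.0)]     # illustrative (c_k, p_k, q_k)

def moment(r, w):
    p0 = psi(r)
    def integrand(vt, vr):                        # d^3v = 2 pi vt dvt dvr
        E = p0 - 0.5 * (vr * vr + vt * vt)
        if E <= 0.0:
            return 0.0
        f = sum(c * E**p * (r * vt)**q for c, p, q in comps)
        return 2.0 * np.pi * vt * w(vr, vt) * f   # vt^(q+1): integrable at 0
    vmax = np.sqrt(2.0 * p0)
    return 2.0 * dblquad(integrand, 0.0, vmax,    # factor 2 from vr < 0
                         lambda vr: 0.0,
                         lambda vr: np.sqrt(max(vmax**2 - vr**2, 0.0)))[0]

r = 1.0
rho = moment(r, lambda vr, vt: 1.0)
sig_r2 = moment(r, lambda vr, vt: vr * vr) / rho
sig_t2 = moment(r, lambda vr, vt: vt * vt) / rho
print(1.0 - sig_t2 / (2.0 * sig_r2))              # anisotropy parameter beta(r)
```

With the one-term hypervirial component alone, the same script returns $\beta = 3/4$ at every radius, in agreement with the profiles quoted above.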
Distribution functions which give rise to constant anisotropy parameters have been explored in numerous papers, and in particular, such distribution functions have been written down for the Veltmann models in several places in the literature. In this section, we briefly re-derive those results from our approach, and we provide explicit expressions for the case $m = 1/2$. To start, we multiply and divide the augmented density ([rho1term]) by some power of $r$, controlled by a parameter $s$ which is positive. Using equation ([veltmanneq]), we obtain the augmented density
$$ \tilde{\rho}(\psi, r) \propto \left(\frac{r}{a}\right)^{m-2+ms}\left(\frac{\psi}{\phi_{0}}\right)^{2m+1+ms}\left[1-\left(\frac{\psi}{\phi_{0}}\right)^{m}\right]^{-s}. $$
The case $s = 0$ is trivial, since in this case we recover equation ([rho1term]). Also, it is easily seen that the parameter $s$ is related to the constant value of the anisotropy parameter by $\beta = 1 - m(s+1)/2$. The right-hand side can now be expanded, since $\psi/\phi_{0} < 1$, using a generalized binomial theorem; the series converges since $\psi < \phi_{0}$ everywhere, for all Veltmann models. From this expansion, we can use our recipe to construct a distribution function in a series form and, if possible, sum up the series to obtain a closed-form expression. For the $m = 1/2$ model, we find the following family of distribution functions:
$$ f(\mathcal{E}, L) \propto \left[\sum_{k\geq 0} c_{k}(s)\left(\frac{\mathcal{E}}{\phi_{0}}\right)^{k/2}\right]\left(\frac{\mathcal{E}}{\phi_{0}}\right)^{(s+5)/4} L^{(s-3)/2}, $$
with coefficients $c_{k}(s)$ descending from the binomial expansion. The choice $s = 0$ reduces to the hypervirial distribution function discussed above,
$$ f(\mathcal{E}, L) = N\,\mathcal{E}^{5/4}\,L^{-3/2}, $$
and the choice $s = 3$ ($\beta = 0$) gives the isotropic model. Other values of $s$ for which the distribution function can be written in terms of elementary functions also exist.

In this section, we derive the generic form of an augmented density which has the property of hyperviriality, and then apply the recipe above to construct hypervirial distribution functions for a few well-known potential-density pairs. We start with the statement of hyperviriality, which is that the virial theorem holds locally,
$$ \langle v_{r}^{2}\rangle + \langle v_{t}^{2}\rangle = \frac{\psi}{2}. $$
Using the formulae that compute the velocity dispersions from the augmented density, this can be written as an integral condition on $\tilde{\rho}(\psi', r)$ over $0 \leq \psi' \leq \psi$ which must hold at every $r$. Differentiating both sides with respect to $\psi$, we obtain the governing partial differential equation (PDE) for the augmented density, whose right-hand side is $\psi\,\partial\tilde{\rho}/\partial\psi$. On further simplification, we can cast the above equation in a convenient form; note that the PDE is linear, so it can be solved using the series method. It is easily seen that any augmented density of the following form is a solution:
$$ \tilde{\rho}(\psi, r) = \sum_{\kappa} A_{\kappa}\, r^{\,p_{\kappa}-2}\,\psi^{\,2p_{\kappa}+1}, $$
where the coefficients $A_{\kappa}$ and the powers $p_{\kappa}$ are freely specifiable. Note that this can be rewritten as a series in $r\psi^{2}$:
$$ \frac{\tilde{\rho}\,r^{2}}{\psi} = \sum_{\kappa} A_{\kappa}\left(r\psi^{2}\right)^{p_{\kappa}}. $$
This suggests a procedure to construct a hypervirial distribution function given an arbitrary potential-density pair. If we can express $\rho r^{2}/\psi$ as a function of $r\psi^{2}$,
$$ \frac{\rho r^{2}}{\psi} = F\!\left(r\psi^{2}\right), $$
then, through the Taylor series for $F$, we can compute the coefficients and powers for the augmented density, and use the double-power recipe to compute the distribution function in series form (the recipe only works if all the $p_{\kappa}$ are positive or, equivalently, if all powers in the Taylor series for $F$ are positive). At this point, we point out a caveat: in general, the function $F$ cannot be defined at all radii. This is because, to construct $F$, we need to invert $u(r) = r\psi^{2}(r)$ as a function of $r$; but for all the models considered below, $u(r)$ is never a monotonic function. Instead, this function always has one maximum. As a result, the function $F$ has two branches forming a loop, one branch for the inner region and another branch for the outer region. The best we can do, then, is to compute hypervirial distribution functions that approximate either the inner region or the outer region well, but not both.

Note that the Veltmann models are quite special with respect to hyperviriality: indeed, for this family of potential-density pairs, the two branches coincide and we have a well-defined $F$ at all radii, $F(u) \propto u^{m}$, and we recover the usual, one-term hypervirial family. In the remainder of the section, we put this formalism to use to compute hypervirial distribution functions for some of the most commonly used profiles in astrophysics: the Hénon isochrone model, the NFW profile, the Jaffe model and another particular case of the Dehnen models. This can also be used to construct specific cases of other potential-density pairs introduced in the literature.

We will work with a truncated augmented density comprising a finite number of terms. The aim of this multi-term augmented density, generated from a suitable double-power distribution function, is to model the potential-density pairs of known models to an arbitrary degree of accuracy.
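A numerical sketch of the recipe follows, assuming the series form reconstructed above; the function names, grid and units ($G = M = b = 1$) are illustrative. On the inner branch, tabulating $u = r\psi^{2}$ against $F = \rho r^{2}/\psi$ and reading off the local logarithmic slope exposes the leading power $p_{\kappa}$:

```python
# Sketch of the hypervirial recipe on the Henon isochrone: the slope of
# log F versus log u near the origin gives the leading power p_kappa.
import numpy as np

def isochrone(r):
    a = np.sqrt(1.0 + r * r)
    psi = 1.0 / (1.0 + a)
    rho = (3.0*(1.0 + a)*a*a - r*r*(1.0 + 3.0*a)) / (4.0*np.pi*(1.0 + a)**3 * a**3)
    return psi, rho

r = np.logspace(-4, -2, 50)          # stay on the inner branch
psi, rho = isochrone(r)
u, F = r * psi**2, rho * r**2 / psi
slope = np.gradient(np.log(F), np.log(u))
print(slope[0], slope[-1])           # -> 2: a Plummer-like leading term
```

The slope of 2 anticipates the result of the next subsection, where the inner-branch expansion of the isochrone is found to start with the Plummer distribution function.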
Depending on the level of desired accuracy, we can choose the order of truncation and adjust the accuracy to an arbitrarily high order simply by including a sufficiently high number of terms. The coefficients in the augmented density are found by expanding the functions $\psi$ and $\rho$ to obtain series in the two asymptotic limits, around $r = 0$ and around $r \to \infty$, and matching the powers to find the coefficients. It is important to note that some of these terms may be negative, and this could very well lead to an invalid distribution function in these two regimes. Hence, it is necessary to establish the positivity and convergence of the augmented density ansatz defined by equation ([hypervirialrho]) and of the corresponding distribution function. When studying each model explicitly, we shall address these issues and establish the validity of the distribution function over an appropriate (finite) range in the two asymptotic limits. We are also interested in determining the relative error, which measures the deviation of the augmented density from the exact one; it is defined to be $\delta(r) = |\rho_{\mathrm{approx}}(r) - \rho(r)|/\rho(r)$. By evaluating $\delta$ in the two limits, one can determine the accuracy of the augmented density (and the distribution function) in modelling the exact density. Since the paper relies on expanding the potential-density pair as a series in a given asymptotic regime, the plots that we present of several quantities, such as the velocity structure profiles, are shown only in the ranges in which these expressions are valid, i.e. for $r \to 0$ and $r \to \infty$, respectively.

Finally, a few remarks regarding this approach are in order. It is not an approach that is always guaranteed to work, since the matching of coefficients in equation ([hypervirialrho]) may lead to internal inconsistencies. In some other cases, it may turn out that there is no series expansion of the potential-density pair. For these reasons, it was found that profiles such as the Einasto profile and the singular isothermal sphere could not be modelled using this approach, owing to the former and the latter reasons, respectively.

The potential-density pair for the Hénon isochrone model is
$$ \psi(r) = \frac{GM}{b+\sqrt{b^{2}+r^{2}}}, \qquad \rho(r) = M\,\frac{3(b+a)a^{2} - r^{2}(b+3a)}{4\pi(b+a)^{3}a^{3}}, \qquad a \equiv \sqrt{b^{2}+r^{2}}, $$
where $b$ is the scale radius. From the first equation above, we can construct $u = r\psi^{2}$, working with the dimensionless quantities $r/b$ and $\psi b/(GM)$. As claimed above, $u(r)$ is not monotonic, and its inverse is multi-valued. We will work with the inner branch of $r(u)$. Inverting, we find three roots, which, when substituted into $F$, give three candidate expansions. In order to tell which of the three expansions is the correct one for the inner branch, we consider the leading terms. The leading terms in the first and third expansions give the Hernquist distribution function, and therefore give rise to an infinite density at the origin. On the other hand, the second expansion starts with the Plummer distribution function, and therefore has a finite central density (all subleading terms are distribution functions for shell-like models, whose central density vanishes). Since the isochrone model has a core in the centre rather than a cusp, the second expansion is the correct one.

The augmented density follows accordingly and, using the recipe from section [secti], so does the corresponding distribution function. At this point, a remark concerning the convergence of the series above is in order. While it is clear that the series ([rhoisochronesmallr]) converges to the exact density, it is important to keep in mind that convergence only happens within a finite radius around the origin.
Ideally, we would like to compute the radius of convergence, but such a task is inordinately difficult, since it involves deriving a formula for all the coefficients in the series. The same issue applies to the series ([fisochronesmallr]). The latter clearly converges within a sufficiently small neighbourhood of the origin (since the angular momentum tends to zero as $r \to 0$, while the binding energy is bounded at all radii), and it is also clearly positive definite sufficiently close to the origin, since the dominant term is positive. For large enough values of the binding energy, the series ([fisochronesmallr]) becomes negative, but it is unclear whether this happens within the radius of convergence or outside this radius. (The expansion to first order becomes negative only outside of the radius of convergence, and the actual function is always positive.) We will therefore content ourselves with working in the small-$r$ limit.

To illustrate the fact that the result above does not approximate the outer region well, note that while the Plummer model and the isochrone model have the same inner behaviour (a flat core, $\rho \propto r^{0}$), the outer falloff of the isochrone model ($\rho \propto r^{-4}$) is less steep than that of its hypervirial approximation ($\rho \propto r^{-5}$). Finally, for the isochrone model near the origin, the value of $\delta$ to leading order shows that the approximation is excellent. The velocity dispersion tensor and the corresponding anisotropy parameter follow from the augmented density. The velocity dispersions, the anisotropy parameter and $\delta$ are plotted in fig. [fig2] to various orders of approximation. As can be seen from this figure, the convergence in the velocity structure profiles is excellent as long as we remain close to the origin. Furthermore, we also see that incorporating successive terms leads to a boost in the accuracy of these models. We also plot the dimensionless distribution function as a function of the dimensionless binding energy, for differing fixed values of the angular momentum, in fig. [fig3dot1]. We see that as the angular momentum increases, the radius of convergence gets smaller (about 0.8 for the smallest value shown, about 0.5 for the intermediate one, and about 0.3 for the largest), but there is indeed a non-zero radius of convergence. A second crucial point is that the plots go negative, but this happens each time outside of the radius of convergence. This provides compelling evidence that the exact distribution function may be positive definite. A final remark is due regarding the chosen values of the angular momentum: we are working with functions of $\mathcal{E}$ and $L$, and near the origin the latter is bounded from above; since $r$ is very small, the admissible values of $L$ also turn out to be accordingly small.

[Fig. [fig2]: $\delta(r)$, $\langle v_{r}^{2}\rangle$, $\langle v_{t}^{2}\rangle$ and $\beta$ for the inner branch of the isochrone model, to successive orders of approximation.]

[Fig. [fig3dot1]: the inner-branch isochrone distribution functions versus the dimensionless binding energy at four fixed values of the angular momentum.]
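Before turning to the outer branch, the relative-error diagnostic can be illustrated numerically. The sketch below assumes the definition $\delta = |\rho_{\mathrm{approx}} - \rho|/\rho$ stated earlier and compares only the leading (Plummer-like) term of the inner-branch series against the exact isochrone density ($G = M = b = 1$):

```python
# Sketch of the relative-error diagnostic delta(r) for the isochrone model,
# keeping only the leading inner-branch term rho_1 = A * psi^5 (p_kappa = 2).
import numpy as np

a = lambda r: np.sqrt(1.0 + r * r)
psi = lambda r: 1.0 / (1.0 + a(r))
rho = lambda r: (3.0*(1.0 + a(r))*a(r)**2 - r*r*(1.0 + 3.0*a(r))) \
                / (4.0*np.pi*(1.0 + a(r))**3 * a(r)**3)

A = rho(1e-12) / psi(1e-12)**5          # match the central density: A = 6/pi
for r in (0.01, 0.1, 0.5):
    approx = A * psi(r)**5
    print(r, abs(approx - rho(r)) / rho(r))   # delta grows away from the centre
```

Adding the subleading shell-like terms drives these numbers down further, which is the behaviour seen in fig. [fig2].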
Next, we consider the outer branch. By repeating the procedure and following a similar line of reasoning, we find the corresponding augmented density and distribution function. The dominant term in equation ([fisochronelarger]) is the Hernquist model. This is expected, since the isochrone and the Hernquist profiles both have a density that goes as $r^{-4}$ in the large-$r$ limit. Again, a remark concerning the convergence of equations ([rhoisochronelarger]) and ([fisochronelarger]) is in order. While it is clear that the former converges sufficiently far from the origin, the convergence of the latter is more subtle. This follows from the fact that, as $r \to \infty$, the angular momentum is not bounded from above, but the upper bound for the binding energy tends to zero, because of the cutoff imposed on the velocity. To investigate what happens in the large-$r$ limit, we first factor out the dominant (Hernquist) term in equation ([fisochronelarger]); we then have an expression of the form of this term multiplied by a correction in square brackets. Converting the factor inside the square brackets to the variables $(\mathcal{E}, L)$, writing the angular momentum in terms of the radius and the tangential velocity, substituting the isochrone potential ([psiisochrone]) and expanding around infinity, we find that the successive terms are suppressed by increasingly high powers. Thus, we have established that the terms in the expansion above become smaller and smaller, and the series converges (and is clearly positive).

The property $\psi \to GM/r$ is actually very general: it is common to all models of finite total mass. To see this, notice that for such a model the mass contained inside a radius $r$ tends to a constant as $r \to \infty$, namely the total mass $M$. Thus, in this limit, $\psi \to GM/r$, being just the potential of a point mass, and the property follows.
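The finite-mass argument can be made explicit with the standard decomposition of the relative potential (a textbook identity, stated here for completeness):
$$ \psi(r) = \frac{G M(r)}{r} + 4\pi G \int_{r}^{\infty}\rho(r')\,r'\,\mathrm{d}r' \;\longrightarrow\; \frac{GM}{r} \quad (r \to \infty), $$
since $M(r) \to M$ and the tail integral is $o(1/r)$ for any density profile with finite total mass.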
The value of $\delta$ for the outer branch follows as before, as do the velocity dispersion profiles and the anisotropy parameter. The velocity dispersions, the anisotropy parameter and $\delta$ for the outer branch of the isochrone model are plotted in fig. [fig3dot5]. As before, we have plotted these quantities including successively higher order terms, to illustrate the convergence of these models and the increase in accuracy. In fig. [fig5dot5], the distribution functions have been plotted as functions of the dimensionless binding energy for differing fixed values of the angular momentum. It must be noted that $\mathcal{E}$ and $L$ can both be small or large, even though $r$ is large; this is because one can choose the tangential velocity to be as small as one wishes, which has motivated our choices of $L$. In all the figures, we see that the distribution function remains identically positive, and that there exists a finite radius of convergence. It is also seen that this radius of convergence diminishes as one moves to higher values of $L$. Once again, this is an expected result: a larger value of $L$ means that the angular-momentum factor grows, and that $\mathcal{E}$ from equation ([asymptoticiso]) is chosen to be as high as possible; but these choices simultaneously reduce the dimensionless binding energy, because $\psi$ falls off with $r$. For these reasons, one sees a successively decreasing radius of convergence as one increases the value of $L$.

[Fig. [fig3dot5]: $\delta(r)$, $\langle v_{r}^{2}\rangle$, $\langle v_{t}^{2}\rangle$ and $\beta$ for the outer branch of the isochrone model.]

[Fig. [fig5dot5]: the outer-branch isochrone distribution functions versus the dimensionless binding energy at four fixed values of the angular momentum.]

The NFW profile remains one of the most commonly used profiles in modelling dark matter haloes, and was first introduced by Navarro, Frenk & White. The potential-density pair for the NFW profile is
$$ \psi(r) = \frac{GM}{a}\,\frac{\ln\left(1+r/a\right)}{r/a}, \qquad \rho(r) = \frac{M}{4\pi a^{3}}\,\frac{1}{(r/a)\left(1+r/a\right)^{2}}. $$
It should be noted that the parameter $M$ this time is not the total mass (which is infinite). Technically, distribution functions do not make sense for models with infinite mass, since such a distribution function is not normalizable. Nevertheless, we can truncate the system at some radius, and the mass contained within this radius can be interpreted as the total mass. This will introduce a discontinuity in the distribution function, but such a discontinuity will be `small' if the truncation happens at a very large radius.

This time, we cannot analytically obtain $F$ as a function of $u = r\psi^{2}$, so we will proceed differently. Expanding the potential-density pair as a Taylor series about the origin, it is easy to see by inspection that only the coefficients with odd $\kappa$ in equation ([hypervirialrho]) are non-zero. Using the expansions ([nfwaugd]) and ([nfwaugr]), and collecting like powers, we can solve for the coefficients. In the end, we find the augmented density ([nfwaugdens]) and, using the recipe of section [secti], the corresponding distribution function ([nfwdf]), truncating both to include the first five non-zero terms. Again, the convergence of the series ([nfwaugdens]) and ([nfwdf]) sufficiently close to the origin is clear (by the same reasoning as for the isochrone model). Note that the leading term is the Hernquist model; this is not surprising, since the NFW profile has a cusp whose slope is exactly equal to that of the Hernquist profile ($\rho \propto r^{-1}$). The quantity $\delta$, to lowest order, follows accordingly. We proceed to compute the velocity dispersions from the six-term distribution function, together with the corresponding anisotropy parameter.

[Fig. [fig4dot5]: $\delta(r)$, $\langle v_{r}^{2}\rangle$, $\langle v_{t}^{2}\rangle$ and $\beta$ for the NFW profile near the origin.]

The plots of the velocity dispersions, the anisotropy parameter and $\delta$ are presented in fig. [fig4dot5]. We see that the velocity structure remains virtually identical even when additional terms are incorporated, which is suggestive of convergence.
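The coefficient matching described here can be scripted. The sketch below works under the reconstructed ansatz $\rho r^{2}/\psi = \sum_{k} A_{k}(r\psi^{2})^{k}$, in units where $\psi = \ln(1+r)/r$ and $\rho = 1/[4\pi r(1+r)^{2}]$; the variable names and the truncation order are illustrative:

```python
# Sketch of the series matching for the NFW profile near r = 0: posit an
# integer-power ladder in u = r psi^2 and solve for the coefficients A_k.
import sympy as sp

r = sp.symbols('r', positive=True)
N = 4
psi = sp.log(1 + r) / r
rho = 1 / (4 * sp.pi * r * (1 + r)**2)

u = sp.expand(sp.series(r * psi**2, r, 0, N + 2).removeO())
F = sp.expand(sp.series(rho * r**2 / psi, r, 0, N + 2).removeO())

A = sp.symbols(f'A1:{N + 1}')                       # A1 .. AN
ansatz = sum(a * u**(k + 1) for k, a in enumerate(A))
eqs = sp.Poly(sp.expand(sp.series(ansatz - F, r, 0, N + 1).removeO()), r).coeffs()
print(sp.solve(eqs, A))   # leading A1 = 1/(4*pi): the Hernquist-like term
```

The leading coefficient corresponds to the Hernquist-like term noted in the text, with the alternating signs of the higher coefficients motivating the positivity checks discussed above.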
In fig. [nfwdfplots], the distribution function has been plotted for several (fixed) values of the angular momentum, as a function of the dimensionless binding energy. We note that the same line of reasoning presented in our discussion of the small-$r$ isochrone distribution functions, as plotted in fig. [fig3dot1], is also applicable here.

[Fig. [nfwdfplots]: the NFW distribution function versus the dimensionless binding energy at four fixed values of the angular momentum.]

The potential-density pair for the Jaffe model was first obtained by Jaffe, and is given by
$$ \psi(r) = \frac{GM}{a}\,\ln\left(1+\frac{a}{r}\right), \qquad \rho(r) = \frac{M}{4\pi a^{3}}\,\frac{1}{(r/a)^{2}\left(1+r/a\right)^{2}}. $$
This time, owing to the singular nature of the density at the origin, we will perform the expansions around infinity instead. Curiously, these expansions are exactly the same as for the NFW model, up to a replacement of the expansion variable. Proceeding as for the NFW model, we find the exact same augmented density ([nfwaugdens]) and distribution function ([nfwdf]), but this time this is supposed to be an approximation for the outer region of the model. Indeed, the outer falloff of both the Jaffe and Hernquist models is the same ($\rho \propto r^{-4}$), but their inner cusps differ ($\rho \propto r^{-2}$ for the Jaffe model and $\rho \propto r^{-1}$ for the Hernquist model). The convergence of the distribution function around infinity for the Jaffe model can be established by following the same reasoning as the one used for the isochrone outer branch. One can compute the value of $\delta$ as defined before, but it must be noted that this must now be evaluated for large values of $r$. To leading order, we find that $\delta$ has an asymptotic behaviour in agreement with equation ([nfwdelta]), because the two expressions are exactly the same under the variable exchange mentioned above, which also ensures that the augmented densities and the distribution functions for these two models are exactly the same, albeit in different regimes. The plots for the velocity dispersions, the anisotropy parameter and $\delta$ are presented below; the convergence of the velocity structure profiles as one includes higher order terms of the distribution function is also evident. In fig. [jaffedf], we plot the distribution functions for the Jaffe model as functions of the dimensionless binding energy, at different fixed values of the angular momentum. The choices of the angular momentum, and the behaviour seen in these plots, can be explained by following a similar line of reasoning to the one employed in discussing the large-$r$ isochrone model, plotted in fig. [fig5dot5].

[Figure: $\delta(r)$, $\langle v_{r}^{2}\rangle$, $\langle v_{t}^{2}\rangle$ and $\beta$ for the outer region of the Jaffe model.]

[Fig. [jaffedf]: the Jaffe distribution functions versus the dimensionless binding energy at four fixed values of the angular momentum.]

Our final example of a hypervirial distribution function is a specific case of the Dehnen models. Anisotropic distribution functions of the Dehnen models have been computed previously in the literature. The Dehnen family of models has the following potential-density pair:
$$ \rho(r) = \frac{(3-\gamma)M}{4\pi}\,\frac{a}{r^{\gamma}(r+a)^{4-\gamma}}, \qquad \psi(r) = \frac{GM}{a}\,\frac{1}{2-\gamma}\left[1-\left(\frac{r}{r+a}\right)^{2-\gamma}\right]. $$
We will only consider the model with $\gamma = 3/2$, since this model is known to closely approximate de Vaucouleurs' law. We repeat the procedure outlined in the above sections and expand $\psi$ and $\rho$ about $r = 0$.
upon doing so , we obtain the augmented density and the corresponding distribution function is that the series above converge near the origin can be seen following a reasoning completely analogous to the previous models .it is interesting to note that the leading term is the veltmann nfw - like model ( with ) discussed in the previous section , which has the same inner cusp as the dehnen model ( ) but a slightly different outer falloff for the dehnen model , and for the hypervirial approximation .note also that all the subleading terms do not contribute to the outer falloff of the hypervirial approximation .the value of for small values of is found to be the velocity dispersions for the above augmented density are found to be and the anisotropy parameter is the resulting dispersions and anisotropy parameter are plotted in fig .[ fig6 ] along with the value of .the dehnen model grows slowly , i.e. one finds that including successive terms does play a major role in increasing the accuracy of but do not significantly alter the structure of the velocity dispersion profiles . in fig .[ dehnendfsmallrplots ] , we plot the small distributions functions for the dehnen model , holding fixed .the insights that can be gleaned from this figure are similar to the ones derivable from fig .[ fig3dot1 ] that deals with the small isochrone distribution functions and the discussion for the latter can be found in the subsection on the isochrone models .{dehnendeltasmallr.pdf } & \includegraphics[width=6.8cm]{dehnenvr2smallr.pdf } \\\includegraphics[width=6.8cm]{dehnenvt2smallr.pdf } & \includegraphics[width=6.8cm]{dehnenbetasmallr.pdf } \end{array}\ ] ] {dehnendfsmallr1.pdf } & \includegraphics[width=6.8cm]{dehnendfsmallr2.pdf } \\\includegraphics[width=6.8cm]{dehnendfsmallr3.pdf } & \includegraphics[width=6.8cm]{dehnendfsmallr4.pdf } \end{array}\ ] ] expanding around infinity , we find the augmented density and distribution function : again , the convergence of the series around infinity hinges on the fact that as . the value for is found to be : the six - term distribution has the following velocity dispersion profiles and the corresponding anisotropy parameter is the velocity dispersion , anisotropy parameter and are plotted in fig .[ fig7 ] . as before, the convergence of the multi term velocity dispersion profiles is excellent . in fig .[ dehnendflarger ] , we plot the large dehnen models as a function of the dimensionless binding energy , while holding the angular momentum fixed . many of the features that are evident from this graph , such as the existence of a finite radius of convergence that decreases as one increases , can be explained by using the same methodology as the one employed for the large isochrone models .in particular , we refer to fig .[ fig5dot5 ] and its corresponding discussion in the subsection on the hypervirial isochrone models .{dehnendeltalarger.pdf } & \includegraphics[width=6.8cm]{dehnenvr2larger.pdf } \\\includegraphics[width=6.8cm]{dehnenvt2larger.pdf } & \includegraphics[width=6.8cm]{dehnenbetalarger.pdf } \end{array}\ ] ] {dehnendflarger1.pdf } & \includegraphics[width=6.8cm]{dehnendflarger2.pdf } \\\includegraphics[width=6.8cm]{dehnendflarger3.pdf } & \includegraphics[width=6.8cm]{dehnendflarger4.pdf } \end{array}\ ] ]in this section , we investigate the velocity structure of a generic hypervirial model , in particular the asymptotic values of the anisotropy parameter as . 
From the augmented density ([hypervirialrho]) and the statement of hyperviriality ([hyperviriality]), it follows that the anisotropy parameter can be written explicitly in terms of the coefficients $A_{\kappa}$, the powers $p_{\kappa}$ and the potential $\psi(r)$. Before we proceed to investigate the behaviour of $\beta$ in the two asymptotic regimes, two important assumptions are made. First, we assume that the potential is finite at the origin and, secondly, we assume that it goes to zero at infinity. It must be emphasized that there exist models, such as the isothermal sphere or models with central black holes, whose potentials do not satisfy these properties. But the majority of the models used in the literature, such as the Dehnen, Hénon isochrone, Veltmann and NFW models, do possess this property.

To investigate the behaviour of the potential-density pair in the limit $r \to 0$, we shall use the method of dominant balance from asymptotic analysis. We begin with the Poisson equation
$$ \frac{1}{r^{2}}\frac{\mathrm{d}}{\mathrm{d}r}\left(r^{2}\frac{\mathrm{d}\psi}{\mathrm{d}r}\right) = -4\pi G\,\tilde{\rho}(\psi, r). $$
Now, we investigate the behaviour of the Poisson equation in the limit $r \to 0$. We will suppose that the terms in the augmented density that have higher powers of $r$ drop out, and that the term in the augmented density which dominates is the one with the lowest power of $r$. If we make this assumption, the Poisson equation reduces to
$$ \frac{1}{r^{2}}\frac{\mathrm{d}}{\mathrm{d}r}\left(r^{2}\frac{\mathrm{d}\psi}{\mathrm{d}r}\right) = -4\pi G\,A_{\kappa}\,r^{\,p_{\kappa}-2}\,\psi^{\,2p_{\kappa}+1}, $$
where $p_{\kappa}$ is the smallest of the powers, and $A_{\kappa}$ is the corresponding coefficient. This differential equation can be solved exactly, and the solution with the boundary conditions on $\psi$ discussed above has been derived previously; it is of the Veltmann form
$$ \psi(r) \propto \left[1+\left(\frac{r}{a_{\kappa}}\right)^{p_{\kappa}}\right]^{-1/p_{\kappa}}, $$
and the Laplacian scales as
$$ \nabla^{2}\psi \propto r^{\,p_{\kappa}-2}\left[1+\left(\frac{r}{a_{\kappa}}\right)^{p_{\kappa}}\right]^{-(1/p_{\kappa})-2} \propto r^{\,p_{\kappa}-2}\,\psi^{\,2p_{\kappa}+1}. $$
Now, we need to establish that the assumption we made, namely that the higher order terms of the augmented density are negligible, is indeed valid; in short, we need to show that the corresponding ratios tend to zero. By using equations ([pot0]) and ([density0]), as well as the fact that $\psi$ tends to a constant as $r \to 0$, it is immediately evident that equations ([condition1r0]) and ([condition2r0]) are indeed satisfied.

Since we have an asymptotic expression for $\psi$ in the limit $r \to 0$, we can substitute it in equation ([betahypervirialallmodels]) to find the inner anisotropy parameter. After some algebra, it is found that
$$ \beta_{0} = 1 - \frac{p_{\kappa}}{2}. $$
By using equation ([density0]), it is evident that the inner slope is given by $\gamma_{0} = 2 - p_{\kappa}$. On combining this relation with the above expression, one obtains the following equality for these models:
$$ \gamma_{0} = 2\beta_{0}, $$
which happens to be exactly the limiting case of the cusp slope-central anisotropy theorem. The cusp slope-central anisotropy theorem states that the central slope $\gamma_{0}$ and the central anisotropy parameter $\beta_{0}$ are related through the inequality $\gamma_{0} \geq 2\beta_{0}$, and we observe that the hypervirial augmented density ansatz saturates this inequality regardless of the values chosen for the parameters.
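A numerical spot-check of the saturation, assuming the reconstructions above and using the one-term Veltmann member with $p_{\kappa} = 1/2$ (for which $\beta$ is exactly $-q/2 = 3/4$ at every radius, $q = p_{\kappa} - 2$ being the angular-momentum power):

```python
# Spot-check of gamma_0 = 2*beta_0 for a one-term hypervirial model
# (units phi_0 = G = a = 1): measure the central log-slope of the density.
import numpy as np

p = 0.5
rho = lambda r: (p + 1.0) / (4.0 * np.pi) * r**(p - 2.0) * (1.0 + r**p) ** (-1.0 / p - 2.0)

r1, r2 = 1e-4, 2e-4
gamma0 = -np.log(rho(r2) / rho(r1)) / np.log(r2 / r1)
beta0 = 1.0 - p / 2.0        # exact for a single component
print(gamma0, 2.0 * beta0)   # both -> 1.5
```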
In the previous subsections, we derived an anisotropy equality for this family of models in the regime $r \to 0$, which relates the density slope in that region to the value of the anisotropy parameter in the same region. It was found that all these models obey equation ([slopeanisotropy1]). The most striking aspect of this relation is that it is completely independent of the choice of the $A_{\kappa}$ and the $p_{\kappa}$ and of the number of terms in the hypervirial augmented density ansatz, which proves its generality. Furthermore, one discovers a second remarkable result by considering equation ([betahypervirialallmodels]). Suppose that we restrict ourselves to those models which have finite total mass. As previously discussed, the potential falls off as $GM/r$ in the limit $r \to \infty$. By using this property in conjunction with equation ([betahypervirialallmodels]), it can be shown that the outer anisotropy takes the same functional form as the inner one. This might lead us to conclude that $\beta_{0} = \beta_{\infty}$, but this is _not_ the case. The reason is simple, and stems from the non-monotonic nature of $u(r) = r\psi^{2}$, which was used to construct the augmented density. Because of this property, one ends up with two different branches and, consequently, the augmented density in the region $r \to 0$ is not the same as the augmented density in the region $r \to \infty$. Hence, the leading power for these models is also not identical, which in turn implies that, in general, $\beta_{0} \neq \beta_{\infty}$. But if one does have models where the augmented density is the same for all values of $r$, one can indeed see that the equality is preserved. From the expression for the anisotropy parameter, given by equation ([betahypervirialallmodels]), it is evident that $\beta$ is a function of $r$, except in the special case where the ansatz consists of a single term; in that case, the anisotropy parameter is independent of $r$, and one does indeed have $\beta_{0} = \beta_{\infty}$ for these models. These models also happen to be the one-term hypervirial models discussed in the previous sections, where the two branches coincide. Another set of models for which this equality is preserved are the two-term hypervirial models studied previously, where the augmented density is applicable for all values of $r$.

This paper presents a hybrid approach that combines knowledge of the potential-density pair with a distribution function comprising double-power terms. A major advantage of this approach is that one can bypass integral transforms, except for equation ([centralintegral]), and one can instead sum the series to obtain closed-form expressions. It must be emphasized, however, that not all potential-density pairs lend themselves to such an approach, although a large number of the physically relevant models are amenable to such a treatment. It is also important that the free parameters be chosen such that the distribution functions thus generated satisfy the phase-space consistency requirements. The second advantage of this approach lies in the fact that it presents a means of generating a reasonably diverse range of distribution functions for several models. The double-power distribution functions are themselves generated from a double-power augmented density ansatz, which has additional advantages of its own. In particular, one can construct an appropriate double-power augmented density ansatz that is always hypervirial, irrespective of the choice of the potential-density pair, and use it to reproduce the potential-density pairs of known models to arbitrary accuracy.

In section [secti], we illustrate the method of constructing a more complex augmented density from a basic one by performing simple algebraic operations on the simpler augmented density. The usefulness of this approach is demonstrated by applying it to a concrete example, namely the Veltmann model. This model is of relevance because of its similarities to the NFW profile, and because the basic augmented density is hypervirial. In this section, we explicitly compute some of the simpler distribution functions and their velocity dispersions for this model.
In an associated appendix (appendix [appa]), we also list some of the corresponding expressions for the Plummer and Hernquist models, which can also be generated using alternative methodologies currently existing in the literature. However, the primary aim of the paper is to generate hypervirial distribution functions for a wide array of models. In order to do so, we start with an appropriate double-power augmented density ansatz in section [sectii] that satisfies the hypervirial property for any given potential-density pair. The next step involves expanding the potential-density pair as a power series and matching coefficients with the expression for the augmented density. By doing so, we present a method that gives rise to hypervirial distribution functions that can model a given potential-density pair to an arbitrarily high degree of accuracy. The non-monotonicity of $u(r) = r\psi^{2}$ requires two different distribution functions in the two regimes $r \to 0$ and $r \to \infty$. This method is employed to construct hypervirial distribution functions for commonly used potential-density pairs in the literature, including the Hénon isochrone, NFW, Jaffe and Dehnen models. After we construct the distribution functions, we also compute the velocity dispersions and the anisotropy parameter, as well as the relative error, which measures the deviation of the augmented density from the actual density.

In section [sectiii], we take the generic hypervirial augmented density that was used to generate the distribution functions for the above models, and investigate its behaviour as $r \to 0$ and $r \to \infty$. We demonstrate the existence of a universal relation between the anisotropy parameter and the density slope in the asymptotic regime. In other words, we show that all the hypervirial distribution functions thus constructed (from the hypervirial augmented density ansatz) saturate the cusp slope-central anisotropy theorem. Furthermore, it is found that the asymptotic values of the anisotropy parameter, $\beta_{0}$ and $\beta_{\infty}$, are equal for all models that possess a finite total mass and can be described by a single augmented density in all regions. For such models, we show that the asymptotic values of the anisotropy parameter are functionally very simple. Owing to the existence of these universal properties for the hypervirial augmented densities, it seems reasonable to suppose that observations which establish any of the above properties may be strongly indicative of an underlying hypervirial distribution function. However, it must be emphasized that any potential observational evidence satisfying any of the above properties will not necessarily indicate hyperviriality, because these properties hold true for the hypervirial ansatz but the converse is not necessarily valid. Nonetheless, we believe that further investigations along these lines constitute a promising line of enquiry.

We thank Richard Matzner and Philip Morrison for their support and guidance. We are deeply grateful to Jin An for his insightful suggestions, detailed comments and for pointing out new avenues that we had hitherto been unaware of. This material is based on work supported by the Department of Physics and the Texas Cosmology Center, which is supported by the College of Natural Sciences and the Department of Astronomy at the University of Texas at Austin and the McDonald Observatory.

References

Agon C.A., Pedraza J.F. & Ramos-Caro J., 2011, Phys. Rev. D, 83, 123007
An J., 2011, MNRAS, 413, 2554
An J., 2011, ApJ, 736, 151
An J. & Evans N.W., 2005, A&A, 444, 45
An J. & Evans N.W., 2006, AJ, 131, 782
An J. & Evans N.W., 2006, ApJ, 642, 752
An J., Van Hese E. & Baes M., 2012, MNRAS, 422, 652
An J. & Zhao H., 2013, MNRAS, 428, 2805
Baes M. & Dejonghe H., 2002, A&A, 393, 485
Baes M. & Van Hese E., 2007, A&A, 471, 419
Bender C.M. & Orszag S.A., 1999, Advanced Mathematical Methods for Scientists and Engineers I: Asymptotic Methods and Perturbation Theory. Springer-Verlag, Berlin
Binney J. & Tremaine S., 1987, Galactic Dynamics, Princeton Series in Astrophysics, Princeton Univ. Press, Princeton, NJ
Buyle P., Hunter C. & Dejonghe H., 2007, MNRAS, 375, 773
Ciotti L. & Morganti L., 2010a, MNRAS, 408, 1070
Ciotti L. & Morganti L., 2010b, in Bertin G., De Luca F., Lodato G., Pozzoli R., Romé M., eds, AIP Conf. Proc. Vol. 1242, Plasmas in the Laboratory and the Universe: Interactions, Patterns, and Turbulence. Am. Inst. Phys., New York, p. 300
De Souza R.S. & Ishida E.E.O., 2011, A&A, 524, A74
De Vega H.J. & Sanchez N.G., 2014, preprint (arXiv:1401.0726)
Dehnen W., 1993, MNRAS, 265, 250
Dejonghe H., 1986, Phys. Rep., 133, 217
Dejonghe H., 1987, MNRAS, 224, 13
Eddington A.S., 1916, MNRAS, 76, 572
Evans N.W., 1993, MNRAS, 260, 191
Evans N.W., 1994, MNRAS, 267, 333
Evans N.W. & An J., 2005, MNRAS, 360, 492
Evans N.W. & An J.H., 2006, Phys. Rev. D, 73, 023524
Fornasa M. & Green A.M., 2013, Phys. Rev. D, 89, 063531
Fricke W., 1952, Astron. Nachr., 280, 193
Hansen S.H., Egli D., Hollenstein L. & Salzmann C., 2004, New Astron., 10, 379
Hénon M., 1959, Ann. Astrophys., 22, 126
Henriksen R.N., 2009, ApJ, 690, 102
Hernquist L., 1990, ApJ, 356, 359
Hunter C. & Qian E., 1993, MNRAS, 262, 401
Iguchi O., Sota Y., Nakamichi A. & Morikawa M., 2006, Phys. Rev. E, 73, 046112
Jaffe W., 1983, MNRAS, 202, 995
Jarvis B.J. & Freeman K.C., 1985, ApJ, 295, 314
Jiang Z. & Ossipkov L., 2007a, Celest. Mech. Dyn. Astron., 97, 249
Jiang Z. & Ossipkov L., 2007b, MNRAS, 379, 1133
Jiang Z. & Ossipkov L., 2007c, MNRAS, 382, 1971
Kalnajs A.J., 1976, ApJ, 205, 751
Lynden-Bell D., 1962, MNRAS, 123, 447
Merafina M. & Alberti G., 2014, preprint (arXiv:1402.0756)
Merritt D., 1985, AJ, 90, 1027
Mestel L., 1963, MNRAS, 126, 553
Mo H., van den Bosch F. & White S., 2010, Galaxy Formation and Evolution, Cambridge University Press, Cambridge
Navarro J.F., Frenk C.S. & White S.D.M., 1996, ApJ, 462, 563
Nguyen P.H. & Pedraza J.F., 2013, Phys. Rev. D, 88, 064020
Nguyen P.H. & Lingam M., 2013, MNRAS, 436, 2014
Osipkov L.P., 1979, Sov. Astron. Lett., 5, 42
Pedraza J.F., Ramos-Caro J.F. & Gonzalez G.A., 2008a, MNRAS, 390, 1587
Pedraza J.F., Ramos-Caro J.F. & Gonzalez G.A., 2008b, MNRAS, 391, L24
Pedraza J.F., Ramos-Caro J.F. & Gonzalez G.A., 2008c, preprint (arXiv:0806.4275)
Plummer H.C., 1911, MNRAS, 71, 460
Prendergast K.H. & Tomer E., 1970, AJ, 75, 674
Ramos-Caro J., Agon C.A. & Pedraza J.F., 2012, Phys. Rev. D, 86, 043008
Raspopova N. & Ossipkov L., 2010, in Variable Stars, the Galactic Halo and Galaxy Formation. Sternberg Astronomical Institute of Moscow University, Universitetskij, p. 149
Rindler-Daller T., 2009, MNRAS, 396, 997
Rowley G., 1988, ApJ, 331, 124
Siutsou I., Argüelles C.R. & Ruffini R., preprint (arXiv:1402.0695)
Sota Y., Iguchi O., Morikawa M. & Nakamichi A., 2006, Prog. Theor. Phys. Suppl., 162, 62
Sota Y., Iguchi O., Tashiro T. & Morikawa M., 2008, Phys. Rev. E, 77, 051117
Van Hese E., Baes M. & Dejonghe H., 2011, ApJ, 726, 80
Veltmann Ü.-I.K., 1979a, Sov. Astron., 23, 551
Veltmann Ü.-I.K., 1979b, Astron. Zh., 56, 976
White R.W., 2010, Asymptotic Analysis of Differential Equations, Imperial College Press, London
Zhao H., 1996, MNRAS, 278, 488

In this appendix, we apply the method developed for the Veltmann model to write down a few simple distribution functions for the Plummer and Hernquist models. The one-component augmented density is given by equation ([plum1]), and the corresponding distribution function is nothing but the isotropic distribution function. Next, we look for two-component distribution functions. The only such model is the one given by equation ([plum2]); this is a particular case of a one-parameter family presented in the literature. Finally, we proceed to derive three-component distribution functions for the Plummer sphere. This can be done in three different ways. The first possible choice is given by equation ([plum3a]); this distribution function is yet another special case of the same family. The second choice of augmented density and distribution function is given by equation ([plum3b]), and the third possibility by equation ([plum3c]).

As usual, we compute the velocity structure of the models, i.e. the tangential and radial velocity dispersions as well as the anisotropy parameter. For the isotropic Plummer model ([plum1]), the dispersions follow directly. For the two-term distribution function, the density is given by equation ([plum2]), and the velocity dispersions and anisotropy parameter follow in the same manner. There are three different three-term distribution functions. The first of these has a density dependence given by equation ([plum3a]); for this model the dispersions and anisotropy parameter can again be written in closed form. For the second three-term distribution function, the density is given by ([plum3b]), and the velocity dispersions are computed for this model in the same way. For the last Plummer model, given by ([plum3c]), we obtain the corresponding expressions analogously. The anisotropy parameters for all the Plummer models presented above are either constant or decreasing functions of the radius, and they exhibit tangential anisotropy for all values of the radius. Models with tangential anisotropy have been studied in the context of analysing the substructure of dark matter haloes. The radial and tangential velocity dispersions, as well as the anisotropy parameter, for the Plummer distribution functions discussed in this section are plotted in fig. [figa1]. In fig. [contourplum], contour plots for some of the Plummer distribution functions are depicted; the distribution functions depend on both the dimensionless binding energy and the dimensionless angular momentum.

Next, we repeat the exercise for the Hernquist model. Again, we start with the one-component distribution function, whose augmented density is given by equation ([her1]); this augmented density corresponds to the hypervirial distribution function discussed in section [secti]. Next, we proceed to the two-component distribution function for the Hernquist model, given by ([her2]). As in the Plummer case, the three-component Hernquist models can be constructed in three distinct ways: the first augmented density and distribution function is ([her3a]), and the alternatives are ([her3b]) and ([her3c]). The simplest model for the Hernquist profile has a density profile with the functional dependence given by equation ([her1]). For this model, the velocity dispersions and anisotropy parameter are the same as the ones obtained for the hypervirial family.
The second simplest model is described by equation ([her2]); one can proceed in a manner similar to the Plummer models to obtain its dispersions. We then move on to the three-term distribution functions. The first of these has the simplest functional dependence of the density and is given by equation ([her3a]); for this model the relevant quantities can be found in closed form. The second three-term distribution function has a density given by equation ([her3b]), with the dispersions and anisotropy parameter following accordingly. The third three-term distribution function is given by the relation ([her3c]), and its velocity dispersions and anisotropy parameter are obtained in the same manner. The radial and tangential velocity dispersions, as well as the anisotropy parameter, for the Hernquist distribution functions discussed in this section are plotted in fig. [figa2]. We also present contour plots for some of these distribution functions in fig. [contourher]; note that they are functions of both the dimensionless binding energy and the angular momentum.

In this final subsection, we present another wide class of distribution functions for the Veltmann models. We start by multiplying and dividing the augmented density ([rho1term]) by a suitable factor involving a constant, then rewrite the denominator using the potential density relation to obtain a new augmented density. Upon expanding, the augmented density becomes a double-power series, and the double-power distribution function that corresponds to it follows term by term. The anisotropy parameter can be computed from the augmented density; it is seen that its value decreases as one moves radially outwards.
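A remark on conventions closes this appendix. Throughout, the anisotropy parameter follows the standard definition, and for an augmented density of the separable form \(\tilde\rho(\psi,r)=f(\psi)\,g(r)\) (an assumption satisfied by the basic building blocks used above) it can be read off from the radial part directly, following the well-known rule for augmented densities (see e.g. Dejonghe 1986):
\[
\beta(r)\;=\;1-\frac{\sigma_\theta^2(r)+\sigma_\phi^2(r)}{2\,\sigma_r^2(r)},
\qquad
\beta(r)\;=\;-\frac{1}{2}\,\frac{\mathrm{d}\ln g(r)}{\mathrm{d}\ln r}\,.
\]
In particular, a pure power law \(g(r)\propto r^{-2\beta_0}\) yields the constant anisotropy \(\beta_0\), which is the situation encountered for the one-term hypervirial models, while the radially decreasing \(\beta(r)\) found above reflects a \(g(r)\) that steepens inward.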
In this paper, we present two simple approaches for deriving anisotropic distribution functions for a wide range of spherical models. The first method involves multiplying and dividing a basic augmented density by polynomials in the radial coordinate, thereby constructing more complex augmented densities from which we obtain the double-power distribution functions. This procedure is applied to a specific case of the Veltmann models that is known to closely approximate the Navarro Frenk White (NFW) profile, and also to the Plummer and Hernquist profiles (in the appendix). The second part of the paper is concerned with obtaining hypervirial distribution functions, i.e. distribution functions that satisfy the local virial theorem, for several well-known models. In order to construct the hypervirial augmented densities and the corresponding distribution functions, we start with an appropriate ansatz for the former and proceed to determine the coefficients appearing in that ansatz by expanding the potential density pair as a series around \(r=0\) and \(r=\infty\). By doing so, we obtain hypervirial distribution functions, valid in these two limits, that can generate the potential density pairs of these models to an arbitrarily high degree of accuracy. This procedure is explicitly carried out for the Hénon isochrone, Jaffe, Dehnen and NFW models, and the accuracy of the procedure is established. Finally, we derive some universal properties of these hypervirial distribution functions, involving the asymptotic behaviour of the anisotropy parameter and its relation to the density slope in this regime. In particular, we show that the cusp slope-central anisotropy inequality is saturated.

Keywords: gravitation; methods: analytical; galaxies: bulges; galaxies: clusters: general; galaxies: haloes; dark matter
Extending the methods found in previous work on quantum knots and quantum graphs, we describe a general procedure for quantizing a large class of mathematical structures, which includes, for example, knots, graphs, groups, algebraic varieties, categories, topological spaces, geometric spaces, and more. This procedure is different from that normally found in quantum topology. We then demonstrate the power of this method by using it to quantize braids. We should also mention that this general method produces a blueprint of a quantum system which is physically implementable in the same sense that Shor's quantum factoring algorithm is physically implementable. Moreover, mathematical invariants become objects that are physically observable. The above mentioned general quantization procedure consists of two steps: (1) mathematical construction of a motif system, and (2) mathematical construction of a quantum motif system from that system.

Caveat: the term "motif" used in this paper should not be confused with the use of the term "motive" (a.k.a. "motif") found in algebraic geometry.

We now outline a general procedure for quantizing mathematical structures. One useful advantage of this quantization procedure is that the resulting system is a multipartite quantum system, a property that is of central importance in quantum computation, particularly in regard to the design of quantum algorithms. In a later section of this paper, we illustrate this quantization procedure by using it to quantize braids. Examples of the application of this quantization procedure to knots, graphs, and algebraic structures can be found in the references.

Let \(\mathbb A\) be a finite set of symbols, with a distinguished element called the trivial symbol, and with a linear ordering denoted by '\(\leq\)'. Let \(\mathbb A^{(n)}\) be the \(n\)-fold Cartesian product of \(\mathbb A\), with the induced lexicographic ordering also denoted by '\(\leq\)', and let \(\Sigma^{(n)}\) be the group of all permutations of \(\mathbb A^{(n)}\). For positive integers \(m\) and \(n\) (\(m<n\)), let \(\iota:\mathbb A^{(m)}\longrightarrow\mathbb A^{(n)}\) be the injection that pads a string with trivial symbols. Next, let \(n_0<n_1<n_2<\cdots\) be a monotone strictly increasing infinite sequence of positive integers. For each positive integer \(j\), let \(M^{(n_j)}\) be a subset of \(\mathbb A^{(n_j)}\) such that \(\iota\bigl(M^{(n_j)}\bigr)\) lies in \(M^{(n_{j+1})}\). Moreover, for each non-negative integer \(j\), let \(A(n_j)\) be a subgroup of the permutation group \(\Sigma^{(n_j)}\) having \(M^{(n_j)}\) as an invariant subset, and such that the injection \(\iota\) induces a monomorphism \(A(n_j)\longrightarrow A(n_{j+1})\), also denoted by \(\iota\).

We define a motif system of order \(n\) as the pair \(\bigl(M^{(n)},A(n)\bigr)\), where \(M^{(n)}\) is called the set of motifs, and where \(A(n)\) is called the ambient group. Finally, we define a nested motif system as the following sequence of sets, groups, injections, and monomorphisms:
\[
M^{(n_0)}\overset{\iota}{\hookrightarrow}M^{(n_1)}\overset{\iota}{\hookrightarrow}M^{(n_2)}\overset{\iota}{\hookrightarrow}\cdots,
\qquad
A(n_0)\overset{\iota}{\hookrightarrow}A(n_1)\overset{\iota}{\hookrightarrow}A(n_2)\overset{\iota}{\hookrightarrow}\cdots
\]
There is also one more symbolic motif system that is often of use, the direct limit motif system defined by
\[
\Bigl(\varinjlim M^{(n_j)},\ \varinjlim A(n_j)\Bigr),
\]
where \(\varinjlim\) denotes the direct limit.

Let \(\bigl(M^{(n)},A(n)\bigr)\) be a motif system of order \(n\). Two motifs \(m_1\) and \(m_2\) of the set \(M^{(n)}\) are said to be of the same \(n\)-motif type, written \(m_1\sim_n m_2\), if there exists an element \(g\) of the ambient group which takes \(m_1\) to \(m_2\), i.e. such that \(gm_1=m_2\). The motifs \(m_1\) and \(m_2\) are said to be of the same motif type, written \(m_1\sim m_2\), if there exists a non-negative integer \(j\) such that \(\iota^j(m_1)\sim_{n_j}\iota^j(m_2)\).

We now wish to answer the question: what is meant by a motif invariant? Let \(\bigl(M^{(n)},A(n)\bigr)\) be a motif system, and let \(D\) be some yet to be chosen mathematical domain. By an \(n\)-motif invariant, we mean a map \(I:M^{(n)}\longrightarrow D\) such that, when two motifs \(m_1\) and \(m_2\) are of the same \(n\)-type, i.e. \(m_1\sim_n m_2\), their respective invariants are equal, i.e. \(I(m_1)=I(m_2)\). In other words, \(I\) is a map that is invariant under the action of the ambient group, i.e.
\(I(gm)=I(m)\) for all elements \(g\) in \(A(n)\).

We now use the nested motif system to construct a nested sequence of quantum motif systems. For each non-negative integer \(j\), the corresponding \(n_j\)-th order quantum motif system consists of a Hilbert space \(\mathcal M^{(n_j)}\), called the quantum motif space, and a group, also called the ambient group. The quantum motif space and the ambient group are defined as follows. The quantum motif space \(\mathcal M^{(n)}\) is the Hilbert space with orthonormal basis \(\{\left\vert m\right\rangle : m\in M^{(n)}\}\); the elements of \(\mathcal M^{(n)}\) are called quantum motifs. The ambient group is the unitary group acting on the Hilbert space \(\mathcal M^{(n)}\) consisting of all linear transformations of the form \(\widetilde g\), where \(\widetilde g\) is the linear transformation defined by
\[
\begin{array}{rcc}
\widetilde g:\mathcal M^{(n)} & \longrightarrow & \mathcal M^{(n)}\\
\left\vert m\right\rangle\quad & \longmapsto & \left\vert gm\right\rangle
\end{array}
\]
Since each element \(g\) in \(A(n)\) is a permutation, each \(\widetilde g\) permutes the orthonormal basis of \(\mathcal M^{(n)}\). Hence, \(\widetilde g\) is automatically a unitary transformation. It follows that \(A(n)\) and \(\widetilde A(n)\) are isomorphic as groups. We will often abuse notation by denoting \(\widetilde A(n)\) by \(A(n)\), and \(\widetilde g\) by \(g\).

Next, for each non-negative integer \(j\), let \(\iota\) denote both the Hilbert space monomorphism \(\mathcal M^{(n_j)}\longrightarrow\mathcal M^{(n_{j+1})}\) induced by the injection \(M^{(n_j)}\hookrightarrow M^{(n_{j+1})}\) and the group monomorphism \(A(n_j)\hookrightarrow A(n_{j+1})\). Finally, we define the nested quantum motif system as the following sequence of Hilbert spaces, groups, Hilbert space monomorphisms, and group monomorphisms:
\[
\mathcal M^{(n_0)}\overset{\iota}{\hookrightarrow}\mathcal M^{(n_1)}\overset{\iota}{\hookrightarrow}\cdots,
\qquad
A(n_0)\overset{\iota}{\hookrightarrow}A(n_1)\overset{\iota}{\hookrightarrow}\cdots
\]
We should also mention one other quantum motif system that can be useful, namely the quantum direct limit motif system defined by \(\bigl(\varinjlim\mathcal M^{(n_j)},\ \varinjlim A(n_j)\bigr)\), where \(\varinjlim\) denotes the direct limit. This quantum system is often also physically implementable.

Let \(\bigl(\mathcal M^{(n)},A(n)\bigr)\) be a quantum motif system of order \(n\). Two quantum motifs \(\left\vert\psi_1\right\rangle\) and \(\left\vert\psi_2\right\rangle\) of the Hilbert space \(\mathcal M^{(n)}\) are said to be of the same \(n\)-motif type, written \(\left\vert\psi_1\right\rangle\sim_n\left\vert\psi_2\right\rangle\), if there exists an element \(g\) of the ambient group which takes \(\left\vert\psi_1\right\rangle\) to \(\left\vert\psi_2\right\rangle\), i.e. such that \(g\left\vert\psi_1\right\rangle=\left\vert\psi_2\right\rangle\). The quantum motifs are said to be of the same motif type, written \(\left\vert\psi_1\right\rangle\sim\left\vert\psi_2\right\rangle\), if there exists a non-negative integer \(j\) such that \(\iota^j\left\vert\psi_1\right\rangle\sim_{n_j}\iota^j\left\vert\psi_2\right\rangle\).

We consider the following question: what do we mean by a physically observable quantum motif invariant? We answer this question with a definition. Let \(\bigl(\mathcal M^{(n)},A(n)\bigr)\) be a quantum motif system of order \(n\), and let \(\Omega\) be an observable, i.e. a Hermitian operator on the Hilbert space \(\mathcal M^{(n)}\) of quantum motifs. Then \(\Omega\) is a quantum motif \(n\)-invariant provided \(\Omega\) is left invariant under the big adjoint action of the ambient group, i.e. provided \(g\,\Omega\,g^{-1}=\Omega\) for all \(g\) in \(A(n)\). If \(I\) is a real valued \(n\)-motif invariant, then \(\Omega=\sum_{m\in M^{(n)}} I(m)\left\vert m\right\rangle\!\left\langle m\right\vert\) is a quantum motif observable which is a quantum motif \(n\)-invariant. Much more can be said about this topic.
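As a toy numerical illustration of the last two definitions, the following sketch (in Python with numpy; the three-symbol motif set, the cyclic-shift group element, and the symbol-counting observable are all our own hypothetical choices, not objects defined in this paper) verifies that an induced permutation operator is unitary and that a diagonal observable built from a motif invariant is left fixed by the adjoint action:

```python
# Toy check of the induced unitaries and of observable invariance.
import itertools
import numpy as np

alphabet = "ab"                      # 'a' plays the role of the trivial symbol
motifs = ["".join(w) for w in itertools.product(alphabet, repeat=3)]
index = {m: i for i, m in enumerate(motifs)}

def induced_unitary(g):
    """Permutation matrix of the unitary induced by a permutation g of motifs."""
    U = np.zeros((len(motifs), len(motifs)))
    for m in motifs:
        U[index[g(m)], index[m]] = 1.0
    return U

# A sample ambient-group element: cyclically shift the symbols of a motif.
g = lambda m: m[1:] + m[0]
U = induced_unitary(g)
assert np.allclose(U @ U.T, np.eye(len(motifs)))    # U is unitary

# An observable counting non-trivial symbols; the count is preserved by g,
# so Omega is a quantum motif invariant for the subgroup generated by g.
Omega = np.diag([m.count("b") for m in motifs])
assert np.allclose(U @ Omega @ U.conj().T, Omega)   # big adjoint invariance
```

The second assertion is exactly the statement \(g\,\Omega\,g^{-1}=\Omega\) specialized to a diagonal \(\Omega\) built from a real valued motif invariant.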
For a more in-depth discussion of this issue, we refer the reader to the references.

We now illustrate the quantization procedure defined above by using it to quantize braids. For each integer \(n\geq2\), let \(\mathbb T^{(n)}\) denote the set of \(2n-1\) symbols
\[
T_{-(n-1)},\ \dots,\ T_{-2},\ T_{-1},\ T_{0},\ T_{+1},\ T_{+2},\ \dots,\ T_{+(n-1)},
\]
called \(n\)-stranded braid tiles, or \(n\)-tiles, or simply tiles. Pictorially, each tile is a small diagram of \(n\) strands: \(T_0\) is the trivial tile with no crossing, and \(T_{\pm j}\) carries the single crossing \(\sigma_j^{\pm1}\) of strands \(j\) and \(j+1\). (The tile pictures and the table pairing each picture with its symbol are omitted here.)

An \((n,\ell)\)-braid mosaic is defined as a sequence of \(n\)-stranded braid tiles of length \(\ell\). We let \(\mathbb B^{(n,\ell)}\) denote the set of all \((n,\ell)\)-braid mosaics; for example, \(T_0\,T_{+2}\,T_{-2}\,T_{+1}\,T_{+2}\) is a \((3,5)\)-braid mosaic. Please note that the set of all \((n,\ell)\)-braid mosaics is a finite set, of cardinality \((2n-1)^{\ell}\).

Let \(\ell'\) and \(\ell\) be positive integers such that \(\ell'\leq\ell\). An \((n,\ell')\)-braid mosaic \(\gamma\) is said to be an \((n,\ell')\)-braid submosaic of an \((n,\ell)\)-braid mosaic \(\beta\) provided \(\gamma\) is a subsequence of consecutive tiles of \(\beta\). The submosaic is said to be at position \(p\) in \(\beta\) if the first (leftmost) tile of \(\gamma\) is the \(p\)-th tile of \(\beta\) from the left; we denote the \((n,\ell')\)-braid submosaic of \(\beta\) at location \(p\) by \(\beta^{p:\ell'}\). The number of \((n,\ell')\)-braid submosaics of an \((n,\ell)\)-braid mosaic is \(\ell-\ell'+1\). (Two examples, the submosaics at positions 2 and 5 of the mosaic above, were displayed in figures that are omitted here.)

Let \(\ell'\) and \(\ell\) be positive integers such that \(\ell'\leq\ell\). For any two \((n,\ell')\)-braid mosaics \(\gamma\) and \(\gamma'\), we define the \(\ell'\)-braid mosaic move at location \(p\) on the set of all \((n,\ell)\)-braid mosaics, denoted by \(\gamma\overset{p}{\longleftrightarrow}\gamma'\), as the map
\[
\beta\longmapsto\left\{
\begin{array}{ll}
\beta\text{ with }\beta^{p:\ell'}\text{ replaced by }\gamma' & \text{if }\beta^{p:\ell'}=\gamma\\
\beta\text{ with }\beta^{p:\ell'}\text{ replaced by }\gamma & \text{if }\beta^{p:\ell'}=\gamma'\\
\beta & \text{otherwise}
\end{array}
\right.
\]
As an example, consider the 2-braid mosaic move at position 3 defined by \(T_{-2}\,T_{+1}\overset{3}{\longleftrightarrow}T_{0}\,T_{-2}\). It sends the \((3,5)\)-braid mosaic \(T_0\,T_{+2}\,T_{-2}\,T_{+1}\,T_{+2}\) to \(T_0\,T_{+2}\,T_{0}\,T_{-2}\,T_{+2}\) (the braid submosaics are switched), sends the latter back to the former, and leaves \(T_0\,T_{+2}\,T_{+1}\,T_{-2}\,T_{+1}\) unchanged (no match at position 3).

The following proposition is an almost immediate consequence of the definition of a braid move: each braid move is a permutation on the set of \((n,\ell)\)-braid mosaics; in fact, it is a permutation which is a product of disjoint transpositions.

Our next objective is to translate all the standard topological moves on braids into braid mosaic moves. To accomplish this, we must first note that there are two types of standard topological moves, i.e. those which do not change the topological type of the braid projection, called planar isotopy moves, and those which do change the topological type of the braid projection but not of the braid itself, called Reidemeister moves.

We begin with the planar isotopy moves. For braid mosaics, there are two types of planar isotopy moves, types 1 and 2 (the defining tables are omitted here). Examples of type 1 and type 2 moves are, respectively,
\[
T_{0}\,T_{+1}\longleftrightarrow T_{+1}\,T_{0}
\qquad\text{and}\qquad
T_{+1}\,T_{+3}\longleftrightarrow T_{+3}\,T_{+1},
\]
the latter on four strands: a crossing commutes past a trivial tile, and two crossings acting on disjoint pairs of strands commute past each other. The numbers of type 1 and type 2 moves can be counted directly.

There are two types of topological moves, i.e. Reidemeister 2 and Reidemeister 3 moves. The Reidemeister 2 moves cancel a crossing against its inverse; an example is
\[
T_{-1}\,T_{+1}\overset{\lambda}{\longleftrightarrow}T_{0}\,T_{0}.
\]
The number of Reidemeister 2 moves can likewise be counted directly. The Reidemeister 3 moves encode the braid relation (the defining table is omitted here); two examples are
\[
T_{-2}\,T_{-1}\,T_{-2}\overset{\lambda}{\longleftrightarrow}T_{-1}\,T_{-2}\,T_{-1}
\qquad\text{and}\qquad
T_{-1}\,T_{-2}\,T_{0}\,T_{0}\overset{\lambda}{\longleftrightarrow}T_{-2}\,T_{-1}\,T_{-2}\,T_{+1}.
\]
The number of Reidemeister 3 moves is given by
\[
\left\{
\begin{array}{lcl}
n(n-2)(6\ell-21) & \text{if} & \ell\geq6\\
n(n-2)(5\ell-16) & \text{if} & \ell=5\\
n(n-2)(3\ell-8) & \text{if} & \ell=4\\
n(n-2)(\ell-2) & \text{if} & \ell=3\\
0 & \text{if} & \ell<3
\end{array}
\right.
\]

At this point, we can define what is meant by the ambient group and the resulting braid mosaic system. We begin by reminding the reader of a fact noted earlier in this paper, namely that each braid move is a permutation on the set of \((n,\ell)\)-braid mosaics. Thus, since planar isotopy and Reidemeister moves are permutations, we can make the following definition: we define the (\((n,\ell)\)-braid mosaic) ambient group \(A(n,\ell)\) as the group of all permutations of the set \(\mathbb B^{(n,\ell)}\) generated by the \((n,\ell)\)-braid planar isotopy and Reidemeister moves.
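In code, a mosaic move is just a guarded two-way substitution. The following sketch (our own illustrative encoding: tiles as the integers \(-(n-1),\dots,n-1\) with \(0\) the trivial tile, and a mosaic as a tuple of tiles; nothing here is prescribed by the text) also checks the involution property underlying the proposition above:

```python
# A braid mosaic move as a guarded two-way substitution on tuples of tiles.
def mosaic_move(beta, p, gamma, gamma_prime):
    """Apply the l'-braid mosaic move (gamma <-> gamma_prime) at position p.

    The submosaic of beta at position p (1-based) is swapped if it equals
    gamma or gamma_prime; otherwise beta is returned unchanged.
    """
    lp = len(gamma)
    sub = beta[p - 1 : p - 1 + lp]
    if sub == gamma:
        return beta[: p - 1] + gamma_prime + beta[p - 1 + lp :]
    if sub == gamma_prime:
        return beta[: p - 1] + gamma + beta[p - 1 + lp :]
    return beta

# The move worked out in the text: T_{-2} T_{+1} <-> T_0 T_{-2} at position 3.
move = lambda b: mosaic_move(b, 3, (-2, +1), (0, -2))
beta = (0, +2, -2, +1, +2)                  # the (3,5)-braid mosaic above
assert move(beta) == (0, +2, 0, -2, +2)     # submosaics switched
assert move(move(beta)) == beta             # the move is an involution
assert move((0, +2, +1, -2, +1)) == (0, +2, +1, -2, +1)  # no match: unchanged
```

Because each such map is an involution fixing everything outside the two matched patterns, it is indeed a product of disjoint transpositions, and the generated group of permutations is exactly the ambient group just defined.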
We need one more definition before we can move to the objective of this section. We define the braid mosaic injection as the map \(\iota:\mathbb B^{(n,\ell)}\longrightarrow\mathbb B^{(n,\ell+1)}\) that appends the trivial tile \(T_0\) to each \((n,\ell)\)-braid mosaic in \(\mathbb B^{(n,\ell)}\). It immediately follows that the braid mosaic injection induces a monomorphism from the \((n,\ell)\)-braid ambient group to the \((n,\ell+1)\)-braid ambient group; this monomorphism is called the braid mosaic monomorphism. We define a braid system of order \((n,\ell)\) as the pair \(\bigl(\mathbb B^{(n,\ell)},A(n,\ell)\bigr)\), where \(\mathbb B^{(n,\ell)}\) is called the set of \((n,\ell)\)-braid mosaics and \(A(n,\ell)\) is called the ambient group. Finally, we define a nested braid mosaic system as the following sequence of sets, groups, injections, and monomorphisms:
\[
\mathbb B^{(n,\ell)}\overset{\iota}{\hookrightarrow}\mathbb B^{(n,\ell+1)}\overset{\iota}{\hookrightarrow}\cdots,
\qquad
A(n,\ell)\overset{\iota}{\hookrightarrow}A(n,\ell+1)\overset{\iota}{\hookrightarrow}\cdots
\]

Our next objective is to define what it means for two braid mosaics to represent the same topological braid. Two braid mosaics \(\beta_1\) and \(\beta_2\) of the set \(\mathbb B^{(n,\ell)}\) are said to be of the same \((n,\ell)\)-braid mosaic type, written \(\beta_1\sim_{n,\ell}\beta_2\), if there exists an element \(g\) of the ambient group which takes \(\beta_1\) to \(\beta_2\), i.e. such that \(g\beta_1=\beta_2\). The braid mosaics \(\beta_1\) and \(\beta_2\) are said to be of the same braid mosaic type, written \(\beta_1\sim\beta_2\), if there exists a non-negative integer \(j\) such that \(\iota^j\beta_1\sim_{n,\ell+j}\iota^j\beta_2\).

We now wish to answer the question: what is meant by a braid mosaic invariant? Let \(\bigl(\mathbb B^{(n,\ell)},A(n,\ell)\bigr)\) be a braid system, and let \(D\) be some yet to be chosen mathematical domain. By an \((n,\ell)\)-braid mosaic invariant, we mean a map \(I:\mathbb B^{(n,\ell)}\longrightarrow D\) such that, when two braid mosaics \(\beta_1\) and \(\beta_2\) are of the same \((n,\ell)\)-type, i.e. when \(\beta_1\sim_{n,\ell}\beta_2\), their respective invariants are equal, i.e. \(I(\beta_1)=I(\beta_2)\). In other words, \(I\) is a map that is invariant under the action of the ambient group, i.e. \(I(g\beta)=I(\beta)\) for all elements \(g\) of \(A(n,\ell)\).

We now use the nested braid mosaic system to construct a nested sequence of quantum braid mosaic systems. For each pair of non-negative integers \(n\) and \(\ell\), the corresponding \((n,\ell)\)-th order quantum braid system consists of a Hilbert space \(\mathcal B^{(n,\ell)}\), called the quantum mosaic space, and a group, also called the ambient group. They are defined as follows. The quantum mosaic space \(\mathcal B^{(n,\ell)}\) is the Hilbert space with orthonormal basis \(\{\left\vert\beta\right\rangle:\beta\in\mathbb B^{(n,\ell)}\}\); the elements of \(\mathcal B^{(n,\ell)}\) are called quantum braids. The ambient group is the unitary group acting on the Hilbert space \(\mathcal B^{(n,\ell)}\) consisting of all linear transformations of the form \(\widetilde g\), where \(\widetilde g\) is the linear transformation defined by
\[
\begin{array}{rcc}
\widetilde g:\mathcal B^{(n,\ell)} & \longrightarrow & \mathcal B^{(n,\ell)}\\
\left\vert\beta\right\rangle\quad & \longmapsto & \left\vert g\beta\right\rangle
\end{array}
\]
Since each element \(g\) in \(A(n,\ell)\) is a permutation, each \(\widetilde g\) permutes the orthonormal basis of \(\mathcal B^{(n,\ell)}\). Hence, \(\widetilde g\) is automatically a unitary transformation. It follows that \(A(n,\ell)\) and \(\widetilde A(n,\ell)\) are isomorphic as groups. We will often abuse notation by denoting \(\widetilde A(n,\ell)\) by \(A(n,\ell)\), and \(\widetilde g\) by \(g\).

Next, for each pair of non-negative integers \(n\) and \(\ell\), let \(\iota\) denote the Hilbert space monomorphism and the group monomorphism induced by the braid mosaic injection and the braid mosaic monomorphism, respectively. Finally, we define the nested quantum braid system as the following sequence of Hilbert spaces, groups, Hilbert space monomorphisms, and group monomorphisms:
\[
\mathcal B^{(n,\ell)}\overset{\iota}{\hookrightarrow}\mathcal B^{(n,\ell+1)}\overset{\iota}{\hookrightarrow}\cdots,
\qquad
A(n,\ell)\overset{\iota}{\hookrightarrow}A(n,\ell+1)\overset{\iota}{\hookrightarrow}\cdots
\]

Let \(\bigl(\mathcal B^{(n,\ell)},A(n,\ell)\bigr)\) be a quantum braid system of order \((n,\ell)\). Two quantum braids \(\left\vert\psi_1\right\rangle\) and \(\left\vert\psi_2\right\rangle\) of the Hilbert space \(\mathcal B^{(n,\ell)}\) are said to be of the same \((n,\ell)\)-braid type, written \(\left\vert\psi_1\right\rangle\sim_{n,\ell}\left\vert\psi_2\right\rangle\), if there exists an element \(g\) of the ambient group which takes \(\left\vert\psi_1\right\rangle\) to \(\left\vert\psi_2\right\rangle\), i.e. such that \(g\left\vert\psi_1\right\rangle=\left\vert\psi_2\right\rangle\). The quantum braids are said to be of the same braid type, written \(\left\vert\psi_1\right\rangle\sim\left\vert\psi_2\right\rangle\), if there exists a non-negative integer \(j\) such that \(\iota^j\left\vert\psi_1\right\rangle\sim_{n,\ell+j}\iota^j\left\vert\psi_2\right\rangle\).
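Because everything here is finite, the classical same-type relation is in principle mechanically decidable by exhausting the orbit of a mosaic under the generating moves. A brute-force sketch (our own, practical only for very small \(n\) and \(\ell\), since \(\mathbb B^{(n,\ell)}\) has \((2n-1)^{\ell}\) elements) is:

```python
# Brute-force decision of the same-(n,l)-braid-mosaic-type relation by
# breadth-first search over the orbit of beta1 under the generating moves.
from collections import deque

def same_type(beta1, beta2, generators):
    """generators: callables mosaic -> mosaic, e.g. closures over mosaic_move,
    one per planar isotopy / Reidemeister move and per admissible position."""
    seen, queue = {beta1}, deque([beta1])
    while queue:
        b = queue.popleft()
        if b == beta2:
            return True
        for g in generators:
            nb = g(b)
            if nb not in seen:
                seen.add(nb)
                queue.append(nb)
    return False
```

The same routine decides the type equality of two basis quantum braids, since the ambient group merely permutes the basis.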
We consider the following question: what do we mean by a physically observable quantum braid invariant? We answer this question with a definition. Let \(\bigl(\mathcal B^{(n,\ell)},A(n,\ell)\bigr)\) be a quantum braid system of order \((n,\ell)\), and let \(\Omega\) be an observable, i.e. a Hermitian operator on the Hilbert space \(\mathcal B^{(n,\ell)}\) of quantum braids. Then \(\Omega\) is a quantum braid \((n,\ell)\)-invariant provided \(\Omega\) is left invariant under the big adjoint action of the ambient group, i.e. provided \(g\,\Omega\,g^{-1}=\Omega\) for all \(g\) in \(A(n,\ell)\). If \(I\) is a real valued \((n,\ell)\)-braid invariant, then \(\Omega=\sum_{\beta}I(\beta)\left\vert\beta\right\rangle\!\left\langle\beta\right\vert\) is a quantum braid observable which is a quantum braid \((n,\ell)\)-invariant. Much more can be said about this topic.

For more examples of the application of the quantization procedure discussed in this paper, we refer the reader to the references below. For knot theory and the braid group, for topological quantum computation, and for quantum computation and information, we refer the reader to the standard literature.

Lomonaco, Samuel J., and Louis H. Kauffman, Quantum knots and mosaics, Quantum Information Processing, Vol. 7, Nos. 2-3 (2008), 85-115. Republished in "Quantum Information Science and Its Contributions to Mathematics," AMS PSAPM/68, American Mathematical Society, Providence, RI (2010), 177-208.
Lomonaco, Samuel J., and Louis H. Kauffman, "Quantum knots and lattices, or a blueprint for quantum systems that do rope tricks," AMS PSAPM/68, American Mathematical Society, Providence, RI (2010), 209-276.
Sarma, Sankar Das, Michael Freedman and Chetan Nayak, Topologically protected qubits from a possible non-Abelian fractional quantum Hall state, Phys. Rev. Letters, Vol. 94 (2005), pp. 166802-1 to 166802-4.
Extending the methods from our previous work on quantum knots and quantum graphs, we describe a general procedure for quantizing a large class of mathematical structures which includes, for example, knots, graphs, groups, algebraic varieties, categories, topological spaces, geometric spaces, and more. This procedure is different from that normally found in quantum topology. We then demonstrate the power of this method by using it to quantize braids. This general method produces a blueprint of a quantum system which is physically implementable in the same sense that Shor's quantum factoring algorithm is physically implementable. Mathematical invariants become objects that are physically observable.
Electromagnetic waves in free space constitute an important part of the calculus based introductory physics course. In addition to other properties, all the textbooks we have surveyed include a discussion of the transverse nature of the waves: specifically, that (plane) electromagnetic waves have the electric field strength \(\mathbf E\) and the magnetic induction \(\mathbf B\) perpendicular to the propagation direction of the wave and to each other, so that \((\hat{\mathbf E},\hat{\mathbf B},\hat{\mathbf S})\) is an even permutation of a right-handed triad, where \(\mathbf S=\mathbf E\times\mathbf B/\mu_0\) is the Poynting vector. The extent of the coverage and the arguments brought in the textbooks vary widely. While we have not checked all textbooks, none of the ones we have surveyed includes a full and detailed theoretical argument for these important properties, even though all textbooks include the infrastructure needed for that purpose, i.e. the (integral) Maxwell equations. Most textbooks derive the wave equations for the transverse components of the \(\mathbf E\) and \(\mathbf B\) fields, but do not show that these wave equations are unique, i.e. that a longitudinal component violates the Maxwell equations. That is, they show that transverse waves are _consistent_ with the Maxwell equations, but not that they are required.

We make the case here that the theoretical argument is important for students of the calculus based introductory course. Much research has shown that understanding and retention of physics are enhanced if important concepts are not brought in just as factoids, but rather derived from theoretical and physical argumentation. In the case of transversality of (plane) electromagnetic waves our case is even stronger, as the argument is certainly at the right level for students of such courses. Indeed, a general demonstration without assuming planar symmetry is clearly beyond the scope of the introductory course, and rightfully belongs to more advanced courses. However, the physics and mathematics needed for the demonstration for plane electromagnetic waves are readily available to students of the introductory course, and in fact are already included in most textbooks we surveyed. Therefore, no new physics or mathematics is needed to demonstrate the transversality of (plane) electromagnetic waves in the introductory course. All that is needed is to arrange the arguments that are already in the student's toolkit to achieve a deeper level of understanding.

The textbooks we have surveyed (and again, we emphasize that our literature search was not exhaustive, as the number of textbooks is indeed very large; we do believe we have found the tendencies of existing textbooks) fall under the following categories. Category A: textbooks that postulate the transversality (but do not derive it), and then show consistency with the Maxwell equations (either all four, or just the dynamical ones). Category B: textbooks that only use the two dynamical equations (Faraday's law and the Ampère-Maxwell law) to derive the wave equations for the (transverse) components of the \(\mathbf E\) and \(\mathbf B\) fields, but do not show that _only_ transverse fields are allowed. Category C: textbooks that add to the treatment of category B also a demonstration that a longitudinal component of the field is disallowed (but not that a transverse component is consistent with the Maxwell equations).
Interestingly, even introductory books that discuss the differential Maxwell equations do not include a demonstration of the transversality of electromagnetic waves (we have not surveyed more advanced books, appropriate for upper division courses on electromagnetism). None of the textbooks we have surveyed includes physical arguments for the transversality of electromagnetic waves, even though all of them discuss polarization. On the other hand, none of the textbooks we have surveyed includes a discussion of the impossibility of genuine spherical electromagnetic waves (either a monopole or a combination of multipoles). Either polarization or hypothetical spherical electromagnetic waves may be used in physical arguments for the transversality of electromagnetic waves. This omission is surprising, as building strong physical intuition is widely considered a primary goal of such courses. Many of the properties of electromagnetic waves can be demonstrated also in the algebra based introductory course, but examination of these arguments and their use in textbooks is beyond the scope of this paper.

Our theoretical argument is by no means original, and surely has been made numerous times. It is, however, absent in its entirety from all textbooks we have surveyed. Even if it does appear in textbooks we have not surveyed, we believe it is fair to state that most textbooks do not include the full argument, and that similarly most calculus based introductory physics courses do not discuss it.

In advanced courses the transversality of electromagnetic waves is easily shown from the Maxwell equations, the wave equation for the vector potential in the Lorenz gauge, and the skew symmetry of the Maxwell Faraday tensor. Consider, say, a transverse vector potential for a wave propagating in the \(x\) direction. The only nonvanishing independent components of the Maxwell Faraday tensor are then a single electric component and a single magnetic component, and the resulting \(\mathbf E\) and \(\mathbf B\) fields manifestly satisfy the aforementioned properties, although the formality of the demonstration, in addition to being inappropriate for the introductory course, is devoid of physical insight. (One may of course try to impose longitudinality by taking the vector potential along the propagation direction. One then finds that a longitudinal field component is non-zero, and one might be tempted to identify it with a longitudinal wave. However, the postulated vector potential is not a solution of the Maxwell equations in the Lorenz gauge, as can be readily verified.) One can also show the transversality without invoking tensor analysis, using only vector analysis methods. The demonstration using elementary methods is important, as it involves more physical insight than the advanced, more formal proof.

Notably, electromagnetic waves in matter do not have to be transverse. Longitudinal electromagnetic waves (or combinations of longitudinal and transverse waves) indeed exist in inhomogeneous media and in cavities (because of the cavity's boundary conditions). One may also construct in vacuum electromagnetic field configurations that do not obey all of the above relations, but those are standing waves (with vanishing Poynting vector). These interesting cases are typically beyond the scope of the introductory course.
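To make the gauge-based remark above concrete, here is one way the computation can be written out (a sketch; the particular transverse choice of potential, with an arbitrary profile \(f\), is ours for illustration). With scalar potential \(\varphi=0\) and
\[
\mathbf A=f\!\left(t-\tfrac{x}{c}\right)\hat{\mathbf y},
\qquad
\mathbf E=-\frac{\partial\mathbf A}{\partial t}=-f'\,\hat{\mathbf y},
\qquad
\mathbf B=\boldsymbol\nabla\times\mathbf A=-\frac{f'}{c}\,\hat{\mathbf z},
\]
the Lorenz condition \(\partial_t\varphi/c^2+\boldsymbol\nabla\!\cdot\!\mathbf A=0\) holds automatically, \(\mathbf E\perp\mathbf B\perp\hat{\mathbf x}\), \(|\mathbf E|=c|\mathbf B|\), and \(\mathbf E\times\mathbf B=(f'^2/c)\,\hat{\mathbf x}\) points along the propagation direction. By contrast, the longitudinal attempt \(\mathbf A=f(t-x/c)\,\hat{\mathbf x}\) (still with \(\varphi=0\)) gives \(\boldsymbol\nabla\!\cdot\!\mathbf A=-f'/c\neq0\), so the Lorenz condition fails, consistently with the parenthetical remark above.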
In this paper we present a full argument for the transversality of (plane) electromagnetic waves in vacuum. We are hopeful that instructors of calculus based introductory physics courses will present a full argument in their classes. We use SI units despite their awkwardness (for the description of electrodynamics), as this appears to be the nearly universal practice in textbooks. We use only the integral form of the Maxwell equations, as nearly all introductory textbooks (for _calculus_ based physics) refrain from introducing differential operators. In section [phys_arg] we describe physical arguments for the transversality of electromagnetic waves, based on polarization and on the lack of monopole radiation. In section [theor_arg] we present a full argument at a level appropriate for the introductory calculus based physics course, as follows: first, we show that \(\mathbf E\) cannot have a longitudinal component, and that it may have a transverse component. We then repeat the arguments for the \(\mathbf B\) field. We then show that the \(\mathbf E\) and \(\mathbf B\) fields are orthogonal to each other and to the direction of propagation, and finally we show that the \(\mathbf E\) and \(\mathbf B\) fields satisfy the wave equation. We emphasize that the only assumption we make is that of planar symmetry. Except for this assumption (which makes the derivation possible for the introductory course), our argument is general.

Many introductory courses include a demonstration of polarization that uses two polaroid films, so that one can be rotated relative to the other. When the polarization directions of the two polaroids are orthogonal, no light goes through the polaroids, and the image on the screen is dark. Assume there were a longitudinal component to the \(\mathbf E\) field. This component may be transmitted through either polaroid, so that a longitudinal component would arrive at the screen through both polaroids even when they are orthogonal. Therefore, with a longitudinal component the screen would never get totally dark when the two polaroids are in a plane perpendicular to the direction of propagation of the light. A variant of this argument was given by Schutz, who considers the two polarizers rotating relative to each other in the plane orthogonal to the direction of propagation. As the brightness on the screen oscillates with the polarizers, one must conclude that the electric field acts across the direction of motion, i.e. transversally.
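The two-film argument can also be phrased quantitatively. If, hypothetically, a longitudinal component passed through a polaroid regardless of its orientation and contributed a transmitted intensity \(I_\ell\) (our notation, introduced only for this estimate), then Malus's law for the pair of films would be modified to
\[
I(\theta)=I_t\cos^2\theta+I_\ell\,,
\]
so that crossed polaroids (\(\theta=\pi/2\)) would transmit \(I(\pi/2)=I_\ell\neq0\), in contradiction with the observed extinction. The observed \(I(\pi/2)=0\) therefore forces \(I_\ell=0\), i.e. no longitudinal component.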
In order to make our argument compelling, students need to have at least a qualitative understanding of how simple dichroic devices such as a wire grid polaroid work (most introductory texts do not discuss in depth other polarizers, such as beam splitting polarizers, birefringent polarizers, or polarization by reflection). Such understanding may be expected from students of the relevant course. There are three microscopic mechanisms that filter out the component of the \(\mathbf E\) field in the plane of the polaroid parallel to the direction of the wires (or of the polyvinyl alcohol polymers doped with an iodine solution in a commercial polaroid). First, by the Lorentz force law, conduction electrons are accelerated along the wires and oscillate with the oscillating field. When colliding with lattice atoms, kinetic energy is transferred to the lattice, thereby transferring energy from the parallel component of the field to the lattice ("Joule heating"). Second, the oscillating conduction electrons reradiate in all directions, so that half the reradiation is backward (reflection) and the remainder forward; the backward reradiation takes away more energy from the parallel component of the field. Third, the forward reradiation is not entirely in the direction of the original incident plane wave, so that it scatters and little of it arrives at the detector. Most importantly, the forward reradiation is generally out of phase with the incident field (a phase difference of \(\pi\) radians), thereby reducing it further by destructive interference.

Some textbooks explain the polarization effect by stretching a mechanical model too far. In such texts the electric field is modeled by a tension wave in a string that passes through a fence with parallel beams. The mechanical wave naturally is transmitted only if the string oscillates parallel to the beams. This argument might imply that the wrong component of the field is transmitted. For this reason, those texts refer to the polarization direction of the polarizer, stating that perpendicular components are filtered out. But this argument may then persuade some students to believe that a longitudinal component of the field is also filtered out. This sort of argumentation, in addition to not empowering the students with an understanding of how simple dichroic devices work, also might prevent them from understanding a simple physical argument for why electromagnetic waves are transverse, and gives the students a misleading physical picture of electromagnetic fields oscillating in the space between the wires.

Consider a hypothetical monopole electromagnetic wave. Then a radially pulsating spherical charge distribution would emit such spherical outgoing waves. As the fundamental equations of physics are symmetrical under time reversal (\(t\to-t\)), an incoming imploding spherical monopole wave would make an otherwise static spherical charge distribution pulsate radially. Each charge pulsates radially because of a radial force acting on it. The form of the Lorentz force law then implies that there were a radial \(\mathbf E\) field acting on the charges. But a radial field is perpendicular to the imploding spherical wave front and therefore in the same direction as its motion, so that the field would have a longitudinal component. This qualitative argument implies that a monopole electromagnetic wave is necessarily longitudinal. (While students of the introductory course generally have not studied multipole expansions of the electromagnetic field, the argument may still be used in reference to a truly spherical wave front.)
We do know, however, that outside a spherical charge distribution the electromagnetic field is static, as the monopole piece of the electromagnetic field is non-radiative. Students are exposed to this argument also in Newtonian gravity: because of the inverse square force law (common to both Newton's and Coulomb's laws), the field strength of any spherical charge (or mass) distribution is the same as if all the charge (or mass) were concentrated at the center. Specifically, the electric field outside a radially pulsating spherical charge distribution of total charge \(q\) is the static field \(\mathbf E=q\,\hat{\mathbf r}/(4\pi\epsilon_0 r^2)\). Therefore, the longitudinal component of the field cannot exist, as such a longitudinal component would have to be radiated by a pulsating spherical charge distribution. This physical argument may be cast in a more mathematical form, using a theorem from vector calculus which states that no continuous unit two-dimensional (tangent) vector field may exist on the entire sphere. This theorem is normally beyond the scope of the introductory course. In addition, it requires the assumption of transversality (namely, that the field is a two-dimensional vector field tangent to the sphere). However, it may be used to argue that any such spherical electromagnetic wave would have to be longitudinal.

Consider a plane electromagnetic wave traveling in vacuum in the direction of \(\hat{\mathbf x}\). (Our coordinates may be rotated to this orientation.) Because of the planar symmetry, the most general electric field strength is \(\mathbf E=\mathbf E(x,t)\). In what follows we omit the explicit time dependence of the fields. We first show there can be no \(E_x\) component: construct a Gaussian surface enclosing a volume \(dx\,dy\,dz\) as in fig. [gauss1]. According to Gauss's law,
\[
\oint\mathbf E\cdot d\mathbf A=\frac{Q}{\epsilon_0}\,,
\]
where \(Q\) is the total charge inside the Gaussian surface, and \(\epsilon_0\) is the permittivity of vacuum. Here, \(d\mathbf A\) is a surface element normal to the (orientable) surface, defined conventionally such that it is positive when pointing outward. Calculate the integral on the lhs: take \(\mathbf E=E_x(x)\hat{\mathbf x}+E_y(x)\hat{\mathbf y}+E_z(x)\hat{\mathbf z}\). Then
\[
\oint\mathbf E\cdot d\mathbf A=\left[E_x(x+dx)-E_x(x)\right]dy\,dz=0
\]
as there are no charges. The components \(E_y\) and \(E_z\) identically do not contribute to the lhs integral, as these components are not functions of \(y\) or \(z\), respectively, and as they are in the plane of the faces of the surface whose normals are in the direction of \(\hat{\mathbf x}\). Notice that there is no flux of \(E_x\,\hat{\mathbf x}\) through the faces of the Gaussian surface in the \(\hat{\mathbf y}\) or \(\hat{\mathbf z}\) directions because of the orthogonality of \(E_x\,\hat{\mathbf x}\) to those faces' normals. This immediately implies \(\partial E_x/\partial x=0\), so that \(E_x\) does not obey a non-trivial wave equation. Therefore, the most general electric field is transverse, \(\mathbf E=E_y(x)\hat{\mathbf y}+E_z(x)\hat{\mathbf z}\). Without loss of generality, we may rotate our coordinate system so that \(\mathbf E=E_y(x)\hat{\mathbf y}\).

Our demonstration depends on the lack of charges. The presence of charges requires a non-vanishing gradient of \(E_x\). Indeed, Gauss's law would not be violated for longitudinal waves without assuming vacuum, as there are charges and inhomogeneities in matter. Therefore, this demonstration does not rule out longitudinal waves in matter or in inhomogeneous media.

Fig. [gauss1] caption: The \(x\) components of either \(\mathbf E\) or \(\mathbf B\) have to be independent of \(x\); the arrows indicate a case in which they are not. Then the surface integral must be nonzero, and proportional to their derivatives. In the case of \(\mathbf E\) this implies free charges inside the box, i.e. a continuous charge distribution, which contradicts the vacuum assumption. In the case of \(\mathbf B\) the distribution would be of magnetic monopoles.
applying gauss s lawdoes not yield a restriction on , because the lhs of the integral vanishes identically , as : \,dx\,dz=0\ , .\ ] ] this step may of course be combined with the preceding one by taking , and showing that gauss s law becomes . in conclusion ,a transverse component for is consistent with gauss s law .of course , were there not a transverse component , the solution would become trivial ( no electromagnetic wave ) .therefore , for any electromagnetic wave a transverse field is necessary . or leads to no contradiction : as the fields are functions of only the spatial coordinate ( in addition to the time ) , the surface integral of the fields vanish along all faces , either because of the field being parallel to the face ( faces parallel to the and planes ) or because the field exiting the surface completely balances the field entering ( faces parallel to the plane ) .it is therefore important that the fields are only functions of , otherwise the cancellation would not be exact . ,width=326 ] we next use similarly the magnetic gauss law , the same argument as in step 1 implies that , so that also does not have a longitudinal component .therefore , .but we have no more freedom to rotate the coordinate system , a freedom which we have already exhausted in rotating it so that . we may therefore not set any of the components of equal to zero . as there are no magnetic monopoles , the field has to be transverse also in matter .consider next a field .applying gauss s law does not yield a restriction on or on , because the lhs of the integral vanishes identically , as and : \,dx\,dz\\ & + & [ b_z(x , z+\,dz)-b_z(x , z)]\,dx\,dy=0\ , .\end{aligned}\ ] ] consider now an ampre loop enclosing an area as in fig .[ ampere1 ] . the ampre maxwell law is where is the permeability of free space , is the total conduction current through the loop , and is the flux of the field through it . here , is an element of the curve along the loop , conventionally taken to be positive counterclockwise . doing the integral on the lhs , \,dy=\frac{\,\partial b_y}{\,\partial x}\,dx\,dy=0\ , , \ ] ] as the component is perpendicular to the entire loop , and as there are no electric field flux through the loop ( because the field in in the direction of ) and also no free currents .therefore , .therefore , we find that .we therefore find that , i.e. , the fields are orthogonal to each other and also to the direction of propagation .in addition , , i.e. , the cross product of the fields is in the direction of propagation of the wave .component of the field is perpendicular to the entire loop , so that it does not contribute to the line integral around it . when applied to the ampre maxwell law , an electric field in the direction of has zero flux through the loop .the line integral around the loop is proportional to the derivative of the component of .when applied to the faraday law , the derivative of the component of is proportional to the time derivative of the component of .the arrow indicates a component to either or .,width=326 ] this part of the argument appears in most textbooks we have surveyed .however , without the preceding parts of the argument it is merely a demonstration of consistency of the transverse wave equation with the maxwell equations , not their unavoidability .we now use the loop in fig .[ ampere2 ] to evaluate the ampre maxwell law . 
On the lhs,
\[
\oint\mathbf B\cdot d\mathbf s=\left[B_z(x+dx)-B_z(x)\right]dz=\frac{\partial B_z}{\partial x}\,dx\,dz\,.
\]
The flux of the \(\mathbf E\) field is simply \(\Phi_E=E_y\,dx\,dz\), so that the rhs is evaluated to equal \(\mu_0\epsilon_0(\partial E_y/\partial t)\,dx\,dz\), and with the orientation convention of fig. [ampere2] the Ampère-Maxwell law yields
\[
\frac{\partial B_z}{\partial x}=-\mu_0\epsilon_0\,\frac{\partial E_y}{\partial t}\,.
\tag{eq1}
\]

Fig. [ampere2] caption: The \(B_z\) component of the \(\mathbf B\) field is in the plane of the loop. The line integral around the loop is proportional to the derivative of the \(B_z\) component of \(\mathbf B\), which by the Ampère-Maxwell law is proportional to the time derivative of the electric field's flux through it. The arrow indicates the \(B_z\) component of the field. The \(\mathbf E\) field (which is in the direction of \(\hat{\mathbf y}\)) is not shown.

We next consider again the Ampère loop in fig. [ampere1], but this time apply it to the Faraday law,
\[
\oint\mathbf E\cdot d\mathbf s=-\frac{d\Phi_B}{dt}\,,
\]
where \(\Phi_B\) is the flux of the \(\mathbf B\) field through the loop. The lhs is evaluated as
\[
\oint\mathbf E\cdot d\mathbf s=\left[E_y(x+dx)-E_y(x)\right]dy=\frac{\partial E_y}{\partial x}\,dx\,dy\,.
\]
The rhs is \(-(\partial B_z/\partial t)\,dx\,dy\), so that
\[
\frac{\partial E_y}{\partial x}=-\frac{\partial B_z}{\partial t}\,.
\tag{eq2}
\]
Many introductory texts show that by differentiating eqs. ([eq1]) and ([eq2]) and eliminating the mixed second derivative terms, one can obtain wave equations. With the identification \(c=1/\sqrt{\mu_0\epsilon_0}\), these equations lead to
\[
\frac{\partial^2E_y}{\partial x^2}=\frac{1}{c^2}\,\frac{\partial^2E_y}{\partial t^2}
\qquad\text{and}\qquad
\frac{\partial^2B_z}{\partial x^2}=\frac{1}{c^2}\,\frac{\partial^2B_z}{\partial t^2}\,.
\]

To show the relation between the magnitudes of the \(\mathbf E\) and \(\mathbf B\) fields we present an adaptation of an argument originally made by Feynman. Consider an infinite planar current sheet, carrying current density \(\mathbf K\). A current sheet is a two dimensional surface carrying current, and can be approximated by a large number of wires, all carrying current in the same direction, so that the total current per unit length (perpendicularly to the wires) is \(K=I/L\), where \(I\) is the total current included in the length \(L\). Current sheets are a very useful model in magnetohydrodynamics (MHD) and in heliophysics.

We first recall the field of a stationary current sheet. Consider a current sheet aligned as in fig. [current_sheet]. To find the \(\mathbf B\) field outside the current sheet we construct an Ampère loop whose plane is perpendicular to the plane of the current sheet. The \(\mathbf B\) field must be parallel to the plane of the current sheet, and perpendicular to the direction of the current. (One may be convinced of that result by considering the elementary problem of a current wire, and considering the current sheet as the limiting case of infinitely many such current wires.) As this problem is stationary, we may use the original form of the Ampère law (no time changing flux of an electric field), i.e.
\[
\oint\mathbf B\cdot d\mathbf s=\mu_0 I\,,
\]
where \(I\) is the total current going through the loop, whose length parallel to the sheet is \(L\). Because the current sheet is infinite, the magnitude of the field strength is constant along the legs of the loop parallel to the sheet. The lhs of the Ampère law then is \(2BL\), and the rhs is simply \(\mu_0KL\). Putting the two expressions together, one finds that \(B=\mu_0K/2\). Notably, \(B\) is independent of the distance from the current sheet.

Fig. [current_sheet] caption: An infinite current sheet carrying current density \(\mathbf K\). The direction of the \(\mathbf B\) field is as indicated. The Ampère loop lies in a plane perpendicular to the sheet.

Consider next a current sheet that is abruptly turned on at time \(t=0\). That is, \(K(t)=K\,\Theta(t)\), where \(\Theta\) is the Heaviside step function. Before the current is turned on, the field is zero everywhere. Following the abrupt turning on of the current, a non-zero \(\mathbf B\) field starts to fill up space, the wavefront propagating at the wave speed \(c\) found above. Therefore, we have a propagating wavefront, such that in front of the wavefront the field vanishes, and behind it the field is uniform, as shown above. To find the accompanying \(\mathbf E\) field we use the Faraday law for the Ampère loop in fig. [wave_front]. Only the part of the Ampère loop behind the wavefront has non-vanishing flux.
To show the relation between the magnitudes of the $\vec{E}$ and $\vec{B}$ fields we present an adaptation of an argument originally made by Feynman. Consider an infinite planar current sheet, carrying current density $\vec{K}$. A current sheet is a two-dimensional surface carrying current, and can be approximated by a large number of wires, all carrying current in the same direction, so that the total current per unit length (perpendicular to the wires) is $K=I/L$, where $I$ is the total current included in the length $L$. Current sheets are a very useful model in magnetohydrodynamics (MHD) and in heliophysics.

We first recall the field of a stationary current sheet. Consider a current sheet aligned as in Fig. [current_sheet]. To find the $\vec{B}$ field outside the current sheet, we construct an Ampère loop whose plane is perpendicular to the plane of the current sheet. The $\vec{B}$ field must be parallel to the plane of the current sheet and perpendicular to the direction of the current. (One may be convinced of that result by considering the elementary problem of a current wire, and considering the current sheet as the limiting case of infinitely many such current wires.) As this problem is stationary, we may use the original form of the Ampère law (no time-changing flux of an electric field), i.e., $\oint\vec{B}\cdot d\vec{\ell}=\mu_0 I$, where $I$ is the total current going through the loop, whose length perpendicular to the current is $L$. Because the current sheet is infinite, the magnitude of the field strength is constant along the legs of the loop. The LHS of the Ampère law then is $2BL$, and the RHS is simply $\mu_0KL$. Putting the two expressions together, one finds that $B=\mu_0K/2$. Notably, $B$ is independent of the distance from the current sheet.

[Figure: An infinite current sheet, carrying current density $\vec{K}$. The direction of the $\vec{B}$ field is as indicated. The Ampère loop lies in the plane perpendicular to the sheet.]

Consider next a current sheet that is abruptly turned on at time $t=0$; that is, $K(t)=K\,\Theta(t)$, where $\Theta$ is the Heaviside step function. Before the current is turned on, the field is zero everywhere. Following the abrupt turning on of the current, a non-zero $\vec{B}$ field starts to fill up space, the wavefront propagating at a speed $v$. Therefore, we have a propagating wavefront: in front of the wavefront the field vanishes, and behind it the field is uniform, as shown above. To find the accompanying $\vec{E}$ field we use the Faraday law for the Ampère loop in Fig. [wave_front]. Only the part of the Ampère loop behind the wavefront has non-vanishing flux. As the wavefront is propagating with speed $v$, the area of the loop in which there is a $\vec{B}$ field is increasing, so that there is a time-changing flux through the loop. The $\vec{B}$ field between the wavefront and the current sheet is parallel to the current sheet and counter-parallel to the current, with magnitude $B=\mu_0K/2$ (see Fig. [wave_front]). Using the Faraday law, one finds that the LHS equals $EL$, and the flux of the $\vec{B}$ field through the loop at time $t$ is $BLvt$, so that the RHS of the Faraday law equals $BLv$ in magnitude. Setting the two sides equal to each other, we find $E=vB$, or, as the wave propagates at $v=c$, $$E=cB\,.$$ Although our argument was made only for the configuration of a current sheet that is abruptly turned on, the result that $E=cB$ for an electromagnetic wave is general. The general result, however, is beyond the scope of the introductory course. An important consequence is that in an electromagnetic wave the $\vec{E}$ and $\vec{B}$ fields are in phase: as $E=cB$ at any given time, when $E$ is maximal so is $B$; when $E$ vanishes, $B$ is zero too, etc. The result that $E=cB$ appears in many introductory texts. However, in such books an extra assumption is typically made, namely that the fields are harmonic and in phase. Students of the introductory course, who normally are not familiar with Fourier theory, often find it unconvincing to base an important general physical result on the mathematical properties of sinusoidal waves, in addition to having to memorize yet another factoid, namely that the fields are in phase.

At this point one is in a position to make the following observation. We found that $E=cB$, so the ratio $E/B$ for an electromagnetic wave equals $c$, which depends in magnitude on the system of units: in SI units this ratio is about $3\times10^{8}$ (in m/s), whereas in Gaussian cgs units it is $1$. It is therefore natural to use units that put the $\vec{E}$ and $\vec{B}$ fields on an equal footing, that is, units in which distance is measured in light-seconds. In such units the speed of light is one (light-)second per second, or simply $c=1$; that is, in these units $E=B$. This choice of units becomes further motivated physically when one considers the energy density in the electromagnetic field (not necessarily that of an electromagnetic wave), $u=\epsilon_0E^2/2+B^2/(2\mu_0)$. For an electromagnetic wave $E=cB$, so that the energy densities stored in the $\vec{E}$ and $\vec{B}$ fields are equal. It is therefore suggestive to make the two fields symmetrical by using units in which $c=1$. It is often instructive to remind students that SI or cgs units were developed because of their convenience in describing everyday phenomena involving humans, and while all unit systems are in principle equivalent, these are not necessarily the units in which physical phenomena take their simplest or most natural form.

[Figure: An infinite current sheet, carrying current density $\vec{K}$ as in Fig. [current_sheet]. The current sheet is turned on abruptly, so that plane waves propagate with speed $v$ to the right of the current sheet and to its left. The wavefront (WF) is shown only to the right of the current sheet.
At time $t$ the wavefront has touched the dashed segment on the far side of the Ampère loop. The space between the wavefront and the current sheet has a uniform $\vec{B}$ field, and ahead of the wavefront the $\vec{B}$ (and $\vec{E}$) field vanishes. The only part of the loop with non-vanishing field is behind the wavefront, shown shaded.]

We have therefore shown that electromagnetic waves are transverse, and that the electric and magnetic fields are perpendicular to each other, with the directions such that $\vec{E}\times\vec{B}$ is in the direction of propagation. We have also shown that the two fields have the same speed, that they are in phase, and that the magnitudes of $\vec{E}$ and $\vec{B}$ are related by $E=cB$. This demonstration makes use of only the integral Maxwell equations; it is appropriate for the level of most calculus-based physics courses and includes only arguments already available to the prospective students. We believe its main strength is that it avoids giving the students factoids without deep understanding, and instead empowers the student to gain deeper insight.

The author wishes to thank Richard Price for discussions. This work has been supported by NASA/GSFC grant No. NCC5580, by NASA/SSC grant No. NNX07AL52A, and by NSF grant No. PHY0757344.

References: R. D. Knight, Physics for Scientists and Engineers with Modern Physics: A Strategic Approach (Pearson Addison Wesley, 2004); H. D. Young and R. A. Freedman, Sears and Zemansky's University Physics, 12th ed. (Pearson Addison Wesley, 2007); R. Wolfson and J. M. Pasachoff, Physics with Modern Physics for Scientists and Engineers, 3rd ed. (Addison Wesley, 1999); P. A. Tipler and G. P. Mosca, Physics for Scientists and Engineers, 5th ed. (W. H. Freeman, 2004); S. M. Lea and J. R. Burke, Physics: The Nature of Things (Brooks/Cole, 1997); P. M. Fishbane, S. Gasiorowicz, and S. T. Thornton, Physics for Scientists and Engineers, 3rd ed. (Addison Wesley, 2005); R. L. Reese, University Physics (Brooks/Cole, 2000); R. A. Serway and R. J. Beichner, Physics for Scientists and Engineers, 5th ed. (Saunders, 2000); E. M. Purcell, Electricity and Magnetism: Berkeley Physics Course, Vol. 2, 2nd ed. (McGraw-Hill, 1985); R. P. Feynman, R. B. Leighton, and M. L. Sands, The Feynman Lectures on Physics (Addison Wesley, 1989).
Introductory calculus-based physics textbooks state that electromagnetic waves are transverse and list many of their properties, but most such textbooks do not bring forth arguments for why this is so. Here, we discuss two physical arguments (based on polarization experiments and on the lack of monopole electromagnetic radiation) and the full argument for the transversality of (plane) electromagnetic waves based on the integral Maxwell equations. Both the physical and the theoretical arguments are at a level appropriate for students of courses based on such books, and could readily be used by instructors of such courses. We also show, at a level appropriate for the introductory course, why the electric and magnetic fields in a wave are in phase, and how their magnitudes are related.
Accurate inferences of the solar meridional flow are important for modelling the solar dynamo. At or near the surface, the magnitude of the meridional flow is known to a good extent from a variety of methods, such as Doppler shifts and feature tracking, time-distance helioseismology, ring-diagram analysis, and Fourier–Hankel analysis. The nature of the meridional flow in deeper layers, below about 0.9 solar radii, still remains under debate. Such measurements have been obtained using the tracking of supergranules and helioseismic techniques such as time-distance helioseismology, Fourier–Hankel analysis, and global helioseismology. The conclusions on the global pattern of the meridional flow differ greatly, ranging from multiple cells in depth, and multiple cells in depth and latitude, to a single-cell picture with a return flow starting in rather shallow layers (at about 0.9 solar radii) or in deeper regions (below 0.85 solar radii). However, there are several details to keep in mind concerning these inversion results: uncertainties due to systematic effects are likely to be larger than the random errors in the inversion results, the results were obtained using different instruments, and they cover different periods in time. Furthermore, the impact of the radial flow component on the travel times was not taken into account in some of these results.

In time-distance helioseismology, flows can be inferred using sensitivity functions (kernels), which are a model for the impact of the flows on travel-time measurements of acoustic waves. Measurements of the deep meridional flow have so far been done using the rather classical ray approximation. In this model, travel times are assumed to be sensitive to flows only along an infinitely thin ray path which connects the two observation points. Recently, a Born approximation model for the travel-time measurements was extended from Cartesian to spherical geometry, and an alternative approach for computing Born kernels was proposed very recently as well. These developments permit the use of Born approximation kernels for inferring the deep meridional flow. In the Born approximation, the full wave field in the solar interior is modelled using a damped wave equation which is stochastically excited by convection. This wave equation is solved at zero order and in its first-order perturbation, which includes advection in the presence of a flow field. When modelling travel times using the Born approximation, the advection and first-order scattering of the wave field at any location inside the Sun are thereby taken into account. The ray approximation is expected to be accurate if the underlying flow field does not vary on length scales smaller than a wavelength; corresponding estimates of this scale for flows at the bottom of the convection zone exist in the literature. If the flow varies on smaller length scales, the Born approximation is thought to be more accurate.
In addition to modelling the perturbation to the full wave field in the solar interior, an advantage of the Born approximation is that it also provides a model for additional observational quantities, such as disc-averaged cross-covariances and mean power spectra, which is not the case for the ray approximation. The accuracy of the model can therefore easily be validated. Several methods for inferring the deep solar meridional flow have been validated using a linear numerical simulation of the solar interior wave field which includes a standard single-cell meridional flow profile. The same simulation is used in this paper to validate the use of spherical Born kernels for inferring solar meridional flows. In Section [secborn], we give a short introduction to the Born approximation as used in time-distance helioseismology. Section [secfast] includes the details of a numerical optimization, which was necessary in order to obtain the required number of kernels in an acceptable amount of time. The main comparison of the Born approximation model with artificial data obtained from the simulation is presented in Section [secfwdcomp]. In Section [secradial], we discuss the relevance of the radial flow component for the measured travel times. Furthermore, we show inversion results for these data obtained with the Born kernels in Section [secinversions]. Conclusions are presented in Section [secconclusions].

We first briefly summarize the Born approximation model which is to be validated in the following sections. The measurement process in time-distance helioseismology is modelled as closely to observations as possible. The objective of the model is to give a linear relationship between a small-magnitude flow field $\mathbf{v}(\mathbf{r})$ in the solar interior and the perturbation $\delta\tau$ it causes to a mean observed or background-model travel time, according to $$E[\delta\tau]=\int_\odot\mathbf{K}(\mathbf{r})\cdot\mathbf{v}(\mathbf{r})\,\mathrm{d}^3\mathbf{r}\,,\quad\text{(eqkernelgoal)}$$ where $E[\,\cdot\,]$ denotes the expectation value of a stochastic quantity. Vector quantities are printed in bold throughout this work. Once an expression for the sensitivity function $\mathbf{K}$ is found and travel times are measured, equation (eqkernelgoal) can be used to infer solar interior flows.

In order to achieve this goal, a model is first developed for an unperturbed, spherically symmetric, non-rotating Sun (zero-order problem). The stochastically driven and damped wave equation for solar oscillations is solved by expansion into solar eigenmodes. The resulting wave field at a given location and time can be used to model the observed Doppler signal, where the line of sight is assumed to be radial. After a spherical harmonic transform and a Fourier transform (indicating the Fourier transform by the use of the frequency $\omega$ instead of the time $t$), a power spectrum can be computed both from the observations and from the model, where the filtered coefficients are obtained by multiplying by a filter function. After reconstructing the filtered Dopplergram time series, cross-covariances are computed, which are used for fitting travel times from observations (one such fitting procedure is hereafter referred to as GB04). As our goal is to measure flows, we use travel-time differences, and the terms travel time and travel-time difference are used interchangeably in this work.
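To make the measurement step above concrete, the following minimal sketch (ours, with synthetic signals; not the paper's actual pipeline) computes the cross-covariance of two filtered Doppler signals via FFT and reads off the travel-time lag:

```python
# Toy time-distance measurement: cross-covariance of two signals,
# computed via FFT, from which a travel-time lag is then determined.
import numpy as np

n, dt = 4096, 45.0                       # samples and cadence [s] (assumed)
rng = np.random.default_rng(1)
phi1 = rng.standard_normal(n)            # Doppler signal at point 1
phi2 = np.roll(phi1, 12) + 0.5 * rng.standard_normal(n)  # delayed copy + noise

F1, F2 = np.fft.rfft(phi1), np.fft.rfft(phi2)
cc = np.fft.irfft(np.conj(F1) * F2)      # circular cross-covariance vs. lag
lag = np.argmax(cc)
print(f"measured lag: {lag*dt:.0f} s (input: {12*dt:.0f} s)")
```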
In order to model the effect of a flow field on the travel times, the zero-order model for the mean Sun is perturbed. The flow is assumed to be small, and thus only linear effects are taken into account. The perturbation is introduced in the wave equation by adding an advection term. This first-order wave equation is solved for the perturbation to the wave field. Taking only first-order (i.e., linear) terms into account, this corresponds to modelling the first-order scattering of the modes in the solar interior due to the flow field. The perturbation to the wave field is then used to obtain the perturbation to the cross-covariance and the travel-time shift according to the equation above. The result is a sum over all pairs of eigenmodes of products of two quantities, $Z$ and $J$ (plus a term identical to it apart from complex conjugation and the exchange of the indices 1 and 2). The quantities $Z$ and $J$ describe the scattering of one mode into another due to a flow at a location $\mathbf{r}$, which is propagated to the observation points at the surface. Each mode is identified by its harmonic degree and radial order. See Appendix [appendixoptimization] for a recap of the definitions of $Z_{ij}$ and $J_{ij}$.

As the numerical evaluation of this expression is quite costly for obtaining an accurate kernel (about 2 days on 32 CPU cores for a three-dimensional kernel covering only about 5% of the solar volume with a coarse spatial grid), it is necessary to optimize its computation. This requires a little more mathematical detail, which the reader may well skip in order to follow the main results of this paper. We here propose an approach in which we make use of the separability of the eigenfunctions into a horizontal and a radial dependence. This is a consequence of the separation of variables performed when solving the oscillation equations. The horizontal dependence of the eigenfunction is then also separable from the radial order of the mode. This allows us to rewrite the kernel formula (see Appendix [appendixoptimization] for details) as a contraction of two factors: one quantity that contains the horizontal dependence of the kernel, and one that contains the sum over the radial orders, the mode-dependent factors, and the radial dependence of the eigenfunctions. If the latter is evaluated before the main loop over the harmonic degrees and the spatial grid, the computational burden of the sum over the radial orders does not play a considerable role in the total computation time anymore. In practice, for a given harmonic degree, the number of radial orders to be summed over is about 10 for probing the deep solar interior. The approach presented here thus decreases the computation time by about two orders of magnitude. Computing a full 3D sensitivity kernel, covering the required depth range and the complete horizontal domain, therefore takes about 1 day on 8 CPUs. A complete set consists of one kernel per travel distance, which can be reprojected to different latitudes and used to obtain, e.g.,
kernels for point-to-arc travel times; see the examples presented in Figure [figkernels]. Computing such a set with 126 kernels, as used in Sections [secfwdcomp]–[secinversions], takes about 3 weeks on 96 CPUs.

Very recently, an alternative approach for computing Born kernels was developed, based on numerically solving for the Green's functions of individual modes at individual frequencies. It allows for a rather flexible computation of kernels for different quantities, such as sound speed, density, flows, and damping properties, as well as a possible inclusion of axisymmetric perturbations to the model. While a detailed comparison of both methods is left to further studies, we consider as an example the computation of the radial solar-model case using the frequency resolution from this study (about 92,000 frequency grid points) and 149 modes: this would take about eight times longer than using our method. In general, the computation time of our method scales with the number of modes squared, whereas the alternative method scales with the number of modes times the number of frequency bins used. As a very fine frequency resolution is necessary to adequately sample the frequency domain in the case of deeply penetrating modes, our method is thus advantageous for computing kernels for filtered observations of deep flows, and theirs for computing kernels using many modes or a coarser frequency resolution.

Furthermore, it is possible to reach an even greater optimization when computing individual kernels that are integrated over the azimuthal coordinate. For inferring flow fields which are rotationally symmetric, such as meridional flow or differential rotation, one can rewrite equation (eqkernelgoal) as $$E[\delta\tau]=\iint\mathbf{v}(r,\theta)\cdot\vec{\mathbb{K}}(\mathbf{r}_1,\mathbf{r}_2;r,\theta)\,r\,\mathrm{d}\theta\,\mathrm{d}r\,,\quad\text{(eqintkernelgoal)}$$ where the integrated kernel $\vec{\mathbb{K}}$ is defined as the integral of $\mathbf{K}$ over the azimuthal coordinate $\phi$. As the numerical evaluation of the integral over $\phi$ is independent of the radial coordinate, for each latitude the integration over $\phi$ and the loop over the radius become independent. As a consequence, the computation of an integrated kernel scales with the maximum of the number of radial and azimuthal grid points rather than with their product. For the example presented in Section [secopt3d], the computation of an integrated 2D kernel is thus about 75 times faster than its 3D version. A drawback of this approach, however, is that 2D integrated kernels cannot be projected to different latitudes, due to the nature of the coordinate system used. Therefore, this approach turns out to be advantageous if a smaller number of kernels is to be computed. It is possible to extend the previous approach for computing 2D integrated kernels to the case of travel times which were averaged over multiple points, e.g., in a point-to-arc geometry, provided all individual point-to-point measurements were obtained for the same travel distance; one then obtains an analogous expression with $Z$ and $J$ replaced by their averages over all pairs of observation points.
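The separability argument above can be sketched schematically as follows (hypothetical array names and shapes with random placeholder data; this is an illustration of the idea, not the paper's implementation): the mode sums are precomputed once per radial grid point, so that the main spatial loop reduces to a single contraction.

```python
# Sketch of the kernel speed-up: precompute the radial/mode sum
# S[l, lbar, r] so the double sum over radial orders (n, nbar) is
# pulled out of the loop over spatial grid points.
import numpy as np

n_l, n_n, n_r, n_th = 100, 12, 50, 200        # assumed grid sizes
R = np.random.rand(n_l, n_n, n_r)             # radial eigenfunction parts R_{l n}(r)
W = np.random.rand(n_l, n_l, n_n, n_n)        # frequency-integrated mode weights
H = np.random.rand(n_l, n_l, n_th)            # horizontal dependence of the kernel

# S[l, lbar, r] = sum_{n, nbar} W[l,lbar,n,nbar] R[l,n,r] R[lbar,nbar,r]
S = np.einsum("abnm,anr,bmr->abr", W, R, R)

# main loop is now a single contraction over (l, lbar) per grid point
K = np.einsum("abr,abt->rt", S, H)            # kernel on the (r, theta) grid
print(K.shape)
```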
In the following, we use a numerical simulation of helioseismic wave propagation in the solar interior which includes a standard single-cell meridional flow profile and which has been used before for validation purposes. The flow in the simulation was amplified by a factor of about 36 in order to increase the signal-to-noise ratio; as the simulation is linear, this increase does not alter the physics. The radial displacement of the oscillations in the simulation, extracted near the surface, is used as Dopplergrams; these are our input artificial data. Power spectra, cross-covariance functions, and travel times were obtained from these artificial data using a time-distance measurement technique with phase-speed filters.

Table [tabfilters]: Properties of the filters used in this work. For each filter, the power-weighted mean frequency (in mHz) and mean harmonic degree of the filtered power spectrum are listed for the model computed with unmatched and with matched mode frequencies and damping rates; the first data column gives a further filter property.

Filter |       | unmatched: mean freq., mean degree | matched: mean freq., mean degree
L035   | 342.9 | 2.93, 37.2   | 3.05, 38.9
L040   | 308.5 | 2.93, 41.5   | 3.05, 43.4
L045   | 274.6 | 2.93, 46.7   | 3.05, 49.0
L050   | 248.8 | 2.92, 51.8   | 3.05, 54.5
L065   | 199.4 | 2.91, 65.1   | 3.05, 69.0
L080   | 158.7 | 2.90, 82.8   | 3.05, 88.5
L100   | 130.7 | 2.87, 101.0  | 3.04, 108.9
L120   | 106.9 | 2.78, 121.5  | 2.96, 130.1
L150   | 78.5  | 2.63, 138.8  | 2.81, 143.9
L170   | 30.5  | 2.16, 157.3  | 2.24, 156.4

Filtered power spectra obtained from the simulation and from the zero-order model are compared in Figures [figpowercomp] and [figpowercomp2]. Figure [figpowercomp] shows cuts through the power spectra at the central harmonic degree of each filter (top) and power spectra summed over harmonic degree (bottom). Figure [figpowercomp2] shows power spectra integrated over frequency. See Table [tabfilters] for details on the filters used. In the following, the simulated data are compared to our model using two different sets of mode frequencies and damping rates. As a first-guess case, the zero-order model power spectrum, displayed as a green dotted line, was computed with frequencies from Model S and damping rates from MDI ("unmatched" in the legends appearing in this paper); the damping rates were provided by J. Schou (2006, private communication). It can be seen that the peaks in the model power spectrum are systematically different from those in the simulation: the widths of the peaks are systematically smaller in the model power spectrum, and the central frequencies of the peaks differ, especially at higher frequencies (best observable in the top right panel). As discussed in the description of the model, the shape of a cut through the model power spectrum matches a Lorentzian function centered at the input mode frequency and with the damping rate as its half width at half maximum. The difference in the location and shape of the peaks between simulation and model thus shows that the mode frequencies and damping rates of the simulated data are different from the Model S frequencies and observed damping rates. As the computation of sensitivity functions strongly depends on accurately modelling the data power spectra, with real solar observations one would use mode frequencies and damping rates as close to solar values as possible.
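Such a peak fit can be sketched as follows (a minimal example on synthetic data; the fitting code and its parameter names are ours, not the paper's): each peak is modelled as a Lorentzian whose half width at half maximum is the damping rate.

```python
# Fit a Lorentzian to a cut through the power spectrum at fixed harmonic
# degree to extract the mode frequency and damping rate (HWHM).
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(nu, amp, nu0, gamma, bg):
    return amp * gamma**2 / ((nu - nu0)**2 + gamma**2) + bg

nu = np.linspace(2.8, 3.2, 400)                     # frequency [mHz]
true = lorentzian(nu, 1.0, 3.00, 0.01, 0.02)
power = true * np.random.chisquare(2, nu.size) / 2  # realization noise

p0 = [power.max(), nu[np.argmax(power)], 0.02, 0.0]
popt, _ = curve_fit(lorentzian, nu, power, p0=p0)
print(f"fitted nu0 = {popt[1]:.4f} mHz, damping (HWHM) = {popt[2]:.4f} mHz")
```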
In order to achieve a good match between model and data power spectra for the validation presented in this paper, we therefore fitted frequencies and damping rates to the simulated power spectrum. These fitted values were used as parameters for the zero-order power spectrum shown as a red dashed curve in Figures [figpowercomp] and [figpowercomp2] ("matched" in the legends). It can be seen that both the locations and the widths of the peaks of the simulated and zero-order modelled power spectra now match well (Figure [figpowercomp], top). The amplitude of each peak, however, is not a free parameter in the zero-order model and therefore cannot be adjusted to the simulation. Additionally, the source correlation time, a free parameter modelling the sources, was fine-tuned individually for each matched, filtered zero-order model power spectrum in order to obtain a power-weighted mean frequency identical to the one from the simulation. This is necessary for guaranteeing that the mean sensitivity of the kernel is correctly adjusted. In addition, the power spectra for the kernel were corrected by a degree-dependent factor which accounts for a different behavior of the frequency-integrated power in the simulation and in the zero-order model; see Figure [figpowercomp2]. See also Table [tabfilters] for a summary of the properties of the power spectra obtained with matched and unmatched frequencies and damping rates for each individual filter used in this work. In the following, we compare the simulated data to the zero- and first-order models obtained using both sets of frequencies and damping rates.

Born approximation sensitivity functions in time-distance helioseismology are obtained for a travel-time fit which involves a reference cross-covariance function. On the data-analysis side, it is advantageous to use some kind of disc- and time-averaged cross-covariance function as a reference. This assures that travel times measured at individual locations at a certain point in time correspond to local changes compared to the mean solar cross-covariance for a particular travel distance. On the modelling side, the zero-order cross-covariance function is usually used as a reference function. Figure [figcrefmatched] shows a comparison of these reference cross-covariance functions for different exemplary travel distances. It can be seen that the model fits the data very well once the fitted frequencies and damping rates are used.

Travel times fitted to the cross-covariances obtained from the simulated data can be compared to forward-modelled travel times, which can be obtained from the Born kernels using equation (eqkernelgoal) or (eqintkernelgoal), as the underlying flow field is known. Figures [figfwdtts2d] and [figfwdtts1d] show a comparison of such travel times. The displayed measured travel times were obtained by rebinning an original set of travel times for 598 central latitudes and 126 distances to 66 latitudes and 30 distances. The corresponding averaging was applied to the kernels used for the forward travel times marked with "avg." in Figures [figfwdtts2d] and [figfwdtts1d]. For the forward-modelled travel times marked with "not avg.", however, the kernels have not been averaged; instead, we have simply computed one center-to-arc kernel for each of the 66 times 30 distances. Furthermore, we also show forward travel times from kernels obtained with an unmatched power spectrum.
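A minimal sketch of such forward modelling is given below (hypothetical array shapes with random placeholder data; the actual kernels and flow profile are not reproduced here): the forward travel times follow by discretising equation (eqintkernelgoal) for each travel distance.

```python
# Forward-modelled travel-time perturbations: discretize
# delta_tau = integral K . v dV on an (r, theta) grid, per distance.
import numpy as np

n_dist, n_r, n_th = 30, 50, 66
K_th = np.random.rand(n_dist, n_r, n_th)   # horizontal-flow kernel component
K_r  = np.random.rand(n_dist, n_r, n_th)   # radial-flow kernel component
v_th = np.random.rand(n_r, n_th)           # model meridional (horizontal) flow
v_r  = np.random.rand(n_r, n_th)           # model radial flow
dV   = np.random.rand(n_r, n_th)           # r*dr*dtheta volume elements

dtau = np.einsum("drt,rt,rt->d", K_th, v_th, dV) \
     + np.einsum("drt,rt,rt->d", K_r,  v_r,  dV)
print(dtau.shape)                          # one value per travel distance
```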
In Figure [figfwdtts2d], it can be seen that the travel times from the kernels generally fit the measured ones very well. The forward-modelled travel times obtained using properly averaged Born kernels and a matched power spectrum (top right panel) also reproduce some of the jumps visible in the measured travel times as a function of distance at mid-latitudes. These jumps are introduced by a change of phase speed from filter to filter. It can also be seen that there do not seem to be significant differences between the Gabor and GB04 fits. Horizontal cuts through the panels displayed in Figure [figfwdtts2d] are shown in Figure [figfwdtts1d], where the match between forward-modelled and measured travel times can be inspected in more detail. It can be seen that the forward-modelled travel times fit the measured ones within the measurement errors for most distances. The agreement is particularly good for travel distances of about 8–20 degrees, with measurement errors increasing with increasing travel distance. At very low travel distances, however, the forward-modelled travel times do not fit the measured ones. The reason for this effect lies in the fact that the simulated data only incorporate modes up to a maximum harmonic degree. This cut-off acts as an additional filter at the harmonic degree at which the filter with the lowest phase speed (L170) is centered. The modes selected by the filter thus do not form a proper wave packet, which may lead to artifacts in the cross-correlations. Additionally, it is noteworthy that the quality of the match of the power spectrum (compare matched vs. unmatched) does not have a large effect on the travel times at the larger travel distances. At smaller travel distances, however, one can observe a substantial effect of the quality of the match of the power spectrum on the travel times. We note that this conclusion may be different for other flow models. We also note that the absence of a proper averaging of multiple kernels in distance and latitude has a considerable effect for smaller travel distances, but seems to vanish for the largest distances in the case of the flow model considered here.

For comparison, forward travel times using ray kernels are also shown in Figures [figfwdtts2d] and [figfwdtts1d]. The magnitude of the ray-kernel travel times is, in general, very similar to that obtained using Born kernels. However, the Born kernels seem to better reproduce some features in the measured travel times, e.g., jumps in the magnitude of the travel times from one filter to another (see the middle panel in Figure [figfwdtts1d]). As the given flow model varies on rather large length scales, a relatively good performance of the ray kernels is expected. This may be different for meridional flow models which vary on smaller length scales; see Section [secintro]. We also note that the measured travel times obtained using the filter with the highest phase speed (L035) exhibit a spurious constant offset; see Figure [figfwdttsnooffset]. This offset was corrected for by subtracting the mean travel time at each distance. Such an offset is not present in real solar data.
In an earlier validation study, this offset was not noticed in the simulated data, as a few distances had been dropped from the analysis, which resulted in an equivalent correction. The origin of this offset is not completely understood, but it may be connected to the relatively coarse spatial resolution of the simulated data (256 pixels over 180 degrees).

As the influence of the radial flow component on the travel times has not been taken into account in earlier measurements, we evaluate its impact in the following using our Born approximation model; see the green dashed line in Figure [figfwdtts1d]. For small travel distances, the contribution of the radial flow to the total travel time is small, and it increases with travel distance, reaching about 20% of the contribution of the horizontal flow at the largest distances. However, for all distances, the magnitude of the contribution of the radial flows is smaller than the measurement errors of the travel times. This result is valid for the special case of the simulated data and the flow profile used in this study. Nevertheless, it indicates that it may be worthwhile to study the impact of including kernels for the radial flow component in meridional flow inversions.

In order to show that spherical Born approximation kernels can be used for inferring solar meridional flows, a standard SOLA inversion procedure for the horizontal flow component is carried out, using only the horizontal component of the Born kernels. The travel times for the two smallest travel distances belonging to the lowest phase-speed filter (L170) were excluded from the inversion, as they cannot be trusted; see the discussion in Section [sectts] and Figure [figfwdtts1d]. The inversion results are shown in Figure [figinvmeas].
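Before turning to the resulting flow maps, we sketch the SOLA weight computation in a toy form (synthetic kernels; the trade-off parameter, covariance, and normalisation are assumptions of this illustration, not the paper's actual setup): the weights are chosen so that the weighted sum of kernels approximates a chosen target function while limiting noise propagation.

```python
# Toy SOLA inversion: solve for weights w such that sum_i w_i K_i
# approximates a Gaussian target T, with a regularisation term mu*Lam
# and a Lagrange multiplier enforcing the averaging-kernel normalisation.
import numpy as np

n_meas, n_pix = 120, 400
K = np.random.rand(n_meas, n_pix)           # kernels, one row per travel time
T = np.exp(-np.linspace(-3, 3, n_pix)**2)   # target function at chosen depth
Lam = np.eye(n_meas) * 0.1                  # travel-time noise covariance
mu = 1e-3                                   # trade-off parameter (assumed)

A = K @ K.T + mu * Lam                      # overlap matrix + regularisation
b = K @ T
c = K.sum(axis=1)                           # integral of each kernel
M = np.block([[A, c[:, None]], [c[None, :], np.zeros((1, 1))]])
rhs = np.concatenate([b, [T.sum()]])
w = np.linalg.solve(M, rhs)[:-1]            # SOLA weights

flow_estimate = lambda tau: w @ tau         # apply to measured travel times
print(w.shape)
```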
The target flow profile (first panel) was obtained by convolving the target kernels with the flow profile from the simulation. The second panel shows a flow map resulting from the inversion of noiseless forward-modelled travel times; it matches the target flow profile qualitatively very well. The inversion result for the noisy measured travel times (third panel) matches the target flow in a coarser sense, recovering some but not all of the flow pattern from the simulation. Figure [figinvtargetks] shows averaging and target kernels for two example target locations. The misfit of the averaging kernels shown in the right panel of Figure [figinvtargetks] was obtained from the integrated squared difference between the averaging kernel and the target kernel for a particular target location. We note that the weights, and thus the averaging kernels, are identical for both inversions presented here.

In this paper, we have presented the validation of spherical Born kernels for inferring the deep solar meridional flow with time-distance helioseismology. We showed that it is possible to efficiently compute spherical Born kernels for measuring the deep solar meridional flow. To do so, we used a recently developed approach, which was further optimized for computational efficiency, either for obtaining two-dimensional integrated or full three-dimensional kernels. The numerical optimization was based on the horizontal variation of the eigenfunctions being separable from the radial dependence and from the radial order of the mode. Compared to another recently developed method, the numerical efficiency of our method is found to be similar, with some advantages in the case of filtered kernels or in the case of the fine frequency resolution needed for deep flow measurements. Using a spherical Born approximation model, it is possible to accurately model observational quantities relevant for time-distance helioseismology, such as the mean power spectrum, disc-averaged cross-covariances, and first-order travel times perturbed by a given flow field. We also show that the match of the reference cross-covariance between model and observations depends on the match of the model power spectrum to the observed one. The agreement is very good if the mode frequencies and damping rates entering the model are extracted from the measured power spectrum. The match between observed and modelled travel times, however, does not seem to depend significantly on the match of the power spectrum at the larger travel distances for the flow model considered here. At smaller travel distances, we found a noticeable dependence of the forward travel times on the match of the power spectrum to observations.
Using a standard 2D SOLA inversion of travel times measured from the simulated data for the horizontal flow component, we can recover most features of the input meridional flow profile in the inverted flow map. The agreement is particularly good for the inversion of noiseless forward-modelled travel times. When inverting noisy measured travel times, we obtain a coarse agreement between inverted and target flows. This shows that Born kernels can be used for inferring the deep solar meridional flow if the noise level in the data is small enough. The Born approximation is thus a promising method for inferring large-scale solar interior flows. We note, however, that an extensive study is needed in order to compare the use of Born and ray kernels in inversions of the deep meridional flow in more detail.

Acknowledgements: The research leading to these results has received funding from the European Research Council under the European Union's Seventh Framework Programme (FP/2007-2013)/ERC grant agreement no. 307117. This work was supported by the SOLARNET project (www.solarnet-east.eu), funded by the European Commission's FP7 Capacities Programme under grant agreement 312495. J.J. acknowledges support from the National Science Foundation under grant number 1351311. S.K. was supported by NASA's Heliophysics Grand Challenges Research grant 13-GCR1-2-0036. The authors thank Thomas Hartlep for providing the simulated data. V.B. thanks Kolja Glogowski for computing eigenmodes for Model S and for helping with many science-related IT issues. The authors acknowledge fruitful discussions during the international team meeting on "Studies of the Deep Solar Meridional Flow" at ISSI (International Space Science Institute), Bern. The authors thank the referee for constructive comments which improved the paper.

Our starting point are equations (38–40) of the Born-kernel model, where $Z_{ij}$ contains horizontal gradients of the form $$\bm{\nabla}_{\mathbf{r}}\Big[\mathcal{O}_k^{ln}(\mathbf{r})\,\big[P_{l}(\cos\delta_1)\big]\Big]$$ and $$J_{ij}(\mathbf{r}_1,\mathbf{r}_2)=(2l+1)(2\bar{l}+1)\,R_{ln}(r_s)\,R_{\bar{l}\bar{n}}(r_2)\sum_{n'}R_{ln'}(r_s)\,R_{ln'}(r_1)\int_{-\infty}^{\infty}\frac{\mathrm{i}\,\omega^{3}\,W^{*}_{\mathrm{diff}}(\mathbf{r}_1,\mathbf{r}_2,\omega)\,M(\omega)\,F(l,\omega)\,F(\bar{l},\omega)}{4\pi\,(\sigma^{2}_{ln}-\omega^{2})(\sigma^{2*}_{ln'}-\omega^{2})(\sigma^{2}_{\bar{l}\bar{n}}-\omega^{2})}\,\mathrm{d}\omega\,.$$ It is now possible to separate the horizontal and radial dependence of the eigenfunctions imprinted in the operators $\mathcal{O}_k^{ln}$ (see equation (17) of the model description for their definition), so that one obtains the separated form used in the main text, using appropriate definitions for the horizontal and radial factors.
From these equations we derive the separated form of the kernel quoted in Section [secfast], bearing in mind the definitions above.
Accurate measurements of the deep solar meridional flow are of vital interest for understanding the solar dynamo. In this paper, we validate a recently developed method for obtaining sensitivity functions (kernels) of travel-time measurements to solar interior flows using the Born approximation in spherical geometry, which is expected to be more accurate than the classical ray approximation. Furthermore, we develop a numerical approach to efficiently compute a large number of kernels, based on the separability of the eigenfunctions into their horizontal and radial dependence. The validation is performed using a hydrodynamic simulation of linear wave propagation in the Sun, which includes a standard single-cell meridional flow profile. We show that, using the Born approximation, it is possible to accurately model observational quantities relevant for time-distance helioseismology, such as the mean power spectrum, disc-averaged cross-covariance functions, and travel times in the presence of a flow field. In order to closely match the model to observations, we show that it is beneficial to use mode frequencies and damping rates which were extracted from the measured power spectrum. Furthermore, the contribution of the radial flow to the total travel time is found to reach 20% of the contribution of the horizontal flow at the largest travel distances considered. Using the Born kernels and a 2D SOLA inversion of travel times, we can recover most features of the input meridional flow profile. The Born approximation is thus a promising method for inferring large-scale solar interior flows.
Most recent studies dealing with the magnetic structure of the solar corona above active regions use different force-free model approaches and base the modeling on photospheric vector magnetic field data from either the Helioseismic and Magnetic Imager (HMI) on board the Solar Dynamics Observatory (SDO) or the spectropolarimeter (SP) of the Solar Optical Telescope (SOT) on board the Hinode spacecraft. Such model approaches are used to compensate for the lack of routine direct measurements of the coronal magnetic field vector. SP data have been used as the lower boundary condition of an optimization approach to analyze the coronal magnetic field associated with a white-light flare; the modeling suggested that the flare originated from sheared and twisted field lines at low altitudes, bridged by a set of higher magnetic field lines. An MHD relaxation method based on SP data has been applied to investigate the build-up and release of magnetic twist, pointing to the importance of the relative handedness of twisted field lines and the ambient field. An optimization method based on HMI data has been employed to model the temporal evolution of the coronal field of an active region over five days; the modeling displayed distinct stages of the build-up and release of magnetic energy and analyzed the association with changes in the magnetic field. A subsequent study used the same method for a detailed analysis of the field topology during a series of eruptions observed with HMI; above the apexes of cusp-like loops observed in coronal images, the modeling result suggested the presence of a coronal null point.

Comparisons of the outcome of different force-free model algorithms based on the same lower boundary conditions have been performed in the past as well. These studies revealed that order-of-magnitude estimates from such models can be expected to be reliable, provided the fields of view are large enough and the spatial resolution of the vector magnetic field data is high enough. The model outcome of the same reconstruction algorithm based on data from different instruments has been studied recently using data from HMI and the Vector-SpectroMagnetograph (VSM) of the SOLIS project. Agreements of the resulting force-free models were found in the form of, e.g., the relative amount of energy to be set free during an eruption, but a considerable difference was found in the absolute model energy estimates. In particular, the estimated energy content of the VSM model was found to be about twice that of the HMI model.
In the present study, we use vector magnetic field data from HMI and SP as input to the same force-free model and compare the results. This is motivated by the fact that data from these two instruments are widely used at present, and are expected to be frequently used in the future, as input for models of the coronal magnetic field. We regard this as important in order to test the consistency of the model solutions and, at the same time, to give a feeling for the accuracy of the estimated physical quantities and the magnetic field topology based on those models. We also investigate the effect which a binning of the SP data to a lower resolution has on the model outcome. We do not, however, search for explanations of the differences between the HMI and SP inversion products themselves, considering this to be out of the scope of this study.

The HMI on board SDO obtains filtergrams at the photospheric Fe 617.3 nm spectral line. The full Stokes vector is retrieved from filtergrams averaged over about 12 min and is inverted using a Milne-Eddington (ME) inversion algorithm. The 180-degree azimuth ambiguity of the transverse field is resolved using a minimum-energy algorithm, and the resulting vector magnetograms have a plate scale of 0.5 arcsec. In its fast scan mode, SP, as part of the SOT on board the Hinode spacecraft, achieves a spatial resolution of 0.32 arcsec. It observes the Fe spectral line pair at 630.15 nm and 630.25 nm. Full Stokes profiles are obtained with a spectral sampling of 2.25 pm and a slit scan time of 1.6 s. The physical parameters were obtained from the full Stokes profiles using the MERLIN ME inversion algorithm, and the 180-degree azimuth ambiguity is resolved in the same way as for the HMI data.

To perform a study as described above, simultaneous observations from HMI and SP are required. Both HMI and SP vector magnetic field data are available for the magnetic structure of active region 11382 on 2011 December 22. SP scanned this active region from 04:46 UT to 05:29 UT (scanning from solar east to west). The HMI vector map used for this study was retrieved for a time approximately at half of the scanning time of SP. Given average photospheric conditions, the Alfvén travel time over a characteristic distance in the present study is long compared to the SP scan duration. Therefore, we assume that the temporal changes of the photospheric field during the roughly 20 min during which SP scanned before and after the time when HMI recorded are negligible. Regarding the longitudinal magnetic field, we regard this as justified since it has also been found that the average ratio of the longitudinal magnetic flux densities measured by SP and SOHO/MDI was not strongly influenced by the evolution of the photospheric magnetic field during the scanning time of SP. The magnetic field vectors are transformed to heliographic coordinates. We correct the SP data set for the effect of differential rotation, using the HMI record time as the reference time. Given present computational capabilities, high-resolution data are sometimes binned to a lower resolution in order to allow for near-real-time magnetic field modeling and/or computational domains of feasible dimensions. Thus, in the course of co-aligning the HMI and SP data, we also bin the original-resolution SP data (SP_orig) to the resolution of HMI, which involves a 2D linear interpolation.
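A minimal sketch of this rebinning step is given below (hypothetical arrays and grid sizes; the plate scales are those quoted above, everything else is a placeholder): the SP map, sampled at 0.32 arcsec, is interpolated bilinearly onto the 0.5 arcsec HMI grid.

```python
# Rebin an SP vertical-field map from 0.32"/pixel to the 0.5" HMI grid
# via 2D linear interpolation.
import numpy as np
from scipy.interpolate import RegularGridInterpolator

sp_bz = np.random.rand(600, 600)                 # SP B_z at 0.32"/pixel
y = np.arange(600) * 0.32                        # coordinates in arcsec
x = np.arange(600) * 0.32
interp = RegularGridInterpolator((y, x), sp_bz)  # default method: linear

yy, xx = np.meshgrid(np.arange(0, y[-1], 0.5),
                     np.arange(0, x[-1], 0.5), indexing="ij")
sp_bz_binned = interp(np.stack([yy, xx], axis=-1))  # SP map on the HMI grid
print(sp_bz_binned.shape)
```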
The binned SP data are hereafter simply denoted as SP data. The field of view (FOV) used to study active region 11382 is mainly determined by the area covered by the SP scan (centered around S19W07; see Figure [fig:fig1]). Within the HMI data, which cover a larger area, we define a window equally sized as the FOV of SP and calculate the cross-correlation between its vertical field component and that of the SP data. By shifting the position of the HMI window, we search for the highest cross-correlation to find the corresponding HMI sub-field.

[Figure: HMI (a) and SP (b) data. The gray-scale background reflects the vertical magnetic field, B_z. Black/white arrows indicate the direction of the horizontal magnetic field, originating from positive/negative polarity. The dashed rectangle outlines the quiet-Sun region used to calculate the 2-sigma uncertainty of the vertical and horizontal magnetic field. The black arrow just below the dashed rectangle indicates the length of an arrow representing a fixed horizontal field magnitude.]

The noise levels of the HMI data for the longitudinal and transverse magnetic field were provided by the HMI team (X. Sun, private communication). The average uncertainty for both the longitudinal and the transverse field estimated from the SP inversion error maps is low; besides seeming rather low, especially for the transverse field, these standard error estimates may not be reliable (B. Lites, private communication). Thus, we employ a consistent measure of the uncertainty level for the data from the two instruments, as described in the following. When investigating the properties of the HMI and SP data in the course of the force-free modeling, we only consider (i) pixels with values of the vertical field, B_z, and the horizontal field, B_h, above the respective 2-sigma uncertainty levels, or (ii) pixels with a strong vertical field, the latter in order not to disregard the strong vertical fields in the center of the active region. The respective 2-sigma uncertainty levels are calculated for the quiet-Sun area outlined by the dashed rectangles in Figure [fig:fig1], where we find 2.3/6.7 and 2.2/12.2 for the HMI and SP data, respectively. The uncertainty levels for the SP_orig data are calculated from a quiet-Sun region equivalent to that outlined in Figure [fig:fig1]; here we find 2.8 and 2.3, i.e., slightly higher than what was found for the binned SP data. The latter estimates, in fact, conform with earlier findings on the measurement uncertainties of SP for quiet-Sun internetwork field strengths and fluxes.

Photospheric polarization signals originate from atmospheric layers which are known not to be force-free. For instance, it has been shown, using vector magnetic field measurements of an active-region magnetic field, that the field can be considered force-free only from about 0.4 Mm above the photospheric level. By modeling the interchanging dominance of plasma and magnetic pressure, the height regime of dominating magnetic fields above a sunspot/plage region has been estimated as approximately 0.8-200 Mm.
Using high-resolution SP vector maps, it has also been found that umbral and inner penumbral parts of sunspots may be nearly force-free, but that sunspots as a whole might not be entirely so at the photospheric level. Therefore, the inferred HMI and SP magnetic vector maps are not force-free consistent and need to be preprocessed to achieve suitable force-free consistent boundary conditions. From the vertical component of the preprocessed field vector, a potential field is calculated and used as the start equilibrium and to prescribe the boundaries of the cubic computational domain. The bottom boundary is replaced by the preprocessed vector field, and the set of force-free equations for the nonlinear case in Cartesian coordinates is solved. A boundary layer is introduced towards the lateral and top boundaries, in which the nonlinear force-free (NLFF) solution drops to the prescribed boundary field. For our analysis we discard this layer and only consider the inner (physical) domain and the corresponding bottom boundary field. Since this method involves the relaxation of the magnetic field not only inside the computational domain but also on its bottom boundary, we compute a potential field from the relaxed lower boundary based on a fast-Fourier method. Hereafter, we refer to the 3D NLFF field model based on the HMI vector map as input as the "HMI model". Similarly, the "SP model" and the "SP_orig model" result from using the binned and the original-resolution SP vector magnetic field, respectively, as input data to our preprocessing and force-free reconstruction algorithms. We compare the HMI and SP models in Sections [ss:mfaff]-[ss:topo] and summarize the effects of binning on the model outcome in Section [ss:eof_binning].

After performing the data preparation as described in Section [ss:event_selection_and_data_set], we are able to investigate how the vertical and horizontal field components of the HMI and SP vector maps compare to each other, and to check the force-free consistency. We only consider data points where the criteria outlined in Section [ss:uncertainty_estimation] are fulfilled. We find overall stronger vertical fields in the SP than in the HMI data (see Figure [fig:fig2]a) and stronger horizontal fields (see Figure [fig:fig2]b). This can also be seen when comparing the area-integrated unsigned vertical flux and the average horizontal field: the HMI data host about 54% of the unsigned vertical flux and about 81% of the average horizontal field of the SP data (see Table [tab:forces]). HMI and SP have a comparable sensitivity on long scales, i.e., at small wave numbers (Figure [fig:fig2]c,d). With decreasing scale, the amount of detected field increasingly differs: the SP data show considerably stronger fields on smaller scales.

Table [tab:forces]: Total unsigned vertical flux, average horizontal field, and net Lorentz force components (F_x, F_y, and F_z) normalized to the magnetic pressure force, F_p, for the raw (top), preprocessed (middle), and NLFF lower-boundary (bottom) data. Only pixels with vertical and horizontal field above the respective 2-sigma uncertainty levels, or with a strong vertical field, are taken into account.

          flux    avg. B_h   |F_x|/F_p  |F_y|/F_p  |F_z|/F_p
HMI       1.323   18.4       0.13       0.08       0.29
SP        2.440   22.7       0.03       0.06       0.81
SP_orig   2.510   21.5       0.03       0.06       0.84

HMI       1.280   19.2       0.01       0.01       0.03
SP        2.341   30.2       0.02       0.01       0.02
SP_orig   2.427   30.1       0.02       0.02       0.02

HMI       1.279   20.9       0.01       0.02       0.01
SP        2.338   33.5       0.01       0.03       0.01
SP_orig   2.364   30.3       0.01       0.03       0.08
A necessary condition for a magnetic field to be force-free is that the components of the net Lorentz force are considerably smaller than a characteristic magnitude of the total Lorentz force of a non-force-free magnetic field. The latter can be approximated by the magnetic pressure force, F_p, on the lower boundary. The ratios listed in Table [tab:forces] show that these conditions are met only to a certain degree for the raw data; the ratios found here agree with values reported in earlier works. Preprocessing, however, certainly improves the force-freeness, as it smooths the vertical field and, additionally, alters the horizontal field in order to minimize the net force and torque and thus to gain boundary conditions compatible with the force-free assumption (the resulting ratios are clearly smaller than unity). The effect of smoothing can be seen when comparing the spectral power of the raw and preprocessed HMI and SP data (Figure [fig:fig3]): the signal on shorter scales (i.e., at large wave numbers) is reduced. On longer scales, the power of the vertical field remains the same (see Figure [fig:fig3]a,c), but that of the horizontal field is enhanced (more pronounced for the SP data; see Figure [fig:fig3]b,d). The preprocessing leads to a slight reduction of the unsigned vertical flux: it is reduced by about 3% for HMI and about 4% for SP. The preprocessing also leads to an enhancement of the average horizontal field: it is enhanced by about 4% for HMI and about 33% for SP (see Table [tab:forces]). The preprocessed HMI data host about 55% of the unsigned vertical flux and about 64% of the average horizontal field of the preprocessed SP data. In summary, the preprocessing only slightly reduces the difference in the unsigned vertical flux between HMI and SP (which is about 55% before as well as after preprocessing), but it enhances the difference in the average horizontal field (from about 81% before to about 64% after preprocessing).

These preprocessed HMI and SP data are used as the lower boundary condition for the NLFF reconstruction. As mentioned above, our NLFF modeling algorithm also relaxes the magnetic field on the bottom boundary of the cubic computational domain. Thus, besides during the preprocessing, the magnetic field vector on the lower boundary is also altered while iteratively seeking the force- and divergence-free field solution in the volume. Its modification with respect to the preprocessed lower boundary data, as listed in Table [tab:forces], is that the unsigned vertical flux is reduced by about 0.1% and the average horizontal field is increased by about 9% (HMI) and 11% (SP). Thus, the HMI NLFF lower boundary hosts about 55% of the unsigned vertical flux and about 62% of the average horizontal field of the SP NLFF lower boundary data, comparable to the ratios we found for the preprocessed data.
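The surface-integral checks behind the ratios in Table [tab:forces] can be sketched as follows (synthetic arrays; the expressions are the standard net-force surface integrals over the lower boundary, with the overall normalisation chosen for this illustration rather than taken from the paper):

```python
# Flux-balance check: net Lorentz-force components over the lower
# boundary, normalised by a magnetic-pressure proxy; ratios << 1 are
# expected for (nearly) force-free boundary data.
import numpy as np

bx, by, bz = (np.random.randn(300, 300) for _ in range(3))  # field maps (assumed)

Fp = np.sum(bx**2 + by**2 + bz**2)          # magnetic pressure proxy
Fx = -2.0 * np.sum(bx * bz)                 # net force, x component
Fy = -2.0 * np.sum(by * bz)                 # net force, y component
Fz = np.sum(bx**2 + by**2 - bz**2)          # net force, z component

print([abs(F) / Fp for F in (Fx, Fy, Fz)])  # compare with Table [tab:forces]
```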
From the 3D model fields, we can estimate the magnetic energy content of the potential field (the potential energy) and of the NLFF field (the total energy). An upper limit for the energy which can be released (the excess energy) is given by their difference. We can estimate the statistical accuracy of our volume-integrated energy estimates by adding different artificial noise models to the HMI and SP magnetograms, consecutively applying the preprocessing and extrapolation algorithms, and comparing the resulting energy values. This yields a small statistical error for both the total and the potential energy, and a somewhat larger one for the excess energy. The estimated absolute potential energy of the HMI model is about 48% of that of the SP model (see Table [tab:energies]). This is a direct consequence of the HMI NLFF bottom boundary hosting only about 55% of the unsigned vertical flux of the SP bottom boundary (see Table [tab:forces]). A similar trend is found for the absolute total and excess energies of the HMI model. However, the relative excess energy is about 20% of the total energy in both models (given by the ratio in Table [tab:energies]). An excess energy of this order is assumed to be sufficient for powering C-class flaring, which was actually observed for the active region analyzed here on the days before; similar values related to C-class flaring activity have been found in other studies.

Table [tab:energies]: Total, potential, and excess magnetic energy of the 3D magnetic model fields; the last column gives the relative amount of excess energy.

          E_total  E_pot  E_excess  E_excess/E_total
HMI       3.45     2.80   0.65      0.19
SP        7.44     5.80   1.64      0.22
SP_orig   7.31     5.85   1.46      0.20

Besides comparing a volume-integrated quantity like the magnetic energy, we are also interested in how the modeled magnetic field configurations compare to each other. To do so, and to ensure that we are looking at the same topological structure, we look for regions of strong gradients in the magnetic connectivity. These are thought to be linked to the creation of strong current concentrations in the solar corona and are believed to represent the footprint of quasi-separatrix layers. We quantify the magnetic connectivity by calculating the squashing degree, Q, at a small height above the NLFF lower boundary (Figure [fig:fig4]a,b). The squashing degree quantifies the eccentricity of the elliptical cross-section into which a flux tube of initially circular cross-section is transformed; wherever Q is large, the magnetic field connectivity changes drastically over short distances. According to this pattern, we choose a region of interest (ROI) around a clearly distinguishable pattern of high values of Q (rectangular outline in Figure [fig:fig4]a,b). Though clearly visible in both models, the Q-ridge appears more diffuse in the SP model, and its location differs slightly between the two models.

[Figure: Squashing degree of the HMI and SP models. The vertical magnetic field is shown as gray-scale background; black and white contours outline strong positive and negative vertical field, respectively. The locations where field lines originating from regions of high Q within the ROI (rectangular outline) re-enter the lower boundary are color-coded according to the vertical magnetic field there.]

[Figure: Relation of the length of closed field lines to their apex height for the (a) HMI and (b) SP model. Field lines are calculated
in total, 92/83 magnetic field lines in the hmi/sp model (i) start from locations of high $q$ within the roi and (ii) connect back to the lower boundary (see figure [fig:fig4]c-f). they qualitatively outline two neighboring connectivity domains which connect the negative polarity region to its neighboring positive polarity surrounding. the locations where the field lines connect back to the lower boundary are shown color-coded based on the local vertical field in figure [fig:fig5]. we assume that the field lines we calculated represent thin flux tubes. for each flux tube, we choose its cross section at the location from where we started the field-line calculation as the size of one pixel of the model grid. furthermore, we assume that the vertical field there determines the flux of the flux tube. summation over all considered thin flux tubes gives an estimate of the total absolute shared flux for the hmi/sp model. we find a lower value of connected flux in the hmi model, even though more closed magnetic field lines are considered due to our selection criterion. comparable, however, is the _relative_ amount of connected flux linked by these field lines: it comprises % of the unsigned vertical nlff lower-boundary flux as listed in table [tab:forces]. we also recognize differences in the connectivity pattern of the field lines as shown in figure [fig:fig5]. numerous sp model field lines connect to the weak-field regions between the two major positive-polarity patches, or to the weak-field surroundings. however, none of the selected field lines of the hmi model does so; instead, they re-enter the nlff lower boundary at locations of strong vertical fields. from figure [fig:fig4]c-f one recognizes that a larger number of field lines connects to the positive polarity patch, and that the highest field lines of the sp model reach up to greater heights than those in the hmi model. to investigate if the latter represents a general trend or is biased due to our restrictive selection of start locations for the field-line calculation, we consider all field lines starting from a pixel location on the nlff bottom boundary which satisfies the adopted field-strength selection criteria. we then find very similar relationships of the length of the calculated field lines to the height of their apex for both models (see figure [fig:fig6]): the relation is well defined and to a first approximation linear, as has also been found in earlier studies. the sp model field lines follow a steeper distribution, i.e., they tend to lie higher overall.
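the thin-flux-tube bookkeeping described above reduces to a single weighted sum; a minimal sketch, assuming `bz_at_start` collects the vertical field at the start pixel of every traced closed field line (names are placeholders):

```python
import numpy as np

def shared_flux(bz_at_start, pixel_area):
    """total absolute flux linked by the traced field lines, treating each
    line as a thin flux tube with a one-pixel cross section at its start
    point; bz_at_start holds the vertical field at those start pixels."""
    return np.sum(np.abs(np.asarray(bz_at_start))) * pixel_area

# phi = shared_flux(bz_footpoints, dx * dy)
# fraction = phi / unsigned_boundary_flux   # of order 1% for both models
```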
it appears that the field lines carrying the strongest vertical fluxes are neither the shortest ones nor the longest ones. instead, it seems that in both models, field lines with a length of about 20 and an apex height of about 5 (in the units of figure [fig:fig6]) carry most magnetic flux. the longest and highest closed field lines in the hmi model fall below the corresponding values found for the longest/highest closed field lines of the sp model. it is not surprising that the sp model field lines tend to be longer and to reach higher up in the model atmosphere when looking at the magnetic field distribution on the lower boundary: there, we find an average of 0.8/0.9 for the hmi/sp model, i.e., the sp model field lines are on average more vertical.

[figure (three panels): the vertical magnetic field is shown as gray-scale background; black and white contours are drawn at fixed field levels; the locations where field lines originating within the roi (rectangular outline) re-enter the lower boundary are color-coded according to the vertical magnetic field there.]

the binning of the original-resolution sp data (sp) to the resolution of hmi involves a 2d interpolation, and the resulting changes are discussed in the following. as before, we only consider data values fulfilling the criteria outlined in [ss:uncertainty_estimation]. the binning causes a decrease of by % (see table [tab:forces]). it also causes an increase of by %. the force-freeness remains basically the same, as does the relative occurrence of the vertical/horizontal field (figure [fig:fig7]a/b), and the respective power distributions (figure [fig:fig7]c/d) also remain similar. it is now also evident that the binning is only to a minor degree responsible for the differences between the sp and hmi data. this agrees with earlier work which found that scaling the data to a lower resolution does not significantly alter the results of the particular analysis performed there.
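for the binning step itself, a simple block-averaging stand-in can be written as follows; the text speaks only of a 2d interpolation, so the exact resampling scheme in this sketch is an assumption:

```python
import numpy as np

def rebin_mean(img, factor):
    """bin a 2d map down by an integer factor using block averaging; this
    is a simple stand-in for the 2d interpolation used to bring the sp
    maps to the coarser hmi plate scale (the exact scheme is not
    specified here)."""
    ny, nx = img.shape
    ny, nx = ny - ny % factor, nx - nx % factor      # crop to full blocks
    blocks = img[:ny, :nx].reshape(ny // factor, factor, nx // factor, factor)
    return blocks.mean(axis=(1, 3))

# bz_binned = rebin_mean(bz_sp, 2)   # e.g., halving the resolution
```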
the hmi data hosts % of the unsigned vertical flux and % of the average horizontal field of the sp data (see table [tab:forces]). naively, one would suspect that this difference arises from the different resolution limits of the two instruments, i.e., that the magnetic field is partially on scales which hmi cannot resolve. in the context of the relative occurrence rates discussed above, however, this cannot be concluded, since it has been found that hmi vertical fields are overall weaker than those of vsm data (which, with a plate scale on the order of an arc-second, has a _lower_ resolution than hmi). therefore, the different occurrence rates must have reasons besides the different resolution limits of the instruments. preprocessing the sp data leads to a decrease of one of these boundary quantities by % and an increase of the other by %, which is comparable to the changes of the sp data due to preprocessing. the modification to the sp lower-boundary data while solving for the force- and divergence-free field, as listed in table [tab:forces], is a decrease by % of the one quantity and an increase by % of the other. summarizing, we find a similar behavior of the modifications to the sp data during the force-free modeling, and very similar values, as we found for the sp data (compare the values listed in table [tab:forces]).

the potential energy of the sp model is % lower and the total energy is % higher than that of the sp model (see table [tab:energies]). hence, when basing the analysis on the sp data, we find an enhancement/reduction of the absolute energy estimates on the order of the statistical error of the energy estimates itself. the excess energy is enhanced by %, i.e., also on the order of the statistical error. the _relative_ amount of excess energy, however, remains approximately the same. repeating the analysis of the magnetic connectivity within the sp model, we find that in total 228 magnetic field lines originating from locations of high $q$ within the roi in figure [fig:fig8]a connect back to the lower boundary (see figure [fig:fig8]b,c). again, we assume that those represent thin flux tubes with a cross section of one pixel, and we assume that the vertical field at the footpoint from which we started the field-line calculation determines the flux of each tube. summation over all considered thin flux tubes gives an estimated total shared flux which comprises about 1% of the unsigned vertical flux of the nlff lower boundary of the sp model. this is almost identical to what was found for the model based on the sp data, as is the connectivity pattern (compare figures [fig:fig9] and [fig:fig5]b). considering all field lines starting from pixel locations on the nlff bottom boundary which satisfy the field-strength selection criteria, we find an identical relationship of field-line length to apex height (figure [fig:fig10]) as was found for the sp model (compare figure [fig:fig6]b). the sp model field is found to be even more vertical (0.97) and, again, the field lines carrying the strongest vertical fluxes are those with an intermediate length and apex height.

[figure [fig:fig10]: length of the closed field lines versus their apex height for the sp model; field lines are calculated starting from every pixel location satisfying the field-strength selection criteria; the color code reflects the absolute vertical field at the start location.]

non-identical photospheric vector magnetic fields, inferred from polarization measurements of different instruments and used as input for nonlinear force-free (nlff) coronal magnetic field models, directly translate to substantial differences in the model outcome. to quantify these, we performed force-free magnetic field modeling using active-region vector magnetic field data based on measurements of polarization signals by sdo/hmi and hinode sot/sp.
the possible causes of the differences of the data products themselves (including, e.g., the different intrinsic nature and sensitivity of the instruments, the temporal evolution of the photospheric magnetic field during the ongoing scanning times, or the inversion techniques used to infer the magnetic field vector from the measured polarization signals) were out of the scope of this work. our aim was to compare force-free model results based on data from these two instruments, so we applied all data preparation and modeling routines in exactly the same way to both data sets. force-free coronal magnetic field modeling is computationally expensive, and high-resolution data is sometimes binned to a lower resolution in order to shorten the computational time. we therefore binned the original-resolution sp (sp) data to the resolution of hmi, which allowed us (i) to quantify the deviations of the force-free modeling outcome due to the usage of data of _different_ instruments (hmi and sp), and also (ii) to investigate the effect of binning the input data to a lower resolution on the model outcome, by subsequent comparison of the models based on the binned sp (sp) data and the sp data. we did not intend to mimic the different spatial resolution of the instruments by binning of the vector data since, as pointed out in earlier work, any kind of binning does not account properly for resolution effects. instead, in practice, a binning of high-resolution data prior to the nlff modeling is meant to result in feasible computational dimensions and to allow for near real-time modeling.

we used hmi and sp vector maps of active region 11382 on 2011 december 22 and found considerably higher vertical magnetic flux and average horizontal field in the sp data. this difference of the vertical flux was found to be much larger than the modifications to the data due to the application of our force-free modeling routines. also the modifications to the vertical flux due to the binning of the sp data to the resolution of hmi were found to be small compared to the inequality of the flux detected by the two instruments. unequal estimates of the magnitude and orientation of longitudinal and transverse fields, based on the inversion of circular and linear polarization signals measured by the two different instruments, yield differing estimates of the vertical and horizontal magnetic field in a local coordinate system. the unequal amount of vertical magnetic flux (the projected sp data hosting about two times the unsigned vertical flux of hmi) then directly translates to a considerable discrepancy in the _absolute_ estimates of the energy content of the considered coronal volume: models based on the sp data hold about twice as much energy as does the hmi model.

by tracing magnetic field lines in the half-space above the model lower boundaries, their connectivity was investigated. the sp and sp models revealed a closely matching photospheric footprint of two neighboring coronal connectivity domains. a similar footprint was found in the hmi model, but locally shifted by up to several length units. the same applies to the locations where field lines re-enter the nlff lower boundary: while the footpoint locations of the sp and sp models coincide, those of the hmi model are displaced by up to ten such units. moreover, on the whole, the sp and sp model fields tend to be more vertical than the hmi model field, which is a direct consequence of the relative vertical and horizontal field distribution given by the instrument data.
however, the models also showed great similarities: _relative_ estimates, like the fraction of energy in excess of a potential field (about 20% of the total energy content) or the fraction of vertical flux shared by two neighboring connectivity domains (about 1% of the total vertical flux of the active region), agree very well. also the overall field-line geometry was found to be comparable: the length of the closed field lines in the model volumes was found to be more or less linearly related to the apex height. common to the model outcomes is also that the shortest and lowest, as well as the longest and highest, field lines carry the least vertical flux. in conclusion, caution is needed when analyzing the coronal magnetic field and its connectivity with the help of force-free magnetic field models based on the vector magnetic field products of different instruments made available to the community. _relative_ estimates and the overall structure of the model magnetic fields might indeed be reliable, while _absolute_ estimates might only be so concerning their order of magnitude. moreover, binning of the magnetic vector data to a lower resolution prior to the force-free modeling results in only little difference in the model outcome, small compared to the remarkable deviations found when basing the modeling on data from the two different instruments, hmi and sp.

we thank the anonymous referee for careful consideration that helped to improve our manuscript. we would also like to thank x. sun for support with the hmi vector magnetic field data and b. inhester for insightful discussions. support by dfg grant wi 3211/2-1 is acknowledged; t.w. is funded by dlr grant 50 oc 0904. sdo data are courtesy of the nasa/sdo hmi science team. hinode is a japanese mission developed and launched by isas/jaxa, with naoj as domestic partner and nasa and stfc (uk) as international partners. it is operated by these agencies in co-operation with esa and nsc (norway). hinode sp inversions were conducted at ncar under the framework of the csac.
photospheric magnetic vector maps from two different instruments are used to model the nonlinear force-free coronal magnetic field above an active region. we use vector maps inferred from polarization measurements of the solar dynamics observatory/helioseismic and magnetic imager (hmi) and the solar optical telescope spectropolarimeter (sp) aboard hinode. besides basing our model calculations on hmi data, we use sp data both at original resolution and scaled down to the resolution of hmi. this allows us to compare the model results based on data from different instruments and to investigate how a binning of high-resolution data affects the model outcome. the resulting 3d magnetic fields are compared in terms of magnetic energy content and magnetic topology. we find stronger magnetic fields in the sp data, translating into a higher total magnetic energy of the sp models. the net lorentz forces of the hmi and sp lower boundaries verify their force-free compatibility. we find substantial differences in the absolute estimates of the magnetic field energy but similar relative estimates, e.g., the fraction of excess energy and of the flux shared by distinct areas. the location and extension of neighboring connectivity domains differ, and the sp model fields tend to be higher and more vertical. hence, conclusions about the magnetic connectivity based on force-free field models are to be drawn with caution. we find that the deviations of the model solution when based on the lower-resolution sp data are small compared to the differences of the solutions based on data from the two different instruments.
half a lifetime has passed since human astronauts directly explored another planetary body. currently, two robotic geologists are exploring the surface of mars, one scientific probe has just landed on saturn's moon titan, and a number of orbiters are studying several of the planets and moons in our solar system. soon, we will be sending more robotic explorers to mars, more science stations to orbit our planetary neighbors, and perhaps new human explorers to the moon. all of these exploration systems, human or robotic, semi-autonomous or remote-operated, can benefit from enhancements in their capabilities of scientific autonomy. human astronaut explorers can benefit from enhanced scientific autonomy: astronauts with "augmented reality" visors could explore more efficiently and perhaps make more discoveries than astronauts who exclusively rely upon guidance from earth-based scientists and engineers. remote-operated robotic rovers can also benefit from enhanced scientific autonomy: this autonomy could either be on board the rover or in computers on the earth. such scientific autonomy can enhance the scientific productivity of these expensive missions, but only if the autonomy measures are well tested, and therefore 'trustable' by the controllers on the earth. along these lines, we have constructed a field-capable platform (at relatively low cost) in order to develop and to test computer-vision algorithms for space exploration here on the earth, prior to deployment in a space mission. this platform uses a wearable computer and a video camera, as well as the human operator. since this astrobiological exploration system is part human and part machine, we call our system the 'cyborg astrobiologist'. recently, we reported our first results at a geological field site with the cyborg astrobiologist exploration system (mcguire et al. 2004b). in this paper, we discuss our results at a second geological field site, near riba de santiuste, in northern guadalajara, spain (see the maps in figures 2 & 3). this second geological field site offers a different type of imagery than was studied in the previous paper. this new imagery resembles some aspects of the imagery that the mars exploration rover (mer) opportunity is currently studying on mars, and the new imagery has greater astrobiological implications than the imagery studied in the previous mission of the cyborg astrobiologist. we show that the basic exploration algorithms that we introduced in our first paper also function rather well at this second field site, despite the change in nature of the imagery.
to give context to our work, we would like to remind the reader of two discoveries by the apollo 15 and apollo 17 astronauts on the moon in the early 1970s, namely of the 'genesis rock' and of 'orange soil' (compton 1989). the apollo 15 astronauts were "astronauts trained to be geologists", and one of their missions was to find 'ancient' rocks, in order to obtain information as to how the moon was formed. given the 'bias' of their mission, and given significant 'scientific autonomy' by mission control on the second day of their mission at the hadley-apennines landing site, astronauts scott & irwin found an anorthosite. scott & irwin dubbed this specimen 'the genesis rock', because it possibly recorded the time after the molten moon's surface first cooled down, over 4 billion years ago. they initially found the genesis rock to be interesting partly because their mission requirements and their focused geological training biased them to look for such crystalline types of rocks. this biased search during the apollo 15 mission on the moon is not unlike one mission requirement of the mer opportunity rover. one main objective for sending opportunity to the meridiani planum site on mars was to understand the mars odyssey orbiter's suggestion of abundant coarse-grained gray hematite at meridiani. so the initial focus and bias of opportunity's study of meridiani was on determining the source and nature of the hematite, and based upon this bias, opportunity and the earth-based geologists and engineers have been highly successful in writing 'the story' about hematite at meridiani (squyres et al. 2004; chan et al. 2004). the apollo 17 astronauts included one astronaut who had been trained to be a geologist (cernan) and one geologist who had trained to be an astronaut (schmitt). one of schmitt's responsibilities (as geologist/astronaut) was to try to see if he could observe phenomena on the moon which the previous astronaut/geologists had not seen. another responsibility for both cernan and schmitt was to try to find signs of recent volcanism, which was one reason their taurus-littrow landing site had been chosen, since it contained numerous craters of possible volcanic origin. despite the bias of the mission towards volcanic geology, schmitt discovered an unusual 'orange soil' on the rim of one crater, which he initially thought could be due to some unusual oxidative process. this orange soil later turned out to be composed of small orangish glass-like beads, probably of volcanic origin. as with the rest of schmitt's geological observing on the moon, in which he based his decisions on taking samples on "visually detectable differences or similarities" (compton 1989), schmitt (schmitt 1987) found the orange soil interesting because it was different from anything else he had seen on the moon. from our point of view, since schmitt was an expert geologist, he was able to go much beyond the naive mission goals and biases of looking for fresh signs of volcanism, to discover something else (orange soil), which at the time seemed unrelated to volcanism.
in summary, we point to the apollo 15 & 17 missions and the mer opportunity mission as examples of the need to have both biased and unbiased techniques for scientific autonomy during space exploration missions. it is rather difficult to make a system that can reliably detect signatures of interesting geological and biological phenomena (with an imager or with a spectrometer) in a general and biased manner. in this report, we describe the further testing of our _unbiased_ "cyborg astrobiologist" system. in section 2, we discuss the hardware and software of our system, followed by summaries & results of the geological expeditions to riba de santiuste in sections 3 & 4, and finishing with more general discussion & conclusions in section 5.

our ongoing effort in the area of autonomous recognition of scientific targets-of-opportunity for field geology and field astrobiology is maturing. to date, we have developed and field-tested a "cyborg astrobiologist" system (mcguire et al. 2004a; mcguire et al. 2004b) that now can (see the schematic sketch at the end of this section):

* use human mobility to maneuver to and within a geological site;
* use a portable robotic camera system to obtain a mosaic of color images;
* use a 'wearable' computer to search in real-time for the most uncommon regions of these mosaic images;
* use the robotic camera system to repoint at several of the most uncommon areas of the mosaic images, in order to obtain much more detailed information about these 'interesting' uncommon areas;
* choose one of the interesting areas in the panorama for closer approach; and
* repeat the process as often as desired, sometimes retracing a step of geological approach.

at the mars exploration workshop in madrid in november 2003, we demonstrated some of the early capabilities of our 'cyborg' geologist/astrobiologist system (mcguire et al. 2004a). we have been using this cyborg system as a platform to develop computer-vision algorithms for recognizing interesting geological and astrobiological features, and for testing these algorithms in the field here on the earth (mcguire et al. 2004b). the half-human/half-machine 'cyborg' approach (see figure 1) uses human locomotion for taking the computer-vision algorithms to the field for teaching and testing, using a wearable computer. this is advantageous because we can therefore concentrate on developing the 'scientific' aspects of autonomous discovery of features in computer imagery, as opposed to the more 'engineering' aspects of using computer vision to guide the locomotion of a robot through treacherous terrain. this means that the development of the scientific vision system for the robot is effectively decoupled from the development of the locomotion system for the robot. after the maturation of the computer-vision algorithms, we hope to transplant these algorithms from the cyborg computer to the on-board computer of a semi-autonomous robot that will be bound for mars or one of the interesting moons in our solar system. these algorithms could also work in analyzing remote-sensing data from orbiter spacecraft.
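the capability list above amounts to a simple sense-decide-approach loop; the following sketch is purely schematic, and every function and method name in it is a hypothetical placeholder rather than part of the actual system:

```python
def exploration_cycle(camera, computer, operator, n_steps=5):
    """schematic driver for the exploration loop described above; the
    camera/computer/operator objects are assumed to provide these methods."""
    for _ in range(n_steps):
        mosaic = camera.acquire_mosaic()              # color image mosaic
        interest = computer.interest_map(mosaic)      # uncommon-region search
        targets = computer.top_interest_points(interest, k=3)
        for t in targets:
            camera.repoint_and_zoom(t)                # closer look at each
        target = operator.choose(targets)             # human-in-the-loop pick
        operator.approach(target)                     # human locomotion
```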
with human vision, a geologist, in an unbiased approach to an outcrop (or scene): firstly, tends to pay attention to those areas of a scene which are most unlike the other areas of the scene; and secondly, attempts to find the relation between the different areas of the scene, in order to understand the geological history of the outcrop. the first step in this prototypical thought process of a geologist was our motivation for inventing the concept of uncommon maps. see mcguire et al. (2004b) for an introduction to the concept of an uncommon map and for our implementation of it. we have not yet attempted to solve the second step in this prototypical thought process, but it is evident from the formulation of the second step that human geologists do not immediately ignore the common areas of the scene. instead, human geologists catalog the common areas and put them in the back of their minds for "higher-level analysis of the scene", or in other words, for determining explanations for the relations of the uncommon areas of the scene with the common areas of the scene. for example, a dark, linear feature transects a light-toned, delineated surface. at this specific scale, the dark feature is uncommon, an "interest point", as it has a specific relation to the surrounding light-toned material. it can "tell the story" of the outcrop. continued study may show how it cuts the delineation of the light-toned material, most likely indicating a younger age. coupled with a capacity for microscopic analysis, or even better, spectrographic analysis of mineralogy, a continued study may show the dark feature to be of basaltic composition and the light-toned material to be of granitic composition. this data, compared with the information stored in the mind of the geologist (knowledge), may lead to the interpretation of the outcrop as a foliated granite (gneiss) cut by a dolerite dike.

prior to implementing the 'uncommon map', the first step of the prototypical geologist's thought process, we needed a segmentation algorithm, in order to produce pixel-class maps to serve as input to the uncommon map algorithm. we have implemented the classic co-occurrence histogram algorithm (haralick, shanmugan & dinstein 1973; haddon & boyce 1990). for this work, we have not included texture information in the segmentation algorithm nor in the uncommon map algorithm. currently, each of the three bands of hue, saturation and intensity (h, s, i) color information is segmented separately, and the results are later merged in the interest map by summing three independent uncommon maps. in ongoing work, we are working to integrate simultaneous color & texture image segmentation into the cyborg astrobiologist system (e.g., freixenet, muñoz, martí & lladó 2004).

the concept of an 'uncommon map' is our invention, though it has indubitably been independently invented by other authors, since it is somewhat useful. in our implementation, the uncommon map algorithm takes the top 8 pixel classes determined by the image segmentation algorithm and ranks each pixel class according to how many pixels there are in each class. the pixels in the pixel class with the greatest number of members are numerically labelled as 'common', and the pixels in the pixel class with the least number of members are numerically labelled as 'uncommon'. the 'uncommonness' hence ranges from 1 for a common pixel to 8 for an uncommon pixel, and we can therefore construct an uncommon map given any image-segmentation map. rare pixels that belong to a pixel class of rank 9 or greater have usually been noise pixels in our tests thus far, and are currently ignored. in our work, we construct several uncommon maps from the color image mosaic, and then we sum these uncommon maps together, in order to arrive at a final interest map. for more details on our particular software techniques, especially on image segmentation and uncommon mapping, see mcguire et al. (2004b).
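the ranking just described is straightforward to express in code; a minimal sketch, assuming a segmentation map of integer class labels (numpy is an implementation choice for this sketch, not the system's actual neo/nst environment):

```python
import numpy as np

def uncommon_map(labels, n_classes=8):
    """build an uncommon map from a pixel-class (segmentation) map: classes
    are ranked by pixel count, the largest class getting uncommonness 1 and
    the 8th-largest getting 8; rarer classes (rank > 8), usually noise, are
    set to 0 and thereby ignored."""
    classes, counts = np.unique(labels, return_counts=True)
    order = classes[np.argsort(counts)[::-1]]         # most common first
    umap = np.zeros(labels.shape, dtype=np.int32)
    for rank, cls in enumerate(order[:n_classes], start=1):
        umap[labels == cls] = rank
    return umap

# interest = uncommon_map(seg_h) + uncommon_map(seg_s) + uncommon_map(seg_i)
```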
for this mission to riba, the non-human hardware of the cyborg astrobiologist system consisted of:

* a 667 mhz wearable computer (from via computer systems in minnesota) with a 'power-saving' transmeta 'crusoe' cpu and 112 mb of physical memory;
* an indoor/outdoor sunlight-readable tablet display with stylus (from via computer systems);
* a sony 'handycam' color video camera (model _dcr-trv620e-pal_); and
* a tripod for the camera.

the sony handycam provides real-time imagery to the wearable computer via an ieee1394/firewire communication cable. the system as deployed to riba used two independent batteries: one for the computer, and the other for the camera. the power-saving aspect of the wearable computer's crusoe processor is important because it extends battery life, meaning that the human does not need to carry spare batteries. a single lithium-ion battery for the wearable computer, which weighs about 1 kg, was sufficient for this 4-hour mission. likewise, a single lithium-ion battery (sony model np-f960, 38.8 wh) was sufficient for the sony handycam for the 4-hour mission to riba, despite frequent use of the power-hungry fold-out lcd display of the handycam.

the main reason for using the tripod during the mission to riba is that it allows the user to repoint the camera or to zoom in on a feature in the previously-analyzed image. in the previous study at rivas (mcguire et al. 2004b), mosaicking and automated repointing were part of the study, so the stable platform provided by the tripod was essential. we eliminated the pan-tilt unit from this mission to riba because doing so adds to the mobility of the cyborg astrobiologist system, by eliminating the extra bag containing a battery and communication & power cables for the pan-tilt unit. another reason for eliminating the pan-tilt unit for the mission to riba is that it saves time, since there is less waiting for the mosaicking and repointing to be completed. the capacities of automated mosaicking and of automated repointing at interest points are, however, essential to the system in the long run, and will be re-introduced at a later stage when needed. (if we had had the pan-tilt unit with us, then the repointing would have been much more automatic, as it could have been under computer control.)

during this mission, the cyborg astrobiologist system analyzed 32 images from 7 different tripod positions at 3 different outcrops, over a 600 meter distance and over a 4 hour period. during the previous mission at rivas, with the pan-tilt unit enabled, the cyborg astrobiologist system analyzed only 24 mosaic images from 3 different tripod positions at only one outcrop, over a 300 meter distance and over a 5 hour period. see table [missionparameters] and figure [missionmap].
for this particular study, the mobility granted by the wearable computer was almost essential. using a more powerful non-wearable computer could have restricted the mobility somewhat, and would have made it more difficult to study the third outcrop on the slopes of the castle-topped hill. the head-mounted display that we used during the rivas mission (2004) was much brighter than the tablet display used during the riba mission. together with the thumb-operated finger mouse, the head-mounted display was more ergonomic during the mission to rivas than the tablet display and stylus were during this riba mission. however, the spatial resolution of the head-mounted display was somewhat less than the resolution of the tablet display. during the riba mission, we wanted to share and interpret the results interactively between the three investigators. this would have been much more difficult with the single-user head-mounted display than it was with the multi-viewer, higher-resolution tablet display. so we used the tablet display during the mission to riba, with the intention of switching to the head-mounted display later in the day.

the wearable computer processes the images acquired by the color digital video camera to compute a map of "interesting" areas. what the system determines as "interesting" is based on the "uncommon" maps (introduced in section 2.1). it is the relation between uncommon and common that eventually can tell the geological history of the outcrop. the computations use two-dimensional co-occurrence histogramming for image segmentation (haralick, shanmugan & dinstein 1973; haddon & boyce 1990). this image segmentation is independently computed for each of the hue, saturation, and intensity (h, s, i) image planes, resulting in three different image-segmentation maps. these image-segmentation maps are used to compute 'uncommon' maps (one for each of the three (h, s, i) image-segmentation maps): each of the three resulting uncommon maps gives highest weight to those regions of smallest area for the respective (h, s, i) image plane. finally, the three (h, s, i) uncommon maps are added together into an interest map, which is used by the cyborg system in order to determine the top three interest points to report to the human operator. the image-processing algorithms and robotic systems-control algorithms are all programmed using the graphical programming language neo/nst (ritter et al. 1992, 2002). using such a graphical programming language adds flexibility and ease-of-understanding to our cyborg astrobiologist project, which is by its nature largely a software project. we discuss some of the details of the software implementation in mcguire et al. (2004b).

after segmenting the mosaic image (see figure [explanation]), we use a very simple method to find interesting regions: first, computing the (h, s, i) uncommon maps; second, summing them together to form an interest map; and third, blurring this interest map. this smoothing kernel effectively gives more weight to clusters of uncommon pixels, rather than to isolated, rare pixels. based upon the three largest peaks in the blurred/smoothed interest map, the cyborg system then shows the human operator the locations of these three positions, overlaid on the original image. the human operator can then decide how to use this information, i.e., whether to ignore the information or to zoom in on one of the interest points. this step can be automated in future versions.
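the three-step recipe (sum, blur, pick peaks) might look as follows; the gaussian width and the peak-separation window are illustrative assumptions, since the original kernel parameters are not given here:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter

def top_interest_points(interest, sigma=5.0, k=3, window=11):
    """blur the summed interest map and return the k strongest local peaks;
    sigma and window are illustrative choices, not the system's values."""
    smooth = gaussian_filter(interest.astype(float), sigma)
    # a pixel is a local peak if it equals the maximum of its neighborhood
    peaks = smooth == maximum_filter(smooth, size=window)
    ys, xs = np.nonzero(peaks)
    best = np.argsort(smooth[ys, xs])[::-1][:k]
    return list(zip(ys[best], xs[best]))
```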
on the 8th of february, 2005, three of the authors (díaz martínez, ormö & mcguire) tested the 'cyborg astrobiologist' system for the second time at a geological site, with red-sandstone layers, near the village of riba de santiuste, north of sigüenza in the northern part of the province of guadalajara (spain).

[table [missionparameters]: a description of some of the parameters of the missions to rivas vaciamadrid and to riba de santiuste.]

these tests at riba are meant to be a complementary, confirmatory test of the methodology first tested in the spring of 2004 at rivas vaciamadrid. the site at riba may have more direct relevance as an analog for the terra meridiani site on mars, which the mars exploration rover opportunity is now exploring. riba has red sandstone beds, with some outcrops showing local chemical bleaching due to reduction and mobilization of iron, as well as precipitation of oxidized iron impurities. this process gives the sandstone both dark-red and white colored areas. within the red sandstone, there are concretions of dark-red oxidized iron, as well as some sites where concretion has been only partially complete. furthermore, the coloring of the red sandstones at riba is much more 'brilliant' and saturated than the unsaturated white coloring of the clay- & sulfate-bearing cliffs at rivas. the dichotomy and contrast of the colors at riba, between the oxidized red and the bleached white colors of the sandstones, certainly made the study of these 'red beds' into a relatively straightforward, but highly discriminating, test of the cyborg astrobiologist's current computer-vision capabilities. the rocks at the outcrops at riba are of triassic age (260-200 myr before present), and consist of sandstones, gravel beds, and paleosols. the rocks were originally deposited during the triassic in different layers by the changing depositional processes of a braided river system. this river system consisted of active channels with fast transport of sand grains and gravel.
during different millennia in the triassic, the river system shifted and evolved. therefore, in the example of the paleosol outcrop, the deposition was only of fine-grained silt, which later was affected by soil processes and formed the paleosol layers. the rock layers were folded by alpine tectonics in the cenozoic.

we arrived at the site at 12 noon on february 8, 2005, and in the next 30 minutes the roboticist quickly assembled the cyborg astrobiologist system and taught the two geologists how to use it. for the next 4 hours, the two geologists were in charge of the mission: deciding where to point the cyborg astrobiologist's camera, interpreting the results from the tablet display, and deciding how to use the cyborg astrobiologist's assessment of the interest points in the imagery. often the geologists chose one of the top three interest points, and then either zoomed in on that point with the camera's zoom lens, or walked with the camera and tripod to a position closer to the interest point, in order to get a closer look at it with the computer-vision system. the computer was worn on the geologist's belt, and typically took 30 seconds to acquire and process each image. the images were downsampled in both directions by a factor of 2 during these tests, and the central area was selected by cropping the image, so the final image dimensions were correspondingly reduced.

we chose a set of 7 tripod positions at 3 geological outcrops at riba de santiuste in order to test the cyborg astrobiologist system (see the map in figure [missionmap]). this is an improvement upon the number of tripod positions and outcrops studied in the first mission to rivas vaciamadrid in the spring of 2004 (see table 1). the first geological outcrop at riba de santiuste consisted of layered deposits of red sandstone. in several of the images (see figures 6a, 6b & 6d for examples), the wearable computer determined aspects of the white bleaching to be interesting. furthermore, in several of the images (see figures 6c & 6d for examples), the wearable computer found the concretions to be interesting. the second geological outcrop was partly covered by a crust of sulfate and carbonate minerals. at this outcrop, the computer-vision system found the rough texture of the white-colored mineral deposits to be interesting (see figure 7a).
at the third geological outcrop, we pointed the camera and computer-vision system at a paleosol. the computer-vision system of the wearable computer found the calcified root structures within the paleosols to be of most interest (see figure 7b). without the problems with shadow and texture that we encountered during our first mission at the white gypsum-bearing cliffs of rivas vaciamadrid, the system performed admirably. see figure 6b for an example of an image without shadow or texture. we show more details of the automated image processing of this image by the wearable computer in figure 5. this image contained an area with bleached domains (in white) in the red sandstone, as well as a few isolated iron-oxide concretions of dark-red color. processing this image by image segmentation and by uncommon-map construction was straightforward. the wearable computer reported to the geologists that the three most interesting points were two areas of the white bleached domain, and a third point in the lower part of the image which was a somewhat darker tone of red. the geologists also found the smaller dark-red concretions in the upper part of the image to be interesting. the wearable computer found these smaller dark-red concretions to be interesting as well, but only before the interest map was smoothed (or blurred), as summarized in figure 5. the wearable computer does not report the top three points from the unblurred interest map to the user. however, the full unblurred interest map, as well as the blurred interest map, are both displayable to the user in real time, and they are both stored to disk for post-mission analysis. we regard the fact that the wearable computer had interest in the dark-red concretions, even though it did not report this interest in its top three interest points, as a partial success. perhaps, if the camera had zoomed in by a factor of 10, it would have found these dark-red concretions to be interesting even in the blurred interest map. hence, these features would be noticed when further approaching the outcrop. even at this distance, the small size of the dark-red concretions was at the limits of the perception capabilities of the geologists themselves. nonetheless, the computer did inform the geologists that it had interest in the white bleached spots and the darker red region at the bottom of the image, which we regard as a higher level of success. in figures 6 & 7, we present a number of other images that were acquired and processed by the cyborg astrobiologist at riba de santiuste.
for each image, we show here the image segmentation for the saturation slice of the color image, since the saturation slice discriminated rather well between the red and white colors. we also show the final, blurred interest map, which was computed by summing the uncommon maps for hue, saturation & intensity as in figure 5, and then smoothing the resulting map. figure 6a shows the border between a large oxidized red-colored zone of the red bed and a large bleached white-colored zone of the red bed. the wearable computer found the texture in the white zone to be most interesting. figure 6c shows a highly-magnified view of some of the concretions in the red bed near tripod position 2. these concretions are not well-rounded like the "blueberries"; rather, they are more like the "popcorn" concretions observed by opportunity at meridiani planum. the computer-vision system in the wearable computer found two of the darker-red concretions on the left side of the image to be interesting, and one bright white area on the right side of figure 6c to be interesting. the discriminatory power of the image segmentation was not very clean at this high level of magnification. figure 6d shows a zoomed-out view of some concretions and a bleached white zone near tripod position 2. the computer-vision system found the area of the concretions to be most interesting, followed by the bleached area in the left side of the image. as shown in figure 6e, in the same part of the outcrop there were some quartzite rocks of green, gray and yellow colors. in an image containing these rocks, the computer-vision system found several of them to be more interesting than the surrounding red sandstone. likewise, at the second outcrop, we observed some highly-textured mineral deposits of whitish colors. in a zoomed-out view of these mineral deposits (figure 7a), the wearable computer found the texture of the mineral deposits to be more interesting than the more homogeneous red color of the neighboring sandstone. the wearable computer did not find the highly-textured region containing many dark holes to be as interesting as the white-textured mineral deposits. in an ideal computer-vision system, one of the top three interest points probably should have been within the region with all of the dark holes. these dark holes formed by a combination of physical (wind erosion) and chemical (differential cementation) processes. based partly upon the guidance of the wearable computer, we acquired several samples of the mineral deposits for analysis in the laboratory. based upon the geologists' experience and upon acid tests of these mineral deposits at the field site, we suspected that the mineral deposits consisted largely of a white textured overlayer of 15-20 mm of a sulfate, with a thin gray underlayer of 1-10 mm of calcite. the laboratory tests later confirmed that the overlayer is composed largely of hydrated calcium sulfate (gypsum).
at the third outcrop, the cyborg astrobiologist system studied a layer of paleosols, which are "affected" silty deposits from a fluvial plain. these paleosols contained partly-calcified root structures of plants that grew in this fluvial plain during the triassic. at tripod position 6, our computer-vision system found the white-colored calcified-root structures to be more interesting than the surrounding matrix of red siltstone (figure 7b). however, at tripod position 7, in a more complex image (not shown here) with more root structures and more diverse coloring, the computer-vision system did not find the root structures to be particularly interesting.

we have compared the cyborg astrobiologist's performance in picking the top 3 interest points with a human geologist's quasi-blind classification. geologist díaz martínez verbally noted the interesting parts of the image. then díaz martínez was shown the cyborg astrobiologist's top three interest points, and he judged how well the computer-vision system matched his own image analysis. we judged that there was concurrence between the human geologist and the cyborg astrobiologist on the interest points about 69 times (true positives), with 32 false positives and with 32 false negatives, for a total of 32 images studied. this was an average of 2.1 true positives (tp) per image, out of a possible (usually) 3 trials per image; an average of 1.0 false positives (fp) per image, out of a possible (usually) 3 trials per image; and an average of 1.0 false negatives (fn) per image, with no _a priori_ limitation on the number of false negatives per image. there was double counting of positives, since sometimes a localized physical feature corresponded to more than one interesting feature in the image (i.e., one image had some parallel lines at a geological contact, so the interest points corresponding to this region counted doubly). one way to look at this data is to take the ratios $tp/(tp+fn)$, $fp/(tp+fp)$ and $fn/(tp+fn)$. with our technique, it is difficult to assess the true-negative rate. for the data from riba, we compute a true positive rate of $69/101 \approx 68\%$, a false positive rate of $32/101 \approx 32\%$, and a false negative rate of $32/101 \approx 32\%$ (see the sketch after this paragraph). we did not attempt to compute the roc curves for these images as a function of the number of interest points computed for each image. a more careful analysis could be made with many more images from different field sites, but these numbers agree qualitatively with the results we were getting at rivas vaciamadrid, even though we have not computed the statistics there. qualitatively, we expect that for the types of imagery that we typically study, the cyborg astrobiologist will not do a near-perfect job (with geologist-concurrence analysis giving false-positive and false-negative rates both less than 10%), nor will it do a poor job (with either rate greater than 90%); it typically has mid-level false-positive and false-negative rates, i.e., a mid-level performance. of course, we would like to improve the system to do a near-perfect job, but considering the simplicity of the image-segmentation algorithm and the uncommon-mapping technique, we believe that this work can provide a good basis for further studies. one approach for improving the false positive rate is to include further filtering of the interest points that are currently being determined from the uncommon maps. that way, errant positives could be caught before they are reported to the human operator.
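the concurrence arithmetic above is easy to check directly; a minimal sketch using the rate definitions given in the text:

```python
def concurrence_rates(tp, fp, fn):
    """the three ratios defined above: tp/(tp+fn), fp/(tp+fp), fn/(tp+fn)."""
    return tp / (tp + fn), fp / (tp + fp), fn / (tp + fn)

# riba field data: 69 concurrences, 32 false positives, 32 false negatives
tpr, fpr, fnr = concurrence_rates(69, 32, 32)
print(f"tpr={tpr:.0%}  fpr={fpr:.0%}  fnr={fnr:.0%}")  # 68%, 32%, 32%
```

running this reproduces the 68%/32%/32% figures quoted here and in the abstract.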
right now in the field, the human operator serves as a decent filter of false positives, so fixing the false-positive rate is not so urgent; fixing the false-negative rate is somewhat more important. one approach for improving the false-negative rate would be to compute interest features in new and different ways (i.e., with edge detectors or parallel-line detectors), in order to ensure that all the interesting features are detected. both approaches could be attacked, for example, by using context-dependent geologist knowledge, which would need to be coded into a future, more advanced cyborg astrobiologist. currently, considering that we do not deploy such context-dependent geologist knowledge, our relatively-unbiased uncommon-mapping technique seems to be doing rather well.

we have shown that our cyborg astrobiologist exploration system performs reasonably well at a second geological field site. given similar performance at the first geological field site, we can have some degree of confidence in the general unbiased approach towards autonomous exploration that we are now using. we can also have sufficient confidence in our general technique that we can use our specific technique of image segmentation and uncommon mapping as a basis for further algorithm development and testing. in the near future, we plan to:

* upgrade our image-segmentation algorithm, in order to handle texture and color simultaneously (cf. freixenet et al. 2004). this upgrade may give us more capabilities to handle shadow as well.
* test the cyborg astrobiologist system at field sites with a microscopic imager. this will complete our mimicry of a stepwise approach by a human geologist towards an outcrop.

at riba de santiuste, our system has been tested for exploration on imagery at a site that is not unlike meridiani planum, where the mer opportunity is now exploring. riba de santiuste is similar to meridiani planum in its iron-oxide concretions and its sulfate mineral deposits. the bleached-white zones of the red sandstones at riba de santiuste may be an analog for similar geochemical or perhaps biological phenomena on mars (chan et al. 2004; ormö et al. 2004; hofmann 1990, 2004). the software behind the cyborg astrobiologist's computer-vision system has shown its initial capabilities for imagery of this type, as well as for imagery of some other types. this computer-vision software may one day be mature enough for transplanting into the computer-vision exploration system of a mars-bound orbiter, robot or astronaut.

p. mcguire, j. ormö and e. díaz martínez would all like to thank the ramón y cajal fellowship program of the spanish ministry of education and science. many colleagues have made this project possible through their technical assistance, administrative assistance, or scientific conversations. we give special thanks to kai neuffer, antonino giaquinta, fernando camps martínez, and alain lepinette malvitte for their technical support. we are indebted to gloria gallego, carmen gonzález, ramón fernández, coronel angel santamaría, and juan pérez mercader for their administrative support. we acknowledge conversations with beda hofmann, virginia souza-egipsy, maría paz zorzano mier, carmen córdoba jabonero, josefina torres redondo, víctor r. ruiz, irene schneider, carol stoker, jörg walter, claudia noelker, gunther heidemann, robert rae, jonathan lunine, ralph lorenz, goro komatsu, nick woolf, steve mojzsis, david p.
miller, bradley joliff, raymond arvidson, and daphne stoner. the field work by j. ormö was partially supported by grants from the spanish ministry of education and science (aya2003-01203 and cgl2004-03215). the equipment used in this work was purchased by grants to our center for astrobiology from its sponsoring research organizations, csic and inta.

compton, w. d. (1989). _where no man has gone before: a history of apollo lunar exploration missions_, chapters 13-14. based on nasa history series _sp-4214_. http://www.apolloexplorer.co.uk/default.asp?libsrc=/books/sp-4214/cover.htm

freixenet, j., muñoz, x., martí, j. & lladó, x. (2004). color texture segmentation by region-boundary cooperation. _computer vision - eccv 2004, eighth european conference on computer vision, proceedings, part ii, lecture notes in computer science_, springer, prague, czech republic, eds. tomás pajdla & jiří matas, *3022*, pp. 250-261. also available in the _cvonline_ archive: http://homepages.inf.ed.ac.uk/rbf/cvonline/local_copies/freixenet1/eccv04.html

hofmann, b. (2004). redox boundaries on mars as sites of microbial activity. _iv european workshop on exo/astrobiology_, held at the open university, milton keynes, united kingdom, abstract. http://physics.open.ac.uk/eana/talks/redox%20boundaries%20on%20mars%20as%20sites%20of%20microbial%20activity.pdf

mcguire, p. c., rodríguez-manfredi, j. a., sebastián-martínez, e., gómez-elvira, j., díaz-martínez, e., ormö, j., neuffer, k., giaquinta, a., camps-martínez, f., lepinette-malvitte, a., pérez-mercader, j., ritter, h., oesker, m., ontrup, j. & walter, j. (2004a). cyborg systems as platforms for computer-vision algorithm-development for astrobiology. _proceedings of the iii european workshop on exo/astrobiology_, held at the centro de astrobiología, madrid, _european space agency special publication, esa sp-545_, pp. 141-144. http://arxiv.org/abs/cs.cv/0401004

mcguire, p. c., ormö, j., díaz-martínez, e., rodríguez-manfredi, j. a., gómez-elvira, j., ritter, h., oesker, m., & ontrup, j. (2004b). the cyborg astrobiologist: first field experience. _international journal of astrobiology_, vol. 3, issue 3, pp. 189-207. http://arxiv.org/abs/cs.cv/0410071

ormö, j., komatsu, g., chan, m. a., beitler, b., and parry, w. t. (2004). geological features indicative of processes related to the hematite formation in meridiani planum and aram chaos, mars: a comparison with diagenetic hematite deposits in southern utah, usa. _icarus_ *171*, pp. 295-316.

scott, j. r., mcjunkin, t. r., tremblay, p. l. (2003). automated analysis of mass spectral data using fuzzy logic classification. _journal of the association for laboratory automation_ *8* (2), pp. 61-63.

scott, j. r. & tremblay, p. l. highly reproducible laser beam scanning device for an internal source laser desorption microprobe fourier transform mass spectrometer. _review of scientific instruments_ *73* (3), pp. 1108-1116.
the 'cyborg astrobiologist' has undergone a second geological field trial, at a site in northern guadalajara, spain, near riba de santiuste. the site at riba de santiuste is dominated by layered deposits of red sandstones. the cyborg astrobiologist is a wearable computer and video camera system that has demonstrated a capability to find uncommon interest points in geological imagery in real-time in the field. in this second field trial, the computer-vision system of the cyborg astrobiologist was tested at seven different tripod positions, on three different geological structures. the first geological structure was an outcrop of nearly homogeneous sandstone, which exhibits oxidized-iron impurities in red and an absence of these iron impurities in white. the white areas in these "red beds" have turned white because the iron has been removed. the iron removal from the sandstone can proceed once the iron has been chemically reduced, perhaps by a biological agent. the computer-vision system found in one instance several (iron-free) white spots to be uncommon and therefore interesting, as well as several small and dark nodules. the second geological structure was another outcrop some 600 meters to the east, with white, textured mineral deposits on the surface of the sandstone, at the bottom of the outcrop. the computer-vision system found these white, textured mineral deposits to be interesting. we acquired samples of the mineral deposits for geochemical analysis in the laboratory. this laboratory analysis of the crust identifies a double layer, consisting of an internal millimeter-size layering of calcite and an external centimeter-size efflorescence of gypsum. the third geological structure was a 50 cm thick paleosol layer, with fossilized root structures of some plants. the computer-vision system also found certain areas of these root structures to be interesting. a quasi-blind comparison of the cyborg astrobiologist's interest points for these images with the interest points determined afterwards by a human geologist shows that the cyborg astrobiologist concurred with the human geologist 68% of the time (true positive rate), with a 32% false positive rate and a 32% false negative rate. the performance of the cyborg astrobiologist's computer-vision system was by no means perfect, so there is plenty of room for improvement. however, these tests validate the image-segmentation and uncommon-mapping technique that we first employed at a different geological site (rivas vaciamadrid) with somewhat different properties of the imagery.

*keywords:* computer vision, robotics, image segmentation, uncommon map, interest map, field geology, mars, meridiani planum, wearable computers, co-occurrence histograms, red beds, red sandstones, nodules, concretions, reduction spheroids, triassic period.
the quadratic assignment problem ( qap ) is a combinatorial optimization problem first introduced by koopmans and beckmann ( 1957 ) . it is np - hard and is considered to be one of the most difficult problems to solve optimally . the problem was defined in the following context : a set of facilities are to be located at locations . the distance between locations and is and the quantity of materials which flow between facilities and is . the problem is to assign to each location a single facility so as to minimize the cost , where represents the location to which facility is assigned . it will be helpful to think of the facilities and the matrix of flows between them in graph theoretic terms as a graph of nodes and weighted edges , respectively . there is an extensive literature which addresses the qap ; it is reviewed in pardalos et al . ( 1994 ) , cela ( 1998 ) , anstreicher ( 2003 ) , loiola et al . ( 2007 ) , and james et al . ( 2009a ) . with the exception of specially constructed cases , optimal algorithms have solved only relatively small instances with . various heuristic approaches have been developed and applied to problems typically of size or less . one of the most successful heuristics to date for large instances is _ robust tabu search _ , rts ( taillard , 1991 ) . the use of tabu search for the quadratic assignment problem has been studied extensively ( drezner ( 2005 ) , hasegawa et al . ( 2000 ) , james et al . ( 2009a , 2009b ) , mcloughlin and cedeno ( 2005 ) , misevicius ( 2007 ) , misevicius and ostreika ( 2007 ) , skorin - kapov ( 1994 ) , and wang ( 2007 ) ) . some of the best available algorithms for the solution of the qap are the hybrid genetic algorithms that use tabu search as an improvement mechanism ( see drezner ( 2008 ) ) . here we will consider the robust tabu heuristic applied to _ sparse _ qap instances . that is , the number of non - zero entries in either the flow matrix and/or the distance matrix is of as opposed to . without loss of generality we will assume the flow matrix is sparse . many real world problems are sparse . in fact , this work was motivated by the study of random regular sparse graphs . these graphs are very robust to partitioning and collapse due to removal of nodes or edges . we are interested in the problem of determining how to assign the nodes of such a graph to locations in a metric space such that the total edge length of the graph is minimized ; this problem maps directly to a quadratic assignment problem . there has been some previous work on sparse quadratic assignment problems . milos and magirou ( 1995 ) developed a lagrangian - relaxation lower - bound algorithm for sparse problems and pardalos et al . ( 1997 ) developed a version of their grasp heuristic for sparse problems . however , to the best of our knowledge , an efficient implementation of the robust tabu heuristic for sparse qap instances has not been proposed . the tabu heuristic for the quadratic assignment problem consists of repeatedly swapping the locations of two nodes . a single iteration of the heuristic consists of * determining the move which most decreases the total cost . under certain conditions ( see section 4 ) , if a move which lowers the cost is not available , a move which raises the cost is made . so that cycles of the same moves are avoided , the same move is forbidden ( _ taboo _ ) until a specified later iteration ; we call this later iteration the _ eligible iteration _ for a given move . this eligible iteration is traditionally stored in a _ tabu list _ or _ tabu table _ .
* making the move . * recalculating the new cost of all moves . the process is repeated for a specified number of iterations . traditional implementations of robust tabu search require operations per iteration . the complexity of is achieved by maintaining a matrix containing the cost of swapping and for all and , given a current assignment . the complexity of each step above is as follows : * - all possible moves are considered . the cost of each move is retrieved from . * - the locations of the two swapped nodes are simply transposed . * - based on the following observations of taillard ( 1991 ) : + following taillard ( 1991 ) , starting from an assignment of facilities , let the resulting assignment after swapping facilities and be . that is : for a symmetrical matrix with a null diagonal , the cost of swapping and is : + to calculate for any and with complexity o(n) , we can use equation ( [ eqfull ] ) . for asymmetric matrices or matrices with non - null diagonals , a slightly more complicated version of equation ( [ eqfull ] ) , also of complexity o(n) , is given by burkard and rendl ( 1984 ) . + to calculate in the case that the swapped facilities and are different from or , we use the value calculated in the previous iteration and find : * * the cost of moves which do not involve the two nodes in the previous move can be calculated in time . there are of these moves . * * the cost of moves which do involve the two nodes in the previous move must be calculated from scratch . there are of these moves and the complexity of calculating each is . to reduce the complexity of step ( a ) , instead of scanning all possible moves , we use multiple _ priority queues _ ( pqs ) to determine the best move . a priority queue is a data structure for maintaining a set of elements each of which has an associated value ( priority ) . a pq supports the following operations : * insert an item * remove an item * return the item with the highest value . priority queues are used to efficiently find an item with the highest value without searching through all of the items . the maximum complexity of pq operations is . we will see below that there will be insertions and deletions in the pqs for each iteration , so the asymptotic complexity of this step is reduced to . furthermore , we will show that for problems of any practical size , pq operations are not the determinant of total complexity . the complexity to recalculate the cost of moves in step ( c ) can be reduced to as follows : * as in the traditional robust tabu implementation , the cost of moves which do not involve the two nodes in the previous move can be calculated in time . on average , there are nodes which are connected to the two nodes in the previous move , where is the average degree ( average number of nodes adjacent to a given node ) of the graph corresponding to the flow matrix .
for each of these nodes we must calculate the cost of possible moves . thus , the cost is . * the cost of moves which do involve the two nodes in the previous move must be calculated from scratch . there are of these moves and the complexity of calculating each is , since the cost of a node , , being in a specific location depends only on the on - average nodes adjacent to . thus the complexity of step ( c ) is reduced to . to describe our implementation , we must first describe the rules for determining the next move in taillard 's robust tabu heuristic ( taillard , 1991 ) . the following definitions for the possible _ state _ of a potential move are useful : * if the current iteration is less than or equal to the eligible iteration , the move is _ ineligible _ . * if the current iteration is greater than the eligible iteration , the move is _ authorized _ . * if the current iteration minus an _ aspiration constant _ is greater than the eligible iteration , the move is _ aspired _ . the rules for determining the next move can then be stated as ( taillard , 1991 ) : * if a move which decreases the lowest total cost found so far is available , the move which most decreases this total cost is chosen , independent of whether the move is ineligible , authorized or aspired . * if no move meets criterion ( 1 ) , the aspired move , if one is available , which most decreases the current total cost is chosen . * if no moves meet criteria ( 1 ) or ( 2 ) , the lowest cost authorized move is chosen . to implement these rules for sparse problems , we use two types of pqs : _ delta _ pqs , which contain the cost delta for a given move , and _ tabu _ pqs , which contain entries ordered by the eligible iteration for the move . the tabu pqs control the change of state of a move . the delta pqs determine the lowest cost move in each state . five pqs are used : * _ ineligible tabu pq _ - this pq contains moves , ordered by eligible iteration , which are in the ineligible state . this pq allows us to efficiently determine when the state of a move can be changed to authorized . * _ authorized tabu pq _ - this pq contains moves , ordered by eligible iteration , which are in the authorized state . this pq allows us to efficiently determine when the state of a move can be changed to aspired . * _ ineligible delta pq _ - this pq contains moves , ordered by the cost of the move , which are in the ineligible state . this pq together with the two other delta pqs allows for efficient determination of the overall lowest cost move as required by rule 1 . * _ aspired delta pq _ - this pq contains moves , ordered by the cost of the move , which are in the aspired state . this pq allows for efficient determination of the lowest cost aspired move as required by rule 2 . * _ authorized delta pq _ - this pq contains moves , ordered by the cost of the move , which are in the authorized state . this pq allows us to determine the lowest cost authorized move as needed by rule 3 . as illustrated in fig . [ pqs ] , moves are inserted and removed in the pqs under the following circumstances : * at initialization all moves are inserted into the ineligible pqs . * at the beginning of each iteration , any moves on the ineligible pqs which become authorized , because the iteration has increased by one , are moved from the ineligible pqs to the authorized pqs . * at the beginning of each iteration , any moves on the authorized pqs which become aspired , because the iteration has increased by one , are moved to the aspired pq .
* after each move , any move for which the move cost or eligible iteration has changed is removed from the pq in which it is present and inserted in the appropriate pq based on move state and move cost . ( however , see the lazy update discussion below . ) using these pqs we obtain _ exactly _ the same results as the traditional robust tabu implementation . we minimize the time spent updating pqs by performing _ lazy updates _ . after a change in the eligible iteration or move cost , if the state is changed or the value is increased , we update the pqs involved ; otherwise , we perform a lazy update : we store the value in a data structure associated with the move and only do the update in the pq when and if the move becomes the move with the smallest value in the pq . this use of lazy updates significantly decreases the time spent on pq operations ( a short code sketch of this mechanism is given below , after the timing results ) . [ figures [ ratio ] , [ comb ] and [ tk ] appear here . ] we test our algorithm on instances with locations on a square grid with a euclidean metric . the flow matrix for the facilities corresponds to an adjacency matrix for a - regular random graph ; that is , each facility has flows of value one to other random facilities . we run both the traditional non - sparse implementation and our new sparse implementation . the non - sparse implementation is the code of taillard ( 1991 ) . to implement the priority queues in the sparse implementation , we use the complete balanced tree code of marin and cordero ( 1995 ) . we perform a minimum of iterations for each instance . for , priority queue update operations consume just of the total time . assuming a behavior for pq operations , it is not until becomes astronomically large that the pq operations would take the majority of the time and the behavior of our implementation crosses over from to complexity . we thus expect complexity for our sparse implementation for any practical sized problems . in fig . [ ratio ] , for , we plot the ratio of the time per iteration of the non - sparse implementation to the time per iteration of the sparse implementation versus . as expected , the plot is consistent with where . this reflects the reduction in complexity from to . in fig . [ comb ] we plot separately the time per iteration for the original and the sparse implementations . consistent with fig . [ ratio ] , the slopes on the log - log plot differ by 1 , but the slopes are 2.5 and 1.5 , respectively , as opposed to the theoretical values of 2.0 and 1.0 . as explained in paul ( 2007 ) and saavedra and smith ( 1995 ) , this is due to the finite size of processor cache memory ; as the problem size ( and memory needed ) increases , there is a smaller percentage of cache hits , causing slower operation .
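returning to the lazy - update mechanism described above : the following is a minimal python sketch ( our own illustration with hypothetical names , not the code used for the experiments ) of one delta pq with lazy updates , in which stale heap entries are simply skipped when they surface at the top :

import heapq

class LazyDeltaPQ:
    # delta pq with lazy updates : stale heap entries are discarded on demand
    def __init__(self):
        self.heap = []       # heap entries are (delta, move) pairs , possibly stale
        self.current = {}    # move -> latest delta ( the authoritative value )

    def update(self, move, delta):
        self.current[move] = delta
        heapq.heappush(self.heap, (delta, move))   # any old entry becomes stale

    def remove(self, move):
        self.current.pop(move, None)   # the heap entry is left behind and skipped later

    def best(self):
        # pop entries whose stored delta no longer matches the current value
        while self.heap:
            delta, move = self.heap[0]
            if self.current.get(move) == delta:
                return move, delta
            heapq.heappop(self.heap)
        return None

pq = LazyDeltaPQ()
pq.update((1, 2), -5.0)
pq.update((3, 4), -7.0)
pq.update((3, 4), 2.0)   # lazy update : the stale -7.0 entry stays in the heap
print(pq.best())         # -> ((1, 2), -5.0) ; the stale entry is skipped

each update and query is amortized logarithmic in the number of stored moves , consistent with the pq cost quoted above .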
in fig . [ tk ] we plot the time per iteration versus values of the degree . the plot is linear with a slight deviation with increasing . this deviation from linear is due to the fact that as increases there is an increasing chance that a node will be connected to both nodes involved in the previous move ; however , the updated costs of moving node must be calculated only once . we also performed numerical experiments on a class of problems known in the literature ( see drezner et al . ( 2005 ) ) . these problems , denoted as drexx , are sparse and are designed to be very difficult to solve with heuristics . we obtained results similar to those for described above ; performance is o(n) . for sparse quadratic assignment problems , we reduce the asymptotic complexity per iteration of robust tabu search to from ; for practical size problems , the complexity is reduced to . central to achieving this reduction is the use of multiple priority queues and lazy updates to these queues . the code which implements our approach and the test qap instances used for this paper are available as supplementary material . we thank jia shao for helpful discussions and the defense threat reduction agency ( dtra ) for support . anstreicher , k. , 2003 . recent advances in the solution of quadratic assignment problems . mathematical programming 97 , 27 - 42 . drezner , z. , hahn , p. m. , taillard , e. d. , 2005 . recent advances for the quadratic assignment problem with special emphasis on instances that are difficult for meta - heuristic methods . annals of operations research 139 , 65 - 94 . drezner , z. and marcoulides , g. , 2009 . on the range of tabu tenure in solving quadratic assignment problems . in : recent advances in computing and management information systems , 157 - 169 , p. petratos and g. a. marcoulides ( eds ) , atiner . james , t. , rego , c. , glover , f. , 2009a . multistart tabu search and diversification strategies for the quadratic assignment problem . ieee transactions on systems , man , and cybernetics part a : systems and humans 39 , 579 - 596 . mcloughlin , j. f. , cedeno , w. , 2005 . the enhanced evolutionary tabu search and its application to the quadratic assignment problem . gecco 2005 : genetic and evolutionary computation conference 1 , 975 - 982 . pardalos , p. m. , rendl , f. , wolkowicz , h. , 1994 . the quadratic assignment problem : a survey and recent developments . in : pardalos , p. m. , wolkowicz , h. ( eds . ) , quadratic assignment and related problems . dimacs series on discrete mathematics and theoretical computer science 16 , amer . math . soc . , baltimore , md , 1 - 42 . pardalos , p. m. , pitsoulis , l. s. , resende , m. g. c. , 1997 . algorithm 769 : fortran subroutines for approximate solution of sparse quadratic assignment problems using grasp . acm transactions on mathematical software ( toms ) 23 , 196 - 208 .
we propose and develop an efficient implementation of the robust tabu search heuristic for sparse quadratic assignment problems . the traditional implementation of the heuristic applicable to all quadratic assignment problems is of complexity per iteration for problems of size . using multiple priority queues to determine the next best move instead of scanning all possible moves , and using adjacency lists to minimize the operations needed to determine the cost of moves , we reduce the asymptotic ( ) complexity per iteration to . for practical sized problems , the complexity is . combinatorial optimization , computing science , heuristics , tabu search
algebras of hilbert space operators may be mapped onto algebras of ordinary functions on linear spaces , with an associative but non - commutative _ star product _ ( see , e.g. ) . the images of the hilbert space operators are called _ operator symbols _ . weyl maps , s - ordered operator symbols , their particular cases and tomograms are examples of this correspondence between hilbert space operator algebras and function algebras . in the case of tomograms , the operator symbols of the density operators of quantum mechanics are families of ordinary probability distributions . a unified framework for operator symbols is presented in sect . 2 and their main properties are reviewed . many of the results in sects . 2 and 3 are scattered in previous publications and are collected here to make the paper reasonably self - contained . finite dimensional systems ( spin tomograms ) are studied in sect . these operator symbols are then proposed as a framework for quantum information problems . qudit states are identified with maps of the unitary group into the simplex . the image of the unitary group on the simplex provides a geometrical characterization of the nature of the quantum states . in the remaining sections , generalized measurements , typical quantum channels , entropies and entropy inequalities are discussed in this setting . in quantum mechanics , observables are selfadjoint operators acting on the hilbert space of states . we map operators onto functions in a vector space in the following way : given the hilbert space and a trace - class operator acting on this space , let $\hat{u}(\mathbf{x})$ be a family of operators on this space , labelled by vectors $\mathbf{x}$ . we construct the c - number function ( and call it the _ symbol of the operator _ ) by $f_{a}(\mathbf{x})=\mathrm{tr}\,[\,\hat{a}\,\hat{u}(\mathbf{x})\,]$ ( [ eq.a1 ] ) . let us suppose that relation ( [ eq.a1 ] ) has an inverse , i.e. , there is a set of operators $\hat{d}(\mathbf{x})$ acting on the hilbert space such that $\hat{a}=\int f_{a}(\mathbf{x})\,\hat{d}(\mathbf{x})\,d\mathbf{x}$ ( [ eq.a2 ] ) , so that equations ( [ eq.a1 ] ) and ( [ eq.a2 ] ) define an invertible map from the operator onto the function . multiplying both sides of eq . ( [ eq.a2 ] ) by the operator $\hat{u}(\mathbf{x}^{\prime})$ and taking the trace , one obtains a consistency condition for the operators : $\mathrm{tr}\,[\,\hat{d}(\mathbf{x})\,\hat{u}(\mathbf{x}^{\prime})\,]=\delta\left(\mathbf{x}-\mathbf{x}^{\prime}\right)$ . for two functions $f_{a}$ and $f_{b}$ , corresponding to two operators $\hat{a}$ and $\hat{b}$ , a star - product is defined by $\left(f_{a}\star f_{b}\right)(\mathbf{x})=\mathrm{tr}\,[\,\hat{a}\,\hat{b}\,\hat{u}(\mathbf{x})\,]$ ( [ eq.a5 ] ) . since the standard product of operators on a hilbert space is associative , eq . ( [ eq.a5 ] ) also defines an associative product for the functions , i.e. , $(f_{a}\star f_{b})\star f_{c}=f_{a}\star(f_{b}\star f_{c})$ . let us suppose that there is another map , analogous to the one in ( [ eq.a1 ] ) and ( [ eq.a2 ] ) , defined by the operator families $\hat{u}_{1}(\mathbf{y})$ and $\hat{d}_{1}(\mathbf{y})$ . then one has ( [ eq.a21 ] ) and the inverse relation . the function will be related to the function by ( [ eq.a23 ] ) , with the inverse relation ( [ eq.a24 ] ) . the functions corresponding to different maps are connected by the invertible integral transform given by eqs . ( [ eq.a23 ] ) and ( [ eq.a24 ] ) with the intertwining kernels ( [ eq.a28aa ] ) and ( [ eq.a28bb ] ) . using formulae ( [ eq.a1 ] ) and ( [ eq.a2 ] ) , one writes a composition rule for two symbols and their star - product ( [ eq.a25 ] ) . the kernel in ( [ eq.a25 ] ) is determined by the trace of the product of the operators used to construct the map : $k(\mathbf{x}_{1},\mathbf{x}_{2},\mathbf{x})=\mathrm{tr}\,[\,\hat{d}(\mathbf{x}_{1})\,\hat{d}(\mathbf{x}_{2})\,\hat{u}(\mathbf{x})\,]$ ( [ eq.a26 ] ) . equation ( [ eq.a26 ] ) can be extended to the case of the star - product of symbols of operators with kernel ( [ eq.a26'' ] ) .
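as a minimal finite - dimensional sketch of this quantizer - dequantizer scheme ( a python / numpy illustration of our own ; the pauli - basis choice and all names are assumptions , not notation used above ) , one can take the dequantizer family to be the pauli matrices and the quantizer family the same matrices divided by two , and check the reconstruction and the star - product kernel numerically :

import numpy as np

# pauli basis ( including the identity ) ; tr( sigma_j sigma_k ) = 2 delta_jk
sigma = [np.eye(2, dtype=complex),
         np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]

def symbol(a):
    # dequantizer u(k) = sigma_k : f_a(k) = tr[ a u(k) ]
    return np.array([np.trace(a @ s) for s in sigma])

def operator(f):
    # quantizer d(k) = sigma_k / 2 : a = sum_k f_a(k) d(k)
    return sum(fk * s / 2 for fk, s in zip(f, sigma))

def star(f, g):
    # composition kernel k(i, j, k) = tr[ d(i) d(j) u(k) ]
    out = np.zeros(4, dtype=complex)
    for k in range(4):
        out[k] = sum(f[i] * g[j] * np.trace((sigma[i] / 2) @ (sigma[j] / 2) @ sigma[k])
                     for i in range(4) for j in range(4))
    return out

a = np.array([[1, 2], [3, 4]], dtype=complex)
b = np.array([[0, 1j], [-1j, 2]], dtype=complex)
assert np.allclose(operator(symbol(a)), a)                     # the map is invertible
assert np.allclose(star(symbol(a), symbol(b)), symbol(a @ b))  # star - product = symbol of product

the final assertion verifies that the star - product computed through the trace kernel coincides with the symbol of the operator product , i.e. , the associative non - commutative product described above .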
the trace of an operator is determined by a multiple integral of the symbols over $d\mathbf{x}_{1}\,d\mathbf{x}_{2}\cdots\,d\mathbf{x}_{n}$ . consider now a linear superoperator acting in the linear space of operators . the map of operators induces a corresponding map of their symbols ; the integral form of this map is determined by the kernel ( [ ar3 ] ) . as operator family , we take the fourier transform of the displacement operator , where is a complex number , , and the vector may be interpreted as , with and being the position and momentum . one sees that . the displacement operator ( creating coherent states from the vacuum ) may be expressed through creation and annihilation operators in the form . the operator and its hermitian conjugate satisfy the boson commutation relation $[\hat{a},\hat{a}^{\dagger}]=\hat{\mathbf{1}}$ , meaning that the symbol of the identity operator equals . the operator is obtained from . this means that , for s - ordered symbols , the operator in the general formula ( [ eq.a2 ] ) takes the form . if is a density operator , for the values of the parameters , the corresponding symbols are respectively the wigner , glauber - sudarshan and husimi quasidistributions . for the explicit form of the kernel for the product of operator symbols we refer to . density operators may be mapped onto probability distribution functions ( tomograms ) of one random variable and two real parameters and . this map has been used to provide a formulation of quantum mechanics , in which quantum states are described by a parametrized family of probability distributions , alternative to the description of the states by wave functions or density operators . the tomographic map has been used to reconstruct the quantum state , to obtain the wigner function by measuring the state tomogram , to define quantum characteristic exponents and for the simulation of nonstationary quantum systems . here we discuss the tomographic map as an example of the general operator symbol framework . the operator is mapped onto the function ( [ eq.53 ] ) , where , which we denote as , depending on the coordinate and the reference frame parameters and . the function is the symbol of the operator . the operator is , where and are position and momentum operators and the angle and parameter are related to the reference frame parameters by . moreover , and is a projection density . one has the canonical transform of quadratures ; using the approach of , one obtains the relation . in the case we are considering , the inverse transform determining the operator in terms of the tomogram symbol will be of the form , where , i.e. , the unitary displacement operator in ( [ eq.56q ] ) now reads , where , with and . the trace of the above operator provides the kernel determining the trace of an arbitrary operator in the tomographic representation . the operators and are creation and annihilation operators . the function satisfies the relation ; the tomographic symbol reads . if one takes two operators and , the tomographic symbol of the product is the star - product , that is , with kernel given by ( [ eq.59 ] ) . the explicit form of the kernel is given in ( [ kernel ] ) , and the kernel for the star - product of operators in ( [ kernelstar ] ) . of particular importance for quantum information purposes are finite - dimensional spin systems ( qubits , qutrits , etc . ) . therefore , we describe here the tomographic operator symbols for spin systems . further details may be obtained from refs . .
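before the general - spin formulas , a small numerical sketch may help fix ideas ( python / numpy ; the euler - angle parametrization , the sample density matrix and all names are our own choices ) : for a single qubit the spin tomogram is the diagonal of the rotated density matrix , and for every reference frame it is an ordinary probability distribution over the spin projections :

import numpy as np

def wigner_d_half(alpha, beta, gamma):
    # rotation matrix for j = 1/2 ( z - y - z euler convention assumed )
    rz = lambda t: np.diag([np.exp(-1j * t / 2), np.exp(1j * t / 2)])
    ry = np.array([[np.cos(beta / 2), -np.sin(beta / 2)],
                   [np.sin(beta / 2),  np.cos(beta / 2)]], dtype=complex)
    return rz(alpha) @ ry @ rz(gamma)

def tomogram(rho, u):
    # w(m, u) = <m| u rho u^dagger |m> : the diagonal in the rotated frame
    return np.real(np.diag(u @ rho @ u.conj().T))

rho = np.array([[0.75, 0.25], [0.25, 0.25]], dtype=complex)  # a valid qubit state
rng = np.random.default_rng(0)
for _ in range(3):
    u = wigner_d_half(*rng.uniform(0, 2 * np.pi, size=3))
    w = tomogram(rho, u)
    assert np.all(w >= -1e-12) and np.isclose(w.sum(), 1.0)  # a probability distribution
    print(w)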
in this case , the physical interpretation of the symbol is as the set of measurable mean values of the operator in a state with a given spin projection in a rotated reference frame . to set the notation , we describe here some standard operators used to discuss the properties of spin states . for arbitrary values of spin , let the observable be represented by a matrix in the standard basis of angular momentum generators , , as . the spin projector onto the component along the z - axis is denoted , and the same projector in a reference frame rotated by an element of is , being a rotation operator of the irreducible representation with spin . since the projectors play an important role in constructing the tomographic map , we present several different expressions for these operators . the projector can be given an alternative form in terms of the dirac delta - function , and similarly for the rotated projector , or , in integral form , ( [ a05 ] ) . another form of the rotated projector is . the matrix elements ( wigner d - functions ) are the matrix elements of the operator of the group representation ( is an element of the group parametrized by euler angles ) . the matrix elements have the explicit form , with . it is convenient to introduce the irreducible tensor operator for the group . the irreducible tensors have the properties ( see ) . in terms of the irreducible tensors , the operator is expressed as follows : . this means that the irreducible tensors are a basis for the linear space of operators acting on the hilbert space of the irreducible representation . the tomogram symbol of the observable is $$w(m_{1},\alpha,\beta,\gamma)=\sum_{m_{1}^{\prime}=-j}^{j}\,\sum_{m_{2}^{\prime}=-j}^{j}\,d_{m_{1}m_{1}^{\prime}}^{(j)}(\alpha,\beta,\gamma)\,a_{m_{1}^{\prime}m_{2}^{\prime}}^{(j)}\,d_{m_{1}m_{2}^{\prime}}^{(j)*}(\alpha,\beta,\gamma)\,. \quad ( [ equation05 ] )$$ in view of ( [ equation05 ] ) , the tomogram depends only on two euler angles , i.e. , the tomogram depends on the spin projection and on a point on the bloch sphere . the tomogram can be presented in another form using a kronecker delta - function ( [ new1 ] ) , which is the general form for tomograms of arbitrary observables suggested in . it is obvious that the tomogram of the identity operator is the unit . to derive the inverse of ( [ equation05 ] ) , we multiply by the wigner d - function and integrate over the volume element of the group , i.e. , , where the known property of the wigner d - functions was used . in view of the symmetry relations and properties of the clebsch - gordan coefficients , we have that . using the orthonormality property of the clebsch - gordan coefficients we have . multiplying this equation by and summing over the indexes and , we arrive at the result . using eqs . ( [ eq.2 ] ) and ( [ equation16 ] ) we can write the observable operator in terms of unitary irreducible tensors as follows : . substituting into ( [ equation24 ] ) , in view of the orthonormality of the clebsch - gordan coefficients , we obtain the observable in terms of its tomogram ( [ equation25 ] ) . the density operator can be expanded in terms of irreducible tensors ( [ tlm ] ) as follows : . one can express the operators determining the star - product of tomographic symbols in terms of irreducible tensors .
by comparing the formulas defining the generic symbol of operators ( [ eq.a1 ] ) and its inverse ( [ eq.a2 ] ) with the formulae defining the observable tomogram ( [ new1 ] ) and its inverse ( [ equation25 ] ) , one can find the operators explicitly . the operators can be expressed as in ( [ neweq37 ] ) and ( [ neweq38 ] ) . using formulae ( [ neweq37 ] ) and ( [ neweq38 ] ) , one can write down a composition rule for two symbols , determining the star - product of these symbols . the composition rule is ( [ eq.25 ] ) ; the kernel in the integral of ( [ eq.25 ] ) is the trace of the product of the operators used to construct the map ( [ eq.26 ] ) . within this framework , according to ( [ a01 ] ) , ( [ a02 ] ) and ( [ a04 ] ) , one has two equivalent expressions for the operator , or , due to the structure of this equation , the dual operator reads , where is given in eq . ( [ tlm ] ) . inserting the expressions for the operators in ( [ eq.26 ] ) and using the properties of irreducible tensors ( [ a ] ) and ( [ b ] ) , one obtains an explicit form for the kernel of the spin star - product . one can extend the construction by introducing a unitary spin tomogram of the multiqudit state with density matrix . for this , one uses the joint probability distribution , where is a unitary operator in the hilbert space of multiqudit states . for a simple qudit state , the tomogram unitary symbol is , where is a matrix . since it is possible to reconstruct the density matrix using only spin tomograms , the unitary spin tomogram also determines the density matrix completely . one can integrate in eq . ( [ equation29 ] ) the unitary spin tomogram using the haar measure instead of , adding the delta - function term . this construction means that the spin quantum state is defined by a map of the unitary group to the simplex . the following are the properties of the unitary spin tomograms of multiqudit systems : ( i ) normalization ; ( ii ) group normalization : from the haar measure on the unitary group divided by the group volume , one obtains the measure with ; this property then follows from the orthogonality condition for matrix elements of unitary matrices as elements of an irreducible representation of a compact group . another property is , where the tomogram is a tomogram for the subsystem density matrix . an analogous unitary group integration property follows from the relation , yielding that corresponds to . the unitary spin symbol ( [ a86 ] ) defines , for each density matrix , a mapping from the unitary group , , to a - dimensional simplex . the nature of the image of on the simplex depends on the nature of the density matrix . the unitary spin symbol image of on the simplex for most density matrices ( with at least two different eigenvalues ) has dimension . for pure states , it is the whole simplex , and for mixed states a volume bounded by the hyperplanes ( [ m1 ] ) , where are the eigenvalues of the density matrix . _ proof _ : there is a such that is diagonal . then , by another , if is a pure state , only one . then , that is , all points in the simplex are obtained . therefore , for a pure state , the unitary tomographic symbol maps the unitary group onto the whole simplex . to obtain the dimensionality of the image for a general ( mixed ) state , we consider the elementary transformations . consider these elementary transformations acting on the diagonalized matrix : does not change the diagonal elements , and both and have a similar action . a general infinitesimal transformation would be , and the dimension of the simplex image of is the rank of the jacobian .
if has at least two different eigenvalues , the rank is , this being the dimension of the simplex image . the hyperplanes ( [ m1 ] ) bounding this simplex volume follow from the convex nature of the eigenvalues linear combination ( [ m2 ] ) . the situation where all eigenvalues are equal is exceptional , the image being a point in this case . figure 1 shows an example for a mixed state of a two - qubit system , when . for a bipartite system of dimension , the distinction between factorized and entangled states refers to the behavior under transformations of the factorized group . we call a state _ factorized _ if the density matrix is , and _ classically correlated _ if , with . the simplex symbol image under of a generic factorized or classically correlated state has dimension . _ proof _ : for a classically correlated state , if , it is not , in general , possible to find an element of diagonalizing . therefore , one has to consider the action of the elementary unitary transformations ( [ m3 ] ) on a general matrix . does not change the diagonal elements , whereas the action for is and for is . for generic matrices , and operate independently ; therefore , infinitesimal transformations explore independent directions . generalization to classically correlated multipartite systems is immediate , implying that the image dimension under is . as an example , we compute explicitly the equation for the two - dimensional surface image in the two - qubit case for a factorized state . in this case , one has to consider mappings from to the simplex . let . here , without losing generality , may be considered as diagonal . then is , hence is , implying . figure 2 shows this two - dimensional surface in the 3 - dimensional simplex . for a pure state , this would be the image of the group . for a mixed state , the image is the intersection of the surface with the spanned volume , as in fig . 1 . theorem 4.2 suggests a notion of _ geometric correlation _ , namely , a state of a multipartite system is called geometrically correlated if the symbol image under has dimension less than . deviations from geometrical genericity occur when the systems are entangled or the density matrix has special symmetry properties . as an example , consider the entangled state . a simple computation shows that the image under is defined by , implying that the image is one - dimensional ( figure 3 ) . however , the dimension reduction of the image of does not coincide with the notion of entanglement . as an example , consider the werner state , which is known to be entangled only for . in this case , because of the highly symmetric nature of the state , the orbit of the tomographic symbol is the same both for and , namely , implying that the image is always one - dimensional . incidentally , the peres separability criterion applied to the partial transpose , expressed in tomographic operator symbols , would be : the state is entangled when there is an for which this identity is violated . in the standard quantum formulation , measurements are realized by von neumann `` instruments '' which are orthogonal projectors onto eigenstates of the variables being measured . the projectors applied to the pure state yield , or , in terms of the density matrix , one has the result . for a mixed state , the measurement provides the state density operator after measurement . generalized measurements use positive operator - valued measures ( povm ) , that is , positive operators with the property , the index being either discrete or continuous .
in the latter case , one has an integration in ( [ ai5a ] ) . within the framework of operator symbols and star - products , instead of ( [ ai4a ] ) , we have after the measurement a symbol for the density operator of the state , being the symbol of the density operator of the state being measured and the symbol of the instrument . in the tomographic probability representation , the result of measurements is described by a map of the probability distributions , namely , , with the kernel of the star - product of tomograms given by eq . ( [ kernel ] ) . for the case of spin ( or unitary spin ) tomograms , the linear map of tomographic - probability distributions is realized by the formula , the kernel of the star - product being given by eq . ( [ eq.26a ] ) . one sees that , in the probability representation , the process of measurement , both with von neumann instruments and with povm , is described by a map of points in the simplex . in the standard representation of quantum mechanics , states ( state vectors or density operators ) of a closed system evolve according to a unitary change , or . this evolution is a solution to the schrödinger or von neumann equations ( [ v3 ] ) , being the hamiltonian of the system . the evolution can be cast into operator symbol form . let be the symbol of an operator . we do not specify at the moment what kind of symbols are used , considering them as generic ones with quantizer - dequantizer pair , . then , the operator equation ( [ v3 ] ) for the density operator reads ( [ v4 ] ) . we denote the symbol of the density operator by . the solution of eq . ( [ v4 ] ) has a form corresponding to ( [ v2 ] ) . one can rewrite the solution ( [ v2 ] ) as a superoperator acting in a linear space of operators , namely , ( [ v5 ] ) . in matrix form , eq . ( [ v6 ] ) reads . for unitary evolution , the superoperator is expressed in terms of the unitary matrix as a tensor product . one rewrites the solution ( [ v5 ] ) for the symbol introducing the propagator . for the unitary evolution ( [ v2 ] ) , the propagator reads , where the kernels under the integral are given by eq . ( [ eq.26 ] ) . in the case of superoperators describing the evolution of an open system , the propagator reads . the propagator corresponds to a superoperator , which in matrix form reads . for the case of continuous variables and symplectic tomograms , eq . ( [ v4 ] ) takes the form of a deformed boltzmann equation for the probability distribution . for unitary spin tomograms , one has the unitary evolution matrix being determined by a hamiltonian matrix . this means that the unitary spin tomogram , a function on the unitary group , evolves according to the regular representation of the unitary group . this means that the partial differential equation for the infinitesimal action is the standard equation for matrix elements of the regular representation , that is , , being the hamiltonian hermitian matrix and the infinitesimal hermitian first - order differential operators of the left regular representation of the unitary group in the chosen group parametrization . in this section , we consider the unitary spin representation of some typical quantum channels . consider bit flip , phase flip and both with equal probability . the kraus representation is . for the unitary spin symbol representation , choosing the axis , one has and , with an arbitrary unitary group element one obtains . when , the image in the simplex contracts to a segment between and .
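these channel actions are easy to reproduce numerically . the following python / numpy sketch is our own reading of ` bit flip , phase flip and both with equal probability ' ( the explicit kraus matrices were lost in extraction , so the channel below is a standard reconstruction and all names are assumptions ) ; it applies the kraus map and prints the tomogram in a fixed frame :

import numpy as np

x = np.array([[0, 1], [1, 0]], dtype=complex)
y = np.array([[0, -1j], [1j, 0]], dtype=complex)
z = np.array([[1, 0], [0, -1]], dtype=complex)
eye = np.eye(2, dtype=complex)

def kraus_channel(rho, kraus):
    # epsilon(rho) = sum_k k_k rho k_k^dagger
    return sum(k @ rho @ k.conj().T for k in kraus)

def flip_channel(p):
    # bit flip , phase flip and bit - phase flip , each with probability p / 3
    return [np.sqrt(1 - p) * eye, np.sqrt(p / 3) * x,
            np.sqrt(p / 3) * y, np.sqrt(p / 3) * z]

rho = np.array([[1, 0], [0, 0]], dtype=complex)   # pure excited state
for p in (0.0, 0.5, 0.75):
    out = kraus_channel(rho, flip_channel(p))
    assert np.isclose(np.trace(out).real, 1.0)     # the map is trace preserving
    print(p, np.real(np.diag(out)))                # tomogram in the computational frame

at p = 0.75 the channel is completely depolarizing and the tomogram becomes ( 0.5 , 0.5 ) for every frame , i.e. , the image in the simplex contracts to a single point , as described below .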
the kraus representation is , with . for the unitary spin symbol representation , consider the example ; then , and when , the image in the simplex contracts to a point . the kraus representation is , with . for the unitary spin representation , consider an excited initial state ; then , when varies from to , the image in the simplex first contracts to a point and then expands again to the whole simplex when . the operator symbols , being functions on the rotation or unitary groups , are highly redundant descriptions of qudit states . as expected from the number of independent parameters in the density matrix , also here numbers are enough to characterize a qudit . this is easy to check . consider the operator symbol ( [ a86 ] ) for an arbitrary density matrix . a general may be diagonalized by of independent unitary transformations and this , together with the independent diagonal elements , gives the desired result . alternatively , we may consider independent elements of the unitary group and compute the associated operator symbols . then , the qudit state would be described by their diagonal elements . therefore , a discrete quantum state ( qudit ) is coded by probability distributions . for each in the group , the elements in the operator symbol are the probabilities to obtain the values in a measurement of the quantum state by an apparatus oriented along . therefore , the problem of reconstructing the state from the set of operator symbol elements is identical to the reconstruction of the density matrix of a spin through stern - gerlach experiments , already discussed in the literature . the tomographic operator symbols satisfy ; therefore , they are probability distributions . one defines the _ operator symbol entropy _ by and the _ operator symbol rényi entropies _ by . likewise , we may define the _ operator symbol relative q - entropy _ by , with . because the operator symbols are probability distributions , they inherit all the known properties of nonnegativity , additivity , joint convexity , etc . of classical information theory . the relation of the operator symbol entropies to the von neumann and the quantum rényi entropies is given by the following : the von neumann and the quantum rényi entropies are the minimum on the unitary group of and . _ proof _ : from , there is a such that is a diagonal matrix . then , for any other , the diagonal elements in ( [ tt1 ] ) are convex linear combinations of . by convexity , the result follows for the von neumann entropy . the operator symbol rényi entropy is not a sum of concave functions . however , the following function is ; it is called the tsallis entropy and is related to the rényi entropy by . the minimum result now applies to by concavity , and then one checks from ( [ tt2 ] ) that it also holds for . therefore coincides with the quantum rényi entropy . the entropy varies from the minimum , which is the von neumann entropy , to a maximum for the most random distribution . for each given state , one can also define the _ integral entropies _ , where is the invariant haar measure on the unitary group .
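the minimum property stated above can be probed by monte carlo . a short python / numpy sketch ( the sample state , the qr - based haar sampling and all names are our own choices ) shows the tomographic entropy , minimized over random unitaries , approaching the von neumann entropy from above :

import numpy as np

def random_unitary(n, rng):
    # haar random unitary via qr decomposition of a complex gaussian matrix
    zm = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    q, r = np.linalg.qr(zm)
    d = np.diag(r)
    return q * (d / np.abs(d))   # fix the column phases

def shannon(p):
    p = p[p > 1e-12]
    return -np.sum(p * np.log(p))

rho = np.array([[0.85, 0.1], [0.1, 0.15]])
von_neumann = shannon(np.linalg.eigvalsh(rho))

rng = np.random.default_rng(1)
entropies = [shannon(np.real(np.diag(u @ rho @ u.conj().T)))
             for u in (random_unitary(2, rng) for _ in range(5000))]
print(min(entropies), von_neumann)   # the minimum over u approaches s(rho) from above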
in some cases , the properties of the von neumann entropy may be derived as simple consequences of the classical - like properties of the operator symbol entropies . for example : * subadditivity : * consider a two - partite system with density matrix , being a unitary matrix . from the reduced symbols and density matrices , one writes the reduced symbol entropies . for each fixed , the tomographic symbols are ordinary probability distributions . therefore , by the subadditivity of the classical entropy , . in particular , this is true for the group element that diagonalizes the reduced density matrices and . therefore , but , by the minimum property , . thus subadditivity for the von neumann entropy is a consequence of subadditivity for the operator symbol entropies . the situation concerning strong subadditivity is different . strong subadditivity also holds , of course , for the operator symbol entropies for any , but the corresponding relation for the von neumann entropy is not a direct consequence of ( [ tt3 ] ) . the strong subadditivity for the von neumann entropy expressed in operator symbol entropies would be , where are the different group elements that diagonalize the respective subspaces . therefore strong subadditivity for the von neumann entropy ( [ tt4 ] ) and strong subadditivity for the operator symbol entropies ( [ tt3 ] ) are independent properties . on the other hand , because of the invertible relation ( [ equation29 ] ) between the operator symbols and the density matrix , eq . ( [ tt3 ] ) contains in fact a family of new inequalities for functionals of the density matrix . to conclude , we summarize the main results of this work : ( iii ) evolution equations for qudit operator symbols are written in the form of first - order partial differential equations with generators describing the left regular representation of the unitary group . ( vi ) in view of the probability nature of the operator symbols , the corresponding entropies inherit the properties of classical information theory . some of the properties of the von neumann entropy and quantum rényi entropy are direct consequences of these properties . on the other hand , the properties of the operator symbol entropies also imply new relations for functionals of the density matrix .
hilbert space operators may be mapped onto a space of ordinary functions ( operator symbols ) equipped with an associative ( but noncommutative ) star - product . a unified framework for such maps is reviewed . because of its clear probabilistic interpretation , a particular class of operator symbols ( tomograms ) is proposed as a framework for quantum information problems . qudit states are identified with maps of the unitary group into the simplex . the image of the unitary group on the simplex provides a geometrical characterization of the nature of the quantum states . generalized measurements , typical quantum channels , entropies and entropy inequalities are discussed in this setting .
consensus of multi - agent systems ( mass ) is a fundamental problem in collective behaviors of autonomous individuals , which has been extensively studied in the last decade . the key problem for consensus is to design appropriate distributed protocols such that each agent can only get information from its local neighbors , and the whole network of agents may coordinate to reach an agreement on certain quantities of interest eventually . most of the previous research analyzed and solved the consensus problem based on the time - domain state - space model . other studies utilized the nyquist stability criterion in the frequency domain . one contribution of this paper is to provide new insights into the consensus problem by exploring the recent development of graph signal processing in the spatial frequency domain . graph signal processing has drawn great interest for analyzing high - dimensional data in recent years . there are two different frameworks for graph signal processing . one develops the discrete signal processing structure and concepts on graphs based on the jordan normal form and generalized eigenbasis of the adjacency matrix . the other establishes the framework by merging algebraic and spectral graph concepts with classical signal processing , interpreting the graph laplacian eigenvalues as graph frequencies and the orthonormal eigenvectors as the graph fourier transform basis . in this paper , we will first reveal the explicit connection between filtering of graph signals and consensus of mass . it is shown that the mas can reach its average consensus in finite time by designing an appropriate protocol filter to keep only the low frequency component of the graph ( corresponding to the zero eigenvalue of the graph laplacian matrix ) while suppressing the other higher frequency components . the designed graph filter can be implemented by a distributed consensus protocol derived from the closed - loop property of the mas and the laplacian matrix of the network graph . viewing mass from the graph signal processing perspective not only provides new insights , it also presents a new methodology to solve some challenging problems in mass on uncertain networks . for mass with an estimated laplacian matrix , we show that the graph signal processing perspective can help to design distributed consensus protocol gains as well as estimate the asymptotic consensus error , which is difficult to analyze by the existing time - domain state - space model based methods . for mass with a completely unknown laplacian matrix , except assuming the connectivity and the maximum degree of the network graph , we provide a new design method for the consensus protocol gain . the consensus error bound is also presented . numerical examples are given to demonstrate the effectiveness of the proposed methods . let be an undirected graph of order ( ) which consists of an agent set , an edge set , and an adjacency matrix $a = [ a_{ij } ] \in \re^{n \times n}$ , whose entry is nonzero if and only if the corresponding pair of agents is connected . consider a graph ; its laplacian matrix can be written as $l = d - a$ , where the degree matrix $d \in \re^{n \times n}$ is diagonal . similarly to classical signal processing , the fourier transform in graph signal processing is defined on the graph spectra as , and the inverse graph fourier transform is given by , where $[ \hat{x}_1 , \ldots , \hat{x}_n ]^t \in \re^n$ . let $[ \hat{y}_1 , \ldots , \hat{y}_n ]^t \in \re^n$ be the filtered graph signal in the spatial frequency domain ; then the graph spectral filtering can be defined as .
taking the inverse graph fourier transform , the filtered graph signal in the time domain can be obtained as , where , and is the adjacency matrix of the graph . * definition 1 . * for a control protocol within time given by ( [ eq7 ] ) , the corresponding protocol filter is defined as . the average consensus of the multi - agent system ( [ eq6 ] ) under protocol ( [ eq7 ] ) is said to be achieved asymptotically if , . and the average consensus is said to be reached at time if , , hold for all . from ( [ eq6 ] ) and ( [ eq7 ] ) , the dynamics of the multi - agent system can be calculated as . let the initial state and the current state of the mas be the collective signal values of the original graph signal and the filtered one , respectively . then , the mas plays the role of a filter for the graph signal , with and , and the transfer function of the filter is as defined in definition 1 . the following result gives the property of the protocol filter achieving average consensus . * theorem 1 . * for the mas ( [ eq6 ] ) on a connected graph , of which the laplacian matrix has eigenvalues , assume the consensus protocol is in the form of ( [ eq7 ] ) . then the mas reaches average consensus at time if and only if the corresponding protocol filter defined by ( [ eq07 ] ) satisfies and for . the proof of theorem 1 is omitted due to space limitations . it is easy to verify that the consensus state is . theorem 1 shows that the protocol filter can be viewed as a low - pass filter with zeros at the high frequency components . it follows from theorem 1 that the mas with agents on a connected graph can definitely reach the average consensus at time by properly choosing the control gain . the following corollary shows that the consensus time can be smaller than . * corollary 1 . * for the mas ( [ eq6 ] ) on a connected graph with distinct nonzero eigenvalues ( ) , take the control gains as . then the consensus protocol ( [ eq7 ] ) makes the mas reach average consensus at time , that is , . * remark . * the fact that finite time consensus can be achieved by choosing the control gains equal to the reciprocals of the nonzero laplacian eigenvalues is not new . it has been obtained by using different methods , for example , the matrix factorization method and the minimal polynomial method . by defining the protocol filter , theorem 1 derives the consensus result from a graph signal processing perspective . in the next section , we will show that this method is a powerful tool to solve the consensus of mass on uncertain networks , which is difficult ( sometimes impossible ) to deal with by existing methods .
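before moving on , corollary 1 is easy to check numerically . a minimal python / numpy sketch ( the 4 - node path graph and all variable names are our own choices , not taken from the paper ) : applying one step per nonzero laplacian eigenvalue , with gain equal to its reciprocal , drives every agent exactly to the average of the initial states :

import numpy as np

# unweighted path graph on 4 nodes ; its laplacian l = d - a has 3 distinct nonzero eigenvalues
a = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
lap = np.diag(a.sum(axis=1)) - a

nonzero = [lam for lam in np.linalg.eigvalsh(lap) if lam > 1e-9]

x = np.array([3.0, -1.0, 4.0, 2.0])    # initial agent states
for lam in nonzero:
    x = x - (1.0 / lam) * (lap @ x)    # one step with gain 1 / lambda_k
print(x, x.mean())                     # all entries equal the average , here 2.0

each step annihilates one spectral component of the laplacian , so the product of the steps leaves only the zero - frequency ( average ) component , exactly as theorem 1 requires of the protocol filter .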
then the mas reaches average consensus asymptotically if and , where is defined in ( [ eq11 ] ) . moreover , at time , the consensus error satisfies . _ proof outline : _ the corresponding protocol filter at each control period can be written as . then the consensus error at time can be calculated by . it is easy to see that since . thus , the mas ( [ eq6 ] ) can reach the average consensus asymptotically by the protocol ( [ eq7 ] ) . [ fig . 1 : ( a ) - ( c ) trajectories of the agent states for the three simulation times . ] [ fig . 2 : ( a ) - ( c ) consensus square errors for the three simulation times . ] _ example 1 : _ consider the mas on a connected graph with agents , where the laplacian matrix is given as , being the estimated laplacian matrix corresponding to an unweighted cycle network and the estimation error bound . from theorem 2 , the control gain in one control period can be designed as for . let the simulation times be , respectively ; then the evolution of the agent states and the consensus error are shown in fig . 1 and fig . 2 . it can be seen that the mas ( [ eq6 ] ) under the designed protocol ( [ eq7 ] ) reaches average consensus asymptotically , and the consensus errors at each are , , and . consider a connected graph with unknown network topology ; the methods proposed in the previous sections are not applicable to the average consensus problem in this case . assume the maximum degree of the graph is given , i.e. , . then , divide the corresponding eigenvalue interval into uniform subintervals , and the consensus error can be calculated as . it is easy to see that since . then the mas reaches the average consensus asymptotically under the protocol ( [ eq7 ] ) . assuming the lower bound of the algebraic connectivity satisfies , a more accurate upper bound of the protocol filter can be derived as . then the consensus error at time satisfies . it follows from theorem 3 that the algebraic connectivity of the graph plays an important role in reaching consensus , with higher algebraic connectivity corresponding to better consensus performance and a lower consensus error at each control period . _ example 2 : _ consider a mas with agents on two different graphs : one is an unweighted cycle , the other is an unweighted path . the maximum degrees of the two graphs are the same , i.e. , . divide the interval into uniform intervals ; then and . from theorem 3 , the control gain in one period can be derived as , . for the mas ( [ eq6 ] ) on the two graphs , respectively , the protocol ( [ eq7 ] ) with the designed periodic control gain can solve the average consensus asymptotically , as shown in fig . 3 . moreover , it is easy to verify that the algebraic connectivity of the cycle is higher than that of the path ; thus the consensus performance on the cycle is much better than on the path , and the consensus errors of the two graphs are shown in fig . 4 . [ fig . 3 : ( a ) agent state trajectory of the unweighted cycle ; ( b ) agent state trajectory of the unweighted path . ] [ fig . 4 : ( a ) consensus error of the unweighted cycle ; ( b ) consensus error of the unweighted path . ] this paper has established the explicit connection between filtering of graph signals and consensus of mass . it has been shown that the mas can reach its average consensus in finite time by designing an appropriate protocol filter , which can be implemented by a distributed consensus protocol .
by using the concept of the protocol filter , we provide new methods to solve the average consensus problem in the cases of an estimated laplacian matrix and an unknown network topology . the asymptotic consensus error has been analyzed in both cases . while the protocol filter is defined for mass on undirected graphs , it can be easily extended to directed graphs . we only consider mass consisting of agents with first - order dynamics ; it is interesting to extend our methods to mass with high - order dynamics .
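as a closing illustration of the unknown - topology design discussed above , a theorem 3 - style periodic gain schedule can be explored numerically . the following python / numpy sketch is our own illustration ( the 10 - node cycle , the uniform grid over the eigenvalue range and all names are assumptions , not the paper 's exact schedule ) :

import numpy as np

n, delta = 10, 2                 # unweighted cycle : every node has degree 2
a = np.zeros((n, n))
for i in range(n):
    a[i, (i + 1) % n] = a[i, (i - 1) % n] = 1.0
lap = np.diag(a.sum(axis=1)) - a

n_gains = 4                      # number of gains per control period ( our choice )
grid = 2.0 * delta * np.arange(1, n_gains + 1) / n_gains   # uniform grid over (0, 2 * delta]
gains = 1.0 / grid               # only the maximum degree is assumed known

x = np.arange(n, dtype=float)
avg = x.mean()
for period in range(5):
    for c in gains:
        x = x - c * (lap @ x)    # one control period sweeps all gains
    print(period, np.linalg.norm(x - avg))   # the consensus error shrinks each period

the per - period contraction factor is governed by the smallest nonzero laplacian eigenvalue , i.e. , the algebraic connectivity , consistent with the discussion above .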
this paper revisits the problem of multi - agent consensus from a graph signal processing perspective . by defining the graph filter from the consensus protocol , we establish the direct relation between average consensus of multi - agent systems and filtering of graph signals . this relation not only provides new insights of the average consensus , it also turns out to be a powerful tool to design effective consensus protocols for uncertain networks , which is difficult to deal with by existing time - domain methods . in this paper , we consider two cases , one is uncertain networks modeled by an estimated laplacian matrix and a fixed eigenvalue bound , the other is connected graphs with unknown topology . the consensus protocols are designed for both cases based on the protocol filter . several numerical examples are given to demonstrate the effectiveness of our methods . multi - agent system , graph signal processing , average consensus
There are several practical scenarios where it is inappropriate to assume that the distribution of the observations does not change. For example, financial datasets can exhibit alternating behaviours due to crisis periods; in this case it is sensible to assume changes in the underlying distribution. The change in the distribution can be either in the value of one or more of the parameters or, more generally, in the family of the distribution. In the latter case, one may deem it appropriate to consider a normal density for the stagnation periods, while a Student-t, with relatively heavy tails, may be more suitable to represent observations in the more turbulent data of a crisis. The task of identifying if, and when, one or more changes have occurred is not trivial and requires appropriate methods to avoid detecting a large number of changes or, at the opposite extreme, seeing no changes at all. Whilst the literature covering change point analysis from a Bayesian perspective is vast when prior distributions are elicited, the literature referring to analysis under minimal prior information (i.e. objective Bayes) is very limited. In fact, to the best of our knowledge, only two papers have tackled the problem from an objective point of view: and . The former discusses the single change point problem in a model selection setting, whilst the latter, which is an extension of the former, tackles the multivariate change point problem in the context of linear regression models. This work aims to contribute to the methodology for change point analysis under the assumption that the information about the number of change points and their location is minimal. First, we discuss the definition of an objective prior for the change point location, both for single and multiple changes, assuming the number of changes is known a priori. Then, we define a prior on the number of change points via a model selection approach. Here, we assume that a change point coincides with one of the observations; as such, given $n$ data points, the change point location is discrete. To the best of our knowledge, the sole objective approach to defining prior distributions on discrete spaces is the one introduced by . To illustrate the idea, consider a sampling distribution indexed by a discrete parameter $\theta$. The prior is obtained by objectively measuring what is lost if a value $\theta$ is removed from the parameter space when it is the true value. According to , if a model is misspecified, the posterior distribution asymptotically accumulates on the model which is the most similar to the true one, where similarity is measured in terms of the Kullback-Leibler (KL) divergence. Therefore, the minimum divergence from the removed model to its nearest alternative represents the utility of keeping $\theta$. The objective prior is then obtained by linking this utility via the self-information loss:
$$\pi(\theta)\propto\exp\left\lbrace \min_{\theta'\neq\theta} d_{kl}\big(f(\cdot|\theta)\,\|\,f(\cdot|\theta')\big)\right\rbrace - 1,$$
where the Kullback-Leibler divergence from the sampling distribution with density $f$ to the one with density $g$ is defined as:
$$d_{kl}(f\|g)=\int f(x)\ln\left[\frac{f(x)}{g(x)}\right]\mathrm{d}x. \label{kldef}$$
Throughout the paper, the objective prior defined above will be referred to as the loss-based prior. This approach is used to define an objective prior distribution when the number of change points is known a priori. To obtain a prior distribution for the number of change points, we adopt a model selection approach based on the results in , where a method to define an objective prior on the space of models is proposed.
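To make the construction concrete, the following minimal sketch (ours, not taken from the paper) computes the loss-based prior on a toy discrete parameter space; it assumes a Poisson family indexed by an integer rate, for which the Kullback-Leibler divergence has a closed form:

```python
import numpy as np

def kl_poisson(lam, mu):
    # closed-form KL divergence between Poisson(lam) and Poisson(mu)
    return lam * np.log(lam / mu) + mu - lam

def loss_based_prior(support):
    # u(theta) = min over theta' != theta of KL(f_theta || f_theta')
    u = np.array([min(kl_poisson(t, s) for s in support if s != t)
                  for t in support])
    w = np.exp(u) - 1.0            # self-information loss link
    return w / w.sum()             # normalise to a proper prior

support = np.arange(1.0, 11.0)     # hypothetical discrete parameter space
print(dict(zip(support, np.round(loss_based_prior(support), 4))))
```

The prior places more mass on values that would be costly to remove, i.e. those poorly approximated by the remaining models.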
To illustrate, let us consider $k$ Bayesian models, where $f_j(x|\theta_j)$ is the sampling density characterised by $\theta_j$ and $\pi_j(\theta_j)$ represents the prior on the model parameter. Assuming the prior on the model parameter, $\pi_j(\theta_j)$, is proper, the model prior probability is proportional to the exponential of the expected minimum Kullback-Leibler divergence from $m_j$, where the expectation is taken with respect to $\pi_j$. That is:
$$p(m_j)\propto\exp\left\lbrace \mathbb{e}_{\pi_j}\left[\min_{i\neq j}\inf_{\theta_i} d_{kl}\big(f_j(\cdot|\theta_j)\,\|\,f_i(\cdot|\theta_i)\big)\right]\right\rbrace \qquad j=1,\ldots,k. \label{villamodelcompactorg}$$
The model prior probabilities defined in equation \eqref{villamodelcompactorg} can be employed to derive the model posterior probabilities through:
$$p(m_j|\mathbf{x})=\left[\sum_{i=1}^{k}\frac{p(m_i)}{p(m_j)}\,b_{ij}\right]^{-1}, \label{modelposterior}$$
where $b_{ij}$ is the Bayes factor between model $m_i$ and model $m_j$, defined as the ratio of the corresponding marginal likelihoods, with $b_{jj}=1$. The paper is structured as follows. In section [sc_locations] we establish the way we set objective priors on both single and multiple change point locations. Section [sc_numbercp] shows how we define the model prior probabilities for the number of change points. Illustrations of the model selection exercise are provided in sections [sc_simulation] and [sc_realdata], where we work with simulated and real data, respectively. Section [conclusion] is dedicated to final remarks.
This section is devoted to the derivation of the loss-based prior when the number of change points is known a priori. Specifically, let $k$ be the number of change points and $m_1,\ldots,m_k$ their locations. We introduce the idea in the simple case where we assume that there is only one change point in the dataset (section [sub_sc_onechgpoint]); then, we extend the results to the more general case where multiple change points are assumed (section [sub_sc_multiplechgpoints]). Here, we show that the loss-based prior for the single change point case coincides with the discrete uniform over the set $\{1,\ldots,n-1\}$. Let $\mathbf{x}=(x_1,\ldots,x_n)$ denote an $n$-dimensional vector of random variables, representing the random sample, and let $m$ be the single change point location, that is,
$$x_1,\ldots,x_m\,|\,\tilde\theta_1 \overset{\text{i.i.d.}}{\sim} f_1(\cdot|\tilde\theta_1), \qquad x_{m+1},\ldots,x_n\,|\,\tilde\theta_2 \overset{\text{i.i.d.}}{\sim} f_2(\cdot|\tilde\theta_2). \label{singlechangepointcase}$$
Note that we assume that there is a change point in the series; as such, the space of $m$ does not include the case $m=n$. In addition, we assume that $\tilde\theta_1\neq\tilde\theta_2$ when $f_1=f_2$. The sampling density for the vector of observations is:
$$f(\mathbf{x}|m,\tilde\theta_1,\tilde\theta_2)=\prod_{i=1}^{m}f_1(x_i|\tilde\theta_1)\prod_{i=m+1}^{n}f_2(x_i|\tilde\theta_2).$$
Let $m'\neq m$. Then, the Kullback-Leibler divergence between the model parametrised by $m$ and the one parametrised by $m'$ reduces to the observations whose density differs under the two models. Without loss of generality, consider $m'>m$. In this case, the two joint densities differ only for $x_{m+1},\ldots,x_{m'}$, and on the right-hand side we recognise the Kullback-Leibler divergence from density $f_2$ to density $f_1$, thus getting:
$$d_{kl}\big(f(\cdot|m,\tilde\theta_1,\tilde\theta_2)\,\|\,f(\cdot|m',\tilde\theta_1,\tilde\theta_2)\big)=(m'-m)\,d_{kl}\big(f_2(\cdot|\tilde\theta_2)\,\|\,f_1(\cdot|\tilde\theta_1)\big).$$
In a similar fashion, when $m'<m$, we have:
$$d_{kl}\big(f(\cdot|m,\tilde\theta_1,\tilde\theta_2)\,\|\,f(\cdot|m',\tilde\theta_1,\tilde\theta_2)\big)=(m-m')\,d_{kl}\big(f_1(\cdot|\tilde\theta_1)\,\|\,f_2(\cdot|\tilde\theta_2)\big).$$
In this single change point scenario, we can consider $m'$ as a perturbation of the change point location, that is $m'=m\pm l$, where $l\in\{1,2,\ldots\}$ is such that $1\le m'\le n-1$. Then, taking into account the two equations above, the Kullback-Leibler divergence becomes $l\,d_{kl}(f_2\|f_1)$ or $l\,d_{kl}(f_1\|f_2)$, and
$$\min_{m'\neq m} d_{kl}\big(f(\cdot|m)\,\|\,f(\cdot|m')\big)=\min\big\lbrace d_{kl}(f_2(\cdot|\tilde\theta_2)\|f_1(\cdot|\tilde\theta_1)),\,d_{kl}(f_1(\cdot|\tilde\theta_1)\|f_2(\cdot|\tilde\theta_2))\big\rbrace\cdot\underbrace{\min_{m'\neq m}\{l\}}_{1}. \label{kl_no_m}$$
We observe that equation \eqref{kl_no_m} is only a function of $\tilde\theta_1$ and $\tilde\theta_2$ and does not depend on $m$. Thus, the loss incurred in removing any location is constant and, therefore, the loss-based prior is the discrete uniform $\pi(m)=1/(n-1)$ for $m=1,\ldots,n-1$. This prior was, for example, used in an econometric context by , with the rationale of giving equal weight to every possible change point location. In this section, we address the change point problem in its generality by assuming that there are $k$ change points. In particular, for the data $\mathbf{x}=(x_1,\ldots,x_n)$, we consider the sampling distribution $f(\mathbf{x}|\mathbf{m},\bm{\tilde\theta})$, where $\mathbf{m}=(m_1,\ldots,m_k)$, with $m_1<\cdots<m_k$, is the vector of the change point locations and $\bm{\tilde\theta}=(\tilde\theta_1,\ldots,\tilde\theta_{k+1})$ is the vector of the parameters of the underlying probability distributions. Schematically:
$$\begin{array}{rcl}
x_1,\ldots,x_{m_1}\,|\,\tilde\theta_1 &\overset{\text{i.i.d.}}{\sim}& f_1(\cdot|\tilde\theta_1)\\
x_{m_1+1},\ldots,x_{m_2}\,|\,\tilde\theta_2 &\overset{\text{i.i.d.}}{\sim}& f_2(\cdot|\tilde\theta_2)\\
\vdots & & \vdots\\
x_{m_{k-1}+1},\ldots,x_{m_k}\,|\,\tilde\theta_k &\overset{\text{i.i.d.}}{\sim}& f_k(\cdot|\tilde\theta_k)\\
x_{m_k+1},\ldots,x_n\,|\,\tilde\theta_{k+1} &\overset{\text{i.i.d.}}{\sim}& f_{k+1}(\cdot|\tilde\theta_{k+1}).
\end{array}$$
If $f_j=f_{j+1}$, then it is reasonable to assume $\tilde\theta_j\neq\tilde\theta_{j+1}$. In a similar fashion to the single change point case, we cannot assume $m_k=n$, since we require exactly $k$ change points. In this case, due to the multivariate nature of the vector $\mathbf{m}$, the derivation of the loss-based prior is not as straightforward as in the one-dimensional case. In fact, the derivation of the prior is based on heuristic considerations supported by theorem [lemmaonechgpoint] below (the proof of which is in the appendix). In particular, we are able to prove an analogue of the single change point equations when only one component is arbitrarily perturbed. Let us define the following functions:
$$d_j^{+1}(\bm{\tilde\theta})=d_{kl}\big(f_{j+1}(\cdot|\tilde\theta_{j+1})\,\|\,f_j(\cdot|\tilde\theta_j)\big), \qquad d_j^{-1}(\bm{\tilde\theta})=d_{kl}\big(f_j(\cdot|\tilde\theta_j)\,\|\,f_{j+1}(\cdot|\tilde\theta_{j+1})\big),$$
where $j=1,\ldots,k$. The following theorem is useful to understand the behaviour of the loss-based prior in the general case.
Let $f(\mathbf{x}|\mathbf{m},\bm{\tilde\theta})$ be the sampling distribution defined above and consider $\mathbf{m}'\neq\mathbf{m}$. Let $\mathbf{m}'$ be such that $m'_i=m_i$ for $i\neq j$, and let the component $m'_j$ be such that $m_{j-1}<m'_j<m_{j+1}$ and $m'_j\neq m_j$. Therefore,
$$\min_{\mathbf{m}'\neq\mathbf{m}} d_{kl}\big(f(\cdot|\mathbf{m},\bm{\tilde\theta})\,\|\,f(\cdot|\mathbf{m}',\bm{\tilde\theta})\big)=\min\big\lbrace d_j^{+1}(\bm{\tilde\theta}),\,d_j^{-1}(\bm{\tilde\theta})\big\rbrace,$$
where the minimum is attained for $|m'_j-m_j|=1$. [lemmaonechgpoint]
Note that theorem [lemmaonechgpoint] states that the minimum Kullback-Leibler divergence is achieved when $m'_j=m_j+1$ or $m'_j=m_j-1$. This result is not surprising, since the Kullback-Leibler divergence measures the degree of similarity between two distributions: the smaller the perturbation caused by changes in one of the components, the smaller the divergence between the two distributions. If we now consider the general case of $k$ change points, it is straightforward to see that the Kullback-Leibler divergence is minimised when only one of the components of the vector $\mathbf{m}$ is perturbed by (plus or minus) one unit. As such, the loss-based prior depends on the vector of parameters $\bm{\tilde\theta}$ only, as in the one-dimensional case, yielding the uniform prior for $\mathbf{m}$. Therefore, the loss-based prior on the multivariate change point location is
$$\pi(\mathbf{m})=\binom{n-1}{k}^{-1},$$
where $1\le m_1<\cdots<m_k\le n-1$. The denominator has the above form because, for every number of change points $k$, we are interested in the number of $k$-subsets of a set of $n-1$ elements, which is $\binom{n-1}{k}$. The same prior was also derived in a different way by . Here, we approach the change point analysis as a model selection problem.
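Before moving to the model selection formulation, a quick sanity check of the combinatorics above (an illustrative sketch of ours, not part of the paper): one can enumerate all admissible change point configurations and verify that the uniform prior assigns mass $1/\binom{n-1}{k}$ to each.

```python
from itertools import combinations
from math import comb

n, k = 10, 2                                  # toy sample size and number of change points
configs = list(combinations(range(1, n), k))  # 1 <= m_1 < ... < m_k <= n-1
assert len(configs) == comb(n - 1, k)
print(len(configs), 1.0 / len(configs))       # 36 configurations, each with prior 1/36
```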
In particular, we define an objective prior on the space of models, where each model represents a certain number of change points (including the case of no change points). The method adopted to define an objective prior on the space of models is the one introduced in . We proceed as follows. Assume we have to select among $k+1$ possible models. Let $m_0$ be the model with no change points, $m_1$ the model with one change point, and so on; generalising, model $m_j$ corresponds to the model with $j$ change points. The idea is that the current model encompasses the change point locations of the previous model. As an example, in model $m_3$ the first two change point locations will be the same as in the case of model $m_2$. To illustrate the way we envision our models, we have provided figure [modelconstruct]. Keeping in line with the notation used in the introductory section, for the model $m_j$, with $j$ a non-negative integer, the model parameter $\theta_j$ is equivalent to the vector $(\tilde\theta_1,\ldots,\tilde\theta_{j+1},m_1,\ldots,m_j)$. Here, $\tilde\theta_1,\ldots,\tilde\theta_{j+1}$ represent the parameters of the underlying sampling distributions considered under model $m_j$, and $m_1,\ldots,m_j$ are the respective change point locations, as in figure [modelconstruct]. Based on the way we have specified our models, which are in direct correspondence with the number of change points and their locations, we state theorem [remark_model] (the proof of which is in the appendix).
For any two models $m_j$ and $m_{j'}$, the Kullback-Leibler divergences $d_{kl}(m_j\|m_{j'})$ and $d_{kl}(m_{j'}\|m_j)$ decompose into sums of terms of the form $(m_{i+1}-m_i)\,d_{kl}(f_a\|f_b)$ between consecutive underlying sampling densities; the explicit expressions, derived in the appendix for the cases used in this paper, constitute theorem [remark_model].
The result in theorem [remark_model] is useful when the model selection exercise is implemented; indeed, the approach requires the computation of the Kullback-Leibler divergences appearing in it. Recalling equation \eqref{villamodelcompactorg}, the objective priors on the model space are given by:
$$p(m_j)\propto\exp\left\lbrace \mathbb{e}_{\pi_j}\left[\min_{j'\neq j}\inf_{\theta_{j'}} d_{kl}\big(f(\cdot|m_j)\,\|\,f(\cdot|m_{j'})\big)\right]\right\rbrace \qquad j=0,1,\ldots,k. \label{modelpriors}$$
For illustrative purposes, in the appendix we derive the model prior probabilities to perform model selection among $m_0$, $m_1$ and $m_2$.
* Remark. * In the case where the changes in the underlying sampling distribution are limited to the parameter values, the model prior probabilities defined in \eqref{modelpriors} follow the uniform distribution, that is $p(m_j)=1/(k+1)$. In the real data example illustrated in section [sub_sc_british_coal_mine], we indeed consider a problem where the above case occurs.
Let us consider the case where we have to estimate whether or not there is a change point in a set of observations. This implies that we have to choose between model $m_0$ (i.e. no change point) and $m_1$ (i.e. one change point). Following our approach, the prior probabilities $p(m_0)$ and $p(m_1)$ are proportional to the exponentials of the corresponding expected minimum divergences, as given in equations (prior_m0_one_chg) and (prior_m1_one_chg) derived in the appendix. Now, let us assume independence between the prior on the change point location and the prior on the parameters of the underlying sampling distributions, that is $\pi_1(\theta_1)=\pi(\tilde\theta_1,\tilde\theta_2)\,\pi(m)$. Let us further recall that, as derived in section [sc_locations], $\pi(m)=1/(n-1)$.
As such, we observe that the model prior probability on $m_1$ becomes an expectation over the uniform change point prior (equation (prior_m1_trans_one_chg)). We notice that the model prior probability for model $m_1$ increases as the sample size increases; this behaviour occurs whether or not there is a change point in the data. We propose to address this problem by using a non-uniform prior for $m$. A reasonable alternative, which works quite well in practice, is a shifted binomial prior. To motivate this choice, we note that as $n$ increases, the probability mass becomes more and more concentrated towards the upper end of the support. Therefore, combining the shifted binomial with equation (prior_m0_one_chg) and equation (prior_m1_one_chg) yields equation (prior_m1_pseudo_binom). For the more general case where one considers more than two models, the problem highlighted above vanishes.
In this section, we present the results of several simulation studies based on the methodologies discussed in sections [sc_locations] and [sc_numbercp]. We start with a scenario involving discrete distributions in the context of the one change point problem; we then show the results obtained when we consider continuous distributions for the case of two change points. The choice of the underlying sampling distributions is in line with .
[[scenario-1.]] Scenario 1.
The first scenario concerns the choice between models $m_0$ and $m_1$. Specifically, for $m_0$ we have:
$$x_1,\ldots,x_n\,|\,p \overset{\text{i.i.d.}}{\sim} \mbox{geometric}(p),$$
and for $m_1$ we have:
$$x_1,\ldots,x_{m_1}\,|\,p \overset{\text{i.i.d.}}{\sim} \mbox{geometric}(p), \qquad x_{m_1+1},\ldots,x_n\,|\,\lambda \overset{\text{i.i.d.}}{\sim} \mbox{poisson}(\lambda).$$
Let us denote by $f_1$ and $f_2$ the probability mass functions of the geometric and the Poisson distributions, respectively. The priors for the parameters of $m_0$ and $m_1$ are specified accordingly. In the first simulation, we sample the observations from model $m_0$. To perform the change point analysis, we have chosen appropriate values for the hyperparameters of the priors on $p$ and $\lambda$. Applying the approach introduced in section [sc_numbercp], we obtain the model prior probabilities, which in turn yield the model posterior probabilities (refer to equation \eqref{modelposterior}). As expected, the selection process strongly indicates $m_0$ as the true model. Table [discrete_results_1] reports the above probabilities, including other information such as the appropriate Bayes factors. The second simulation considers the opposite set-up: we sample $n=100$ observations from $m_1$, with the change point located at $m_1=50$; that is, we sampled 50 data points from the geometric distribution and the remaining 50 data points from the Poisson distribution. In figure [fig_gp] we have plotted the simulated sample, where it is legitimate to assume a change in the underlying distribution. Using the same prior parameters as above, the model selection process again assigns heavy posterior mass to the true model, $m_1$. These results are further detailed in table [discrete_results_1] (model prior, Bayes factor and model posterior probabilities for the change point analysis in scenario 1, for samples from model $m_0$ and model $m_1$, respectively).
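The following sketch (ours; the paper's hyperparameter values are not reproduced, and for simplicity it uses conjugate beta and gamma priors with equal model prior probabilities in place of the loss-based ones) illustrates scenario 1: it simulates a geometric-to-Poisson change and compares the marginal likelihoods of $m_0$ and $m_1$, with a uniform prior on the change point location under $m_1$:

```python
import numpy as np
from scipy.special import betaln, gammaln

rng = np.random.default_rng(0)

def log_ml_geometric(x, a=1.0, b=1.0):
    # marginal likelihood under Geometric(p) (failures before first success),
    # with conjugate p ~ Beta(a, b)
    m, s = len(x), np.sum(x)
    return betaln(a + m, b + s) - betaln(a, b)

def log_ml_poisson(x, c=1.0, d=1.0):
    # marginal likelihood under Poisson(lam), with conjugate lam ~ Gamma(c, rate=d)
    m, s = len(x), np.sum(x)
    return (c * np.log(d) - gammaln(c) + gammaln(c + s)
            - (c + s) * np.log(d + m) - np.sum(gammaln(np.asarray(x) + 1.0)))

def log_ml_m1(x):
    # model M1: single change at m, uniform prior 1/(n-1) on m
    n = len(x)
    terms = [log_ml_geometric(x[:m]) + log_ml_poisson(x[m:]) for m in range(1, n)]
    return np.logaddexp.reduce(terms) - np.log(n - 1)

x = np.concatenate([rng.geometric(0.3, 50) - 1, rng.poisson(5.0, 50)])
log_b10 = log_ml_m1(x) - log_ml_geometric(x)   # log Bayes factor of M1 vs M0
post_m1 = 1.0 / (1.0 + np.exp(-log_b10))       # posterior of M1 under equal model priors
print(log_b10, post_m1)
```

With a genuine change in the data, the posterior mass concentrates on $m_1$, mirroring the behaviour reported in table [discrete_results_1].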
[cols="^,^,^ " , ] from table [ finance_tableresults ] we note that the prior on model and assigned by the proposed method , are the same .this is not surprising as the only difference between the two models is an additional log - normal distribution with different parameter values .bayesian inference in change point problems under the assumption of minimal prior information has not been deeply explored in the past , as the limited literature on the matter shows .we contribute to the area by deriving an objective prior distribution to detect change point locations , when the number of change points is known a priori . as a change point locationcan be interpreted as a discrete parameter , we apply recent results in the literature to make inference . the resulting prior distribution , which is uniform , it is not new in the literature , and therefore can be considered as a validation of the proposed approach .a second major contribution is in defining an objective prior on the number of change points , which has been approached by considering the problem as a model selection exercise .the results of the proposed method on both simulated and real data , allow to show the strength of the approach in estimating the number of change points in a series of observations .a point to note is in the generality of the scenario considered .indeed , we consider situations where the change is in the value of the parameter(s ) of the underlying sampling distribution , or in the distribution itself. of particular interest , is the last real data analysis ( s&p 500 index ) , where we consider a scenario where we have both types of changes , that is the distribution for the first change point and on the parameters of the distribution for the second .xx berk , r. h. 1966 , ` limiting behavior of posterior distributions when the model is incorrect ' , _ the annals of mathematical statistics _ * 37*(1 ) , 5158 .carlin , b. p. , gelfand , a. e. smith , a. f. 1992 , ` hierarchical bayesian analysis of changepoint problems ' , _ applied statistics _ * 41*(2 ) , 389405 .chib , s. 1998 , ` estimation and comparison of multiple change - point models ' , _ journal of econometrics _ * 86*(2 ) , 221 241 .girn , f. j. , moreno , e. casella , g. 2007 , objective bayesian analysis of multiple changepoints for linear models , _ in _j. bernardo , m. bayarri , j. berger , a. dawid , d. heckerman , a. smith m. west , eds , ` bayesian statistics 8 ' , oxford university press , pp .227252 .kass , r. e. raftery , a. e. 1995 , ` bayes factors ' , _ journal of the american statistical association _ * 90*(430 ) , 773795 .koop , g. potter , s. m. 2009 , ` prior elicitation in multiple change - point models ' , _ international economic review _ * 50*(3 ) , 751772 .kullback , s. leibler , r. a. 1951 , ` on information and sufficiency ' , _ the annals of mathematical statistics _ * 22*(1 ) , 7986 .moreno , e. , casella , g. garcia - ferrer , a. 2005 , ` an objective bayesian analysis of the change point problem ' , _ stochastic environmental research and risk assessment _ * 19*(3 ) , 191204 .villa , c. walker , s. 2015 _ a _ , ` an objective approach to prior mass functions for discrete parameter spaces ' , _ journal of the american statistical association _ * 110*(511 ) , 10721082 .villa , c. walker , s. 2015 _ b _ , ` an objective bayesian criterion to determine model prior probabilities ' , _ scandinavian journal of statistics _ * 42*(4 ) , 947966 .yu , j. 
(2001), Chapter 6 - Testing for a finite variance in stock return distributions, in J. Knight & S. Satchell, eds, 'Return Distributions in Finance', Quantitative Finance, Butterworth-Heinemann, Oxford, pp. 143-164.
Here we show how the model prior probabilities can be derived for the relatively simple case of selecting among scenarios with no change points ($m_0$), one change point ($m_1$) or two change points ($m_2$). First, by applying the result in theorem [remark_model], we derive the Kullback-Leibler divergences between any two models. That is:
* For model $m_0$:
$$\inf_{\theta_1}d_{kl}(m_0\|m_1)=\underbrace{\left[\inf_{m_1}(n-m_1)\right]}_{1}\cdot\left[\inf_{\tilde\theta_2}d_{kl}\big(f_1(\cdot|\tilde\theta_1)\|f_2(\cdot|\tilde\theta_2)\big)\right]=\inf_{\tilde\theta_2}d_{kl}\big(f_1(\cdot|\tilde\theta_1)\|f_2(\cdot|\tilde\theta_2)\big),$$
$$\inf_{\theta_2}d_{kl}(m_0\|m_2)=\underbrace{\left[\inf_{m_1\neq m_2}(m_2-m_1)\right]}_{1}\cdot\left[\inf_{\tilde\theta_2}d_{kl}\big(f_1\|f_2\big)\right]+\underbrace{\left[\inf_{m_2\neq n}(n-m_2)\right]}_{1}\cdot\left[\inf_{\tilde\theta_3}d_{kl}\big(f_1\|f_3\big)\right]=\inf_{\tilde\theta_2}d_{kl}\big(f_1(\cdot|\tilde\theta_1)\|f_2(\cdot|\tilde\theta_2)\big)+\inf_{\tilde\theta_3}d_{kl}\big(f_1(\cdot|\tilde\theta_1)\|f_3(\cdot|\tilde\theta_3)\big).$$
* For model $m_1$:
$$\inf_{\theta_2}d_{kl}(m_1\|m_2)=\underbrace{\left[\inf_{m_2\neq n}(n-m_2)\right]}_{1}\cdot\left[\inf_{\tilde\theta_3}d_{kl}\big(f_2(\cdot|\tilde\theta_2)\|f_3(\cdot|\tilde\theta_3)\big)\right]=\inf_{\tilde\theta_3}d_{kl}\big(f_2(\cdot|\tilde\theta_2)\|f_3(\cdot|\tilde\theta_3)\big),$$
$$\inf_{\theta_0=\tilde\theta_1}d_{kl}(m_1\|m_0)=(n-m_1)\cdot\inf_{\tilde\theta_1}d_{kl}\big(f_2(\cdot|\tilde\theta_2)\|f_1(\cdot|\tilde\theta_1)\big).$$
* For model $m_2$: the divergences towards $m_1$ and $m_0$ are obtained analogously, the latter being $(m_2-m_1)\cdot\inf_{\tilde\theta_1}d_{kl}(f_2\|f_1)+(n-m_2)\cdot\inf_{\tilde\theta_1}d_{kl}(f_3\|f_1)$.
* The model prior probability $p(m_0)$ is proportional to the exponential of the minimum between
$$\mathbb{e}_{\pi_0}\left[\inf_{\tilde\theta_2}d_{kl}\big(f_1(\cdot|\tilde\theta_1)\|f_2(\cdot|\tilde\theta_2)\big)\right] \quad\text{and}\quad \mathbb{e}_{\pi_0}\left[\inf_{\tilde\theta_2}d_{kl}\big(f_1\|f_2\big)+\inf_{\tilde\theta_3}d_{kl}\big(f_1\|f_3\big)\right].$$
* The model prior probability $p(m_1)$ is proportional to the exponential of the minimum between
$$\mathbb{e}_{\pi_1}\left[\inf_{\tilde\theta_3}d_{kl}\big(f_2(\cdot|\tilde\theta_2)\|f_3(\cdot|\tilde\theta_3)\big)\right] \quad\text{and}\quad \mathbb{e}_{\pi_1}\left[(n-m_1)\cdot\inf_{\tilde\theta_1}d_{kl}\big(f_2(\cdot|\tilde\theta_2)\|f_1(\cdot|\tilde\theta_1)\big)\right].$$
* The model prior probability $p(m_2)$ is proportional to the exponential of the minimum between the expected divergence towards $m_1$ and
$$\mathbb{e}_{\pi_2}\left[(m_2-m_1)\cdot\inf_{\tilde\theta_1}d_{kl}\big(f_2(\cdot|\tilde\theta_2)\|f_1(\cdot|\tilde\theta_1)\big)+(n-m_2)\cdot\inf_{\tilde\theta_1}d_{kl}\big(f_3(\cdot|\tilde\theta_3)\|f_1(\cdot|\tilde\theta_1)\big)\right].$$
We distinguish two cases: $m'_j>m_j$ and $m'_j<m_j$.
When $m'_j>m_j$, the divergence between the two models involves only the observations $x_{m_j+1},\ldots,x_{m'_j}$:
$$\begin{aligned}
d_{kl}\big(f(\cdot|\mathbf{m},\bm{\tilde\theta})\,\|\,f(\cdot|\mathbf{m}',\bm{\tilde\theta})\big)
&= \int f(\mathbf{x}^{(n)}|\mathbf{m},\bm{\tilde\theta})\ln\left[\frac{f(\mathbf{x}^{(n)}|\mathbf{m},\bm{\tilde\theta})}{f(\mathbf{x}^{(n)}|\mathbf{m}',\bm{\tilde\theta})}\right]\mathrm{d}\mathbf{x}^{(n)}\\
&= \sum_{i=m_j+1}^{m'_j}\int f(\mathbf{x}^{(n)}|\mathbf{m},\bm{\tilde\theta})\ln\left[\frac{f_{j+1}(x_i|\tilde\theta_{j+1})}{f_j(x_i|\tilde\theta_j)}\right]\mathrm{d}\mathbf{x}^{(n)}\\
&= \sum_{i=m_j+1}^{m'_j}\left\lbrace 1^{n-1}\cdot\int f_{j+1}(x_i|\tilde\theta_{j+1})\ln\left[\frac{f_{j+1}(x_i|\tilde\theta_{j+1})}{f_j(x_i|\tilde\theta_j)}\right]\mathrm{d}x_i\right\rbrace\\
&= \sum_{i=m_j+1}^{m'_j} d_{kl}\big(f_{j+1}(x_i|\tilde\theta_{j+1})\,\|\,f_j(x_i|\tilde\theta_j)\big)\\
&= (m'_j-m_j)\cdot d_{kl}\big(f_{j+1}(\cdot|\tilde\theta_{j+1})\,\|\,f_j(\cdot|\tilde\theta_j)\big)\\
&= (m'_j-m_j)\cdot d_j^{+1}(\bm{\tilde\theta}).
\end{aligned} \label{demopositive}$$
When $m'_j<m_j$, in a similar fashion, we get $(m_j-m'_j)\cdot d_j^{-1}(\bm{\tilde\theta})$. From these two expressions, we get the result in theorem [lemmaonechgpoint].
We recall that the model parameter is the vector $\theta_j=(\tilde\theta_1,\ldots,\tilde\theta_{j+1},m_1,\ldots,m_j)$, where $\tilde\theta_1,\ldots,\tilde\theta_{j+1}$ represent the parameters of the underlying sampling distributions considered under model $m_j$ and $m_1,\ldots,m_j$ are the respective change point locations. In this setting, we proceed to the computation of $d_{kl}(m_j\|m_{j'})$, that is, the Kullback-Leibler divergence introduced in section [sc_numbercp]. Similarly to the proof of theorem [lemmaonechgpoint], if we integrate out the variables not involved in the logarithms, we obtain the first expression of theorem [remark_model]; the second expression follows in a similar fashion.
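As a numerical check of equation \eqref{demopositive} (an illustrative sketch of ours, assuming unit-variance normal densities for the two regimes), a Monte Carlo estimate of the joint divergence matches $(m'_j-m_j)$ times the single-observation divergence:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, m_prime = 20, 8, 12            # one change point moved from 8 to 12
mu1, mu2 = 0.0, 1.5                  # f_j = N(mu1, 1), f_{j+1} = N(mu2, 1)

def log_joint(x, m):
    # row-wise log density of the change point model (constants cancel in the ratio)
    return (-0.5 * np.sum((x[:, :m] - mu1) ** 2, axis=1)
            - 0.5 * np.sum((x[:, m:] - mu2) ** 2, axis=1))

# draw from the true model (change at m) and estimate the joint KL by Monte Carlo
x = np.concatenate([rng.normal(mu1, 1.0, (200_000, m)),
                    rng.normal(mu2, 1.0, (200_000, n - m))], axis=1)
estimate = np.mean(log_joint(x, m) - log_joint(x, m_prime))
exact = (m_prime - m) * 0.5 * (mu2 - mu1) ** 2   # (m'_j - m_j) * d_j^{+1}
print(estimate, exact)                           # both close to 4.5
```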
In this paper we present an objective approach to change point analysis. In particular, we look at the problem from two perspectives. The first focuses on the definition of an objective prior when the number of change points is known a priori. The second contribution aims to estimate the number of change points by using an objective approach, recently introduced in the literature, based on losses; the latter considers change point estimation as a model selection exercise. We show the performance of the proposed approach on simulated data and on real data sets. * Keywords: * change point; discrete parameter space; loss-based prior; model selection; objective Bayes.
Current networks aim to support high data rates to end users by increasing the spectral efficiency, in bits per hertz, of the network, at the expense of its energy efficiency. Indeed, an important part of the energy consumption of mobile networks is proportional to the radiated energy, which depends on the frequency bandwidth and the transmission power. Any energy-efficient transmission scheme should exploit the whole system bandwidth by allocating the entire available spectrum to each base station. Such an approach, however, leads to a significant interference increase and to performance degradation for mobiles located at the cell edges. The key challenge is to balance interference avoidance and spectrum use so as to reach an optimal energy efficiency-spectral efficiency (EE-SE) trade-off. This challenge has been addressed in the past, for instance using frequency/code planning in 2G/3G networks or with cooperative multi-point antenna techniques in 4G. The aim of our work is to demonstrate the concept of a recently proposed interference management scheme that not only leverages current technology but also achieves greater overall energy efficiency. It exploits the interference alignment (IA) concept in the downlink to reduce inter-cell interference without complex coordination. The theoretical achievements of IA have been largely discussed, e.g. in . One of the key results is that, under specific conditions, dense and high-power wireless networks are not fundamentally interference limited. As an example, under idealized assumptions, using IA in the setting of an interference channel formed by $k$ transmitter-receiver pairs interfering with one another allows an achievable data rate per pair equal to half of its interference-free channel, regardless of $k$. Strong efforts in the research community have been made to extend IA far beyond the initial $k$-user interference channel. The recent review highlights the different technical challenges to be solved before envisioning a practical application, among which implementing accurate feedback loops is probably the most important. But even beyond the practical implementation of IA solutions in a network, the actual model of the network tends to be complex and to involve a large number of hypotheses. These assumptions, or the lack thereof, are needed and play a significant role in the design of IA schemes. These IA schemes are, in return, heavily tuned to the specific hypotheses made and may not adapt to all cellular configurations, thereby justifying the need to develop an experimental evaluation of these techniques. A downlink cellular network is basically an interfering broadcast channel, where base stations (BSs) transmit towards a number of users and interfere with each other. However, several attempts to extend this approach to cellular networks revealed limited improvement, for the following reasons: i) the direct extension of the IC model to cellular networks relies on first defining the association of each mobile to a given subset of resources; thus, in each cell, the BS decides without coordination which set of resources is given to each mobile.
To obtain a significant gain, IA should be performed for users mutually suffering from interference. ii) Many works rely on a clustering approach; however, users located at cluster edges cannot benefit from any improvement and are still subject to strong interference. iii) Signaling requirements reduce the theoretical gain of the system. In , Suh et al. proposed an IA scheme for downlink channels by considering a scenario where each BS uses a reduced space for its own transmission, preserving a given free subspace for the other cells. Their solution, extended in , allows users in a cell to cancel their dominant interferer as well as the intra-cell interference for users in the same sub-band, achieving one degree of freedom (DoF), without any communication between the BSs. In other words, each BS creates an interference-free hole in which the other BSs can serve their mobile users. Each mobile measures and feeds back its own free subspace to its main BS. After receiving this information from all associated UEs, the BS jointly computes a set of precoders and schedules the UEs to maximize the overall capacity, under some fairness/priority constraints. Basically, a certain cooperation between cells is not explicit but exists through the feedback channels. The performance of such a scheme relies on many parameters, such as synchronization, feedback capabilities and the choice of precoders. In this paper, we review the aforementioned non-classic IA scheme for downlink proposed in and . Then, we show the implementation scenario in the shielded experimentation room CorteXlab, located in Lyon, France, and we discuss some experimental issues. Finally, we present the results of our demo and highlight the capacity gain of the implemented IA scheme over the classical OFDMA scheme.
Notations: boldface upper-case letters and boldface lower-case letters denote matrices and vectors, respectively. The superscripts $(\cdot)^T$ and $(\cdot)^H$ stand for the transpose and the conjugate transpose, respectively. A dedicated notation introduced below denotes the last left-singular vectors of a matrix, where SVD stands for the singular value decomposition.
For our demo, we consider a cellular network with two BSs and three user equipments (UEs), all equipped with a single antenna. The transmission scheme is based on OFDM with $n$ available sub-carriers. The transmit signal of the interfering BS is built from the data vector of length $n-1$ before Hadamard precoding. In the implemented scheme, each BS preserves one DoF for the other BSs, thereby creating a hole free of interference at all cell edges; the maximum number of transmitted streams is then $n-1$. The transmission uses a truncated ("trunk") Hadamard matrix, with orthonormal columns, that confines the transmission to the reduced space. We denote the fading channel matrix between a BS and a UE, and the reduced channels obtained by composing it with the Hadamard trunking matrix, distinguishing the desired and interfering channels. The received signal at the UE is the superposition of the desired and interfering reduced-space transmissions plus noise. In addition to the Hadamard trunking matrix, each original stream is carried over a precoding vector, a column of a precoding matrix; the precoded stream vector at the BS includes all streams associated to that BS.
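As a minimal illustration of the reduced transmit space (our sketch, not the demo code; for simplicity the preserved direction is a fixed Hadamard basis vector):

```python
import numpy as np
from scipy.linalg import hadamard

rng = np.random.default_rng(0)
N = 4                                   # sub-carriers used by the IA scheme
B = hadamard(N) / np.sqrt(N)            # orthonormal Hadamard basis
H_trunk = B[:, 1:]                      # truncated matrix: N-1 columns span the reduced space
s = (rng.normal(size=N - 1) + 1j * rng.normal(size=N - 1)) / np.sqrt(2)  # N-1 data streams
x = H_trunk @ s                         # transmit vector confined to the reduced subspace
print(np.abs(B[:, 0] @ x))              # ~0: nothing radiated on the preserved direction
```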
In order to accomplish a reliable interference management scheme, we review the non-classic IA technique extended in , which calculates an optimal precoding set lying in the interference-free subspace and then selects the best subset maximizing the total capacity of the downlink interference channel. The fading channels connecting the UEs to their main BS and to the strongest interfering BS are first estimated at all UEs; this is possible through training sequences provided by both BSs. Then, each UE determines the interference null space through the SVD of the reduced interfering channel. By applying a linear projection onto the interference null space, the received signal is rewritten free of inter-cell interference; in other words, the signal is received in the interference-free hole created by the strongest interferer. This means that the best strategy for a UE is to have its desired streams aligned with the equivalent channel seen in the hole, which guarantees a maximum received power of the desired signal. It is worth noting that, in practice, UEs at cell edges can be interfered by more than one BS; dealing with such a case requires either treating the additional interference as noise, or estimating all interfering channels and focusing the transmission on the joint hole created by the union of the interference subspaces. For the sake of simplicity, we only assume one interfering BS in our demo. Each UE calculates its optimal precoding vectors and feeds them back to the main BS, which in turn collects the precoding vectors of all users; the total number of users is fixed to three in our demo. However, the maximum number of streams allowed for a BS is $n-1$. When the number of candidate vectors does not exceed this limit, all vectors are used for transmission; otherwise, $n-1$ precoding vectors must be selected among the total set. That is, we have to run a scheduler to select which users and streams are best to serve. Among the different existing criteria, we choose to maximize the total channel capacity, where each term involves the signal-to-interference-and-noise ratio of the selected stream after scheduling. When scheduling is required, the precoding vector set can be represented as an underdetermined matrix, and the problem is seen as selecting the best determined sub-matrix that maximizes the total rate. The optimal solution is found through exhaustive search: we build all possible subset combinations and calculate the achievable rate of each candidate. The data symbols are carried over the vectors yielding the maximum achievable rate. The precoding vector of each stream is calculated from the corresponding row of the reduced interference-free channel; in other words, the precoding vector aims at maximizing the power of the desired part of the received signal. Hence, each user relies on the fact that its interference-free directions are independent from those of the other UEs associated with the same BS, which means that channel diversity between UEs has a major impact on the performance of the described interference management scheme. That is, when the channels are almost orthogonal, there is no intra-cell interference and the achievable rate is maximal; in contrast, when the channels of different UEs are strongly correlated, the interference reaches its maximum and the rate is severely degraded.
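The null-space computation and the matched precoder can be sketched as follows (our illustration, assuming diagonal per-sub-carrier OFDM channels drawn at random; the paper's exact notation is not reproduced):

```python
import numpy as np
from scipy.linalg import hadamard

rng = np.random.default_rng(0)
N = 4
H_trunk = hadamard(N)[:, 1:] / np.sqrt(N)

# hypothetical per-sub-carrier flat-fading channels (diagonal for OFDM)
g_des = np.diag(rng.normal(size=N) + 1j * rng.normal(size=N))  # main BS -> UE
g_int = np.diag(rng.normal(size=N) + 1j * rng.normal(size=N))  # interfering BS -> UE

F_int = g_int @ H_trunk                 # reduced interfering channel (N x N-1)
U, _, _ = np.linalg.svd(F_int)
u_free = U[:, -1]                       # direction orthogonal to the interference: the "hole"

F_des = g_des @ H_trunk                 # reduced desired channel
h_eq = F_des.conj().T @ u_free          # equivalent channel seen inside the hole
v = h_eq / np.linalg.norm(h_eq)         # matched precoder maximising the desired power

print(np.abs(u_free.conj() @ F_int).max())   # ~0: no inter-cell interference in the hole
print(np.abs(u_free.conj() @ (F_des @ v)))   # equals ||h_eq||: maximal desired power
```

Since the reduced interfering channel has at most $N-1$ dimensions, the last left-singular vector always spans a one-dimensional interference-free subspace, which is exactly the hole exploited by the scheme.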
Going back to the scheduling problem, it is a discrete optimization that in general requires an exhaustive search to find the global optimum; this becomes costly as the dimension increases. Sub-optimal algorithms based on heuristics or local search can be applied, but with no guarantee of improved performance. Since we do not assume high-dimensional systems and we apply the non-classic IA technique over only four sub-carriers, the optimal exhaustive search is feasible at low computational cost. Once the best subset is selected, we compute the precoding matrix of all streams that cancels the intra-cell interference, whose columns are the selected precoding vectors. The achievable rate can then be written as a sum of per-stream capacities, with a normalization factor that is equal to one when the selected vectors are all orthogonal. The performance of the addressed non-classic IA approach has been evaluated through exhaustive simulations in different network and load scenarios. It has been found that, in comparison to a reference scenario without IA, the cell capacity can be increased on average, and the spectral efficiency of cell-edge users can be increased by up to a substantial factor. Our work herein gives a proof of concept and focuses on the main challenges related to the non-classic IA approach for downlink, namely the knowledge of the interference footprint and the scheduling algorithms that exploit the interference information to maximize the spectral efficiency. We implement an experimental scenario with two transmitting BSs, i.e. the main BS and an interfering one, and three UEs associated with the main BS. We show that IA is feasible, in the sense that UEs are able to measure a channel and feed it back to the BS, which in turn applies a scheduler to select the best set and computes the theoretical spectral efficiency (SE) gain over a classical dumb OFDMA allocation. A wireless network is emulated on CorteXlab (http://www.cortexlab.fr), a controlled hardware facility located in Lyon, France, with remotely programmable radios and multi-node processing capabilities. During the live demo, a control laptop is remotely connected to the facility, deploying software on the radios, launching an IA scenario and collecting real-time performance feedback. The efficiency gain of IA is then shown for various experimental conditions that can be tuned from the control laptop. Represented by the National Instruments USRP 2932, the general-purpose software defined radio (SDR) nodes use the GNU Radio toolkit for rapid prototyping of transmission techniques mostly reliant on the general-purpose processor (GPP) of the host PC. The USRP 2932 is a high-end radio platform, featuring a 400 MHz - 4.4 GHz RF board, data rates of up to 40 MHz (with reduced dynamic range; nominal band of 20 MHz), a precise OCXO clock source and a Gigabit Ethernet (GigE) connection to the host PC. The host PC is based on a Linux environment and allows users to test PHY and MAC layer techniques. It is preferable to first use Linux and GNU Radio for development and testing on the user's own computer, and then to bring the experiment over to CorteXlab.
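A minimal sketch of the exhaustive subset search is given below (ours; the sum-rate expression is a generic SINR-based capacity used in place of the paper's formula, and the candidate channels and precoders are randomly generated for illustration):

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(2)
N = 4                      # sub-carriers -> at most N-1 = 3 streams per BS
n_cand = 6                 # hypothetical candidate precoders fed back (2 per UE, 3 UEs)

V = rng.normal(size=(N - 1, n_cand)) + 1j * rng.normal(size=(N - 1, n_cand))
V /= np.linalg.norm(V, axis=0)           # unit-norm candidate precoding vectors
h = rng.normal(size=(n_cand, N - 1)) + 1j * rng.normal(size=(n_cand, N - 1))
noise = 0.1                              # noise power (assumed)

def sum_rate(idx):
    # total rate if the candidate streams in idx are served simultaneously;
    # the other selected streams act as intra-cell interference
    total = 0.0
    for k in idx:
        sig = np.abs(h[k] @ V[:, k]) ** 2
        interf = sum(np.abs(h[k] @ V[:, j]) ** 2 for j in idx if j != k)
        total += np.log2(1.0 + sig / (interf + noise))
    return total

best = max(combinations(range(n_cand), N - 1), key=sum_rate)
print(best, sum_rate(best))
```

With 4 sub-carriers there are only $\binom{6}{3}=20$ subsets to test, which confirms that the exhaustive search is affordable at the dimensions of the demo.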
Among the forty available SDR nodes in CorteXlab, five are selected: two serve as main and interfering BSs, and the remaining three serve as mobile UEs. All transmitters (TXs) and receivers (RXs) are equipped with a single antenna; therefore, the IA scheme relies on the frequency dimensions provided by the OFDM transmission scheme implemented at all TXs and RXs. In the first transmission phase, the UEs need to estimate the main and interfering channels. This can be done by assigning two orthogonal time slots to the BSs, through which they transmit the training sequences. However, the synchronization of the two BS nodes remains a challenge, since they are distantly separated. Herein, we propose an over-the-air synchronization as follows: holding a unique ID, the interfering BS starts the transmission to let the user nodes measure the interference channel coefficients, while the main BS tries to decode the ID of the transmitter; if the decoded ID corresponds to the interferer, the main BS node transmits OFDM symbols (one or more) for channel estimation. By collecting both estimated channels, each receiver is able to calculate the optimal precoding vectors in the free dimensions such that the desired signal power is maximized. In order to avoid imperfections in the IA scheme, we use a wired connection, available in the experimentation room between all nodes, to provide a perfect feedback link to the TX (main BS). The latter gathers the precoding vectors from all UEs and runs the scheduling algorithm to seek the best precoding vectors minimizing the intra-cell interference, and hence maximizing the achievable data rate of the cell. The maximized rate is then compared to the classical OFDMA one, and the theoretical gain is calculated. The OFDMA scheduler selects for each user the streams that maximize the signal-to-interference-and-noise ratio (SINR), computed from the channel coefficient between the main BS and the user at each stream, the interference channel coefficient at that stream, and the noise variance; the classical OFDMA achievable rate is then defined accordingly. In the live demo, we show a spectral efficiency gain that can vary widely from one experimental run to another. This variation is due to the influence of many factors, summarized by the following parameters: the noise power, the interference power, the channel diversity, the distance between the different nodes, the TX gain, etc. For instance, in perfect conditions, the gain ratio given by equations (eq9) and (eq11) tends to its maximum when the inter-cell interference power is of the same order as, or higher than, the desired signal power, i.e. when the SINR is low; this is the case of cell-edge mobile users. However, as the users get closer to the main BS, this ratio decreases. Another critical parameter that strongly impacts the performance gain is channel diversity and correlation: the less correlated the channels, the higher the efficiency gain. This is because the applied IA scheme requires fully decorrelated channel coefficients for the scheduling; otherwise, the intra-cell interference dominates the desired signal.
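For completeness, the following sketch (ours, under idealised assumptions: the same desired power on every stream in the IA case, and full inter-cell interference in the OFDMA baseline) computes the kind of theoretical SE gain ratio displayed in the demo:

```python
import numpy as np

rng = np.random.default_rng(3)
N, noise = 4, 0.1
h_des = rng.normal(size=N) + 1j * rng.normal(size=N)   # main BS -> UE, per sub-carrier
h_int = rng.normal(size=N) + 1j * rng.normal(size=N)   # interferer -> UE, per sub-carrier

# classical OFDMA baseline: each sub-carrier suffers the full inter-cell interference
sinr = np.abs(h_des) ** 2 / (np.abs(h_int) ** 2 + noise)
r_ofdma = np.sum(np.log2(1.0 + sinr))

# idealised IA bound: N-1 interference-free streams at the desired power only
r_ia = np.sum(np.sort(np.log2(1.0 + np.abs(h_des) ** 2 / noise))[1:])

print(r_ofdma, r_ia, r_ia / r_ofdma)   # theoretical SE gain ratio of the demo
```

As the prose above anticipates, the ratio grows when the interference power approaches or exceeds the desired power, i.e. in the low-SINR, cell-edge regime.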
In order to obtain a better decorrelation in the shielded room, we emulate a virtual channel on all TXs and RXs; however, this still induces some correlation at the different mobile UEs. A better way is to use a multi-antenna node at the TX to generate multipath through the different antennas: each path is randomly attenuated, phase-shifted and delayed. Such multipath generation creates a perfect decorrelation, since the paths generated by the different nodes are totally independent. An illustration of our demo is given in figure [fig2]. In our demo, we assume two BSs and three users. IA is applied over four sub-carriers, which gives a total dimension of four: one dimension is free of interference and the three others are used by each BS. We focus on the average theoretical capacity gain offered by the non-classic IA over the classical OFDMA scheme with respect to the number of transmissions, as shown in figure [fig3]. We also plot the channel spectrum at all sub-carriers and for the different transmissions, to show how decorrelated the channel coefficients are. For each channel realization, we display the SNR of the interference-free streams and the SINR of all streams when classical OFDMA is applied, to see their impact on the efficiency gain.
In this demo, we have implemented the first phase of a non-classic IA approach for interference management. It consists in measuring the channel coefficients, calculating the interference-free subspaces, feeding them back to the main BS, and applying a scheduler to decide which streams to transmit. We have shown experimentally that a significant theoretical capacity gain over the classical OFDMA scheme can be achieved, depending on the channel conditions and the interference dominance. The remaining work is to start the IA transmission and to apply linear decoding criteria at the receiver, such as zero-forcing (ZF) or minimum mean squared error (MMSE), to recover the original data. The challenge here is to synchronize the transmission between both TXs for the IA transmission and to compare the theoretical gain to the practical one. It is also worth facing more practical issues related to de-synchronization, i.e. studying the impact of the delay and phase shift between the different TXs on the IA scheme.
M. A. Maddah-Ali, A. S. Motahari and A. K. Khandani, "Communication over MIMO X channels: interference alignment, decomposition, and performance analysis," _IEEE Transactions on Information Theory_, vol. 54, no. 8, pp. 3457-3470, 2008.
A. Ghasemi, A. S. Motahari and A. K. Khandani, "Interference alignment for the K user MIMO interference channel," _Proceedings of the 2010 IEEE International Symposium on Information Theory (ISIT)_, pp. 360-364, June 2010.
O. El Ayach, S. W. Peters and R. W. Heath Jr, "The feasibility of interference alignment over measured MIMO-OFDM channels," _IEEE Transactions on Vehicular Technology_, vol. 59, no. 9, pp. 4309-4321, 2010.
D. Aziz, F. Boccardi and A. Weber, "System-level performance study of interference alignment in cellular systems with base-station coordination," _Proceedings of the 2012 IEEE 23rd International Symposium on Personal, Indoor and Mobile Radio Communications (PIMRC)_, pp. 1155-1160, September 2012.
R. Tresch, M. Guillaud and E.
Riegler, "On the achievability of interference alignment in the K-user constant MIMO interference channel," _Proceedings of the 2009 IEEE/SP 15th Workshop on Statistical Signal Processing (SSP'09)_, pp. 277-280, August 2009.
L. S. Cardoso, A. Massouri, B. Guillon, F. Hutu, G. Villemaud, T. Risset and J. M. Gorce, "CorteXlab: a cognitive radio testbed for reproducible experiments," _Proceedings of the Wireless@Virginia Tech Symposium_, May 2014.
A. Massouri, L. Cardoso, B. Guillon, F. Doru-Hutu, G. Villemaud, T. Risset and J.-M. Gorce, "CorteXlab: an open FPGA-based facility for testing SDR and cognitive radio networks in a reproducible environment," _Proceedings of IEEE INFOCOM_, April 2014.
Our demo aims at proving the concept of a recently proposed interference management scheme, known as the non-classic interference alignment (IA) scheme, that reduces the inter-cell interference in the downlink without complex coordination. We consider a case where one main base station (BS) needs to serve three user equipments (UEs) while another BS is causing interference. The primary goal is to construct the alignment scheme: each UE estimates the main and interfering channel coefficients, calculates the optimal interference-free directions left vacant by the interfering BS, and feeds them back to the main BS, which in turn applies a scheduler to select the best inter-cell interference-free directions. Once the scheme is built, we are able to measure the total capacity of the downlink interference channel. We run the scheme in CorteXlab, a controlled hardware facility located in Lyon, France, with remotely programmable radios and multi-node processing capabilities, and we illustrate the achievable capacity gain for different channel realizations.
Justifying its use in what follows, one of the great advantages of the Woodhouse axiomatics resides precisely in the absence of the chronometric hypothesis (CH) of general relativity, as well as in the absence of references to somehow unspecified 'standard' or 'absolute' clocks. Let us recall that the chronometric hypothesis is defined by the equality between the proper time of a clock (also known as the 'atomic time') and the time interval between two events defined from a given metric field (the so-called 'gravitational time', up to the speed of light). This hypothesis has been strongly criticized, invoking at least two reasons: 1) it can be replaced by the conformal or projective structures without a metric field, and 2) the shifts between atomic and gravitational times can be deduced from the Kundt-Hoffmann protocol and, as a result, allow the selection of those clocks satisfying the equality defined by the CH. These axiomatics are one of the appropriate frameworks in which the latter shifts, which are scale factors differing from unity, enable us to seek physical justifications for the CH. The main tool we use throughout is the geometrical situation encountered in the so-called 'clock (pseudo-)paradox', the solutions of which explicitly require the CH. Additionally, this is another great advantage of the Woodhouse axiomatics for approaching temporal paradoxes, since it does not refer to any notion of proper time. For more on the geometric setting, we recall that in these axiomatics, the so-called 'message functions' and 'clock functions' (i.e. the time parameterizations defined by Woodhouse) are only defined for 'particles' ascribed to worldlines of freely falling massive punctual objects, and the metric fields depend only on the particles up to conformal factors defined for the clock functions. In addition, upstream of the metric structure, there is a unique affine structure given by the totality of the freely falling particles of the Woodhouse axiomatics or, equivalently, following Ehlers et al., a Weyl structure, i.e., in particular, a class of conformally equivalent Riemann structures. Weyl structures do not fix the rates of clocks a priori, independently of their histories; this is the so-called second clock effect. The opposite case was dismissed by Audretsch, who concluded that there were several efforts after 1970 to assign a Weyl geometry to spacetime rather than a Riemannian geometry. In reality, this is close to our viewpoint: the rates of clocks, invoked to approach the CH via the clock paradox, are not observable, but fixed only relative to very particular given causal protocols exhibiting the physical and historical relationships among the particles themselves, each endowed with a particular, different sub-structure of a given common conformal one.
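For reference, the equality defining the CH can be rendered in its standard textbook form (our rendering, given for concreteness and not a quotation from the source; signature conventions may differ):

```latex
% standard statement of the chronometric hypothesis (our rendering):
% the proper time \tau recorded by an ideal (atomic) clock along its
% worldline \gamma equals the metric length of \gamma up to the factor c,
% here with signature (-,+,+,+)
c\,\Delta\tau \;=\; \int_{\gamma}\sqrt{-\,g_{\mu\nu}\,\mathrm{d}x^{\mu}\,\mathrm{d}x^{\nu}}
```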
Obviously, whenever the CH is posited and the metric connection is chosen, the clocks aging with proper time (or those 'displaying' a proper time) are automatically inferred from the so-called 'geodesic hypotheses' and singled out, and so there are clearly no clock paradoxes. The differential aging is quite simply given by the proper time differences obtained from the integration of the infinitesimal proper times along each worldline. However, the CH actually raises the point that we must feature a notion of proper time from the physical viewpoint; that is, we must find a definition of proper time regardless of the choice of time parameterizations of the worldlines carried out for each particle of the Woodhouse axiomatics. On the contrary, the differential aging would only translate or reflect the agreement between the proper time definition and, in particular, the CH and the selected metric connections; that is to say, this agreement expresses the choice of metric structure from the unique affine structure of a Weyl spacetime. Additionally, the Woodhouse results are situated upstream of the projective, conformal and Weyl structures defined in particular in the Ehlers et al. axiomatics. Terms such as 'freely falling particles' are simply generic, all the more so as their worldlines cannot be defined from geodesic equations, since metric connections are never utilized in the Woodhouse formalism. The only important restriction in the definition of the particles is mainly, in our opinion, that it must satisfy the following fundamental property: for each particle passing through each event in spacetime, there exists a neighborhood of this event into which a four-dimensional 'congruence' of particles can be defined, one which can also be an extension of the given particle. To some extent, this condition is a topological translation of the possible existence in this neighborhood of a projective connection. It can finally be substituted for the corresponding axiom of Ehlers et al., which specifies the (projective) equations assigned to freely falling particles and, in some ways, incidentally and unfortunately, makes a contradictory use of proper times; this indicates another advantage of the Woodhouse axiomatics. As a fundamental result, the true freely falling particles are those belonging to such a congruence; particles subjected to forces do not belong to it, but even in this case, the message and clock functions are still usable in the Woodhouse formalism. Returning to the tool used in the following, a resolution of the clock paradox is possible if the accelerations are involved in the differential aging formulas, since they are the sole data distinguishing the two clocks. This accounts for the well-known historical attempts made by Einstein to solve it in general relativity: he used, in his own terms, a 'pseudo-gravitational' field accounting for the accelerating force experienced by the moving clock, which is at the origin of the clock asymmetry. Unfortunately, we run into an important problem with at least two distinct physical situations: the first is met in the case of rectilinear motions, and the second is related to circular motions.
In the first case, we refer, for example, to the Unnikrishnan results, recalling and clearly showing some possible inconsistencies in the solutions based upon general relativity due to the use of pseudo-gravitational fields. However, in the second situation, for circular motions and among the most precise recent experimental results, differential aging clearly does not depend in any way on a pseudo-gravitational field such as, for instance, the pseudo-field associated with the centrifugal force. In this last case, the conclusion is clear: special relativity is more than sufficient to fully account for the experimental results. Therefore, it is necessary to ask why, when motions are rectilinear, we must use general relativity, if that is indeed what must be done, and why, in the case of circular motions, general relativity no longer intervenes. Nevertheless, a certain type of mechanical work might also be used for interpretations and solutions, as we shall see. It is a fact that in the first, linear case, some forces work, whereas in the case of circular motions, centripetal forces do not. In addition, at the origin of the conformal proper times, considering an accelerating frame within a kind of primary initial geometry, or special relativity, we know, for instance, that there are precession effects influencing space vectors, such as the spin vectors, leading to the so-called Thomas precession. The latter has a long-standing history in relation to the clock paradox. At the basis of the precession equations there is, as a particular case, the so-called Møller term, built from a velocity vector or, more generally, from a velocity field. If the field is restricted to a worldline to which it is tangent, this Møller term can exactly account for another metric connection restricted to this worldline; as a result, the precessing vector is the tangent vector of a geodesic. Furthermore, if this vector precesses itself with respect to another precession equation, i.e., if it is congruent with another vector field, then its dual with respect to the metric defines a Weyl proper time (remark (d), pp. 67 and 81). Moreover, the vanishing of this Møller term (meaning that the primary connection is a projective connection with its corresponding geodesics) defines the projective geodesics as being those of the primary initial geometry. In other words, a non-vanishing Møller term points to a change to a non-equivalent projective geometry, i.e., a change of physical frame of reference, but one at least compatible with the conformal structure. Consequently, in particular, the flat Minkowski spacetime should no longer be valid for non-geodesic motions. We could say that the projective connections and, thus, the Weyl structures depend on the non-geodesic motions to which they apply. This is a way of justifying the use of varying Weyl structures (or projective structures) attached to each particle. Then, each worldline or particle should correspond to the restriction of a specific Weyl structure only, and all these non-projectively equivalent Weyl structures should be conformally related, leading to (relative) conformal proper times.
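For reference, the standard textbook expression of the Thomas precession angular velocity (our rendering, up to sign conventions, not taken from the source) reads:

```latex
% Thomas precession angular velocity for a particle with velocity v
% and acceleration a (standard textbook form, sign conventions vary):
\boldsymbol{\omega}_{T} \;=\; \frac{\gamma^{2}}{\gamma+1}\,
\frac{\mathbf{a}\times\mathbf{v}}{c^{2}},
\qquad
\gamma \;=\; \left(1-\frac{v^{2}}{c^{2}}\right)^{-1/2}
```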
In the next section, we recall the different definitions given in the Woodhouse axiomatics and the Malament theorem. Section [coeur] is divided into four sub-sections, of which the second is devoted to a difficulty that occurs systematically and is inherited from the simultaneity maps, resulting in a non-trivial differential aging. In the third sub-section, geometrical arguments that are necessary to define formulas for differential aging consistent with the experimental results are presented using conformal proper times. In the fourth sub-section, the conformal factors in the previous formulas are physically interpreted; then, in section [expert], these formulas are applied to certain experimental results.
In the Woodhouse axiomatics, the spacetime is a set of points to which a set of subsets, called 'particles', is associated, each of them being homeomorphic to the real line. Thus, the spacetime is not a priori a topological manifold, and the particles cannot be loops. It is assumed that at least one particle passes through each point. The first axiom of causality, i.e., axiom 1a, states that for each particle an orientation can be selected among the two determined by the homeomorphism with the real line, such that, once these orientations are selected and fixed for the whole set of particles, no event can be chronological to itself. The 'chronological relation' is then a partial and anti-reflexive order on the spacetime. This is a global property called the 'chronological principle', i.e., there are no causal loops; a spacetime satisfying it is accordingly called 'chronological'. Axiom 2 states that the intersection of any particle with any open set of the Alexandrov topology is open for the topology on the particle (issued from the Borel topology on the real line). From axioms 1a, 1b, 2 and 3, providing the spacetime with the Alexandrov topology, the spacetime is Hausdorff, the chronological relation is past- and future-distinguishing (distinct events have distinct chronological pasts and futures) and full, and each event possesses a past- and future-reflecting open neighborhood (reflecting opens being characterized by inclusion relations between the chronological pasts and futures of their points). Then, Woodhouse shows that the causal structure is of the Kronheimer-Penrose type, with the 'causal relation' and the 'horismotic relation', or 'horismos'. Moreover, the chronological relation is transitive, which is not initially required in the Kronheimer-Penrose causal axiomatics in full generality. The 'message functions' associated with each particle are defined on the tubular past- and future-reflecting Alexandrov open neighborhoods, and give, for each event in such a neighborhood, the times read on the clock of the particle at which light signals connecting the particle to the event are emitted and received. These functions are unique, increase strictly monotonically for the chronological relation, and are open and continuous on their domains. The 'clock functions' associated with each particle are homeomorphisms onto the real line, which increase monotonically for the chronological relation. Additionally, we define the 'radar coordinates' for each particle.
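In the standard radar construction (our notation, given here for concreteness), the radar coordinates of an event $e$ with respect to a pair of particles $P$ and $Q$ collect the four message function values:

```latex
% radar coordinates of an event e with respect to two particles P and Q
% (our notation, following the standard radar construction):
x(e) \;=\; \big(\, f_{P}^{-}(e),\; f_{P}^{+}(e),\; f_{Q}^{-}(e),\; f_{Q}^{+}(e) \,\big)
```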
from this point on , for each pair of non - coplanar particles and ( if coplanarity has a meaning in ) , axiom 4a is satisfied if , such that , and the four radar coordinates ( ) together define a morphism : which is a one - to - one map . then , from this axiom , it can be proven that is a topological manifold homeomorphic to . in addition , from the ` _ -congruence _ ' of particles with the parameterizations on a set ( is a _ -congruence _ on if : 1 ) , such that and , and 2 ) a neighborhood of , then the ` _ evaluation map _ ' is a homeomorphism ) , and from the -differentiability assumption of the clock and message functions ( the congruences have no caustics , that is , each message function restricted to with is a diffeomorphism ) , woodhouse defined -congruences and inductively proved that can be assumed to be _ smooth _ , i.e. , of class . lastly , we call ` _ scalar potentials of metric _ ' smooth functions defined on with values in , each one associated with a particle , such as : then , the lorentzian metric on is the hessian of : which , up to a conformal factor , no longer depends on and . in addition , if is in a past and future reflecting open set of and , then there exists ( lemma 4.2 ) a neighborhood of such that ( with ) is a ( unique ) light path and a one - dimensional submanifold . each light path is a null curve , and they are null conformal geodesics of class . hence , if then there exists a unique ( no caustics ) light path that is at least piecewise smooth . one can notice that in the woodhouse axiomatics , we may have without a light path joining and , meaning the latter have no common past- and future - reflecting open neighborhoods . then , defining a new horismos such that if and , this new causal structure on ( or each ) is somehow " natural " , i.e. , is a conformal manifold with its field of light cones , such that relates events on a cone or equivalently by a light path .
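the formula for the metric as a hessian of a scalar potential was also elided ; in the radar coordinates , and under our assumption about the intended normalization , it should read , for a scalar potential of metric \phi ,

\[
g_{\mu\nu} \;=\; \frac{\partial^{2}\phi}{\partial x^{\mu}\,\partial x^{\nu}} ,
\]

with the conformal factor left free , consistent with the statement that , up to such a factor , the metric no longer depends on the chosen pair of particles .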
and then , is identified with . based on this topology , the projective , conformal and weyl structures can be defined from the choices of metric connections , together with the constraints as indicated in the previous section . that is to say , the particles must be geodesic for the weyl structure . moreover , we recall the malament theorem , which can be used further with the topology provided by the woodhouse causal axiomatics : _ malament theorem . _ let and be two past- and future - distinguishing spacetimes , each supplied with a metric connection to inherit a riemannian structure , and let be a causal isomorphism ( i.e. , keeping the chronological relation as well as its inverse ) from to . then , is a smooth conformal diffeomorphism . furthermore , preserves timelike curve orientations . by definition , is a conformal diffeomorphism preserving orientations if , where is a continuous function defined on . if , then is said to be isometric . in this preliminary section , we present the general mathematical framework used in the next two sub - sections treating the clock paradox . let be a woodhouse spacetime supplied with a lorentzian metric ( defined from a scalar potential of metric ) and a metric connection denoted by providing with a riemannian structure . we assume is at least of class and that it is so defined from clock and message functions at least of class as well as .
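the defining relation for a conformal diffeomorphism stated above lost its symbols in extraction ; in standard notation , and under our assumption that this is the intended statement , a map \varphi is a conformal diffeomorphism when

\[
\varphi^{*}\,\tilde g \;=\; \Omega^{2}\, g ,
\qquad \Omega \;>\; 0 \ \ \text{continuous} ,
\]

and it is isometric exactly when \Omega \equiv 1 .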
then , we consider two intersecting particles ( or ` _ trips _ ' , following the woodhouse definition ) and . we point out again that the non - intersection is only considered in the definition of the -congruences , and so it is associated with the choice for the projective structure defined by but not with the whole set of particles or in particular , and . in other words , there are two other -congruences , each containing or and each associated with another possibly non - equivalent projective structure . we denote by a tubular past- and future - reflecting open neighborhood of , onto which the message functions are defined . the neighborhood may possibly be reduced to avoid including caustics . in addition , and intersect at two points and only , the latter being contained in a past- and future - reflecting open sub - neighborhood of . moreover , we assume that and are contained in between these two points ( chronologically ) . the same notation is used for the restrictions of to . we set , where is analogously defined to . in order to apply the malament theorem , is assumed to be connected and without boundaries . moreover , we also consider as a conformal manifold , i.e. , a manifold endowed with a class of metric connections considered throughout , each supplying with a riemannian structure compatible with the conformal structure defined by ( or ) ; represents those metric connections such that , at least , together _ and _ are projective geodesics between and . then , only on and are these specific _ restricted _ weyl structures defined . if and are clock functions , respectively , on and , then can also be parameterized with using . indeed , we can set : , where . we choose such that and . let and be the vector fields tangent , respectively , to and and defined from the parameterizations and . then , in full generality , we have on : and , where and are real differentiable functions defined , respectively , on and only . let be a metric field defined on , where is a real differentiable function on such that if and if . thus , the relations and hold . we also consider vectors parallel transported from along , such that and , where is the covariant derivative in the direction of . the transported vectors are reached using the parallel transport map which is defined such that : where , and , and is continuous and increasing for the horismos . on the other hand , this is a fundamental map accounting for the causal interpretations that each observer makes locally from a signal being propagated along a null geodesic . obviously , from the vanishing covariant derivative of in the direction of , we have along , . let and be the dual one - forms defined , respectively , on and such that . then , we define on the one - form such that : ( see figure 1 . ) we now consider the inverse situation with two other parameterizations for the particles and , namely , and respectively , but using instead the map ( see figure 1 ) . in the same way , we set : where , and and . thus , one obtains an expression analogous to along , where and are defined in a way similar to and : in conclusion , only the parameterizations on and on are relevant , with the others being deduced afterwards from the message functions . lastly , if is relatively compact , the integrals computed subsequently will always be defined , since the integrands will be lipschitzian . moreover , the lipschitz constants will no longer depend on , according to the zeghib theorem . indeed , all the maps on are defined from a foliation of codimension one defined from a given -congruence .
in the present framework, we shall define a simultaneity map and certain types of proper times from which the first difficulty that arises is the realization of a differential aging formula consistent with the experimental results .let the set of pairs be such that .an expression developed for could be : , and thus , a priori . setting this equality is like defining as a function of , or conversely .then , we impose the condition .now , let the map be such that when .thus , the relation necessarily holds , since .moreover , is bijective and reversible . the map transforms timelike curves into other timelike curves . however , if on , then on , it is equivalent to . thus , the map is a causal isomorphism from to .however , from the malament theorem , there exists on a ( non - unique ) conformal extension of , which , in addition , preserves the particle orientations while passing from to .let us note that the map can never be one of the message functions restricted to , because in this case , the relation with should hold with being causal , which is obviously impossible .one calls the ` _ _ simultaneity map__. ' it is almost fully geometrically unspecified a priori and is non - unique , depending on the parameterizing clock functions .however , it is well - known that , in contrast , it is causally worked out as being unique , as shown with the malament theorem on simultaneity . from this point on, we shall define a variant of the chronometric hypothesis that is nonetheless quite different . instead of assuming that the interval associated with is identified as , where is the proper time of a clock ,we suppose that on , and on . that is to say , the interval is identified as the sole physically " measurable quantity , which represents the time intervals _ displayed _ by the clocks. therefore , these are as varying as the clocks used , contrary to the proper times and depending on each particle but not on their parameterizations .moreover , the constants and are the numerical values for the speed of light when making use of the corresponding parameterizations and . then, we set : where and are the usual " proper times associated with ( not defined on ) the particles and , respectively .the one - form is ascribed to the infinitesimal proper time of , because , first , depends on two points , one on and the other on , and secondly , the term is the well - known factor of special relativity between one observer on and another on .hence , the proper time of an object on is evaluated on , onto which is defined , hence the notation .implicitly , this shows the relative " character of the concept of proper time , which will also be truly justified further by using certain other arguments presented below .then , these two one - forms are defined strictly on the product fibered " by a simultaneity map .in addition , since puts into correspondence the ( co)tangent vector spaces on and , the relations and hold while preserving the particle orientations , since .moreover , at least between the points and , we have the important property , but naturally a priori .we suppose this last equality is satisfied when choosing , among all of the maps , those that satisfy the condition below .let be a conformal diffeomorphism , such that .we denote by the condition when is restricted to or only .therefore , carries out the exchange of the two particles . 
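the ` well - known factor ' of special relativity invoked in the preceding paragraph lost its formula in extraction ; in standard notation it is the relation between proper time and coordinate time ,

\[
d\tau \;=\; \sqrt{1-\frac{v^{2}}{c^{2}}}\;dt \;=\; \frac{dt}{\gamma} ,
\qquad
\gamma \;=\; \Bigl(1-\frac{v^{2}}{c^{2}}\Bigr)^{-1/2} ,
\]

which is presumably the factor entering the one - forms defined above .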
from all of these relations , we have on and , only if . this relation would be satisfied if , in particular and due to the malament theorem , the following equalities and held on . in fact , we would obtain the relation on , because and , as well as and , have the same norms with respect to . in order to satisfy these equalities , we set below the defining constraint or hypothesis on , namely , the constraint : satisfies the condition if preserves the riemannian structure on . then , as we will show in the proof below , the relation holds if and are assumed . then , we have : * lemma * _ if and are satisfied , then on and . _ _ proof _ : indeed , firstly , if preserves the riemannian structure , then for all continuous vector fields and on , there exists a continuous function on such that , where and . then , considering , a null vector tangent to the piecewise null geodesic from to , we find on . thus , and . however , from , we also have when is restricted to , and so the expected result . then , is the pull - back of by . hence , on the basis of a physical interpretation of the signals received by each particle , i.e. , considering again that the proper time of each particle is actually assessed as such only by another particle running in the " time " given by its own clock ( in special relativity , the proper time for a physical frame is always compared , using a factor , to another physical frame ) , then the differential aging between the clocks can be computed _ a priori _ with the formula : we set , the differential aging , with the " usual " proper times : then , in making use of and the constraints and , we find : thus , we must clarify the links between the constraints on and some physical properties . nevertheless , we can not make allowance for concepts inherited from special relativity to define a differential aging formula as above . now , in order to seek and to justify a correct formula for the differential aging , we will also consider a situation with a third particle , which is more related to physical experiments that have actually been performed , such as the hafele and keating experiment . let be a third particle not necessarily passing through the points or , with its two message functions . we assume that and as well as every point of such that are in . according to and the signals it receives coming from and , i.e. via the parallel transports along light paths , a difference of proper times between the two particles and could be written as follows : we define where and are the parallel transported vectors at from , respectively , the vectors , , and is a vector field tangent to provided with the parameterization ( see figure 2 ) . in addition , we assume that with on , where . moreover , we set and . the expression must be absolutely independent of , and we shall show that a particular condition must be satisfied to obtain such independence . we denote by the null geodesic from to ( see figure 2 ) , from to , and from to . in addition , we denote again by the vector field tangent to at and and , the parallel transported vectors , such that : then , we have : * proposition . * _ there exists a unique differentiable function on , independent of the representative metric connection and , such that and at with and . we obtain an analogous relation for on . _ _ proof _ : let be the message function restricted to . as for , we extend the latter map to a causal conformal map defined on and denoted by . the relation holds , and since and have the same norms with respect to , .
additionally , we define such that . then , we have the relations : however , acts as an isometry with respect to on the vectors and , as well as on . thus , we can set the relations : where and are isometries with respect to . on the particle , the carriage of from to is not a parallel transport with respect to , because is not a priori geodesic under . accordingly , we can set , although , . setting , if , then we necessarily find : on . then , integrating from to , we deduce the integral definition of : from this definition , we set , where additionally , , since , but at and thus , . in conclusion , there exists an isometric transformation with respect to , such that , and consequently : thus , we deduce with that : however , and are normalized with respect to the metric at , namely , the metric we denote hereafter by . thus , necessarily , the holonomy map along the loop , i.e. , , is a dilatation composed with a lorentz transformation of : such that . the map can always be composed on the left with another lorentz transformation preserving , but such that . therefore , in full generality , we can always choose a map such that the associated lorentz transformation remains invariant . thus , we deduce that there exists a map and , moreover , a function on such that : therefore , in relation , the only way to pass from to using signaling , that is to say , passing from to using null geodesics , is to apply . however , on the loop , equivalently , the signals at coming from sometimes via or sometimes via must be the same . this signal identification , regardless of the light paths followed , means that leaves invariant . hence , the relation must hold . additionally , this means that the holonomy group of at each point in is the lorentz group of at this point . this is the present case , since , from condition , the metric connection provides with a riemannian structure . thus , there always exists a map leaving invariant . then , at the point and with restricted to , we find : with and . the relation holds . in addition , we can deduce at , using an analogous transformation and with restricted to : with and . then , we obtain in particular : with and . however , may vary and be such that . the relation holds as well . performing a change of variables using the map , we find : however , the two integrands are positive , so we can set new supplementary defining constraints and on , which is not fully defined since on , if , then we find only : such that , then the open interval \,]\,f^-_{\tilde p}(x)\,,\,f^+_{\tilde p}(x'')\,[\;\subset\;\tilde p . this class of functions and the full set of such classes on defines a ` sheaf ' of rings of continuous functions onto , denoted by s_p(w)=\{\,\cdot\;/\;z\in p\,\} . then we might use the classes at a point z' to do the computations . thus , we can set at , and use this expression to compute the metric connection at associated to from the metric connection associated with .
as a result , at and if is assumed to be differentiable , the relation : holds , where and are any vector fields defined in an open neighborhood of . we note that if and both belong to the same class , their gradients may nevertheless differ at . hence , we must define a sub - sheaf of , denoted by , such that \{\,\cdot\;/\;z\in p\,\}\subset s_p(w) , where the bracket notation denotes the class of functions such that and . any function only defined on is suitable ( although continuous on ) , but between and , we impose the relation at : where is the dual one - form of such that on . this can be presented with a more meaningful formula . first , let be any vector field on , with its corresponding space vector such that , where is assumed to be the timelike vector field tangent to and normalized with respect to . of course , satisfies the relations and . secondly , with , we have the relation : however , since , then , and thus , we deduce the relations : , and therefore , we obtain . the relation is thus written in the simplified form : . one can rewrite the preceding formula in another form that is more amenable to easy interpretation : these two relations depend on the clock chosen , i.e. , the clock function and , thus , the functions . then , if there are no applied forces when considering the metric , i.e. , , then naturally it is no longer the case considering instead the metric if and , reciprocally , if and . thus , this exactly represents the ` _ pseudo - fields _ ' of the gravity viewpoint , whose explicit terminology can be found in the einstein paper on the twin paradox as well as in the lift metaphor of the einstein equivalence principle . hence , their origins can be ascribed to the choice of a clock _ displaying _ a time differing from the proper time . then , to the lift metaphor , we could append a sort of _ unregulated clock metaphor _ to special relativity : a massive object is in inertial motion along a straight line at a constant relative speed when compared to a certain clock , and the latter suddenly starts to have a delay that can not be observed in the absence of an absolute time of reference . as a result , one would observe a sudden increase in the relative speed and , thus , a non - vanishing relative acceleration . however , we must mention that the accelerations given by accelerometers remain null , so that the motion remains inertial , even though the relative accelerations are modified . we can ascribe this variation to a geodesic or curvature variation becoming non - zero , as well as to a pseudo - field of gravity occurrence . thus , these fields are " pseudo " only relative to the choice of the conformal metric carried out a priori during the variation and , thus , of a clock . if the variations are not considered as being due to changes in temporal coordinates or clocks , then , quite naturally , two metrics and can be simultaneously associated with each particle . in the geometry defined by , there are no forces applied on , i.e. , , and in the geometry defined by , a force is applied on and is interpreted as fictive on the metric , i.e.
, ascribed to a pseudo - field of gravity . on , one would then have , for example , the forces of gravity or centrifugal forces , which would be fictive forces on but " true " forces on . nevertheless , why would that not remain valid if the variations are considered as changes in clocks or clock functions ? indeed , how do we note such a change ? we do not have a local absolute clock against which we can calibrate the others . this is well highlighted in the _ allan deviation _ of a pool of identical clocks . then , deduced from the latter , the precision of an atomic clock is no longer defined from an absolute , standard clock linked to a given temporal orientation , but from a statistical relation between clocks , and moreover , a time drift bias can not be defined . thus , in practice , an unobservable change in clock function is also equivalent to the sole observed change in geodesics ( related to curvature ) , and then , the lift metaphor can no longer be distinguished from that of an unregulated clock .
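since the allan deviation is invoked as the operational , clock - to - clock notion of precision , here is a minimal python sketch of its standard non - overlapping estimator ; the function name , the unit sample spacing and the test signal are our assumptions , not the paper 's :

import numpy as np

def allan_deviation(y, averaging_factors):
    # y : fractional - frequency samples taken at a fixed unit spacing
    out = []
    for m in averaging_factors:
        n = len(y) // m
        # adjacent block averages over an averaging time of m samples
        yb = y[: n * m].reshape(n, m).mean(axis=1)
        # allan variance : half the mean square of successive differences
        out.append(np.sqrt(0.5 * np.mean(np.diff(yb) ** 2)))
    return np.array(out)

# hypothetical pool test : characterize one simulated noisy clock at
# several averaging times ; no absolute reference clock is ever needed
sigma = allan_deviation(np.random.randn(100_000), [1, 10, 100, 1000])

for white frequency noise the estimate falls off like the inverse square root of the averaging time , so precision is indeed defined by a statistical relation between clocks , without any absolute time reference , as stated above .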
therefore , this at once poses the question of the relation between clock functions and accelerations , forces or geodesics . thus , if the relation holds , we should interpret as a quantity associated , in some ways , with the mechanical work of a force , fictive on , or with an energy potential . from the previous equality , one thus notes that the fact that this scalar product does not change when passing from to means that the work would not be , in any case , fictive . additionally , we can notice elsewhere that a non - fictive work is at the basis of the schild argument for deducing the gravitational frequency shifts in light ( pp . 187 - 189 ) . now , starting from the relation , the function should be a priori the mechanical work of force , along and dependent on the path . as we shall see , the function could be the variation in the total energy of the particle minus the kinetic energy . lastly , it is significant to note that this interpretation for differs for instance from that given by wheeler for the weyl potentials . wheeler identified with the classical action ; it would be very appealing indeed to connect to the physics . however , the examples presented below prevent us , disappointingly , from making such an identification and lead us to dismiss such a proposal in the present situation . one considers in this section only verifications up to the order , corresponding to a first approximation of the definitions of the functions , whose refinements in their complete physical interpretations , under study , remain to be made if one wants to move to results up to the order of at least . this is the simplest situation , occurring when , but with at least two distinct clocks on the same worldline of a freely falling particle , with one of them not running . thus , in this case , if is on between and , and it starts to run only while arriving at , so that . then , the relation is written in the form : where because the clock on is not running . moreover , is then linked only to the internal energy of the running clock , since it is in free fall . let us consider the term of the entropy in classical thermodynamics , such that , where is the infinitesimal " internal " work ( work not related to the kinetic energy ) . the terms and are dimensionless quantities and represent energies per unit of mass or per unit of thermodynamic energy , such as for instance ( being boltzmann 's constant ) . setting , where is the entropy , and where is the internal work , and assuming with constant while the clock operates , then : then , we consider that the clock produces a positive variation in entropy during its irreversible counting : , and . thus , the relation : holds . setting by definition , where is the proper time of this clock , and considering as the time the clock displays as an increasing counter , then at each point such that , we have the variation : hence , the second law of thermodynamics would be equivalent to the monotonic increase of the proper time with respect to the displayed time , with entropy linking them . the question remains as to what the verification conditions are for the chronometric hypothesis . one notes that we must start from the relation and the scalar potentials of metric . the metric field is independent of and up to a conformal factor . in fact , the conformal factor depends upon the first - order derivatives of at point only such that . at this point , one can show that the conformal factor is proportional to the square of the differential and , so , of from . the mere verification of the ch would involve having a constant non - vanishing factor to set down a relation of the form up to a constant factor . the condition is that for every point between and , . then , necessarily , we find . this can not be required for the whole of the interval between and , and then , it remains only to suppose that there exist and such that when . hence , only in that case , the chronometric hypothesis would be equivalent to einstein 's adiabatic hypothesis . moreover , if , then it requires , first , that is high when , and second , that we have an adiabatic process . these are conditions that atomic clocks should satisfy , with " close " to . additionally , let us investigate the case with and constant . for instance , the clock starts to spin or acquire a moment of rotation . then , to first order , we find the following expression : in other words , this process leads to a " slower aging " of the clock compared to the case without rotation . this phenomenon should be strictly different from the " rejuvenation " case found by prigogine and ordonez in the same circumstances of a rotating thermodynamic system . nevertheless , they did not account for the time of rotation while this rejuvenating effect occurred . this time could compensate for the rejuvenation , such that the system would only age more slowly , and then , actually , their phenomenon would be very similar to the present case . this well - known experiment brings into play three reference frames , each one using a cesium atomic clock : two jet aircraft , one moving westward and the other eastward , and a ground control base station on the equator at zero altitude . actually , hafele and keating used a fourth reference frame , i.e.
, an inertial reference frame ( thus , in particular , without rotation ) associated with a remote star . following our notations , they computed the integrals and to the first order of the relativistic effects . the two aircraft were supposed to fly along the equator at the same velocity with respect to the ground . the altitudes and the maximal velocities during this experiment were typically about and , respectively . the earth 's equatorial radius is about and the mean angular velocity for the earth rotation about . thus , the equatorial tangential velocity is about . the aircraft moving towards the east moves in flight at a speed with respect to the frame , and the aircraft moving towards the west travels at a speed of . in addition , we denote by the constant of gravity at the equator . now , considering the first term in the integral in the formula for , the term in is the factor for one of the aircraft . for the aircraft flying toward the east , we find : in this expression , is the minkowski metric , since we refer only to the remote inertial frame . then , to the first order , we obtain : the second term in the integral can also be written to the first order as : according to our interpretations , the coefficient is equal to 1 , because is not subjected to an external working force . on the other hand , the coefficient will reflect the work of the external force that brings the aircraft to its cruising altitude and speed . under these conditions , and contrary to the function in the previous example for one clock only , it is necessary to take with the expression , where is the work per unit of mass energy , where is the mass of the aircraft . thus , with being the work of the forces ( directed to the ground ) given by the accelerometers and considering to be roughly constant , then we find the relation : hence , to the first order , the relation : is exactly that given by hafele and keating . thus , the following integrands correspond to each integral : the integrals are both computed with respect to the variable , and thus , we can subtract them to find the differential aging between the airplanes . equivalently , the term : must be integrated . rigorously , this difference is possible only if the two aircraft are in flight at the same speed , in opposite directions , and at the same distance from the base . this hypothesis is also needed to compare with the integrand in the difference , i.e. , to avoid passing by the base . then , we easily obtain the corresponding following integrands : subtracting these two terms gives the same result .
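the numerical values in this paragraph were lost in extraction ; as a sketch only , the following python fragment evaluates the standard first - order hafele - keating estimate for an idealized equatorial circumnavigation , with textbook constants that we assume ( none of them are taken from this paper ) :

import math

c = 2.998e8          # speed of light , m/s
g = 9.81             # surface gravity , m/s^2
r = 6.378e6          # equatorial radius of the earth , m
omega = 7.292e-5     # angular velocity of the earth , rad/s
h = 9.0e3            # assumed cruising altitude , m
v = 250.0            # assumed ground speed , m/s ( positive eastward )
v_t = r * omega      # equatorial tangential velocity , about 465 m/s

def aging_vs_ground(v_signed):
    # first - order rate difference : gravitational term minus kinematic term
    rate = g * h / c**2 - (v_signed**2 + 2.0 * v_t * v_signed) / (2.0 * c**2)
    t_flight = 2.0 * math.pi * r / abs(v_signed)   # one circumnavigation
    return rate * t_flight                          # seconds gained on the ground clock

print(aging_vs_ground(+v) * 1e9, 'ns eastward')    # roughly -1e2 ns
print(aging_vs_ground(-v) * 1e9, 'ns westward')    # roughly +3e2 ns

the actual flights were neither equatorial nor at constant speed , so these numbers only reproduce the sign and order of magnitude of the published predictions .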
one can also consider the use of the formula . as previously indicated , the metric is the minkowski metric , only far from the earth . in the vicinity of the earth , this is the schwarzschild metric , and from the viewpoint of each aircraft in motion or of the base station , this metric is in a rotating reference frame . in addition , it must be synchronous , or equivalently , conditions such that must be satisfied . the schwarzschild metric can be approximated by the expression ( ) : where , and is the gravitational radius of the earth . in a rotating reference frame such that , , and by replacing by with the approximation and taking into account the orders of magnitude for , and , we can write the following approximation of : however , we need a metric in a synchronous frame , so a change of basis is performed by replacing by and also by multiplying all of the terms by a factor ( i.e. , the basis is defined up to a conformal factor and a projective transformation ) : the constants and are chosen in order to normalize the metric , i.e. , its determinant is equal to on the basis , and with the constraint : for the value of . then , to the first order , we find that and , and so : in order to evaluate the factor for this metric , the tangential velocity denoted by of an object with respect to this rotating reference frame is such that : with the upward velocity defined by : setting at the equator , the inverse of the factor for this metric is roughly of the form : let us now consider the formula to compute . we assume the angles are toward the east . with or and assuming , then , for the two cases , we find the relations :

\[
\begin{aligned}
\beta^{+}_{e} &= \ldots , & \frac{e^{-\beta^{+}_{e}}}{g(\eta,\zeta^{-}_{1})} &\simeq 1+\frac{1}{c^{2}}\left\{\,\ldots\,\right\} ,\\
\beta^{-}_{e} &= -\frac{\mathrm{g}\,h}{c^{2}} , & \frac{e^{-\beta^{-}_{e}}}{g(\eta,\zeta^{+}_{1})} &\simeq 1+\frac{1}{c^{2}}\left\{\mathrm{g}\,h - v\,v_{t}-\frac{v^{2}}{2}-\frac{h}{3r}\left[4\,v\,v_{t}+v^{2}\right]\right\} .
\end{aligned}
\]

hence , in order to obtain , we need to integrate : with respect to . at the zeroth order in , we find again the formula . we consider now the formulas obtained from the viewpoint of each aircraft , that is to say , or . first of all , in their proper frames , none of the external forces work , since the origins of the reference frames are the aircraft themselves . however , the functions are the evaluations of the work of the forces being applied on the aircraft which is not at the origin of the considered reference frame . formula must be applied : there are the following correspondences : is the work of the forces given by the accelerometers placed on the aircraft moving toward the west and estimated in the reference frame of the same aircraft . reciprocally , is the work of the forces given by the accelerometers placed on the aircraft moving to the east and estimated by the plane flying toward the west . from the point of view of each airplane , these two sources of work would be the same if they could simultaneously be estimated with the function . however , in fact , this is not even necessary because the function takes into account the temporal delay via the map . for the two differences , it is thus necessary to estimate the coefficients ascribed to . there is the following correspondence : where , a function of , is the velocity of the airplane travelling towards the east from the point of view of , and is its upward velocity . the expressions for the functions are : where the altitudes are functions of the values of the clock function , is the altitude of the airplane moving toward the east , and is that for the plane traveling west . these altitudes are , actually , relative altitudes between the two airplanes , not with respect to the ground . one can thus make the reasonable assumption that .
under these conditions , by considering , the integrand of is of the approximate form : this expression is far from that obtained in the reference frames of or , but in fact it means that the comparison of the integrated terms is no longer valid . it is the complete integral that must be computed . it follows that the doppler terms in must include dilatations and contractions of the durations depending on whether the airplanes are moving away from each other or are approaching one another along the equator . the contributions of these two modifications in duration will be the same and will be cancelled out at the conclusion of a circumnavigation . thus , only the effect of the term in will persist , leading to the same final result . we have seen that the clock paradox is perfectly solved within the relativity framework due to the asymmetry resulting from the factors . however , at the same time , it is necessary to ascribe a new meaning to these conformal factors . these factors are strongly associated with the concept of total energy minus kinetic energy , which is not necessarily defined by a potential along each timelike worldline ( particles ) . actually , these functions imply in themselves the data of an additional dimension . more precisely , we have sheaves of germs of functions that are classes of scalar fields , which " mimic " , at each given event , the values of the functions . in other words , we could have no such scalar fields on , but there is always the need for non - vanishing conformal scale factors , i.e. , values independent of the _ locus _ in the spacetime , which is an extra dimension . in the same vein , we can suggest a reply to a concluding question of ehlers _ et al . _ , asking whether other interpretations of the so - called weyl _ streckenkrümmung _ bivector rather than those ascribing the latter to the electromagnetic field _ might contain some physical truth " _ . the fact is that as a field , per se , we may have no such field , but only germs of such fields along particles or beams of particles . again , this bivector would be related as a germ to derivatives of the functions . however , in any case , such ( bi)vectors would be defined only in jet manifolds of germs of functions and , as such , would be associated with an extra variable . thus , it seems necessary to consider a spacetime supplied with an extra dimension of energy , i.e. , the conformal scale factors , that translates the various physical transformations without gravitational origins . therefore , it is basically a " spacetime " of five dimensions upon a conformal spacetime of four dimensions . however , the geometrical structure should also be a product of copies of the spacetime fibered by conformal scale factors ( or equivalently , ) . in conclusion , this extra dimension is absolutely necessary in order to provide an account of the internal evolution of objects in relation to the spacetime structure , and it alone gives a meaning to the concept of the proper time of an object in relation to the chronology given on a spacetime .
ehlers , j. , pirani , f. a. e. , & schild , a. ( 1972 ) . the geometry of free fall and light propagation . in l. o'raifeartaigh ( ed . ) , _ general relativity , papers in honour of j. l. synge _ ( pp . ) . oxford : clarendon press .
kundt , w. , & hoffmann , b. ( 1962 ) . determination of gravitational standard time . in _ recent developments in general relativity - a book dedicated to leopold infeld 's 60th birthday _ ( pp . 303 - 306 ) . new york : pergamon press .
prigogine , i.
, & ordonez , g. ( 2006 ) . acceleration and entropy : a macroscopic analogue of the twin paradox . in f. f. orsucci and n. sala ( eds . ) , _ new research on chaos and complexity _ ( pp . 520 ) . new york : nova science publishers , inc .
on the basis of the woodhouse causal axiomatics , we show that conformal proper times and an extra variable in addition to those of space and time , precisely and physically identified from experimental examples , together give a physical justification for the ` _ chronometric hypothesis _ ' of general relativity . indeed , we show that , in the absence of these two ingredients , no clock paradox solution exists in which the clock and message functions are solely at the origin of the asymmetry . these proper times originate from a given conformal structure of the spacetime when ascribing different compatible projective structures to each woodhouse particle , and then , each defines a specific weylian sheaf structure . in addition , the proper time parameterizations , as two - point functions , can not be defined irrespective of the processes in the relative changes of physical characteristics . these processes are included via path - dependent conformal scale factors , which act like sockets for any kind of physical interaction and also represent the values of the variable associated with the extra dimension . as such , the differential aging goes far beyond the first and second clock effects in weyl geometries , the latter finally appearing not to be suitable .
analysis of complex high dimensional data is an exploding area of research , with applications in diverse fields , such as machine learning , statistical data analysis , bio - informatics , meteorology , chemistry and physics . in the first three application fields , the underlying assumption is that the data is sampled from some unknown probability distribution , typically without any notion of time or correlation between consecutive samples . important tasks are dimensionality reduction , e.g. , the representation of the high dimensional data with only a few coordinates , and the study of the geometry and statistics of the data , its possible decomposition into clusters , etc . in addition , there are many problems concerning supervised learning , in which additional information , such as a discrete class or a continuous function value is given to some of the data points . in this paper we are concerned only with the unsupervised case , although some of the methods and ideas presented can be applied to the supervised or semi - supervised case as well . in the latter three application fields mentioned above , the data is typically sampled from a complex biological , chemical or physical _ dynamical _ system , in which there is an inherent notion of time . many of these systems involve multiple time and length scales , and in many interesting cases there is a separation of time scales , that is , there are only a few `` slow '' time scales at which the system performs conformational changes from one meta - stable state to another , with many additional fast time scales at which the system performs local fluctuations within these meta - stable states . in the case of macromolecules the slow time scale is that of a conformational change , while the fast time scales are governed by the chaotic rotations and vibrations of the individual chemical bonds between the different atoms of the molecule , as well as the random fluctuations due to the frequent collisions with the surrounding solvent water molecules . in the more general case of interacting particle systems , the fast time scales are those of density fluctuations around the mean density profiles , while the slow time scales correspond to the time evolution of these mean density profiles . although on the fine time and length scales the full description of such systems requires a high dimensional space , e.g. , the locations ( and velocities ) of all the different particles , these systems typically have an intrinsic low dimensionality on coarser length and time scales . thus , the coarse time evolution of the high dimensional system can be described by only a few dynamically relevant variables , typically called reaction coordinates . important tasks in such systems are the reduction of the dimensionality at these coarser scales ( known as homogenization ) , and the efficient representation of the complicated linear or non - linear operators that govern their ( coarse grained ) time evolution . additional goals are the identification of the meta - stable states , the characterization of the transitions between them and the efficient computation of mean exit times , potentials of mean force and effective diffusion coefficients . in this paper , following , we consider a family of diffusion maps for the analysis of these problems . given a large dataset , we construct a family of random walk processes based on isotropic and anisotropic diffusion kernels and study their first few eigenvalues and eigenvectors ( principal components ) .
the key point in our analysis is that these eigenvectors and eigenvalues capture important geometrical and statistical information regarding the structure of the underlying datasets . it is interesting to note that similar approaches have been suggested in various different fields . in graph theory , the first few eigenvectors of the normalized graph laplacian have been used for spectral clustering , approximations to the optimal normalized - cut problem and dimensionality reduction , to name just a few applications . similar constructions have also been used for the clustering and identification of meta - stable states for datasets sampled from dynamical systems . however , it seems that the connection of these computed eigenvectors to the underlying geometry and probability density of the dataset has not been fully explored . in this paper , we consider the connection of these eigenvalues and eigenvectors to the underlying geometry and probability density distribution of the dataset . to this end , we assume that the data is sampled from some ( unknown ) probability distribution , and view the eigenvectors computed on the finite dataset as discrete approximations of corresponding eigenfunctions of suitably defined continuum operators in an infinite population setting . as the number of samples goes to infinity , the discrete random walk on the set converges to a diffusion process defined on the -dimensional space but with a non - uniform probability density . by explicitly studying the asymptotic form of the chapman - kolmogorov equations in this setting ( e.g. , the infinitesimal generators ) , we find that for data sampled from a general probability distribution , written in boltzmann form as , the eigenfunctions and eigenvalues of the standard normalized graph laplacian construction correspond to a diffusion process with a potential ( instead of a single ) . therefore , a subset of the first few eigenfunctions is indeed well suited for spectral clustering of data that contains only a few well separated clusters , corresponding to deep wells in the potential . motivated by the well known connection between diffusion processes and schrödinger operators , we propose a novel non - isotropic construction of a random walk on the graph , that in the asymptotic limit of infinite data recovers the eigenvalues and eigenfunctions of a diffusion process with the same potential . this normalization , therefore , is most suited for the study of the long time behavior of complex dynamical systems that evolve in time according to a stochastic differential equation . for example , in the case of a dynamical system driven by a bistable potential with two wells ( e.g.
with one slow time scale for the transition between the wells and many fast time scales ) the second eigenfunction can serve as a parametrization of the reaction coordinate between the two states , much in analogy to its use for the construction of an approximation to the optimal normalized cut for graph segmentation .for the analysis of dynamical systems , we also propose to use a subset of the first few eigenfunctions as reaction coordinates for the design of fast simulations .the main idea is that once a parametrization of dynamically meaningful reaction coordinates is known , and lifting and projection operators between the original space and the diffusion map are available , detailed simulations can be initialized at different locations on the reaction path and run only for short times , to estimate the transition probabilities to different nearby locations in the reaction coordinate space , thus efficiently constructing a potential of mean field and an efficient diffusion coefficient on the reaction path .finally , we describe yet another random walk construction that in the limit of infinite data recovers the laplace - beltrami ( heat ) operator on the manifold on which the data resides , regardless of the possibly non - uniform sampling of points on the manifold .this normalization is therefore best suited for learning the geometry of the dataset , as it separates the geometry of the manifold from the statistics on it .our analysis thus reveals the intimate connection between the eigenvalues and eigenfunctions of different random walks on the finite graph to the underlying geometry and probability distribution from which the dataset was sampled .these findings lead to a better understanding of the advantages and limitations of diffusion maps as a tool to solve different tasks in the analysis of high dimensional data .consider a finite dataset .we consider two different possible scenarios for the origin of the data . in the first scenario ,the data is not necessarily derived from a dynamical system , but rather it is randomly sampled from some arbitrary probability distribution . in this casewe define an associated potential so that . in the second scenario , we assume that the data is sampled from a dynamical system in equilibrium .we further assume that the dynamical system , defined by its state at time , satisfies the following generic stochastic differential equation ( sde ) where a dot on a variable means differentiation with respect to time , is the free energy at ( which , with some abuse of nomenclature , we will also call the potential at ) , and is an -dimensional brownian motion process . in this casethere is an explicit notion of time , and the transition probability density of finding the system at location at time , given an initial location at time ( ) , satisfies the forward fokker - planck equation ( fpe ) with initial condition similarly , the backward fokker - planck equation for the density , in the backward variables ( ) is where differentiations in ( [ backward_fpe ] ) are with respect to the variable , and the laplacian is a negative operator , defined as . as time the steady state solution of ( [ fpe ] ) is given by the equilibrium boltzmann probability density , where is a normalization constant ( known as the partition function in statistical physics ) , given by in what follows we assume that the potential is shifted by the suitable constant ( which does not change the sde ( [ sde ] ) ) , so that . 
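the displayed equations of this setup were lost in extraction ; under the unit - temperature , unit - diffusion normalization that the rest of the text appears to use ( our assumption ) , the sde , its forward fokker - planck equation and the equilibrium density read :

\[
\dot{\mathbf{x}}(t) \;=\; -\nabla U(\mathbf{x}) \;+\; \sqrt{2}\,\dot{\mathbf{w}}(t) ,
\qquad
\frac{\partial p}{\partial t} \;=\; \Delta p \;+\; \nabla\!\cdot\!\left(p\,\nabla U\right) ,
\qquad
\mu(\mathbf{x}) \;=\; \frac{e^{-U(\mathbf{x})}}{Z} ,
\]

with Z = \int e^{-U(\mathbf{x})}\,d\mathbf{x} the partition function .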
also , we use the notation interchangeably to denote the ( invariant ) probability measure on the space . note that in both scenarios , the steady state probability density , given by ( [ mu_x ] ) is identical . therefore , for the purpose of our initial analysis , which does not directly take into account the possible time dependence of the data , it is only the features of the underlying potential that come into play . the langevin equation ( [ sde ] ) or the corresponding fokker - planck equation ( [ fpe ] ) are commonly used to describe mechanical , physical , chemical , or biological systems driven by noise . the study of their behavior , and specifically the decay to equilibrium , has been the subject of much theoretical research . in general , the solution of the fokker - planck equation ( [ fpe ] ) can be written in terms of an eigenfunction expansion where are the eigenvalues of the fp operator , with , are their corresponding eigenfunctions , and the coefficients depend on the initial conditions . obviously , the long term behavior of the system is governed only by the first few eigenfunctions , where is typically small and depends on the time scale of interest . in low dimensions , for example , it is possible to calculate approximations to these eigenfunctions via numerical solutions of the relevant partial differential equations . in high dimensions , however , this approach is in general infeasible and one typically resorts to simulations of trajectories of the corresponding sde ( [ sde ] ) . in this case , there is a need to employ statistical methods to analyze the simulated trajectories , identify the slow variables , the meta - stable states , the reaction pathways connecting them and the mean transition times between them . let , denote the samples , either merged from many different simulations of the stochastic equation ( [ sde ] ) , or simply given without an underlying dynamical system . in , coifman and lafon suggested the following method , based on the definition of a markov chain on the data , for the analysis of the geometry of general datasets : for a fixed value of ( a metaparameter of the algorithm ) , define an isotropic diffusion kernel , assume that the transition probability between points and is proportional to , and construct a markov matrix , as follows where is the required normalization constant , given by for large enough values of the markov matrix is fully connected ( in the numerical sense ) and therefore has an eigenvalue with multiplicity one and a sequence of additional non - increasing eigenvalues , with corresponding eigenvectors . the diffusion map at time is defined as the mapping from to the vector for some small value of . in , it was demonstrated that this mapping gives a low dimensional parametrization of the geometry and density of the data . in the field of data analysis , this construction is known as the _ normalized graph laplacian _ . in , shi and malik suggested using the first non - trivial eigenvector to compute an approximation to the optimal normalized cut of a graph , while the first few eigenvectors were suggested by weiss et al . for clustering . similar constructions , falling under the general term of kernel methods , have been used in the machine learning community for classification and regression . in this paper we elucidate the connection between this construction and the underlying potential .
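as a concrete illustration of the construction just described , here is a minimal python sketch of the markov matrix and the resulting diffusion map ; the kernel - width convention , the helper name and the parameter values are ours , not the paper 's :

import numpy as np

def diffusion_map(X, eps, t=1, k=2):
    # isotropic gaussian kernel on pairwise squared distances
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / eps)
    D = K.sum(axis=1)                 # row sums = normalization constants
    # symmetric conjugate of the markov matrix M = K / D ; same spectrum
    Ms = K / np.sqrt(np.outer(D, D))
    w, V = np.linalg.eigh(Ms)
    order = np.argsort(-w)
    w, V = w[order], V[:, order]
    psi = V / V[:, [0]]               # right eigenvectors of M ; psi_0 = 1
    # diffusion map coordinates ( lambda_j^t * psi_j ) for j = 1 .. k
    return (w[1:k + 1] ** t) * psi[:, 1:k + 1]

# hypothetical usage : two clusters from the sign of the first coordinate
# labels = (diffusion_map(X, eps=0.5, k=1)[:, 0] > 0).astype(int)

working with the symmetric conjugate keeps the eigenproblem hermitian : the top eigenvalue is 1 with a constant right eigenvector , and the next few coordinates give the embedding .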
to analyze the eigenvalues and eigenvectors of the normalized graph laplacian , we consider them as a finite approximation of a suitably defined diffusion operator acting on the continuum probability space from which the data was sampled . we thus consider the limit of the above markov chain process as the number of samples approaches infinity . for a finite value of , the markov chain in discrete time and space converges to a markov process in discrete time but continuous space . then , in the limit , this jump process converges to a diffusion process on , whose local transition probability depends on the non - uniform probability measure . we first consider the case of a fixed , and take . using the similarity of ( [ k_epsilon ] ) to the diffusion kernel , we view as a measure of time and consider a discrete jump process at time intervals , with a transition probability between points and proportional to . however , since the density of points is not uniform but rather given by the measure , we define an associated normalization factor as follows , and a forward transition probability . equations ( [ p_ve ] ) and ( [ m_f ] ) are the continuous analogues of the discrete equations ( [ p_ve_discrete ] ) and ( [ m_discrete ] ) . for future use , we also define a symmetric kernel as follows . note that is an estimate of the local probability density at , computed by averaging the density in a neighborhood of radius around . indeed , as , we have that . we now define forward , backward and symmetric chapman - kolmogorov operators on functions defined on this probability space , as follows ,

\[
t_{f}[\varphi](x) \;=\; \int m_{f}(x\,|\,y)\,\varphi(y)\,d\mu(y) ,
\qquad
t_{b}[\varphi](x) \;=\; \int m_{f}(y\,|\,x)\,\varphi(y)\,d\mu(y) ,
\qquad
t_{s}[\varphi](x) \;=\; \int m_{s}(x,y)\,\varphi(y)\,d\mu(y) .
\]

if is the probability of finding the system at location at time , then t_{b}[\varphi](x) is the mean ( average ) value of the function \varphi at time for a random walk that started at x . in the corresponding figure , the left panel shows the sample points at temperature , color - coded by their local density , while on the right we plotted the first two diffusion map coordinates . notice how in the diffusion map space one can clearly see a triangle where each vertex corresponds to one of the points . this figure shows very clearly that there are two possible pathways to go from to : a direct ( short ) way and an indirect longer way , that passes through the shallow well centered at . we conclude this section with a diffusion map analysis of one of the most popular multivariate datasets in pattern recognition , the iris data set . this set contains 3 distinct classes of samples in four dimensions , with 50 samples in each class . in figure [ f : iris ] we see on the left the result of the three dimensional diffusion map on this dataset . this picture clearly shows that all 50 points of class 1 ( blue ) are shrunk into a single point in the diffusion map space and can thus be easily distinguished from classes two and three ( red and green ) .
in the right plot we see the results of re - running the diffusion map on the 100 remaining red and green samples . the 2-d plot of the first two diffusion map coordinates shows that there is no perfect separation between these two classes . however , clustering according to the sign of gives misclassification rates similar to those of other methods , of the order of 6 - 8 samples depending on the value chosen for the kernel width . in this paper , we introduced a mathematical framework for the analysis of diffusion maps , via their corresponding infinitesimal generators . our results show that diffusion maps are a natural method for the analysis of the geometry and probability distribution of empirical data sets . the identification of the eigenvectors of the markov chain as discrete approximations to the corresponding differential operators provides a mathematical justification for their use as a dimensional reduction tool and gives an alternative explanation for their empirical success in various data analysis applications , such as spectral clustering and approximations of optimal normalized cuts on discrete graphs . we generalized the standard construction of the normalized graph laplacian to a one - parameter family of graph laplacians that provides a low - dimensional description of the data combining the geometry of the set with the probability distribution of the data points . the choice of the diffusion map depends on the task at hand . if , for example , data points are known to approximately lie on a manifold , and one is solely interested in recovering the geometry of this set , then an appropriate normalization of a gaussian kernel allows one to approximate the laplace - beltrami operator , regardless of the density of the data points . this construction achieves a complete separation of the underlying geometry , represented by the knowledge of the laplace operator , from the statistics of the points . this is important in situations where the density is meaningless , and yet points on the manifold are not sampled uniformly on it .
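the one - parameter family of graph laplacians mentioned above is commonly implemented by first renormalizing the kernel with a kernel density estimate ; a sketch , with the usual convention that alpha = 0 gives the normalized graph laplacian , alpha = 1/2 the fokker - planck normalization and alpha = 1 the laplace - beltrami one ( the helper name is ours ) :

import numpy as np

def anisotropic_markov(K, alpha):
    # density estimate from the kernel row sums
    p = K.sum(axis=1)
    # renormalized kernel : K_alpha(x, y) = K(x, y) / (p(x)^alpha * p(y)^alpha)
    Ka = K / np.outer(p ** alpha, p ** alpha)
    # row - stochastic markov matrix built from the renormalized kernel
    return Ka / Ka.sum(axis=1, keepdims=True)

the eigenvectors of this matrix then play the role of the limiting eigenfunctions discussed in the conclusions .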
in this case, there is a subtle interaction between the distribution of the points and the geometry of the data set , and one must correctly account for the density of the points .while in this paper we analyzed only gaussian kernels , our asymptotic results are valid for general kernels , with the appropriate modification that take into account the mean and covariance matrix of the kernel .note , however , that although asymptotically in the limit and , the choice of the isotropic kernel is unimportant , for a finite data set the choice of both and the kernel can be crucial for the success of the method .finally , in the context of dynamical systems , we showed that diffusion maps with the appropriate normalization constitute a powerful tool for the analysis of systems exhibiting different time scales .in particular , as shown in the different examples , these time scales can be separated and the long time dynamics can be characterized by the top eigenfunctions of the diffusion operator .last , our analysis paves the way for fast simulations of physical systems by allowing larger integration steps along slow variable directions .the exact details required for the design of fast and efficient simulations based on diffusion maps will be described in a separate publication .* acknowledgments : * the authors would like to thank the referee for helpful suggestions and for pointing out ref . .in this appendix , we present the calculation of the infinitesimal generators for the different diffusion maps characterized by a parameter .suppose that the data set consists of a riemannian manifold with a density and let be a gaussian kernel .it was shown in that if is scaled appropriately , then for any function on , where is a function that depends on the riemannian geometry of the manifold and its embedding in . using the notations introduced in section [ ref : anisotropic diffusion maps ] , it is easy to verify that and consequently , let then , the normalization factor is given by \ ] ] therefore , the asymptotic expansion of the backward operator gives and its infinitesimal generator is inserting the expression into the last equation gives similarly , the form of the forward infinitesimal operator is r. r. coifman , s. lafon , a. b. lee , m. maggioni , b. nadler , f. warner and s. zucker , geometric diffusions as a tool for harmonic analysis and structure definition of data , part i : diffusion maps , _ proc ._ , in press .m. saerens , f. fouss , l. yen and p. dupont , _ the principal components analysis of a graph and its relationships to spectral clustering _ , proceedings of the 15th european conference on machine learning ( ecml 2004 ) , lecture notes in artificial intelligence , vol . 3201 , springer - verlag , berlin , 2004 , pp 371 - 383 .kevrekidis , c.w .gear , j. m. hyman , p.g .kevrekidis , o. runborg , c. theodoropoulos , `` equation free multiscale computation : enabling microscopic simulators to perform system - level tasks '' , _ comm ._ , submitted .
a central problem in data analysis is the low dimensional representation of high dimensional data , and the concise description of its underlying geometry and density . in the analysis of large scale simulations of complex dynamical systems , where the notion of time evolution comes into play , important problems are the identification of slow variables and dynamically meaningful reaction coordinates that capture the long time evolution of the system . in this paper we provide a unifying view of these apparently different tasks , by considering a family of _ diffusion maps _ , defined as the embedding of complex ( high dimensional ) data onto a low dimensional euclidian space , via the eigenvectors of suitably defined random walks defined on the given datasets . assuming that the data is randomly sampled from an underlying general probability distribution , we show that as the number of samples goes to infinity , the eigenvectors of each diffusion map converge to the eigenfunctions of a corresponding differential operator defined on the support of the probability distribution . different normalizations of the markov chain on the graph lead to different limiting differential operators . for example , the normalized graph laplacian leads to a backward fokker - planck operator with an underlying potential of , best suited for spectral clustering . a specific anisotropic normalization of the random walk leads to the backward fokker - planck operator with the potential , best suited for the analysis of the long time asymptotics of high dimensional stochastic systems governed by a stochastic differential equation with the same potential . finally , yet another normalization leads to the eigenfunctions of the laplace - beltrami ( heat ) operator on the manifold in which the data resides , best suited for the analysis of the geometry of the dataset , regardless of its possibly non - uniform density .
statistical physics furnishes diverse tools for the study of human social dynamics . despite the apparent dissimilarities between social and physical systems , many social processes present a phenomenology that resembles that found , for instance , in the physics of frustrated or disordered materials is because , despite the high heterogeneity of the individuals , and their interactions , not all details are relevant for the emergence of collective patterns .collective behaviors make social systems interesting for the physicist and , reciprocally , the physicist might contribute with a new perspective to the comprehension of social phenomena .one of the basic ingredients to be taken into account for modeling people s interactions is imitation , or social contagion .in fact , imitation is observed in diverse social contexts , such as in the dynamics of language learning or decision making .the recurrent conformity to the attitudes , opinions or decisions of other individuals , or groups of individuals , has led to the formulation of models based on social contagion as the primary rule of opinion dynamics , e.g. , the voter , sznajd and majority rule models , to give just a few examples .however , aside from imitation , individuals also dissent and resist to be influenced , in several ways . in the present work , we study the effect of nonconformity attitudes through a kinetic exchange model . within this modeling ,the influences that an individual exerts over another are modulated by a coupling strength .the coupling strength between connected individuals typically takes positive values but also , with certain probability , it can adopt negative ones .negative values represent negative influences that , instead of imitation , promote dissent .notice that this kind of dissent is not a characteristic of the individual but of the tie ( or link ) between each pair of individuals .there are also other types of nonconformity , which are not associated to the links but to the individuals ( or nodes in the network of contacts ) , as taken into account in several models .one of these types is anticonformity .an anticonformist actively dissents from other people s opinions , which is the case contemplated , for example , by galam s contrarians . 
actually , although these anticonformists defy other people s opinions or the group norm , they are similar to conformists in the sense that they take into account other s opinions too .a different kind of nonconformity is independence , such that the individual tends to resist the influences of other agents or groups of agents , ignoring their choices in the adoption process .it can be thought either as an attribute of the agent , that acts as independent with certain probability , or an attitude that any individual can assume with certain frequency .it is in the latter sense that we will incorporate independence into the model .another important ingredient for modeling opinion dynamics is conviction or persuasion ( a kind of stubbornness or resistance to change mind ) .we will consider the heterogeneity of the individuals in that respect .each agent will be characterized by a parameter that measures its level of conviction about the opinion it supports .this parameter , typically defined positive , will be allowed to take also negative values to represent volatile individuals that change mind easily .heterogeneities and disorder can act on opinion dynamics as stochastic drivings able to promote a phase transition , playing the role of a source of randomness or noise similar to a social temperature .we will focus on the impact of all the abovementioned sources of disorder on the steady state distribution of opinions in the population , and investigate the occurrence of nonequilibrium phase transitions .our model belongs to the class of kinetic exchange opinions models .we consider a fully - connected population of size participating in a public debate .each agent in this artificial society has an opinion .most models of opinion dynamics deal with a discrete state space , which may be enough to tackle certain problems that involve binary or several particular choices .however , to investigate , for instance , the emergence of extreme opinions , it seems to be more suitable to represent opinions by means of a continuous variable , to reflect the possible shades of peoples attitudes about a given subject .then , we will consider a continuous state space , where opinions can take values in the real range ] , independently of the current states and .that is , as commented above , we consider that independence is not an attribute of the individual but an attitude that any individual can occasionally assume with certain probability . in that opportunity, individuals choose their own position ( state ) independently of the other individuals .3 . otherwise , with probability , the agent acts as a partially conformist individual , and will be influenced by agent by means of a kinetic exchange . in this case , the opinion of agent will be updated according to the rule where and are real parameters that measure the level of conviction of agent , and the strength of the influence that agent suffers from agent , respectively .the opinions are restricted to the range ] introduces a nonlinearity in the mapping given by eq .( [ eq1 ] ) , that becomes linear by parts . smoothing this nonlinearity , for instance through ] or in ] or in ] . in the following, we will describe separately two distinct cases characterized by : ( i ) the existence of competitive positive / negative interactions among pairs of agents , while convictions are homogeneous ( with , ) , and ( ii ) the heterogeneity of agents convictions , which interactions are always positive ( , for all ) . 
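the update rule itself did not survive extraction, so the sketch below reconstructs it from the description above as o_i(t+1) = clip(lambda_i o_i(t) + mu_ij o_j(t), -1, 1), with mu_ij drawn uniformly from [0,1] with probability 1-p and from [-1,0] with probability p, and an independent move (o_i redrawn uniformly in [-1,1]) occurring with probability q. these distributions and all parameter values are assumptions for illustration.

```python
import numpy as np

def simulate(N=1000, p=0.1, q=0.05, steps=300, delta=0.0, seed=None):
    """monte carlo run of the kinetic exchange opinion model (assumed rule:
    o_i <- clip(lambda_i * o_i + mu_ij * o_j, -1, 1)).

    p     : probability that a coupling mu_ij is negative (dissent)
    q     : probability of an independent move (o_i redrawn in [-1, 1])
    delta : fraction of negative convictions; delta = 0 with lambda_i = 1
            reproduces the homogeneous case (i) above
    """
    rng = np.random.default_rng(seed)
    o = rng.uniform(-1, 1, N)
    lam = np.ones(N)
    if delta > 0:                            # heterogeneous convictions
        neg = rng.random(N) < delta
        lam = np.where(neg, rng.uniform(-1, 0, N), rng.uniform(0, 1, N))
    for _ in range(steps * N):               # one time step = N update attempts
        i = rng.integers(N)
        if rng.random() < q:                 # independence
            o[i] = rng.uniform(-1, 1)
        else:                                # pairwise kinetic exchange
            j = (i + 1 + rng.integers(N - 1)) % N   # random j != i
            mu = -rng.random() if rng.random() < p else rng.random()
            o[i] = min(1.0, max(-1.0, lam[i] * o[i] + mu * o[j]))
    return o

o = simulate(seed=7)
print("O = |sum_i o_i| / N =", round(abs(o.mean()), 3))
```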
in both cases ,the noise introduced by independent attitudes is included .we calculate the parameter given by where denotes average over disorder or configurations , computed at the steady states .notice that is a kind of order parameter that plays the role of the `` magnetization per spin '' in magnetic systems .it is sensitive to the unbalance between positive and negative opinions .a state with a large value of ( ) means that an extremist position reached consensus .intermediate values indicate : ( i ) the dominance of either one of the extreme opinions , ( ii ) that opinions are moderate but one of the sides wins , or ( iii ) a combination of both .all these states can be identified with ordered ones in the sense that the debate has a clear result ( favorable or unfavorable ) , be extremist or moderate .a small value ( ) indicates a symmetric distribution of opinions : ( i ) polarization such that opposite opinions balance , ( ii ) the dominance of very moderate or undecided opinions around the neutral state , or ( iii ) a combination of both . in all symmetric cases ,the debate will not have a clear winner position . in that sensethe collective state can be identified with a disordered one .we consider also the fluctuations ( or `` susceptibility '' ) of parameter , and the binder cumulant , defined as all these quantities will be used to characterize the phase transitions between ordered and disordered phases .additionally , those phases will be described by means of the pattern that the distribution of opinions presents .let us focus first on the effect of competitive interactions ( with a fraction of negative couplings ) and independence ( which occurs with probability ) , by studying the case of homogeneous agents with for all .in figure [ fig1 ] we exhibit as a function of the independence parameter , for different values of .as observed in the figure , independence makes the system undergo a phase transition , an effect which is typical of social temperatures . versus the independence probability , for several values of the fraction of negative interaction strengths , with for all .one can observe transitions at different critical points , but the maximal value of decreases with and the transition is eliminated for sufficiently large values of .the population size is and data are averaged over simulations.,scaledwidth=30.0% ] the transition occurs even when couplings are all positive ( ) , then the mere presence of independence leads the system to undergo a phase transition .a similar effect was observed in the discrete version of the present model , considering only opinions , , , although the critical point has a different value ( 1/4 in the discrete case ) .when we introduce negative interaction strengths ( ) , states characterized by , meaning consensus of extreme opinions , become unlikely ( see figure [ fig1 ] ) .moreover , the critical values decrease with and , above sufficiently large , the system becomes disordered for any .the critical value is in accord with that found in ref . . 
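the three diagnostics just introduced are computed from independent steady-state samples of the order parameter. the garbled expressions above are, in all likelihood, the standard forms chi = N(<O^2> - <O>^2) and U = 1 - <O^4>/(3<O^2>^2); the sketch below assumes them.

```python
import numpy as np

def observables(samples, N):
    """order parameter, susceptibility and binder cumulant from independent
    steady-state samples of O = |sum_i o_i| / N (one value per run)."""
    O = np.asarray(samples, dtype=float)
    m1, m2, m4 = O.mean(), (O ** 2).mean(), (O ** 4).mean()
    chi = N * (m2 - m1 ** 2)              # fluctuations ("susceptibility")
    U = 1.0 - m4 / (3.0 * m2 ** 2)        # binder cumulant
    return m1, chi, U

# stand-in samples just to exercise the function; in practice each sample
# would come from one run of simulate() above
rng = np.random.default_rng(3)
demo = np.abs(rng.normal(0.3, 0.05, 500))
print([round(v, 4) for v in observables(demo, N=1000)])
```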
to estimate the critical points , we performed a finite - size scaling ( fss ) analysis .as a typical example , we exhibit in figure [ fig2 ] the results of fss for .the critical exponents are , and .thus , as expected , the universality class known for the mean - field implementation of the model when is not altered by the presence of the microscopic disorder introduced by independence ., for and for all .the best data collapse was found for , , and .,title="fig:",scaledwidth=30.0% ] the fss analysis also provides the critical values , that allow us to build a phase diagram in the plane versus , depicted in figure [ fig3 ] .the boundary separates the ordered and disordered phases . in the ordered phase , for , one of the sides ( positive or negative opinions ) will win the debate , while in the disordered phase , there will be balance of opposite opinions and/or dominance of moderate ones . following the analytical expression found for the discrete kinetic exchange opinion models , namely a quotient of two first - order polynomials in , we propose a qualitative description of the phase boundary through which for fitting parameters gives a heuristic description of the phase boundary , in good agreement with the critical points obtained from fss analysis , as shown in figure [ fig3 ] .the extreme critical values and are given respectively by and .the latter value is in agreement with the estimate found in ref . , which deals with that limiting case .( probability of independence ) versus ( probability of negative interactions ) , when .the symbols are the numerical estimates of the transition points , whereas the dashed line is given by eq .( [ eq5 ] ) .the error bars were estimated from the fss analysis . , scaledwidth=30.0% ] ( top , left panel ) , ( top , right panel ) and ( bottom ) , for several values of , when .empty ( full ) symbols represent ordered ( disordered ) states . in the insets we show a zoom of the main frames , excluding the extremist agents with majority opinions .the population size is , and data are accumulated over independent simulations , as explained in the text ., title="fig:",scaledwidth=30.0% ] since there is ambiguity in the interpretation of the values of parameter , we complemented the previous analysis , computing the distribution of opinions in the population at the steady states of the model .
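before moving to the opinion distributions, note that the data collapse behind figure [ fig2 ] is mechanically simple. the exponent values above did not survive extraction, so the sketch below uses the mean-field placeholders beta/nu = 1/4 and 1/nu = 1/2 often quoted for this class of models, and fabricates curves that satisfy the scaling form exactly so the collapse is visible in the printed numbers; both choices are assumptions.

```python
import numpy as np

def collapse(q, O, N, qc, beta_nu, inv_nu):
    """rescale a raw curve O(q; N) to the scaling variables
    x = (q - qc) * N**inv_nu,  y = O * N**beta_nu.
    at the correct (qc, beta/nu, 1/nu) the curves for different N
    fall onto a single master curve."""
    return (q - qc) * N ** inv_nu, O * N ** beta_nu

qc, beta_nu, inv_nu = 0.20, 0.25, 0.5      # placeholder values (assumed)
f = lambda s: 0.5 / (1.0 + np.exp(s))      # stand-in master curve

q = np.linspace(0.10, 0.30, 5)
for N in (1000, 2000, 4000):
    O = f((q - qc) * N ** inv_nu) * N ** (-beta_nu)  # obeys scaling exactly
    x, y = collapse(q, O, N, qc, beta_nu, inv_nu)
    print(N, np.round(y, 4))               # identical rows <=> perfect collapse
```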
in figure [ fig4 ], we exhibit the outcomes for , and , at several values of .each normalized histogram is obtained from independent simulations , for population size .when there is unbalance of positive and negative opinions , we arbitrarily selected simulations with dominance of positive opinions to build the histograms . in this way , each histogram is representative of each single realization but with improved statistics . instead , if we would have chosen the simulations with predominantly negative outcomes , the distribution would be the symmetric counterpart of those shown in figure [ fig4 ] .first , we notice that there is condensation of opinions at the extreme values .this is a consequence of the type of nonlinearity introduced in eq .( [ eq1 ] ) .if a smoother form were used instead , then there will be a spread of the extreme values in the condensates , which will not affect significantly the number of individuals that can be identified as those who are extremists . for each frame ( fixed value of ) , we observe the following scenario . both the winner and loser sides present a bulk of moderate individuals as well as a condensation at ( extremists ) . when is small enough so that the system becomes ordered ,unbalance between positive and negative opinions emerges , consistently with a non null order parameter .when becomes too large , it is not possible the formation of an ordered phase , where one of the sides dominates , then the distribution becomes flat , with the coexistence of all opinions ( except for the symmetrical condensation in the extreme states ) , as can be seen in the insets of figure [ fig4 ] .moreover , we see that the number of extremists ( opinions ) decreases with increasing values of , as well as with increasing . that is , the inclusion of dissent interactions , as well as the growth of independent attitudes , inhibit extremism growth , leading to a more homogeneous distribution of opinions , which is associated to the disordered phase in the diagram of figure [ fig4 ] .in this section we consider independence together with the heterogeneity of the convictions , which can take negative values .recall that we assume that are ( quenched ) random variables that are uniformly distributed in ] with the complementary probability .hence , the parameter denotes the fraction of negative convictions .positive convictions mean that the agent is sure about the adopted belief and therefore contributes in eq .( [ eq1 ] ) to maintain the current opinion .a conviction close to zero reflects uncertainty about the belief and high susceptibility to other people s opinions ( conformist behavior ) , while a negative conviction contributes to form an indifferent or an opposite position to the current one . when all the couplings are , the curves for the order parameter vs , are qualitatively similar to those shown in figure [ fig1 ] ( after the substitution ) , although the critical values are not the same . moreover the distribution of opinions are also similar .hence the cases ( i ) for all , and variable , and ( ii ) for all with variable , are in some way equivalent . 
since case ( ii ) does not present a new phenomenology with respect to case ( i ) , we will exhibit , instead of , the results for a more realistic case , where all the interactions are positive but random , uniformly distributed in the range ] , which corresponds to .in figure [ fig5 ] we exhibit the behavior of the order parameter as a function of for several values of .although the curves are different from those in figure [ fig1 ] , the effect of convictions is rather similar to that introduced by negative interactions , in the sense that the increase of , as well as the increase of , both tend to disorder the system .however , the population does not reach the state anymore , in contrast to the case of figure [ fig1 ] . versus the independence probability , for several values of the fraction of negative convictions , when .one can observe order - disorder transitions at different points , but the transition is suppressed for sufficiently large values of .the population size is and data are averaged over simulations ., scaledwidth=30.0% ] in addition , one observes that the critical values are much smaller than in sec .[ case1 ] , where for all but negative pairwise interactions ( i.e. , ) were allowed .we estimated the values of the critical points by means of the crossing of the binder cumulant curves , as well as the critical exponents , as illustrated in figure [ fig6 ] for the case .these exponents are the same as the ones of the previous section , namely , and .thus , the universality class of the model is not changed by the presence of disorder in the convictions , as expected . and .the best data collapse was found for , , and .,title="fig:",scaledwidth=30.0% ] figure [ fig7 ] shows the resulting phase diagram in the plane versus , using the values of obtained from fss .the boundary separates the ordered and disordered phases . following the same analysis of sec .[ case1 ] , we estimated the qualitative behavior of the order - disorder frontier by means of eq .( [ eq5 ] ) , with the change .the result is plotted in figure [ fig7 ] together with the critical points obtained from fss analysis .one can see a good agreement between the data and the qualitative frontier . in this case , and the extreme critical values are and .( probability of independence ) versus ( probability of negative convictions ) , when .the symbols are the numerical estimates of the transition points , whereas the dashed line is obtained by substituting by in eq .( [ eq5 ] ) , as explained in the text .the error bars were estimated from the fss analysis.,scaledwidth=30.0% ] to understand the nature of the disordered phase , also in this case we obtained the distribution of opinions in the steady state as done in sec .[ case1 ] . in figure [ fig8 ] we show the normalized histograms for several values of and .each histogram is built from independent simulations , for population size , as in sec .[ case1 ] .notice that the distributions are completely different from those in figure [ fig4 ] , both when the system is in the ordered and disordered phases . for values of the parameters in the disordered phase , i.e.
at the right of critical line in figure [ fig7 ] , the plot is not flat in the center , as it was in the case of figure [ fig4 ] , but the distribution becomes symmetrically concentrated around the neutral state where there is a peak .that is , the population becomes essentially neutral and opposite opinions are balanced .when we move away to the left of the critical line in figure [ fig8 ] , the distribution of opinions loses symmetry , in such a way that one of the sides of the debate ( the negative opinions in the example of the figure ) tends to disappear , while the opposite ( positive ) side tends to become uniform , except for the condensation peak at , that is the distribution becomes bimodal .this effect is more pronounced in the absence of negative convictions ( ) , which represents the farther points from the frontier , at fixed . for increasing values of ,the fraction of extremists decreases ( this effect was quantified in figure [ fig9 ] ) , as well as the fraction of agents with moderate opinions , while negative opinions emerge in the population . from other viewpoint, the absence of negative convictions ( ) leads to the emergence of extremists in the population , and the introduction of volatile agents with negative convictions makes moderate and indifferent opinions dominant , which seem realistic features of the model .in addition , if the independent behavior becomes more frequent ( increasing ) , the pdfs become more symmetric and there are less agents sharing extreme opinions. see figure [ fig9 ] , where we exhibit the fraction of extremists ( ) as a function of for three different values of .notice that the increase of independent attitudes ( increasing ) tends to reduce the fraction of extremists in the population .the increase of volatile attitudes ( increasing ) favors the emergence of moderate opinions and also reduces condensation at the extreme positions .indeed , the increase of volatile individuals reduces , even extinguishes , the fraction of extremists and also prevents the arrival to consensus .finally , let us comment that when both and are allowed to take negative values i.e. 
, , the phenomenology is similar to that discussed in this section , although the domain of the ordered phase shrinks ( not shown , as no new qualitative features arise ) .therefore , the opinion patterns observed in this section can be attributed to the joint heterogeneity in the convictions and in the interactions , whether they are all positive or not .when individuals all have the same conviction , or the same interaction strength , patterns of the type observed in figure [ fig4 ] emerge .the substitution of by positive heterogeneous convictions , like in the case , is enough to yield patterns of opinions similar to those shown in figure [ fig8 ] , when we vary instead of .( top , left panel ) , ( top , right panel ) and ( bottom ) , for several values of ( and ) .empty ( full ) symbols represent ordered ( disordered ) states .the population size is , and data are accumulated over independent simulations , as explained in the text.,title="fig:",scaledwidth=30.0% ] as a function of ( probability of negative convictions ) for typical values of ( probability of independence ) , when .one can see that the number of extremists in the population can be reduced ( even to zero ) for sufficiently large values of .population size is and averages were performed over simulations.,scaledwidth=30.0% ]
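the trend summarized in the last caption can be reproduced with a compact version of the earlier simulation sketch, specialized to case (ii): couplings mu_ij uniform in [0,1], a fraction delta of convictions drawn uniform in [-1,0] and the rest uniform in [0,1]. these distributional forms, the parameter values, and the way extremists are counted (agents pinned at o = +1 or -1 by the clipping) are all my assumptions, so the printed numbers are indicative only.

```python
import numpy as np

def extremist_fraction(N=1000, q=0.02, delta=0.3, steps=300, seed=0):
    """fraction of agents condensed at o = +1 or -1 in the steady state
    of case (ii) (assumed rule o_i <- clip(lambda_i o_i + mu o_j, -1, 1))."""
    rng = np.random.default_rng(seed)
    o = rng.uniform(-1, 1, N)
    lam = np.where(rng.random(N) < delta,
                   rng.uniform(-1, 0, N), rng.uniform(0, 1, N))
    for _ in range(steps * N):
        i = rng.integers(N)
        if rng.random() < q:                         # independent move
            o[i] = rng.uniform(-1, 1)
        else:                                        # positive random coupling
            j = (i + 1 + rng.integers(N - 1)) % N
            o[i] = min(1.0, max(-1.0, lam[i] * o[i] + rng.random() * o[j]))
    return np.mean(np.abs(o) == 1.0)

for delta in (0.0, 0.2, 0.4):
    print(f"delta = {delta}: extremist fraction = "
          f"{extremist_fraction(delta=delta):.3f}")
```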
in the ordered ones , extremists dominate .the steady states show the presence of a large number of extremists with opinions either or .the fraction of these extremists decreases for increasing values of the independence parameter , and the system tends to become disordered , indicating that independent attitudes reduce extreme opinions .different patterns emerge when there are two sources of heterogeneity , with negative values or not , like in the case of sec .[ case2 ] , where the couplings and the convictions are random . in the disordered phase , opinions are not evenly distributed , but moderate opinions dominate , with a peak around the neutral state . in the ordered phases , one of the opinion sides disappears .when a second source of heterogeneity is present , the increase of volatile individuals ( increasing ) or of dissent interactions ( positive ) , reduces , even extinguishes , the fraction of extremists and also prevents the arrival to consensus .the models considered in this work contribute to the analysis of the role of the diverse features that characterize individuals and their interactions . as can be seen in this paper ,each modification yields phase diagrams of order parameters that seem similar , however a quite different phenomenology arises in terms of the distribution of opinions .this hinders unification , justifying a separate analysis of different versions of the model .these versions shed light on the origin and role of undecided individuals and extremism uprise , and the effect of realistic characteristics like conviction and independence in the dynamics of opinion formation and evolution .in particular , the models contribute to understand the circumstances which favor the emergence and development of extremism in a fraction of the population , relating that emergence with the existence of individuals with strong positive convictions in the population . on the other hand , the presence of negative convictions , that represent volatile individuals that have a propensity to change mind , as well as nonconformity represented in the model by the independent behavior , lead to the dominance of moderate individuals .the authors acknowledge financial support from the brazilian funding agencies cnpq and faperj .
we investigate opinion formation in a kinetic exchange opinion model , where opinions are represented by numbers in the real interval $ ] and agents are typified by the individual degree of conviction about the opinion that they support . opinions evolve through pairwise interactions governed by competitive positive and negative couplings , that promote imitation and dissent , respectively . the model contemplates also another type of nonconformity such that agents can occasionally choose their opinions independently of the interactions with other agents . the steady states of the model as a function of the parameters that describe conviction , dissent and independence are analyzed , with particular emphasis on the emergence of extreme opinions . then , we characterize the possible ordered and disordered phases and the occurrence or suppression of phase transitions that arise spontaneously due to the disorder introduced by the heterogeneity of the agents and/or their interactions . keywords : dynamics of social systems , collective phenomena , computer simulations , critical phenomena
congruence is a fundamental concept in number theory .two integers and are said to be congruent modulo a positive integer if their difference is integrally divisible by , written as .congruence theory has been widely used in physics , biology , chemistry , computer science , and even music and business . because of the limited computational and storage ability of computers , congruence arithmetic is particularly useful and applicable to computing with numbers of infinite length .significant and representative applications include generating random numbers , designing hash functions and checksumming in error detections . as a cornerstone of modern cryptography ,congruence arithmetic has been successfully used in public - key encryption , secret sharing , digital authentication , and many other data security applications . despite the well - established congruence theory with a broad spectrum of applications , a comprehensive understanding of the congruence relation among natural numbers is still lacking .our purpose is to uncover some intrinsic properties of the network consisting of natural numbers with congruence relations .a link in the congruence network is defined in terms of the congruence relation , where is the reminder of divided by . for a fixed value of , we discern an infinite set of integer pairs . for each pair of such integers ,a directed link from to ( suppose ) characterizes the congruence relation between and , giving rise to a congruence network for a given reminder .let denote a congruence network , where is the largest integer considered .note that congruence networks associated with different values of share the same set of nodes ( integers ) , thereby a multiplex network with a number of layers is formed , as shown in fig .[ fig1](a ) . to our knowledge, the multiplex congruence network ( mcn ) has not been explored in spite of some effort dedicated to complex networks associated with natural numbers .we will demonstrate several unique and prominent properties of the mcn regarding some typical dynamical processes .specifically , analytical results will show that all layers of the mcn are sparse with the same power - law degree distribution .a counterintuitive property of the mcn is that every layer of the mcn has an extremely strong controllability , which significantly differs from ordinary scale - free networks requiring a large fraction of driver nodes . to steer the network in a layer , the minimum number of driver nodes is nothing but the reminder that is negligible as compared to the network size .the controllability of the mcn is also very robust against targeted removal of nodes but relatively vulnerable to random failures , which is also in sharp contrast to ordinary scale - free networks .this amazing robustness against attacks can be interpreted in terms of the multi - chain structure in mcn .the mcn therefore sheds light on the design of heterogeneous networks with high searching efficiency and strong controllability simultaneously .another application of the mcn is that it can graphically solve the simultaneous congruences problem in a more intuitive way than currently used methods , such as the garner s algorithm .the solution of the simultaneous congruences problem is to locate common neighbors of relevant numbers in different layers .this alternative approach by virtue of the mcn may have implication in cryptography based on simultaneous congruences . 
mcn consists of a number of congruence networks ( layers ) , as shown in fig . [ fig1](a ) .each layer contains all the natural numbers larger than but less than or equal to , so the size ( number of nodes ) of a layer is .the remainder is a parameter that determines the structure of the congruence network .when , the congruence network reduces to a divisibility network , in which the dividend links to all of its divisors except itself .the out - degrees of nodes are heterogeneous in each layer of the mcn .we have analytically derived the distribution of the out - degrees in the thermodynamic limit ( see details in the section of * methods * ) : for large , the out - degree distribution becomes , thus is a typical scale - free network .all have similar out - degree distributions , as shown in fig .[ fig1](b ) , but the divisibility network has a different out - degree distribution .
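the construction just summarized is easy to verify directly. the sketch below adopts the convention that is consistent with the chain and degree statements in this section: in layer G_r(N) the nodes are r+1, ..., N and there is a directed link i -> j whenever j > i and j = r (mod i); for r = 0 this runs from a divisor to its multiples, i.e. the opposite orientation to the "dividend links to divisors" phrasing above, so the edge direction should be treated as an assumption here. under this convention the out-degree of node i is floor((N-r)/i), which gives P(k) -> 1/(k(k+1)) ~ k^{-2} for large k and a mean degree growing like log N.

```python
from collections import Counter

def layer(N, r):
    """adjacency list of congruence layer G_r(N): nodes r+1..N,
    directed link i -> j whenever j > i and j = r (mod i)."""
    return {i: list(range(i + r if r else 2 * i, N + 1, i))
            for i in range(r + 1, N + 1)}

N, r = 100000, 3
G = layer(N, r)
out = [len(s) for s in G.values()]
n = len(G)
print("nodes:", n, " edges:", sum(out),
      " mean out-degree:", round(sum(out) / n, 3))   # grows like log N
print("nodes with no out-link:", out.count(0), " (= r for r >= 1)")

hist = Counter(out)
for k in (1, 2, 4, 8, 16):
    print(f"k = {k:2d}: empirical P(k) = {hist[k] / n:.5f}   "
          f"1/(k(k+1)) = {1 / (k * (k + 1)):.5f}")
```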
for small , of deviates from the other networks .the main factor that accounts for the difference lies in that half of the nodes in have no outgoing links , but in there are only nodes without outgoing links .analytical results demonstrate that the average degree of any layer increases logarithmically with the network size ( see fig .[ fig1](b ) and the section of * methods * for details ) , and a larger value of corresponds to a sparser layer .these results indicate that is always a sparse network .hence , the mcn is compatible with a sparse storage , which is important for applying the mcn to solve real - world problems .according to the definition of mcn , the numbers in each layer can be classified into arithmetic sequences : where ( denotes the largest integer not greater than ) .the consecutive numbers in the sequence are linked from small to large , resulting in chains traversing all nodes in a layer , as shown in fig .[ fig2](a ) .the root node of a chain is the minimum number in the chain .there are totally root nodes associated with chains .the end of a chain is always the maximum number in the chain .note that is a special case , because the arithmetic sequence does not exist in the layer , rendering the absence of the multi - chain structure in the divisibility network .the above results indicate that although the divisibility network is a special case of , it has some fundamental differences from .only when the multi - chain structure emerges , and the number of nodes without outgoing links is negligible . some evidence has suggested that the multi - chain structure and the absence of nodes with low out - degrees play an important role in the controllability of complex networks . in the next section, we will further investigate the controllability properties of the mcn . in principle , the mcn composed of natural numbers is not a dynamical system , such that it can not be controlled .however , because of the multi - chain structure , the mcn provides significant insight into the design of heterogeneous networked systems with strong controllability .thus , we treat the mcn as a dynamical system and explore its unique and outstanding controllability properties .the central problem of controlling complex networks is to discern a minimum set of driver nodes , on which external input signals are imposed to fully control the whole system .let denote the minimum number of driver nodes and denote the fraction of driver nodes in a network .in general , a network with a smaller value of is said to be more controllable . according to the exact controllability theory for complex networks and the sparsity of mcn, we can prove that ( see in the section of * methods * ) and for large ( namely ) , and is considered as highly controllable .furthermore , according to both the exact controllability theory and the structural controllability theory , the driver nodes are the root nodes of the chains in ( see details in the section of * methods * ) . meanwhile , the driver nodes are the hub nodes with the maximum degree . in comparison , due tothe absence of chains , the divisibility network with ( denotes the smallest integer not less than ) is hard to control for large .it has been recognized that scale - free networks are often difficult to control . in particular , liu et al . 
have analytically found that when the network size , one must control almost all nodes in order to fully control a scale - free network with scaling index .the mcn is a scale - free network with scaling exponent , but one just needs to control root nodes to achieve full control .such a strong controllability stems from the inherent multi - chain structure in mcn . from the perspective of structural controllability ,all nodes in the chains in mcn are matched except the roots , which need to be controlled .thus , the mcn is valuable for designing heterogeneous networks with strong controllability .we also found that the mcn is strongly structurally controllable ( ssc ) because of the multi - chain structure ( see * methods * ) , which provides significant insight into the design of heterogeneous and controllable networks without exact link weights .a network is said to be ssc if and only if its controllability will not be affected by the link weights in its adjacency matrix , or equivalently , for any distribution of link weights , the network will be fully controllable from the same set of driver nodes .the ssc property implies that the mcn is robust against the fluctuation and uncertainty of link weights .this is an outstanding feature with practical significance since sometimes link weights are hard to be exactly measured and they are sometimes time - varying in real situations .the robustness against attacks is also a significant problem for the design of a controllable networked system .we explore the robustness of the controllability of the mcn against attacks on nodes and find some unique properties , which is useful for the design of practical networks . on the one hand , due to the existence of chains rooted in driver nodes in mcn , targeted attacks to driver nodes will not destroy the multi - chain structure nor increase . here, nodes critical for targeted attacks can be identified based on the rank of node degrees or their hierarchical structure . in mcn , driver nodes ( the root nodes )become such critical nodes . in thisregard , mcn is robust against targeted attacks . on the other hand , random attacks to nodesmay cut some chains . as a result, an additional driver is required to control each new breakpoint , leading to an increase of .thus , the controllability of mcn is unusual in resisting attacks in the sense that it is robust against intentional attacks but vulnerable to random attacks , which significantly differs from general scale - free networks .the results in comparing the mcn and sf networks generated by using static model are shown in fig .[ fig2](b ) . to make an unbiased comparison , a scale - free network with the same scaling exponent and as the mcn is necessary .however , because of the graphicality constraint , it is not possible to generate a random sf network with .thus , we slightly release the requirement by using static model to generate sf networks with the same but with .indeed , one can see that remains nearly unchanged under targeted attacks to driver nodes ; whereas random attacks to node causes clear increase of . this phenomenon is consistent with our analysis in terms of the multi - chain structure .moreover , for the presence of random attacks , in a wide range of the fractions of failed nodes , the controllability of mcn is still better than sf networks in general .one of the applications of mcn is that one can graphically solve the simultaneous congruences problem , which has implication in communication security and computer science . 
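before turning to that application, the multi-chain structure and the driver-node count can be checked on a small layer. treating the layer as a linear system dx/dt = A x + B u with A_ji = 1 for every link i -> j (same link convention as the previous sketch), the coupling matrix is strictly triangular, hence nilpotent with 0 as its only eigenvalue, and the exact-controllability count reduces to N_D = N' - rank(A), where N' = N - r is the number of nodes; the sketch below verifies that this equals r and that the drivers are the roots r+1, ..., 2r of the r chains of common difference r.

```python
import numpy as np

def chains(N, r):
    """the r chains of layer G_r(N): arithmetic sequences of common
    difference r rooted at the minima r+1, ..., 2r."""
    return [list(range(root, N + 1, r)) for root in range(r + 1, 2 * r + 1)]

N, r = 200, 4
nodes = list(range(r + 1, N + 1))
idx = {v: k for k, v in enumerate(nodes)}

A = np.zeros((len(nodes), len(nodes)))
for i in nodes:
    for j in range(i + r, N + 1, i):      # link i -> j iff j = r (mod i)
        A[idx[j], idx[i]] = 1.0           # state of node j is driven by i

ND = len(nodes) - np.linalg.matrix_rank(A)
print("minimum number of drivers N_D =", ND, "(should equal r =", str(r) + ")")
print("chain roots (driver nodes):", [c[0] for c in chains(N, r)])
print("all nodes covered by the chains:",
      sorted(sum(chains(N, r), [])) == nodes)
```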
in particular, one can find that mcn is exactly a topological representation of the system of simultaneous congruences .a system of simultaneous congruences is a set of congruence equations : if the moduli are pairwise coprime , then a unique solution modulo exists .this is the _chinese remainder theorem _ ( crt ) , which has many applications in computing , coding and cryptography . a well - known algorithm to solve the simultaneous congruences in crt is the gaussian algorithm , also known as _ in ancient china . here , we present an intuitive approach based on mcn to solve the simultaneous congruences in eq .( [ eq : sim ] ) .firstly , we construct an mcn containing subnetworks with remainders , respectively . to focus on the minimum solution of eq .( [ eq : sim ] ) , we set the maximum number in the network as .then , we find the common successor neighbor of the nodes in this mcn , which is precisely the solution .we use a well - known example of crt , recorded in _ sunzi suanjing _ , to demonstrate our approach .the problem in this example is : ` suppose we have an unknown number of objects . when grouped in threes , 2are left out , when grouped in fives , 3 are left out , and when grouped in sevens , 2 are left out .how many objects are there ? 'this problem is equivalent to the following simultaneous congruences to solve the problem , we first construct an mcn of two layers , and , as shown in fig .then , we find the common successor neighbor of the three moduli , 3 , 5 and 7 , and finally get the result .it is noteworthy that the traditional algorithm for solving the simultaneous congruences problem , e.g. , the garner s algorithm , is more efficient than our algorithm based on the mcn that is essentially a brute - force search .thus , it is infeasible to immediately use the graphical approach in data security .however , the graphical algorithm offers new routes to the simultaneous congruences problem in the viewpoint of a complex network , which may be useful to improve the currently used algorithm .is , and similarly and in the lower layer .thus the common successor neighbor of the three nodes is , which is the solution of the simultaneous congruences problem described by in eq .( [ eq : sz]).,width=340 ]we have defined a multiplex congruence network composed of natural numbers and uncovered its unique topological features .analytical results demonstrate that every layer of the multiplex network is a sparse and scale - free subnetwork with the same degree distribution .counterintuitively , every layer with a scale - free structure has an extremely strong controllability , which significantly differs from ordinary scale - free networks . in general , a scale - free network with power - law degree distribution is harder to control than homogeneous networks .this is attributed to the presence of hub nodes , at which dilation arises according to the structural control theory . as a result ,downstream neighbor nodes of hubs are difficult to control . moreover , due to a large number of nodes connecting to hubs , scale - free networks are usually of weak controllability with a large fraction of driver nodes . 
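the worked example above takes only a few lines to reproduce: the successor set of node i in layer G_r(N) is {j <= N : j > i, j = r (mod i)}, so the common successor neighbor is the intersection of three arithmetic progressions. as noted above this is a brute-force search, an illustration of the graphical viewpoint rather than a replacement for the garner or gaussian algorithms.

```python
def successors(i, r, N):
    """successor neighbours of node i in layer G_r(N): all j <= N
    with j = r (mod i) and j > i."""
    return set(range(i + r, N + 1, i))

# sunzi's problem: x = 2 (mod 3), x = 3 (mod 5), x = 2 (mod 7)
system = [(3, 2), (5, 3), (7, 2)]
N = 3 * 5 * 7                        # one full period suffices (coprime moduli)
common = set.intersection(*(successors(m, r, N) for m, r in system))
print("common successor neighbour(s):", sorted(common))   # -> [23]

# cross-check against direct enumeration of the congruences
assert all(x % m == r for x in common for m, r in system)
```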
in contrast , in spite of the scale - free structure of the congruence network , the long chains in each layer considerably inhibit dilation and reduce the number driver nodes .furthermore , an interesting finding is that every layer is also strong structurally controllable in that link weights have no effect on the controllability .this indicates that the controllability of the multiplex congruence network is extremely robust against the inherent limit to precisely accessing link weights in the real situation . to our knowledge ,a scale - free network with strong structural controllability has not been reported prior to our congruence network .an unusual controllability property is that the controllability of each layer is robust against targeted attacks to driver nodes , but relatively fragile to random failures of nodes , which is also different from common scale - free networks .previously reported results demonstrate that targeted removal of high degree nodes and nodes in the top level of a hierarchical structure causes maximum damage to the network controllability .under the two kinds of intentional removals , a network is easier to break to pieces , such that more driver nodes are required to achieve full control .thus , targeted attacks are defined in terms of the two types of node removals . in the congruence network ,high degree nodes and high level nodes are exactly identical , leading to the combination of the targeted attacks . strikingly , the congruence network is robust against the targeted attacks , because of the existence of the chains .targeted attacks will not destroy the chains in the downstream of the attacked node . as a result , the number of driver nodes nearly does not increase , even when a large fraction of nodes has been targeted attacked .the outstanding structural and controllability properties of the multiplex congruence network are valuable for designing heterogeneous networks with strong controllability and high searching efficiency rooted in the scale - free structure .another application of the multiplex congruent network is to solve the simultaneous congruences problem in a graphical and intuitive manner .the multiplex congruence network by converting the algebraic problem of solving simultaneous congruences equations to be a graphical problem of finding common neighbors in a graph , offers an alternative route to the traditional approaches . despite this property , the traditional algorithms , such as gaussian algorithm andgarner s algorithm , outperform the graphical method in computational efficiency .hence , the graphical method is not applicable in data security at the present .nevertheless , the graphical approach may inspire the combination of the graphical and algebraic method to improve the current algorithm , which is potentially valuable in communication security , computer science and many fields relevant to cryptography .our work may also stimulate further effort toward studying of networks arising from natural relationships among numbers , with outstanding features and applied values .many topological insights can be expected from complex networks consisting of natural numbers .for a sub - network in mcn , the total number of nodes is and the number of nodes without out - links is .the out - degree of a node labelled in the range of ] is 2 , because node can only link to two nodes i.e. 
nodes and ; similar scenarios appear to the other nodes .thus , we can derive the distribution of out - degrees in the thermodynamic limit , as follows : in , the numbers larger than have no out - links , i.e. , and the numbers in the range of ] is contributed by the number of linearly independent rows , hence the input signals specified via should be imposed on the linearly dependence rows in so as to eliminate all linear correlations in eq .( [ eq : cond ] ) .apparently , the first rows in the coupling matrix of are all zero rows ( see eq.([eq : mtx1 ] ) ) , hence the driver nodes that need to be controlled to maintain full control are just the minimum nodes of the congruence network , i.e. the roots of the chains in the congruence network ( see fig . [ fig2](a ) ) .the coupling matrix of the divisibility network is also a strictly lower - triangular matrix and in a column echelon form , but the rank of the matrix is , because in the node with labelled number larger than has no out - links , namely , the last columns of the matrix are all zeros .an example of is therefore , according to eq .( [ eq : pbh ] ) , we finally obtain the minimum number of driver nodes of as , indicating that one must control half of the nodes in order to control the whole divisibility network .moreover , the value of non - zero elements in does not affect , which indicate that is ssc .thus , the mcn composed of and is ssc .* acknowledgments : * we thank dr .zhengzhong yuan for useful discussions .y.y . was supported by nsfc under grant nos .61304177 , 71525002 and the fundamental research funds of bjtu under grant no .2015rc042 . w .-x.w . was supported by nsfc under grant no .r.c . was supported by the hong kong research grants council under the grf grant cityu11208515 .was supported by nsfc under grant no .
congruence theory has many applications in physical , social , biological and technological systems . congruence arithmetic has been a fundamental tool for data security and computer algebra . however , much less attention was devoted to the topological features of congruence relations among natural numbers . here , we explore the congruence relations in the setting of a multiplex network and unveil some unique and outstanding properties of the multiplex congruence network . analytical results show that every layer therein is a sparse and heterogeneous subnetwork with a scale - free topology . counterintuitively , every layer has an extremely strong controllability in spite of its scale - free structure that is usually difficult to control . another amazing feature is that the controllability is robust against targeted attacks to critical nodes but vulnerable to random failures , which also differs from normal scale - free networks . the multi - chain structure with a small number of chain roots arising from each layer accounts for the strong controllability and the abnormal feature . the multiplex congruence network offers a graphical solution to the simultaneous congruences problem , which may have implication in cryptography based on simultaneous congruences . our work also gains insight into the design of networks integrating advantages of both heterogeneous and homogeneous networks without inheriting their limitations .
in recent years , distributed source coding ( dsc ) has received an increasing attention from the signal processing community .dsc considers a situation in which two ( or more ) statistically dependent sources and must be encoded by separate encoders that are not allowed to talk to each other . performing separate lossless compressionmay seem less efficient than joint encoding .however , dsc theory proves that , under certain assumptions , separate encoding is optimal , provided that the sources are decoded jointly .for example , with two sources it is possible to perform standard " encoding of the first source ( called _ side information _ ) at a rate equal to its entropy , and conditional " encoding of the second one at a rate lower than its entropy , with no information about the first source available at the second encoder ; we refer to this as asymmetric " slepian - wolf ( s - w ) problem .alternatively , both sources can be encoded at a rate smaller than their respective entropy , and decoded jointly , which we refer to as symmetric " s - w coding .dsc theory also encompasses lossy compression ; it has been shown that , under certain conditions , there is no performance loss in using dsc , and that possible losses are bounded below 0.5 bit per sample ( bps ) for quadratic distortion metric . in practice , lossy dscis typically implemented using a quantizer followed by lossless dsc , while the decoder consists of the joint decoder followed by a joint dequantizer .lossless and lossy dsc have several potential applications , e.g. , coding for non co - located sources such as sensor networks , distributed video coding , layered video coding , error resilient video coding , and satellite image coding , just to mention a few .the interested reader is referred to for an excellent tutorial .traditional entropy coding of an information source can be performed using one out of many available methods , the most popular being arithmetic coding ( ac ) and huffman coding . conditional " ( i.e. , dsc ) coders are typically implemented using channel codes , by representing the source using the syndrome or the parity bits of a suitable channel code of given rate .the syndrome identifies sets of codewords ( cosets " ) with maximum distance properties , so that decoding an ambiguous description of a source at a rate less than its entropy ( given the side information ) incurs minimum error probability . if the correlation between and can be modeled as a virtual " channel described as , with an additive noise process , a good channel code for that transmission problem is also expected to be a good s - w source code .regarding asymmetric s - w coding , the first practical technique has been described in , and employs trellis codes .recently , more powerful channel codes such as turbo codes have been proposed in , and low - density parity - check ( ldpc ) codes have been used in .turbo and ldpc codes can get extremely close to channel capacity , although they require the block size to be rather large .note that the constituent codes of turbo - codes are convolutional codes , hence the syndrome is difficult to compute . in cosets are formed by all messages that produce the same parity bits , even though this approach is somewhat suboptimal , since the geometrical properties of these cosets are not as good as those of syndrome - based coding . 
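the coset mechanism described above can be made concrete with the smallest nontrivial example, a toy syndrome coder built on the (7,4) hamming code: seven source bits are compressed to their three syndrome bits, and the joint decoder recovers the source exactly whenever it differs from the side information in at most one position, which is the single-error pattern the code's cosets can disambiguate. this is only an illustration of the mechanism, not one of the trellis, turbo or ldpc constructions cited above.

```python
import numpy as np

# parity-check matrix of the (7,4) hamming code: column i is the binary
# expansion of i+1, so the syndrome of a weight-1 error "names" its position
H = np.array([[(i >> b) & 1 for i in range(1, 8)] for b in (2, 1, 0)])

def sw_encode(x):
    """asymmetric slepian-wolf encoder: 7 source bits -> 3 syndrome bits."""
    return (H @ x) % 2

def sw_decode(s, y):
    """joint decoder: side information y + syndrome s -> estimate of x,
    exact whenever x and y differ in at most one position."""
    e_syn = (s + H @ y) % 2                 # = H (x xor y) = H e
    pos = int("".join(map(str, e_syn)), 2)  # 0 means "no discrepancy"
    e = np.zeros(7, dtype=int)
    if pos:
        e[pos - 1] = 1
    return (y + e) % 2

rng = np.random.default_rng(4)
x = rng.integers(0, 2, 7)
y = x.copy()
y[rng.integers(7)] ^= 1                     # correlated side information
assert np.array_equal(sw_decode(sw_encode(x), y), x)
print("x recovered from 3 syndrome bits + side information:",
      sw_decode(sw_encode(x), y))           # rate 3/7 instead of 7/7
```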
in , a syndrome former is used to deal with this problem . multilevel codes have also been addressed ; in , trellis codes are extended to multilevel sources , whereas in , a similar approach is proposed for ldpc codes . besides techniques based on channel coding , a few authors have also investigated the use of source coders for dsc . this is motivated by the fact that existing source coders obviously exhibit nice compression features that should be retained in a dsc coder , such as the ability to employ flexible and adaptive probability models , and low encoding complexity . in , the problem of designing a variable - length dsc coder is addressed ; it is shown that designing such a zero - error coder is np - hard . in , a similar approach is followed ; the authors consider the problem of designing huffman and arithmetic dsc coders for multilevel sources with zero or almost - zero error probability . the idea is that , if the joint density of the source and the side information satisfies certain conditions , the same codeword ( or the same interval for the ac process ) can be associated with multiple symbols . this approach leads to an encoder with a complex modeling stage ( np - hard for the optimal code , though suboptimal polynomial - time algorithms are provided in ) , while the decoding process resembles a classical arithmetic decoder . as for symmetric s - w codes , a few techniques have been proposed recently . a symmetric code can be obtained from an asymmetric one through time sharing , whereby the two sources alternately take the role of the source and the side information ; however , current dsc coders can not easily accommodate this approach . syndrome - based channel code partitioning has been introduced in , and extended in to systematic codes . a similar technique is described in , encompassing non - systematic codes . syndrome formers have also been proposed for symmetric s - w coding . moreover , techniques based on the use of parity bits can also be employed , as they can typically provide rate compatibility . a practical code has been proposed in using two turbo codes that are decoded jointly , achieving the equal rate point ; in , an algorithm is introduced that employs turbo codes to achieve arbitrary rate splitting . symmetric s - w codes based on ldpc codes have also been developed . although several near - optimal dsc coders have been designed for simple ideal sources ( e.g. , binary and gaussian sources ) , the application of practical dsc schemes to realistic signals typically incurs the following problems . * channel codes get very close to capacity only for very large data blocks ( typically in excess of symbols ) . in many applications , however , the basic units to be encoded are of the order of a few hundred to a few thousand symbols . for such block lengths , channel codes have good but not optimal performance . * the symbols contained in a block are expected to follow a stationary statistical distribution . however , typical real - world sources are not stationary . this calls for either the use of short blocks , which weakens the performance of the s - w coder , or the estimation of conditional probabilities over contexts , which can not be easily accommodated by existing s - w coders . * when the sources are strongly correlated ( i.e. , in the most favorable case ) , very high - rate channel codes are needed ( e.g. , rate- codes ).
however , capacity - achieving channel codes are often not very efficient at high rate .* in those applications where dsc is used to limit the encoder complexity , it should be noted that the complexity of existing s - w coders is not negligible , and often higher than that of existing non - dsc coders .this seriously weakens the benefits of dsc . *upgrading an existing compression algorithm like jpeg 2000 or h.264/avc to provide dsc functionalities requires at least to redesign the entropy coding stage , adopting one of the existing dsc schemes . among these issues , the block length is particularly important . while it has been shown that , on ideal sources with very large block length , the performance of some practical dsc coders can be as close as 0.09 bits to the theoretical limit ,so far dsc of real - world data has fallen short of its expectations , one reason being the necessity to employ much smaller blocks .for example , the prism video coder encodes each macroblock independently , with a block length of 256 samples .for the coder in , the block length is equal to the number of 8x8 blocks in one picture ( 1584 for the cif format ) .the performance of both coders is rather far from optimal , highlighting the need of dsc coders for realistic block lengths .a solution to this problem has been introduced in , where an extension of ac , named distributed arithmetic coding ( dac ) , has been proposed for asymmetric s - w coding . moreover , in dac has been extended to the case of symmetric s - w coding of two sources at the same rate ( i.e. , the mid - point of the s - w rate region ) .dac and its decoding process do not currently have a rigorous mathematical theory that proves they can asymptotically achieve the s - w rate region ; such theory is very difficult to develop because of the non - linearity of ac .however , dac is a practical algorithm that was shown in to outperform other existing distributed coders . in this paper, we build on the results presented in , providing several new contributions . for asymmetric coding ,we focus on i.i.d .sources as these are often found in many dsc applications ; for example , in transform - domain distributed video coding , dac could be applied to the bit - planes of transform coefficients , which can be modeled as i.i.d .we optimize the dac using an improved encoder termination procedure , and we investigate the rate allocation problem , i.e. , how to optimally select the encoding parameters to achieve a desired target rate . we evaluate the performance of this new design comparing it with turbo and ldpc codes , including the case of extremely correlated sources with highly skewed probabilities .this is of interest in multimedia applications because the most significant bit - planes of the transform coefficients of an image or video sequence are almost always equal to zero , and are strongly correlated with the side information . 
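as a concrete reference for the rates discussed in this paper , the s - w bound for the binary correlation model used throughout ( side information obtained by flipping each source bit with a given crossover probability ) can be computed in closed form . the snippet below is a small self - contained sketch ( the function names are ours , not from the paper ) ; it uses the identity h(x|y) = h(x) + h(e) - h(y) , which holds because ( x , y ) and ( x , e ) are in one - to - one correspondence .

```python
import numpy as np

def binary_entropy(p):
    """entropy h(p) of a bernoulli(p) variable, in bits."""
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

def conditional_entropy_bsc(p0, eps):
    """h(x|y) when y = x xor e, e ~ bernoulli(eps) independent of x,
    and p(x = 0) = p0; uses h(x|y) = h(x) + h(e) - h(y)."""
    p_y1 = (1 - p0) * (1 - eps) + p0 * eps  # p(y = 1)
    return binary_entropy(p0) + binary_entropy(eps) - binary_entropy(p_y1)

# balanced source: h(x|y) reduces to h(eps)
print(conditional_entropy_bsc(0.5, 0.11))   # ~0.50 bits/symbol
# strongly correlated, highly skewed source: very small h(x|y)
print(conditional_entropy_bsc(0.9, 0.01))
```

for strongly correlated sources , the second call shows how little rate is theoretically needed , which is exactly the regime where very high - rate channel codes struggle .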
for symmetric coding , we extend our previous work in by introducing dac encoding and rate allocation procedures that allow encoding an arbitrary number of sources with an arbitrary combination of rates . we develop and test the decoder for two sources . finally , it should be noted that an asymmetric dac scheme has been independently and concurrently developed in using quasi - arithmetic codes . quasi - arithmetic codes are a low - complexity approximation to arithmetic codes , providing smaller encoding and decoding complexity . these codes restrict the interval endpoints to a finite set of points . while this yields suboptimal compression performance , it makes the arithmetic coder a finite state machine , simplifying the decoding process with side information . this paper is organized as follows . in sect . [ sec : dac_enc ] we describe the dac encoding process for the asymmetric case , in sect . [ sec : dac_decoder ] we describe the dac decoder , and in sect . [ sec : rate_sel ] we study the rate allocation and parameter selection problem . in sect . [ sec : symm ] we describe the dac encoder , decoder and rate allocator for the symmetric case . in sect . [ sec : results_asymm ] and [ sec : results_symm ] we report the dac performance evaluation results in the asymmetric and symmetric cases , respectively . finally , in sect . [ sec : concl ] we draw some conclusions . before describing the dac encoder , it should be noted that the ac process typically consists of a modeling stage and a coding stage . the modeling stage has the purpose of computing the parameters of a suitable statistical model of the source , in terms of the probability that a given bit takes on value 0 or 1 . this model can be arbitrarily sophisticated , e.g. , by using contexts , adaptive probability estimation , and so forth . the coding stage takes the probabilities as input , and implements the actual ac procedure , which outputs a binary codeword describing the input sequence . let be a binary memoryless source that emits a semi - infinite sequence of random variables , , with probabilities and . we are concerned with encoding the sequence . we assume that and are i.i.d . sources , and that and are statistically dependent for a given . the entropy of is defined as , and similarly for . the conditional entropy of given is defined as . for dac , three blocks can be identified , as in fig . [ fig : modeling]-b , namely the modeling , rate allocation , and coding stages . the modeling stage is exactly the same as in the classical ac . the coding stage will be described in sect . [ sec : dac_enc ] ; it takes as inputs the probabilities and and the parameter , and outputs a codeword . unlike a classical ac , where the expected rate is a function of the source probabilities , and hence can not be selected _ a priori _ , the dac allows the selection of any desired rate not larger than the expected rate of a classical ac . this is very important , since in a dsc setting the rate for should depend not only on how "compressible" the source is , but also on how strongly correlated and are . for this reason , in dac we also have a rate allocation stage that takes as input the probabilities and and the conditional entropy , and outputs a parameter that drives the dac coding stage to achieve the desired target rate . in this paper we deal with the coding and rate allocation stages , and assume that the input probabilities , and conditional entropy are known _ a priori_.
this allows us to focus on the distributed coding aspects of the proposed scheme , and , at the same time , keeps the scheme independent of the modeling stage .we first review the classical ac coding process , as this sets the stage for the description of the dac encoder ; an overview can be found in .the binary ac process for is based on the probabilities and , which are used to partition the interval into sub - intervals associated to possible occurrences of the input symbols . at initialization the current " interval is set to . for each input symbol , the current interval is partitioned into two adjacent sub - intervals of lengths and , where is the length of .the sub - interval corresponding to the actual value of is selected as the next current interval , and this procedure is repeated for the next symbol . after all symbols have been processed ,the sequence is represented by the final interval .the codeword can consist in the binary representation of any number inside ( e.g. , the number in with the shortest binary representation ) , and requires approximately bits . similarly to other s - w coders , dac is based on the principle of inserting some ambiguity in the source description during the encoding process .this is obtained using a modified interval subdivision strategy . in particular, the dac employs a set of intervals whose lengths are proportional to the modified probabilities and , such that and . in order to fit the enlarged sub - intervals into the interval, they are allowed to partially overlap .this prevents the decoder from discriminating the correct interval , unless the side information is used .the detailed dac encoding procedure is described in the following . at initialization the current " interval is set to . for each input symbol , the current interval subdivided into two partially overlapped sub - intervals whose lengths are and .the interval representing symbol is selected as the next current interval .after all symbols have been processed , the sequence is represented by the final interval .the codeword can consist in the binary representation of any number inside , and requires approximately bits .this procedure is sketched in fig .[ fig : dac ] . at the decoder side ,whenever the codeword points to an overlapped region , the input symbol can not be detected unambiguously , and additional information must be exploited by the joint decoder to solve the ambiguity .it is worth noticing that the dac encoding procedure is a generalization of ac . letting and leads to the ac encoding process described in sect .[ sec : ac ] , with and .it should also be noted that , for simplicity , the description of the ac and dac provided above assumes infinite precision arithmetic . the practical implementation used in sect . [sec : results_asymm ] and [ sec : results_symm ] employs fixed - point arithmetic and interval renormalization .the objective of the dac decoder is joint decoding of the sequence given the correlated side information .the arithmetic decoding machinery of the dac decoder presents limited modifications with respect to standard arithmetic decoders ; a fixed - point implementation has been employed , with the same interval scaling and overlapping rules used at the encoder . 
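before moving to the decoder , the encoding rule just described is easy to prototype . the sketch below is our own toy implementation rather than the authors' fixed - point coder : it uses exact rational arithmetic instead of interval renormalization , takes the two enlarged sub - interval lengths ( whose sum may exceed 1 , producing the overlap ) as inputs , and emits a binary expansion of a point of the final interval . setting the two lengths to complementary probabilities reduces it to a plain binary ac .

```python
from fractions import Fraction

def dac_encode(bits, tp0, tp1):
    """toy dac encoder: symbol 0 -> [0, tp0), symbol 1 -> [1 - tp1, 1),
    scaled into the current interval; tp0 + tp1 >= 1 creates the overlap
    (tp0 + tp1 == 1 reduces to a plain binary ac)."""
    low, length = Fraction(0), Fraction(1)
    for b in bits:
        if b == 0:
            length = tp0 * length
        else:
            low += (1 - tp1) * length
            length = tp1 * length
    # emit the binary expansion of an interior point of the final
    # interval until the dyadic interval it identifies is contained
    # in [low, low + length); this takes about -log2(length) bits
    target = low + length / 2
    code, lo, hi = [], Fraction(0), Fraction(1)
    while not (low <= lo and hi <= low + length):
        mid = (lo + hi) / 2
        if target >= mid:
            code.append('1'); lo = mid   # descend into the upper half
        else:
            code.append('0'); hi = mid   # descend into the lower half
    return ''.join(code)

# p0 = p1 = 1/2 with overlapped lengths 5/8: about 0.68 bits/symbol
print(dac_encode([0, 1, 1, 0, 0, 0, 1, 0], Fraction(5, 8), Fraction(5, 8)))
```

note that , by construction , a codeword produced with overlapped intervals points to a region consistent with more than one input sequence , which is precisely the ambiguity that the joint decoder must resolve with the side information .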
in the following , the arithmetic decoder state at the -th decoding step is denoted as . the data stored in represent the interval and the codeword at iteration . the decoding process can be formulated as a symbol - driven sequential search along a proper decoding tree , where each node represents a state , and a path in the tree represents a possible decoded sequence . the following elementary decoding functions are required to explore the tree : * _ test - one - symbol _ : it computes the sub - intervals at the -th step , compares them with and outputs either an unambiguous symbol ( if belongs to one of the non - overlapped regions ) , or an ambiguous symbol . in case of unambiguous decoding , the new decoder state is returned for the following iterations . * _ force - one - symbol _ : it forces the decoder to select the sub - interval corresponding to the symbol regardless of the ambiguity ; the updated decoder state is returned . in fig . [ fig : dec_tree ] an example of a section of the decoding tree is shown . in this example the decoder is not able to make a decision on the -th symbol , as _ test - one - symbol _ returns . as a consequence , two alternative decoding attempts are pursued by calling _ force - one - symbol _ with respectively . in principle , by iterating this process , the tree , representing all the possible decoded sequences , can be explored . the best decoded sequence can finally be selected by applying the _ maximum a posteriori _ ( map ) criterion . in general , exhaustive search can not be applied due to the exponential growth of . a viable solution is obtained by applying the breadth - first sequential search known as the m - algorithm ; at each tree depth , only the nodes with the best partial metric are retained . this amounts to visiting only a subset of the most likely paths in . the map metric for a given node can be evaluated as follows : the metric can be decomposed into additive terms by setting : where and represent the additive metric associated with each branch of . the pseudocode for the dac decoder is given in algorithm [ alg : dac_decoder ] , where represents the list of nodes in explored at depth ; each tree node stores its corresponding arithmetic decoder state and the accumulated metric . [ algorithm [ alg : dac_decoder ] , in outline : initialize the list with the root node and set the symbol counter ; at each depth , run _ test - one - symbol _ on every stored node ; on an ambiguous output , call _ force - one - symbol _ for both symbol values and insert the resulting nodes in the list ; sort the nodes in the list according to their metric and keep only the nodes with the best metric ; finally , output the sequence corresponding to the first node stored in the list . ] it is worth pointing out that has to be selected as a trade - off between the memory / complexity requirements and the error probability , i.e. , the probability that the path corresponding to the original sequence is accidentally dropped . as in the case of standard viterbi decoding , the path metric turns out to be stable and reliable as long as a significant number of terms , i.e. , of decoded symbols , is taken into account . in the pessimistic case when all symbol positions trigger a decoder branching , given , one can guarantee that at least symbols are considered for metric comparisons and pruning . on the other hand , in practical cases , the interval overlap is only partial and branching does not occur at every symbol iteration . all the experimental results presented in sect . [ sec : results_asymm ] have been obtained using , while the trade - off between performance and complexity is analyzed in sect . [ sec : perf_compl ] . finally , metric reliability can not be guaranteed for the very last symbols of a finite - length sequence . for channel codes , e.g.
, convolutional codes , this issue is tackled by imposing a proper termination strategy , e.g. , forcing the encoded sequence to end in the first state of the trellis .a similar approach is necessary when using dac .examples of ac termination strategies are encoding a known termination pattern or end - of - block symbol with a certain probability or , in the case of context - based ac , driving the ac encoder in a given context . for dac, we employ a new termination policy that is tailored to its particular features . in particular , termination is obtained by encoding the last symbols of the sequence without interval overlap , i.e. , using , for all symbols with . as a consequence ,no nodes in the dac decoding tree will cause branching in the last steps , making the final metrics more reliable for the selection of the most likely sequence .however , there is a rate penalty for the termination symbols .the length of codeword is determined by the length of the final interval , which in turn depends on how much and are larger than and . as a consequence , in order to select the desired rate , it is important to quantitatively determine the dependence of the expected rate on the overlap , because this will drive the selection of the desired amount of overlap .moreover , we also need to understand how to split the overlap in order to achieve good decoding performance . in the following we derive the expected rate obtained by the dac as a function of the set of input probabilities and the amount of overlap .we are interested in finding the expected rate ( in bps ) of the codeword used by the dac to encode the sequence .this is given by the following formula : this can be derived straightforwardly from the property that the codeword generated by an ac has an expected length that depends on the size of the final interval , that is , on the product of the probabilities , and hence on the amount of overlap .the expectation is computed using the true probabilities .we set , where , so that .this amounts to enlarging each interval by an amount proportional to the overlap factors .the expected rate achieved by the dac becomes where , and .note that represents the rate contribution of symbol yielded by standard ac , while represents the decrease of this contribution , i.e. , the average number of bits saved in the binary representation of the -th input symbol .once a target rate has been selected , the problem arises of selecting . as an example, a possible choice is to take equal overlap factors .this implies that each interval is enlarged by a factor that does not depend on the source probability .this leads to a target rate it can be shown that this choice minimizes the rate for a given total amount of overlap ; the computations are simple and are omitted for brevity .this choice is not necessarily optimal in terms of the decoder error probability .however , optimizing for the error probability is impractical because of the nonlinearity of the arithmetic coding process . in practice, one also has to make sure that the enlarged intervals and are both contained inside the interval .e.g. , taking equal overlap factors as above does not guarantee this .we have devised the following rule that allows to achieve any desired rate satisfying the constraint above .we apply the following constraint : with a positive constant independent of .this leads to this can be interpreted as an additional constraint that the rate reduction for symbols 0 " and 1 " depends on their probabilities , i.e. 
, the least probable symbol undergoes a larger reduction . using ( [ eq : criterion ] ), it can be easily shown that the expected rate achieved by the dac can be written as thus , the allocation problem for an i.i.d .source is very simple .we assume that the conditional entropy is available as in fig .[ fig : modeling]-b , modeling the correlation between and . in asymmetric dsc, should be ideally coded at a rate arbitrarily close to . in practice , due to the suboptimality of any practical coder, some margin should be taken .hence , we assume that the allocation problem can be written as . since is a constant and and given , one can solve for and then perform the encoding process .finally , it should be noted that , while we have assumed that and are i.i.d . , the dac concept can be easily extended to a nonstationary source .this simply requires to consider all probabilities and overlap factors as depending on index ; all computations , including the design of the overlap factors and the derivation of the target rate , can be extended straightforwardly .a possible application is represented by context - based coding or markov modeling of correlated sources .there is one caveat though , in that , if the probabilities and context of each symbol are computed by the decoder from past symbols , decoding errors can generate significant error propagation .in many applications , it is preferable to encode the correlated sources at similar rather than unbalanced rates ; in this case , symmetric s - w coding can be used . considering a pair of sources , in symmetric s - w coding both and encoded using separate dacs .we denote as and the codewords representing and , and and the respective rates . with dac, the rate of and can be adjusted with a proper selection of the parameters and for the two dac encoders .however , it should be noted that , for the same total rate , not all possible choices of and are equally good , because some of them could complicate the decoder design , or be suboptimal in terms of error probability . to highlight the potential problems of a straightforward extension of the asymmetric dac ,let us assume that and can be chosen arbitrarily .this would require a decoder that performs a search in a symbol - synchronous tree where each node represents _ two _ sequential decoder states for and respectively .if the interval selection is ambiguous for both sequences , the four possible binary symbol pairs ( 00,01,10,11 ) need to be included in the search space ; this would accelerate the exponential growth of the tree , and quickly make the decoder search unfeasible .this example shows that some constraints need to be put on and in order to limit the growth rate of the search space . to overcome this problem, we propose an algorithm that applies the idea of time - sharing to the dac .the concept of time - shared dac has been preliminarly presented in for a pair of sources in the subcase , i.e. providing only the mid - point of the s - w rate region . in the followingwe extend this to an arbitrary combination of rates , and show how this can be generalized to an arbitrary number of sources . for two sources ,the idea is to divide the set of input indexes in two disjoint sets such that , at each index , ambiguity is introduced in at most one out of the two sources . in particular , for sequences and of length , let and be the subsets of even and odd integer numbers in respectively .we employ a dac on and , but the choice of parameters and differs . 
in particular , we let the parameters depend on the symbol index , i.e. , and .the dac of employs parameter for all , and otherwise .vice versa , is encoded with parameter for all , and otherwise . as a consequence of these constraints , at each step of the decoding process , ambiguity appears in at most one out the two sequences . in this way, the growth rate of the decoding tree remains manageable , as no more than two new states are generated at each transition , exactly as in the asymmetric dac decoder ; this also makes the map metric simpler .the conceptual relation with time - sharing is evident . since , during the dac encoding process , for each input symbol the ambiguity is introduced in at most one out the two encoders , this corresponds to switching the role of side information between either source on a symbol - by - symbol basis . by varying the parameters and , all combinations of rates can be achieved .the achieved rates can be derived repeating the same computations described in sect .[ sec : rate_sel ] , and can be expressed as and .the rate allocation problem amounts to selecting suitable rates and such that , , and . in practice onewill typically take some margin , such that ; for safety , a margin should also be taken on and with respect to the conditional entropy .since the prior probabilities of and are given , one can solve for and , and then perform the encoding process .thus , the whole s - w rate region can be swept . similarly to the asymmetric case, the symmetric decoding process can be viewed as a search along a tree ; however , specifically for the case of two correlated sources , each node in tree represents the decoding states of two sequential arithmetic decoders for and respectively . at each iteration , sequential decodingis run from both states .the time - sharing approach guarantees that , for a given index , the ambiguity can be found only in one of the two decoders .therefore , at most two branches must be considered , and the tree can be constructed using the same functions introduced in sect .[ sec : dac_decoder ] for the asymmetric case .this would be the same also for sources .in particular , for , _ test - one - symbol_( ) yields an unambiguous symbol , whereas ambiguity can be found only while attempting decoding for with _ test - one - symbol_( ) .in conclusion , from the node the function _ test - one - symbol _ is used on both states . if ambiguity is found on , _ force - one - symbol _is then used to explore the two alternative paths for , whereas is used as side information for branch metric evaluation . in the casethat , the roles of and are exchanged .therefore , algorithm [ alg : dac_decoder ] can be easily extended to the symmetric case by alternatively probing either or for ambiguity , and possibly generating a branching .the joint probability distribution can be written as the symmetric encoder and decoder can be easily generalized to an arbitrary number of sources .the idea is to identify subsets of input indexes such that , at each symbol index , ambiguity is introduced in at most one out of the sources . in particular , for sequences of length , let be disjoint subsets of .we denote the dac parameters as .the dac of employs parameter for all , and otherwise . 
as a consequence of these constraints , at each step of the decoding process , ambiguity appears in at most one out the sequences .note that this formulation also encompasses the case that one or more sources are independent of each other and from all the others ; these sources can be coded with a classical ac , taking for this source .the selection of the sets and the overlap factors , for , is still somewhat arbitrary , as the expected rate of source depends on both the cardinality of and the value of . in a realistic applicationit would be more practical to fix the sets once and for all , and to modify the parameters so as to obtain the desired rate .this is because , for time - varying correlations , one has to update the rate on - the - fly . ina distributed setting , varying one parameter requires to communicate the change only to source , while varying the sets requires to communicate the change to all sources .therefore , we define such that the statistically dependent sources take in turns the role of the side information .any additional independent sources are coded separately using . in particular , we set , where denotes the remainder of the division between two integers , and .the dac encoder for the -th source inserts ambiguity only at time instants . at each node ,the decoder stores the states of the arithmetic decoders , and possibly performs a branching if the codeword related to the only potentially ambiguous symbol at the current time is actually ambiguous .although this encoding and decoding structure is not necessarily optimal , it does lead to a viable decoding strategy .in the following we provide results of a performance evaluation carried out on dac . we implement a communication system that employs a dac and a joint decoder , with no feed - back channel ; at the decoder , pruning is performed using the m - algorithm , with m=2048 .the side information is obtained by sending the source through a binary symmetric channel with transition probability , which measures the correlation between the two sources .we simulate a source with both balanced ( ) and skewed ( ) symbol probabilities .the first setting implies and , where depends on .the closer to 0.5 , the less correlated the sources , and hence the higher . in the skewed case , given , fixed , whereas both and depend on . unless otherwise specified , each point of the figures / tables presented in the following has been generated averaging the results obtained encoding samples . as a first experiment ,the benefit of the termination policy is assessed .stationary source emits sequences of symbols , with and , which are encoded with dac at fixed rate bps , i.e. , bps higher than the theoretical s - w bound .for we assume ideal lossless encoding at average rate bps , so that the total average rate of and is 1.5 bps .the bit error rate ( ber ) yielded by the decoder is measured for increasing values of the number of termination symbols . the same simulation is performed with . 
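for reference , the correlation model used in these experiments can be reproduced in a few lines ; the sketch below ( our own helper , with hypothetical names ) draws a source with p(x = 0) = p0 and a side information sequence obtained through a binary symmetric channel with crossover probability eps .

```python
import numpy as np

rng = np.random.default_rng(0)

def correlated_pair(n, p0, eps):
    """x_i i.i.d. with p(x = 0) = p0; y = x xor e, e ~ bernoulli(eps)."""
    x = (rng.random(n) >= p0).astype(np.uint8)
    e = (rng.random(n) < eps).astype(np.uint8)
    return x, x ^ e

x, y = correlated_pair(200, 0.5, 0.05)
print((x != y).mean())   # empirical crossover rate, close to eps
```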
in all simulated cases , the dac overlap has been selected to compensate for the rate penalty incurred by the termination , so as to achieve the 1.5 bps overall target rate . the overlap factors are selected according to . the results are shown in fig . [ fig : term ] ; it can be seen that the proposed termination is effective at reducing the ber . there is a trade - off in that , for a given rate , increasing reduces the effect of errors in the last symbols , but requires overlapping the intervals more . it is also interesting to consider the position of the first decoding error , as , without termination , errors tend to cluster at the end of the block . for , the mean position value is 191 , 178 , 168 , 161 and 95 , with standard deviation 13 , 18 , 25 , 36 and 49 , respectively for equal to 0 , 5 , 10 , 15 and 20 . for , the mean value is 987 , 954 , 881 , 637 and 536 , with standard deviation 57 , 124 , 229 , 308 and 299 . the optimal values of are around 15 - 20 symbols . therefore , we have selected and used this value for all the experiments reported in the following . [ fig : term caption ( fragment ) : ( number of termination symbols ) ; , total rate = 1.5 bps , rate of = 0.5 bps , . ] next , an experiment has been performed to validate the theoretical analysis of the effects of different overlap designs shown in sect . [ sec : rate_sel]-b . in fig . [ fig : overlap ] the performance obtained by using the designs of equations and , respectively , is shown . the experimental settings are , , fixed rate for of 0.5 bps , and total average rate for and equal to 1.5 bps , with ideal lossless encoding of at rate . the ber is reported as a function of the source correlation expressed in terms of . it is worth noticing that the performance yielded by the different overlap design rules is almost equivalent . note that the rule in consistently outperforms that in , confirming that the latter is optimal only for the rate . there is some difference when is very high ( i.e. , for weakly correlated sources ) ; however , this case is of marginal interest since the performance is poor ( the ber is of the order of 0.1 ) . [ fig : overlap caption ( fragment ) : , total rate = 1.5 bps . ] the performance of the proposed system is compared with that of a system where the dac encoder and decoder are replaced by a punctured turbo code similar to that in . we use turbo codes with rate- generators ( 17,15 ) octal ( 8 states ) and ( 31,27 ) octal ( 16 states ) , and employ s - random interleavers and 15 decoder iterations . we consider the cases of a balanced source ( ) and a skewed source ( in particular and ) . for a skewed source , as an improvement with respect to , the turbo decoder has been modified by adding to the decoder metric the _ a priori _ term , as done in . block sizes , and have been considered ( with s - random interleaver spreads of 5 , 11 and 25 respectively ) ; this allows us to assess the dac performance at small and medium block lengths . besides turbo codes , we also considered the rate - compatible ldpc codes proposed in . for these codes , a software implementation is publicly available on the web ; among the available pre - designed codes , we used the matrix for , which is comparable with the block lengths considered for the dac and the turbo code . the results are worked out in a fixed - rate coding setting as in , i.e. , the rate is the same for each sample realization of the source .
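the overlap selection used to hit a fixed target rate while paying for the termination can be sketched as follows . the parametrization of the enlarged interval lengths below ( adding a common overlap a to each probability ) is one plausible reading of the design rules above , not necessarily the paper's exact choice ; the expected rate per symbol then follows from the final - interval - length argument of sect . [ sec : rate_sel ] .

```python
import numpy as np

def expected_rate(p0, a):
    """expected bits/symbol when both sub-intervals are enlarged by a
    common overlap a (tp0 = p0 + a, tp1 = 1 - p0 + a); a = 0 gives h(p0)."""
    p1 = 1 - p0
    return -(p0 * np.log2(p0 + a) + p1 * np.log2(p1 + a))

def overlap_for_rate(p0, target, n, t):
    """bisection on a so that a block of n symbols, whose last t symbols
    are terminated without overlap, meets the target rate (bits/symbol)."""
    lo, hi = 0.0, min(p0, 1 - p0)   # keeps both enlarged lengths <= 1
    for _ in range(60):
        a = (lo + hi) / 2
        r = ((n - t) * expected_rate(p0, a) + t * expected_rate(p0, 0.0)) / n
        if r < target:
            hi = a                  # too much overlap, rate too low
        else:
            lo = a
    return 0.5 * (lo + hi)

# balanced source, 200-symbol block, 15 termination symbols, 0.5 bps target
print(overlap_for_rate(0.5, 0.5, 200, 15))
```

the same routine makes the termination trade - off visible : for short blocks the t unoverlapped symbols force a noticeably larger overlap on the remaining ones , while for large n their cost vanishes .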
fig . [ fig:1 ] reports the results for the balanced source case ; the abscissa is , and is related to . the performance is measured in terms of the residual ber after decoding , which is akin to the distortion in the wyner - ziv binary coding problem with hamming metric . both the dac and the turbo code generate a description of at fixed rate 0.5 bps ; the total average rate of and is 1.5 bps , with ideal lossless encoding of at rate . since , we also have that . this makes it possible to compare these results with the case of skewed sources , which is presented later in this section , so as to verify that the performance is uniformly good for all distributions . the wyner - ziv bound for a doubly symmetric binary source with hamming metric is also reported for comparison . as can be seen , the performance of the dac slightly improves as the block length increases . this is mostly due to the effect of the termination . as the number of bits used to terminate the encoder is chosen independently of the block length , the rate penalty for not overlapping the last bits weighs more when the block length is small , while the effect vanishes for large block lengths . in , where the termination effect is not considered , the performance is shown to be almost independent of the block size . it should also be noted that the value of required for near - optimal performance grows exponentially with the block size . as a consequence , the memory which leads to near - optimal performance for or limits the performance for . we compared both 8-state and 16-state turbo codes . the 8-state code is often used in practical applications , as it exhibits a good trade - off between performance and complexity ; the 16-state code is more powerful , and requires more computations . it can be seen that , for block lengths and , the proposed system outperforms the 8-state and 16-state turbo codes . for block length , the dac performs better than the 8-state turbo code , and is equivalent to the 16-state code . it should be noted that , in this experiment , only the "channel coding performance" of the dac is tested , since for the balanced source no compression is possible as . consequently , it is remarkable that the dac turns out to be generally more powerful than the turbo code at equal block length . note that the performance of the 16-state code is limited by the error floor , and could be improved using an ad - hoc design of the code or the interleaver ; the dac has no error floor , but its waterfall is less steep . for , a result not reported in fig . [ fig:1 ] shows that the dac with and also outperforms the 8-state turbo coder with . in fig . [ fig:1 ] and in the following , it can be seen that turbo codes do not show the typical cliff - effect . this is due to the fact that , at the block lengths considered in this paper , the turbo code is still very far from capacity ; its performance improves for larger block lengths , where the cliff - effect can be seen .
in terms of the rate penalty , setting a residual ber threshold of , for the dac is almost 0.3 bps away from the s - w limit , while the best 16-state turbo code simulated in this paper is 0.35 bps away ; for the dac is 0.26 bps away , while the best 8-state turbo code is 0.30 bps away . the performance of the ldpc code for is halfway between the turbo codes for and , and hence very similar to the dac . [ fig:1 caption ( fragment ) : , total rate = 1.5 bps , rate for = 0.5 bps : dac versus turbo coding , balanced source . dac : distributed arithmetic coding ; tc8s and tc16s : 8- and 16-state turbo codes with s - random interleaver ; ldpc - r and ldpc - i : regular and irregular ldpc codes from . ] the results for a skewed source are reported in fig . [ fig:3 ] for . in this setting , we select various values of , and encode at a fixed rate such that the total average rate for and equals 1.5 bps , with ideal lossless encoding of at rate . for fig . [ fig:3 ] , from left to right , the rates of are respectively 0.68 , 0.67 , 0.66 , 0.64 , 0.63 , 0.61 , 0.59 , and 0.58 bps . consistently with , all turbo codes considered in this work perform rather poorly on skewed sources . in , this behavior is explained by the fact that , when the source is skewed , the states of the turbo code are used with uneven probability , leading to a smaller equivalent number of states . on the other hand , the dac has good performance also for skewed sources , as it is designed to work with unbalanced distributions . the performance of the ldpc codes is similar to that of the best turbo codes , and slightly worse than the dac . similar remarks can be made in the case of , which is reported in fig . [ fig:2 ] . in this case , we have selected a total rate of 1 bps , since the source is more unbalanced and hence easier to compress . the rates for are respectively 0.31 , 0.34 , 0.37 , 0.39 , 0.42 , 0.44 , and 0.47 bps . in this case the turbo code performance is better than in the previous case , although it is still poorer than the dac . this is due to the fact that the sources are more correlated , and hence the crossover probability on the virtual channel is lower . therefore , the turbo code has to correct a smaller number of errors , whereas for the correlation was weaker and hence the crossover probability was higher . [ fig:3 caption ( fragment ) : , total rate = 1.5 bps : dac versus turbo coding , skewed source . dac : distributed arithmetic coding ; tc8s and tc16s : 8- and 16-state turbo codes with s - random interleaver ; ldpc - r and ldpc - i : regular and irregular ldpc codes from . ] [ fig:2 caption ( fragment ) : , total rate = 1 bps : dac versus turbo coding , skewed source . dac : distributed arithmetic coding ; tc8s and tc16s : 8- and 16-state turbo codes with s - random interleaver ; ldpc - r and ldpc - i : regular and irregular ldpc codes from . ] we also considered the case of strongly correlated sources , for which high - rate channel codes are needed . these sources are a good model for the most significant bit - planes of several multimedia signals . due to the inefficiency of syndrome - based coders , practical schemes often assume that no dsc is carried out on those bit - planes , e.g.
, they are not transmitted , and at the decoder they are directly replaced by the side information . the results are reported in tab . [ tab : corr ] for the dac and the 16-state turbo code , when a rate of 0.1 bps is used for . the table also reports the cross - over probability , corresponding , for a balanced source , to the performance of an uncoded system that reconstructs as the side information . as can be seen , the dac has similar performance to the turbo codes and ldpc codes , and becomes better when the source is extremely correlated , i.e. , . [ tab : corr residue : for cross - over probabilities 0.1 , 0.01 and 0.001 , the table compares the residual ber of the dac , the 16-state turbo code ( tc16s ) , and the regular and irregular ldpc codes ( ldpc - r , ldpc - i ) ; the numerical entries were lost in extraction . ] finally , the coding efficiency of dac is measured in terms of the expected rate required to achieve error - free decoding . this amounts to re - encoding the sequence at increasing rates , and represents the optimal dac performance if the encoder could exactly predict the decoder behavior . since each realization of the source is encoded using a different number of bits , this case is referred to as variable - rate encoding . this scenario is representative of practical distributed compression settings , e.g. , , in which one seeks the shortest code that allows each realization of the source process to be reconstructed without errors . for this simulation , the following setup is used . the source correlation is kept constant and , for each sample realization of the source , the total rate is progressively increased beyond the s - w bound , in steps of 0.01 bps , until error - free decoding is obtained . this operation is repeated on 1000 different realizations of the source ; the mean value and standard deviation of the rates yielding correct decoding are then computed . the results have been worked out for block length , with probabilities and . for , the conditional entropy ( i.e. , the s - w bound ) has been set to 0.5 bps . for , the joint entropy has been set to 1 bps ; this amounts to coding at the ideal rate of bps , with a s - w bound of bps . the results are reported in tab . [ tab : vrate ] . as can be seen , the dac has a rate loss of about 0.06 bps with respect to the s - w bound for both the symmetric and the skewed source . the turbo code exhibits a loss of about 0.2 bps and 0.13 bps . the ldpc - r code has a relatively small loss , i.e. , 0.12 bps in the symmetric case and 0.10 in the skewed one . the ldpc - i code has a slightly smaller loss , i.e. , 0.09 bps in the symmetric case and 0.075 in the skewed one . however , the dac still performs slightly better . it should be noted that , while for ldpc and turbo codes the encoding is done only once thanks to rate - compatibility , for the dac multiple encodings are necessary , leading to higher complexity . [ tab : vrate caption : performance comparison for variable - rate coding : mean and standard deviation of the rate needed for lossless compression ; the table body was lost in extraction . ] [ tab : complexity : table lost in extraction . ] in the following we provide results for the symmetric dac . we consider two sources with balanced ( ) and unbalanced ( ) distributions with arbitrary rate splitting , and use . for fixed rate , we set the total rate of and equal to 1.5 bps . we consider two cases of rate splitting . in the first case the rate is equally split ; we choose so as to achieve a rate of 0.75 bps for each source . in the second case we encode at 0.6 bps and at 0.9 bps . the performance of the symmetric dac is worked out for and .
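under the same hypothetical parametrization used in the earlier sketches , the time - sharing construction of sect . [ sec : symm ] maps the two per - source overlap factors to a rate pair in closed form , which is how splits such as 0.75/0.75 or 0.6/0.9 bps can be dialed in ; the helper below is ours , not the paper's allocator .

```python
import numpy as np

def bits_per_symbol(p0, a):
    """expected bits/symbol with common overlap a (a = 0 -> plain ac)."""
    p1 = 1 - p0
    return -(p0 * np.log2(p0 + a) + p1 * np.log2(p1 + a))

def time_shared_rates(p0, ax, ay):
    """rate pair (rx, ry) when x is overlapped (with ax) on half of the
    indexes and y (with ay) on the other half, each source acting as a
    plain ac wherever the other one is ambiguous."""
    h = bits_per_symbol(p0, 0.0)
    rx = 0.5 * bits_per_symbol(p0, ax) + 0.5 * h
    ry = 0.5 * h + 0.5 * bits_per_symbol(p0, ay)
    return rx, ry

# balanced source: roughly the 0.75/0.75 and 0.6/0.9 bps splits above
print(time_shared_rates(0.5, 0.2071, 0.2071))   # ~ (0.75, 0.75)
print(time_shared_rates(0.5, 0.3706, 0.0743))   # ~ (0.60, 0.90)
```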
since symmetric dsc coders typically reconstruct each sequence either without any errors or with a large number of errors , we report the frame error rate ( fer ) instead of the residual ber , i.e. , the probability that a data block contains at least one error after joint decoding . for each point , we simulated at least bits . fig . [ fig : symm_fr ] shows the results for the symmetric dac . comparisons with other algorithms can be made based on the following remarks . in , a symmetric s - w coder is proposed employing turbo codes , which can obtain any rate splitting . in the case that one source is encoded without ambiguity , this reduces to the asymmetric turbo - based s - w coder we have employed in sect . [ sec : results_asymm ] . in , it is reported that this algorithm achieves its best performance in the asymmetric points of the s - w region , while it is slightly poorer in the intermediate points . therefore , in fig . [ fig : symm_fr ] we report the fer corresponding to the best turbo code shown in fig . [ fig:1 ] for and , as this lower - bounds the fer achieved by over the entire s - w region . moreover , we also report the fer achieved by irregular ldpc codes with block length . the asymmetric algorithm in has been extended in to arbitrary rate splitting , showing that the performance is uniformly good over the entire s - w region . finally , we also report the fer curve of the asymmetric dac for . [ fig : symm_fr caption ( fragment ) : , total rate = 1.5 bps . dac : distributed arithmetic coding ; tc16s : 16-state turbo code with s - random interleaver ; ldpc - i : irregular ldpc codes from . ] in fig . [ fig : symm_fr ] , the results for symmetric coding are very similar to what has been observed in the asymmetric case . the dac achieves very similar ber for and ; hence , the fer is smaller for . the results are almost independent of the rate splitting between and , as can be seen by comparing the two rate - splitting cases as well as the asymmetric dac . the turbo codes for and , and the irregular ldpc code , exhibit poorer performance than the dac . for variable - rate coding , we consider the same two settings as in sect . [ sec : res_vr ] , i.e. , block length , with probabilities and ; in the first case the conditional entropy has been set to 0.5 bps , while in the second case the joint entropy has been set to 1 bps . the results are shown in fig . [ fig : symm_vr ] . as can be seen , the performance of the symmetric dac is uniformly good over the entire s - w region , and is significantly better than that of turbo codes and ldpc codes . in particular , the dac suboptimality is between 0.03 - 0.06 bps , as opposed to 0.07 - 0.09 for the irregular ldpc code , and 0.14 - 0.21 for the turbo code . it should be noted , however , that variable - rate coding requires feedback , while the s - w bound is achievable with no feedback , with vanishing error probability as . in our simulations we re - encode the sequence at increasing rates ( in steps of 0.01 bps ) , which represents the optimal dac performance if the encoder could exactly predict the decoder behavior .
[ fig : symm_vr caption ( fragment ) : ... , and those in the bottom - left corner to . dac : distributed arithmetic coding ; tc16s : 16-state turbo code with s - random interleaver ; ldpc - i : irregular ldpc codes from . the solid curves represent the s - w bound . ] we have proposed dac as an alternative to existing dsc coders based on channel codes . dac can operate in the entire s - w region , providing both asymmetric and symmetric coding . dac achieves good compression performance , with uniformly good results over the s - w rate region ; in particular , its performance is comparable with or better than that of turbo and ldpc codes at small and medium block lengths . this is very important in many applications , e.g. , in the multimedia field , where the encoder partitions the compressed file into small units ( e.g. , packets in jpeg 2000 , slices and nalus in h.264/avc ) that have to be coded independently . as for encoding complexity , which is of great interest for dsc , dac has linear encoding complexity , like a classical ac . turbo codes and the ldpc codes in also have linear encoding complexity , whereas general ldpc codes typically have more than linear , often quadratic , complexity . as a consequence , the complexity of dac is suitable for dsc applications . a major advantage of dac lies in the fact that it can exploit statistical prior knowledge about the source very easily . this is a strong asset of ac , which is retained by dac . probabilities can be estimated on - the - fly based on past symbols ; context - based models employing conditional probabilities can also be used , as well as other models providing the required probabilities . these models make it possible to account for the nonstationarity of typical real - world signals , which is a significant advantage over dsc coders based on channel codes . in fact , for channel codes , accounting for time - varying correlations requires adjusting the code rate , which can only be done for the next data block , incurring a significant adaptation delay . moreover , with channel codes it is not easy to take advantage of prior information ; for turbo codes it has been shown to be possible , employing a more sophisticated decoder . another advantage of the proposed dac lies in the fact that the encoding process can be seen as a simple extension of the ac process . as a consequence , it is straightforward to extend an existing scheme employing ac as its final entropy coding stage in order to provide dsc functionalities . a.d. liveris , z. xiong , and c.n . georghiades , `` distributed compression of binary sources using conventional parallel and serial concatenated convolutional codes , '' in _ proc . of ieee data compression conference _ , 2003 , pp . 193 - 202 . a. majumdar , j. chou , and k. ramchandran , `` robust distributed video compression based on multilevel coset codes , '' in _ proceedings of thirty - seventh asilomar conference on signals , systems and computers _ , 2003 , pp .
distributed source coding schemes are typically based on the use of channel codes as source codes . in this paper we propose a new paradigm , named "distributed arithmetic coding" , which extends arithmetic codes to the distributed case by employing sequential decoding aided by the side information . in particular , we introduce a distributed binary arithmetic coder for the slepian - wolf coding problem , along with a joint decoder . the proposed scheme can be applied to two sources in both the asymmetric mode , wherein one source acts as side information , and the symmetric mode , wherein both sources are coded with ambiguity , at any combination of achievable rates . distributed arithmetic coding provides several advantages over existing slepian - wolf coders , especially good performance at small block lengths , and the ability to incorporate arbitrary source models in the encoding process , e.g. , context - based statistical models , in much the same way as a classical arithmetic coder . we have compared the performance of distributed arithmetic coding with turbo codes and low - density parity - check codes , and found that the proposed approach is very competitive . distributed source coding , arithmetic coding , slepian - wolf coding , wyner - ziv coding , compression , turbo codes , ldpc codes .
the mean value has long been used as a measure of the location of the center of a distribution . however , in many applications , there are occasions when analysts are interested in observing and analyzing different points of the distribution . the distribution function of a random variable can be characterized by an infinite number of points spanning its support . these points are called quantiles ; thus , quantiles are points taken at regular intervals from . the quantile of a data distribution , , is interpreted as the value such that there is of mass on its right side and of mass on its left side . in particular , for a continuous random variable , the quantile of is the value which solves ( we assume that this value is unique ) , where . thus , the population lower quartile , median and upper quartile are the solutions to the equations , and , respectively . compared to the mean value , quantiles are useful measures because they are less susceptible to skewed distributions and outliers . this fact forms the building block of quantile regression , whose strength , unlike that of its standard mean regression counterpart , lies in its flexibility and its ability to provide a more complete investigation of the entire conditional distribution of the response variable given its predictors . for this reason , quantile regression has become increasingly popular since the pioneering research of . after its introduction , quantile regression has attracted considerable attention in the recent literature . it has been applied in a wide range of fields such as agriculture , body mass index , microarray studies , financial portfolios , economics , ecology , climate change , survival analysis and so on . a comprehensive account of other recent applications of quantile regression can be found in and . longitudinal data are encountered in a wide variety of applications , including economics , finance , medicine , psychology and sociology . such data are repeatedly measured on independent subjects over time , and correlation arises between measurements from the same subject . since the pioneering work by , mixed models with random effects have become a common and active approach for dealing with longitudinal data . a number of books and a vast number of research papers published in this area have been motivated by laird and ware 's mixed models . the majority of these books and research papers focus on standard mean regression . see for example , , , , , , and , among others . in contrast , only limited research has been conducted on quantile regression for longitudinal data . for example , proposed the regularization quantile regression model , studied quantile regression for longitudinal data in different contexts and developed resampling approaches for inference . suggested a bayesian quantile regression method for longitudinal data using the skewed laplace distribution ( sld ) for the errors , proposed a flexible bayesian quantile regression method for dependent and independent data using an infinite mixture of normals for the errors , studied quantile regression for longitudinal data with nonignorable intermittent missing data and proposed a method for regularization in mixed quantile regression models . longitudinal data with ordinal responses routinely appear in many applications , including economics , psychology and sociology . existing approaches in classical mean regression are typically designed to deal with such data .
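before turning to ordinal responses , the robustness claim made above about quantiles versus the mean is easy to check numerically ; in the toy draw below ( a minimal sketch , numpy only ) , a handful of gross outliers drags the sample mean far from the bulk of the data while the quartiles barely move .

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(loc=10.0, scale=2.0, size=1000)
x[:10] = 500.0   # ten gross outliers out of 1000 observations

print(np.mean(x))                         # pulled far above 10
print(np.quantile(x, [0.25, 0.5, 0.75]))  # quartiles stay near the bulk
```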
at the present time , the most common method in classical mean regression for modelling such data employs the cumulative logit model . there exists a large literature on the analysis of longitudinal data with ordinal responses , and we refer to for an overview . in contrast , quantile regression approaches for estimating the parameters of ordinal longitudinal data have not yet been proposed . the goal of this paper is to fill this gap by introducing an ordinal random effects quantile regression model that is appropriate for such data . in section [ methods ] , we present a random effects ordinal quantile regression model for the analysis of longitudinal data with an ordinal outcome by using a data augmentation method . we also discuss prior elicitation . in section [ gibbs sampler ] , we outline the bayesian estimation method via the gibbs sampler . in section [ simulation_studies ] , we carry out simulation scenarios to investigate the performance of the proposed method , and in section [ rd ] , we illustrate our proposed method using a real dataset . we conclude the paper with a brief discussion in section [ conclusion ] . an appendix contains the gibbs sampler details . consider training data with covariate vector and outcome of interest . the quantile regression model for the response given the covariate vector takes the form of , where is the inverse cumulative distribution function ( cdf ) and is the unknown quantile coefficients vector . this is why the quantile estimators can be considered nonparametric maximum likelihood estimators . unlike standard mean regression , the error term does not appear in ( [ eq1 ] ) because all the random variation in the conditional distributions is accounted for by variation in the quantile , . consequently , quantile regression does not require any assumption about the distribution of the errors and , unlike standard mean regression , is more robust to outliers and non - normal errors , offering greater statistical efficiency if the data have outliers or are non - normal . it belongs to a robust model family , which can provide a more complete picture of the predictor effects at different quantile levels of the response variable rather than focusing solely on the center of the distribution . one attractive feature of quantile regression is that the linear quantile regression model can be used to estimate the parameters of a nonlinear model , because the quantile regression estimators are equivariant to linear or nonlinear monotonic transformations of the response variable , i.e. , . in general , this is a very important property , since it tells us that quantile regression provides consistent back - transformation and easy interpretation in the case of transformations such as the logarithm and the square root . the quantile estimators have the same interpretation as those of a standard mean regression model , except for the indexed quantile levels at which each is estimated . for example , a slope of -0.78 for the response variable given the predictor at the quantile would indicate that the quantile of the response variable decreases by 0.78 for each 1 - unit increase in . the unknown quantity is estimated by , where is the check loss function ( clf ) at a quantile , , and is the indicator function . by contrast , the standard mean regression method is based on the quadratic loss .
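the minimization of the clf in ( [ eq2 ] ) is a linear program , so a linear quantile regression fit can be sketched in a few lines . the formulation below is the generic textbook lp ( residuals split into positive and negative parts ; helper names are ours ) , shown here only as an illustration of the check - loss criterion , not as the estimation method developed in this paper .

```python
import numpy as np
from scipy.optimize import linprog

def check_loss(u, tau):
    """clf rho_tau(u) = u * (tau - 1{u < 0})."""
    return u * (tau - (u < 0))

def quantile_regression(X, y, tau):
    """minimize sum_i rho_tau(y_i - x_i' beta) as an lp:
    y - X beta = u - v with u, v >= 0, cost tau*sum(u) + (1-tau)*sum(v)."""
    n, k = X.shape
    c = np.concatenate([np.zeros(k), tau * np.ones(n), (1 - tau) * np.ones(n)])
    A_eq = np.hstack([X, np.eye(n), -np.eye(n)])
    bounds = [(None, None)] * k + [(0, None)] * (2 * n)
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=bounds, method="highs")
    return res.x[:k]

# toy heteroscedastic data: the slope grows with the quantile level
rng = np.random.default_rng(2)
x = rng.uniform(0, 10, 500)
y = 1.0 + 0.5 * x + (0.2 + 0.1 * x) * rng.normal(size=500)
X = np.column_stack([np.ones_like(x), x])
for tau in (0.25, 0.5, 0.75):
    print(tau, quantile_regression(X, y, tau))
```

the fitted slopes differ across quantile levels on this toy data , which is exactly the kind of distributional information a single mean regression would miss .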
observed that the clf ([eq2]) is closely related to the skewed laplace distribution (sld), and consequently the unknown quantity can be estimated by exploiting this link. this observation opens new avenues when dealing with quantile regression and its applications. the density function of an sld is . minimizing the clf ([eq2]) is equivalent to maximizing the likelihood function of obtained by assuming that the errors come from an sld. by utilizing the link between the clf ([eq2]) and the sld, proposed a bayesian framework for quantile regression using the sld for the error distribution and showed the propriety of the conditional distribution of under an improper prior distribution. unfortunately, the joint posterior distribution under this framework does not have a known tractable form, and consequently the unknown quantity is updated from its posterior using the metropolis-hastings (m-h) algorithm. in this context, proposed a simple and efficient gibbs sampling algorithm for updating by representing the sld as a member of the scale mixture of normals family. if , then the sld for arises when has an exponential distribution with rate parameter . under this formulation, presented a bayesian framework for structured additive quantile regression models, developed bayesian quantile regression for longitudinal data and presented bayesian lasso mixed quantile regression. let , where denotes the response for the subject measured at the time. then, the quantile regression model for ordinal longitudinal data can be formulated in terms of an ordinal latent variable as follows: , where is the inverse cdf of the unobserved latent response conditional on a location-shift random effect, , and is a vector of predictors. the observed ordinal response is assumed to be related to the unobserved response by , where are cut-points whose coordinates satisfy . here, and respectively define the lower and upper bounds of the interval corresponding to the observed outcome. assuming that the error of the unobserved response has an sld as in ([eq3]), we have . here, the latent variable follows an exponential distribution with rate parameter , and follows the standard normal distribution. then, the cdf for the cth category of the observed response is: , where is the standard normal cdf. using , we can calculate as follows: . prior distribution selection is an essential step in any bayesian inference; however, in the bayesian paradigm it is particularly crucial, as issues can arise when default prior distributions are used without caution. for the fixed effects, a typical choice is to assign a zero-mean normal prior distribution to each , which leads to the ridge estimator. however, this prior performs poorly if there are big differences in the sizes of the fixed effects. a generalization of the ridge prior is the laplace prior, which is equivalent to the lasso model. this prior has received considerable attention in the recent literature (see, for example, , ).
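the scale-mixture representation can be checked numerically. in the standard parameterization (used, e.g., by kozumi and kobayashi), an sld error at quantile level p can be generated as theta*v + tau*sqrt(v)*u with v exponential and u standard normal; the constants below follow that convention, and the quantile level is an illustrative choice:

    import numpy as np

    def sample_sld(p, size, rng):
        # location-scale mixture of normals representation of the
        # skewed (asymmetric) laplace distribution at quantile level p
        theta = (1 - 2 * p) / (p * (1 - p))
        tau2 = 2.0 / (p * (1 - p))
        v = rng.exponential(scale=1.0, size=size)   # mixing variable, mean one
        u = rng.normal(size=size)                   # standard normal
        return theta * v + np.sqrt(tau2 * v) * u

    rng = np.random.default_rng(1)
    eps = sample_sld(p=0.25, size=200_000, rng=rng)
    print(np.mean(eps < 0.0))   # should be close to 0.25: the p-quantile of eps is 0

the check exploits the defining property of the sld: its p-quantile is zero, which is exactly what makes it a working likelihood for the check loss.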
in this paper, we assign a laplace prior to each , which takes the form of . according to , the prior ([eq8]) can be written as . from ([eq9]), it can be seen that we assign a zero-mean normal prior distribution with unknown variance to each . we specify independent exponential prior distributions with rate parameter for the variances. then, we put a gamma prior on with shape parameter and rate parameter . since , this motivates us to consider an inverse gamma prior on with shape parameter and scale parameter . following and , we consider order statistics from the distribution for the unknown cut-points: , where and . because and we observe if , the posterior distribution of all the parameters and latent variables is given by , where , , , , and . the full conditional distributions for , and are summarized below, and details of all derivations are provided in appendix [appendix]. using the data augmentation procedure as in , a gibbs sampling method for the ordinal quantile regression model with longitudinal data is constructed by updating , , , and from their full conditional distributions. from ([eq12]), we can construct a tractable algorithm for efficient posterior computation that works as follows (a schematic implementation of the core updates is sketched after table [table.infer1] below):

1. sample from the generalized inverse gaussian distribution gig , where and .
2. sample from n , where .
3. sample from gig , where and .
4. sample from a gamma distribution with shape parameter and rate parameter .
5. sample from n , where .
6. sample from an inverse gamma distribution with shape parameter and scale parameter .
7. sample the latent from a truncated normal (tn) distribution tn .
8. sample from a uniform distribution on the interval ].

(simulation study 1.) the covariates and were sampled independently from a logistic distribution with location parameter and scale parameter . three values of were considered: , and . the cut-points used were and . the outcome was then sampled according to: . for each choice of and , we generated 200 data sets. we ran the proposed gibbs sampler algorithm for 20,000 iterations, after a burn-in period of 2,000 iterations.

[table [table.infer1]: estimated relative bias and relative efficiency (eff) of the parameter estimates in simulation study 1 at , for the compared methods and cluster sizes 5, 10 and 20; numerical entries omitted.]
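the ordinal and random-effects layers add the truncated-normal and cut-point steps on top of two core updates that already appear in the continuous-response case. those two updates can be written down exactly; the sketch below implements them for plain bayesian quantile regression with an asymmetric laplace working likelihood (priors, sizes and the gig-to-scipy parameter matching are stated assumptions):

    import numpy as np
    from scipy import stats

    def gibbs_ald_quantile(X, y, p, n_iter=2000, prior_var=100.0, seed=0):
        # core gibbs sampler under the mixture representation
        # y_i = x_i' beta + theta v_i + sqrt(tau2 v_i) z_i, v_i ~ Exp(1)
        rng = np.random.default_rng(seed)
        n, k = X.shape
        theta = (1 - 2 * p) / (p * (1 - p))
        tau2 = 2.0 / (p * (1 - p))
        beta = np.zeros(k)
        draws = np.empty((n_iter, k))
        for it in range(n_iter):
            # step 1: v_i | rest ~ GIG(1/2, a, b_i), density ~ v^{-1/2} exp(-(a v + b/v)/2);
            # scipy's geninvgauss(p, c, scale=s) matches with c = sqrt(a*b), s = sqrt(b/a)
            d = y - X @ beta
            a = theta**2 / tau2 + 2.0
            b = d**2 / tau2 + 1e-10
            c = np.sqrt(a * b)
            v = stats.geninvgauss.rvs(0.5, c, scale=np.sqrt(b / a), random_state=rng)
            # step 2: beta | rest ~ normal (conjugate weighted least squares)
            w = 1.0 / (tau2 * v)
            prec = X.T @ (X * w[:, None]) + np.eye(k) / prior_var
            cov = np.linalg.inv(prec)
            mean = cov @ (X.T @ ((y - theta * v) * w))
            beta = rng.multivariate_normal(mean, cov)
            draws[it] = beta
        return draws

in the full ordinal model, steps 5-8 above would be interleaved with these two updates in the same sweep.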
in table [table.infer1], we present the simulation results of simulation study 1 for and , including the estimated relative bias and the estimated relative efficiency. in general, it can be seen that the absolute bias obtained by the proposed model (bqol) when is much smaller than that of its competing models. in most cases, bqol was better than the other methods in terms of bias and relative efficiency. the results suggest that our method performs well compared to other approaches. we see that the bayesian quantile regression approach for ordinal data (bqror) performs poorly compared to the other methods, because it ignores the nature of the longitudinal data. we also see that as increases, the bayesian logistic ordinal regression (blor) for longitudinal data yields lower bias and more efficiency. in table [table.infer2], we present the simulation results of simulation study 1 for the bayesian quantile regression methods when and 0.75. again, in most cases, bqol was better than bqror in terms of bias and relative efficiency.

[table [table.infer2]: estimated relative bias and relative efficiency of the bayesian quantile regression methods (bqol and bqror) in simulation study 1 at and 0.75; numerical entries omitted.]

[table [table.infer3]: posterior means and standard deviations of the parameter estimates in simulation study 2; numerical entries omitted.]
the setup for this simulation study is the same as in simulation 1, except that we sampled the latent variable as follows: . this allows us to examine the performance of the proposed model in the presence of random effects. in this simulation study, we only consider the performance of the bayesian methods for longitudinal data with an ordinal outcome (bqol and blor). in table [table.infer3], we present the estimates of the parameters and , when and 0.25. from table [table.infer3] we can see that our approach tends to give less biased parameter estimates for and compared to blor. the convergence of the proposed gibbs sampling algorithm in this simulation study was monitored using the multivariate potential scale reduction factor (mpsrf) of (a univariate analogue is sketched at the end of this subsection). from figure ([fig1]), it can be observed that the mpsrf becomes stable and very close to 1 after about the first 5,000 iterations for each quantile level under consideration. hence, convergence to the posterior distribution was quick and the mixing was good. in this section, we consider a data set from the national institute of mental health schizophrenia collaborative (nimhsc) study, previously analysed by . the objective of this study is to assess treatment-related changes in illness severity over time. specifically, we studied item 79 (imps79o; severity of illness) of the inpatient multidimensional psychiatric scale. this item was measured on a seven-point scale.

[table [seven1]: the seven-point severity-of-illness scale for the nimhsc study; entries omitted.]

table [rd] lists the parameter estimates obtained using the bayesian methods (bqol and blor). the methods are assessed based on credible intervals and the deviance information criterion (dic).
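the univariate potential scale reduction factor mentioned above is easy to compute from parallel chains; the multivariate version combines the within- and between-chain covariance matrices in the same spirit. the formula below is the standard gelman-rubin diagnostic, shown only as an illustration:

    import numpy as np

    def psrf(chains):
        # chains: array of shape (m, n) = (number of chains, draws per chain)
        chains = np.asarray(chains, dtype=float)
        m, n = chains.shape
        chain_means = chains.mean(axis=1)
        w = chains.var(axis=1, ddof=1).mean()   # within-chain variance
        b = n * chain_means.var(ddof=1)         # between-chain variance
        v_hat = (n - 1) / n * w + b / n         # pooled variance estimate
        return np.sqrt(v_hat / w)               # values near 1 indicate convergence

    # toy usage: two chains targeting the same distribution
    rng = np.random.default_rng(2)
    print(psrf(rng.normal(size=(2, 5000))))     # close to 1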
clearly, it can be seen that the credible intervals (95% ci) obtained using bqol when are generally shorter than the credible intervals obtained using blor, suggesting an efficiency gain and stable estimation from the posterior distributions. in addition, the dic was computed for our model when and , as well as for blor, and the values were 3311.32, 3615.48 and 3417.39, respectively. hence, under , model comparison using the dic indicates that quantile ordinal models can give a better model fit than the bayesian logistic ordinal regression (blor) for longitudinal data. this shows that the model used for the errors in ([eq3]) is a working model with artificial assumptions, employed on the outcome variable to achieve the equivalence between maximizing the sld likelihood and the minimization problem in ([eq2]). bayesian quantile methods for estimating ordinal models with longitudinal data had not been proposed before; this paper fills this gap and presents a random effects ordinal quantile regression model for the analysis of longitudinal data with an ordinal outcome. an efficient gibbs sampling algorithm was derived for fitting the model to the data, based on a location-scale mixture representation of the skewed double exponential distribution. the proposed approach is illustrated using simulated data and a real data example. the results show that the proposed approach performs well. one of the most desirable features of the proposed method is its model robustness, in the sense that it makes very minimal assumptions on the form of the error distribution and is thus able to accommodate non-normal errors and outliers, which are common in many real-world applications. the full conditional distribution of each , denoted by , is proportional to . thus, we have

\begin{aligned}
p(v_{ij}\mid\cdot) &\propto v_{ij}^{-1/2}\exp\Big\{-\frac{1}{2}\Big[\frac{(l_{ij}-\boldsymbol{x}_{ij}'\boldsymbol{\beta}-\alpha_i)^2}{2}\,v_{ij}^{-1}+\Big(\frac{1}{2}\Big)v_{ij}\Big]\Big\}\\
&\propto v_{ij}^{-1/2}\exp\Big\{-\frac{1}{2}\big[\varrho_1^2\,v_{ij}^{-1}+\varrho_2^2\,v_{ij}\big]\Big\},
\end{aligned}

where and . thus, the full conditional distribution of each is a generalized inverse gaussian distribution gig , where and . recall that if gig , then the pdf of is given by , where and is the so-called "modified bessel function of the third kind". the full conditional distribution of each , denoted by , is proportional to , where is the vector excluding the element . thus, we have , where and ; then the full conditional distribution for is normal with mean and variance , where . the full conditional distribution of each , denoted by , is ; thus, the full conditional distribution of is gig , where and . the full conditional distribution of each , denoted by , is proportional to . thus, we have , where ; then the full conditional distribution for is normal with mean and variance , where . the full conditional distribution of , denoted by , is proportional to . thus, we have ; that is, the full conditional distribution of is an inverse gamma distribution. the full conditional distribution of each , denoted by , is proportional to . thus, we have ; that is, the full conditional distribution of is a truncated normal distribution.
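the gig density can be evaluated directly with the modified bessel function of the third kind (scipy calls it kv). the parameterization below, with density proportional to x**(p-1) * exp(-(a*x + b/x)/2), is one common convention and is stated here as an assumption:

    import numpy as np
    from scipy.special import kv

    def gig_pdf(x, p, a, b):
        # GIG(p, a, b) density:
        # ((a/b)**(p/2) / (2 K_p(sqrt(a*b)))) * x**(p-1) * exp(-(a*x + b/x)/2)
        norm = (a / b) ** (p / 2) / (2.0 * kv(p, np.sqrt(a * b)))
        return norm * x ** (p - 1) * np.exp(-(a * x + b / x) / 2.0)

    # sanity check: the density should integrate to one
    x = np.linspace(1e-6, 50, 200_000)
    print(np.trapz(gig_pdf(x, p=0.5, a=2.0, b=3.0), x))   # approximately 1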
finally, the full conditional posterior distribution of , denoted by , is proportional to . thus, we have . following and , the full conditional distribution of is ; that is, the full conditional distribution of is a uniform distribution. bottai, m., frongillo, e. a., sui, x., o'neill, j. r., mckeown, r. e., burns, t. l., liese, a. d., blair, s. n., and pate, r. r. (2014). "use of quantile regression to investigate the longitudinal association between physical activity and body mass index." , 22(5): e149-e156. lipsitz, s. r., fitzmaurice, g. m., molenberghs, g., and zhao, l. p. (1997). "quantile regression methods for longitudinal data with drop-outs: application to cd4 cell counts of patients infected with the human immunodeficiency virus." , 46(4): 463-476. montesinos-lópez, o. a., montesinos-lópez, a., crossa, j., burgueño, j., and eskridge, k. (2015). "genomic-enabled prediction of ordinal data with bayesian logistic ordinal regression." , 5(10): 2113-2126.
since the pioneering work by , quantile regression models and their applications have become increasingly popular and important for research in many areas. in this paper, a random effects ordinal quantile regression model is proposed for the analysis of longitudinal data with an ordinal outcome of interest. an efficient gibbs sampling algorithm is derived for fitting the model to the data, based on a location-scale mixture representation of the skewed double exponential distribution. the proposed approach is illustrated using simulated data and a real data example. this is the first work to discuss quantile regression for the analysis of longitudinal data with an ordinal outcome.
in part i, the double well potential problem (dwp) is defined by , where is an real symmetric matrix, is an real matrix, , and . by introducing a continuous variable transformation, the double well potential problem (dwp) can be transformed into the following equivalent quadratic program over one nonhomogeneous quadratic constraint (qp1qc): . the dual problem of (qp1qc) and the dual of the dual were studied in part i (theorem 1) in order to find a global minimum solution to problem (dwp). for practical applications, knowing only the global minimum of a double well potential function may not be sufficient. for example, the double well potential model can be used to describe ion-molecule reactions, where the intermediate molecule complexes must go across the energy barrier to cause reactions. researchers have to know the potential difference between the energy wells (caused by local minima) and the energy barrier (caused by a local maximum). an understanding of all types of critical points of a double well function is thus necessary. mathematically, we are motivated by the pioneering work of martínez, which showed that a trust-region subproblem (trs) of the following form (with being a positive scalar) has at most one local, but non-global, minimizer. notice that, on one hand, problem (qp1qc) can be regarded as an extension of problem (trs) towards the nonhomogeneous and possibly singular case. on the other hand, the penalty version of the trust-region subproblem, namely (with the penalty parameter being sufficiently large), is clearly a special case of the double well potential problem (dwp). therefore, our approach to analyzing the local non-global minimizer of a double well potential problem extends the results of . moreover, when restricted to problem (trs), our approach simplifies the proof provided in . although, in general, a double well potential problem may have infinitely many local, but non-global, minimizers (see figure 1), we will show that, after applying the space reduction technique developed in section 2 of part i, the reduced nonsingular problem has at most one local non-global minimizer and at most one local maximizer. [fig:0] we remark that characterizing the local maximizer of the trust-region subproblem ([tr:1])-([tr:2]) can be reduced to the problem of finding a local minimizer of ([tr:1]) with replaced by . however, due to the non-symmetric nature, this is no longer the case for the double well potential problem ([d-well]); hence martínez's approach may not be able to characterize the local maximizer for a general (dwp) problem.
in the rest of the paper , a characterization of the local , butnon - global , minimizer of a double well function is provided in section 2 .then , a characterization of the global minimizer of a double well function is given in section 3 , while the local maximizer is characterized in section 4 .computational algorithms for each type of the optimizers of a double well potential function are proposed in section 5 with some illustrative examples .some concluding remarks are given in section 6 .here we define some notations to be used throughout the paper .let be the set of all -dimensional symmetric real matrices , be the set of all -dimensional positive semi - definite matrices , and be the set of all n - dimensional positive definite matrices .for any , means that matrix and means that matrix .we sometimes write for and for .the smallest eigenvalue of is denoted by and the determinant of by .the -dimensional identity matrix is denoted by . for a vector , ( ) means that each component of is nonnegative ( positive ) and diag is an -dimensional diagonal matrix with diagonal components being .moreover , for a number , sign if , otherwise sign .following the space reduction technique developed in part i , without loss of generality , we may assume that , in problem ( bwp ) , is positive definite such that , matrices and are simultaneously diagonalizable via congruence , i.e. , there is a nonsingular matrix such that with and .it follows immediately that .let then we have and for simplicity , we define and .by dropping the constant terms , we can rewrite problem ( dwp ) defined in ( [ d - well ] ) as recall that the canonical primal problem defined in ( 19 ) of part i is to minimize the form in ( [ d - well:4 ] ) is a further simplified version of form ( [ d - well:19 ] ) by setting . in this way, the third order term in problem ( dwp ) is eliminated and the complexity is decreased for analysis .it s interesting to note that , in the finite deformation theory , the diagonal matrix represents the material constants , the first order coefficient vector stands for the external forces , and the cauchy - green strain measures the square of local changes in distance due to deformation . as we shall observe below , the first order and the second order necessary conditions of ( [ d - well:4 ] ) ( see ) are highly related to the term of , which is the sum of the cauchy - green strain and the material constants .our first result of lemma [ lem:2 ] will show that , at a local minimum of the double well potential function , the cauchy - green strain can not be too small , at least no smaller than the negative of the second smallest material constant .[ lem:1 ] assume that is a local minimizer of ( [ d - well:4 ] ) .it holds that [ lem:2 ] assume that and is a local minimizer of ( [ d - well:4 ] ) .it holds that furthermore , if , then suppose that the statement ( [ lem : eq ] ) is false , then .hence .let and . if , by the necessary condition ( [ con:2 ] ) , we have which causes a contradiction . 
on the other hand , if , then , by ( [ con:2 ] ) again , we have it again causes a contradiction .therefore , the statement ( [ lem : eq ] ) must be true .when , suppose that the statement ( [ lem : ineq ] ) is false , then we have by ( [ lem : eq_1 ] ) , we know that the second order necessary condition ( [ con:2 ] ) becomes since the first two leading principal minors of the matrix in ( [ x_2 ] ) are nonnegative , we have and +\left[\begin{array}{cc}\alpha_1-\alpha_2&0\\0&0 \end{array}\right ] \right\}= ( \alpha_1-\alpha_2)\underline{w}_2^{2}\ge 0.\ ] ] remember that , inequality ( [ x_0 ] ) implies that .moreover , inequality ( [ x1 ] ) implies that .together with ( [ con:1 ] ) , we obtain that and without loss of generality , we assume that , and hence .this implies , from ( [ lem : eq_1 ] ) and the fact that , we have consider the following parametric curve in : where , i.e. , passes through at .evaluating on , we have it is not difficult to see that is a local minimum point of since is a local minimizer of . however , this conclusion contradicts to the fact that therefore , the statement ( [ lem : ineq ] ) must be true , if .the next result lemma shows that any critical point of the double well potential function having a sufficiently large cauchy - green strain ( larger than the negative of all the material constants ) must be a global minimum point .[ lem:3 ] let be a critical point of the function in problem ( [ d - well:4 ] ) with . if then is a global minimizer of problem ( [ d - well:4 ] ) .in particular , a local minimizer of problem ( [ d - well:4 ] ) satisfying condition ( [ con:3 ] ) must be a global minimizer .define . by the assumption that , it follows that ] . using the second derivative of in ( [ con:2 ] ) , we have which shows that is indeed convex and any local optimum becomes globally optimal .\(ii ) if is a local minimizer and , then inequality ( [ lem : eq ] ) in lemma [ lem:2 ] and lemma [ lem:3 ] imply that is indeed the global minimizer of problem ( [ d - well:4 ] ) .\(iii ) if , then equation ( [ con:1 ] ) leads to either or . by property ( [ con:2 ] ), further implies that . using lemma [ lem:3 ] ,both cases lead to be a global minimizer .\(iv ) in this case , the secular function actually does not have any solution in its domain .the key result of establishing a necessary and sufficient condition for local , non - global minimizer is provided below .[ cor:0 ] the double well potential problem ( [ d - well:4 ] ) has a local - nonglobal minimizer if and only if there is a such that the secular function and .moreover , when it exists , the local non - global minimizer is given by suppose that with . for the defined by ( [ tw ] ) , we have and satisfies the first order necessary condition ( [ con:1 ] ) . moreover , the diagonal matrix is nonsingular with positive diagonal elements except for the first one . by weyl s inequality( see , theorem 4.3.1 ) , we can estimate the largest eigenvalues of the second order matrix by since , by ( [ determinant1 ] ) and ( [ determinant2 ] ) , we have where is defined in ( [ gamma:1 ] ) . combining ( [ new:1 ] ) with ( [ new:2 ] ) , we know that the smallest eigenvalue of the second order matrix must be positive , i.e. , this is a sufficient condition to guarantee that is a local minimizer of problem ( [ d - well:4 ] ) . on the hand side ,let be a local , non - global minimizer of problem ( [ d - well:4 ] ) , which is unique quaranteed by theorem [ theorem1 ] .let . 
by the proof of theorem [ theorem1 ] , we know .moreover , can be expressed by as in ( [ tw ] ) because satisfies the first order necessary condition ( [ con:1 ] ) .also we have and .it remains for us to show that .suppose that , by contradiction , .from ( [ new:2 ] ) , we have and thus there is a such that from ( [ new:000 ] ) , we can write then , implies that consider the double well potential function along the direction defined by .it is routine to verify that by the first order necessary condition ( [ con:1 ] ) , we have . by ( [ con:2 ] ) and ( [ new:000 ] ) , we further have . however , ( [ nzero ] ) implies that this result contradicts to the fact that is a local minimizer of problem ( [ d - well:4 ] ) .therefore , and the proof is complete .in this section , we try to characterize different aspects of the global minimizer of the double well potential problem .we first observe that the double well potential function tends to as .therefore , the global minimizer of problem ( [ d - well:4 ] ) always exists .our first result is that each component of the global minimizer must be of the same sign as the corresponding component of the external force ( i.e. , the first - order term vector ) .[ lem:4 ] if is the global minimizer of ( [ d - well:4 ] ) , then .\ ] ] let .since the only odd - order term in is the linear term , we have hence we know .a similar argument applies for any other components .the next result shows that the sufficient condition in lemma [ lem:3 ] is indeed necessary for a critical point to become the global minimizer .[ thm : suf - nec ] is a global minimizer of ( [ d - well:4 ] ) if and only if and the sufficiency is clear from lemma [ lem:3 ] .in addition , we can observe that the necessity of ( [ glob : s2 ] ) follows immediately from equation ( [ con:1 ] ) .it remains to show that ( [ glob : s1 ] ) is also a necessary condition . to avoid triviality, we may assume that . otherwise , by substituting into ( [ lem : eq ] ) , we can obtain the result at once .suppose that then ( [ con:2 ] ) implies that hence we have . using ( [ glob : s2 ] ) , we have this causes a contradiction to lemma [ lem:4 ] and the proof follows .an immediate consequence of theorem [ thm : suf - nec ] is that the sign of the first component of the local non - global minimizer , if it exists , must be opposite to that of the first component of a global minimum solution .[ cor:3 ] if be the local non - global minimizer and is a global minimizer of of problem ( [ d - well:4 ] ) , then since both and are critical points , theorem [ thm : suf - nec ] implies that and .it follows from condition ( iii ) of corollary [ cor:1 ] that and , from ( [ con:1 ] ) , consequently , in section 4 of part i , we have shown that the dual of the dual of the canonical primal problem ( p ) ( see equation ( 19 ) of part i ) is equivalent to only a portion of ( p ) subject to linear constraints ( see equation ( 35 ) of part i ) .moreover , that portion contains the global minimizer . in the simplified version here , we have the third order term coefficient , which reduces the dual of the dual problem in part i to the following problem : the portion of ( p ) corresponding to ( [ dualofdual ] ) becomes under the nonlinear one - to - one map : from lemma [ lem:4 ] , we know that the portion specified by ( [ sdcdwpconstraint ] ) contains the global minimizer . 
however , due to the opposite sign behavior on the first component , corollary [ cor:3 ] implies that the local non - global minimizer is not in that portion .the mapping ( [ x - y ] ) was used to reveal the hidden convexity of ( qp1qc ) in part i , but the local non - global minimizer is definitely excluded from the transformation .the missing of the local non - global minimizer can been seen clearly in examples 1 and 2 of part i.it is not difficult to see that the global maximum of problem ( [ d - well:4 ] ) goes to as grows without a bound .hence there is no global maximizer of the problem . in this section ,we provide an analytic study of the local maximizer of the simplified problem ( [ d - well:4 ] ) .[ lem:1n ] if is a local maximizer of ( [ d - well:4 ] ) , then the proof is easy .moreover , it follows directly from ( [ con:2n ] ) that in other words , at the local maximizer , the value of the cauchy - green strain is smaller than the negative value of all material constants .[ max : re ] if is a local maximizer of ( [ d - well:4 ] ) , then it follows from ( [ con:1n ] ) that if , then . on the other hand ,if , it implies from that either or ( or both ) .suppose that .it follows from ( [ con:2n ] ) that therefore , must be also , and the proof follows .[ mc:1 ] if , then the double well potential problem ( [ d - well:4 ] ) has no local maximizer .if , then ( [ a2 ] ) can not be true and we have the conclusion .now , assume that .if ( [ d - well:4 ] ) has a local maximizer , then it follows from ( [ a2 ] ) that or , equivalently , by lemma [ max : re ] , we have .it is routine to verify that since and , we have notice that consequently , is not a local maximizer and we reached a contradiction .this completes the proof .[ mc:2 ] if and , then the double well potential problem ( [ d - well:4 ] ) has a unique local maximizer .since and , we have therefore , is a local maximizer of problem ( [ d - well:4 ] ) .lemma [ max : re ] further guarantees that is the unique local maximizer .[ mc:3 ] if and , then the double well potential problem ( [ d - well:4 ] ) has at most one local maximizer .suppose is a local maximizer of ( [ d - well:4 ] ) .since , we let ] .suppose that , similar to ( [ gw_n ] ) , we can verify that and this is a contradiction to the fact that being a local maximizer of probem ( [ d - well:4 ] ) .therefore , .next , we show that , if not so , we consider by ( [ det:41 ] ) , we have consequently , matrix is singular and there exists a such that equivalently , since , we know define then , we have similar to the proof of theorem [ cor:0 ] , we can consider .it follows that since satisfies the first order necessary condition ( [ con:1n ] ) , . by ( [ second-0 ] ), we have .moreover , ( [ nzero:1 ] ) implies that consequently , is not a local maximizer of and is not a local maximizer , which causes a contradiction .therefore , we know .this completes the proof .the next result shows that , when it exists , the unique local maximizer is `` surrounded '' by all local ( non - global and global ) minimizers .[ theorem4 ] if is a local minimizer and is the local maximizer of the double well potential problem ( [ d - well:4 ] ) , then \(i ) : if , following lemma [ lem:2 ] and ( [ a2 ] ) , we have otherwise , and we assume that . 
applying lemma [ lem:2 ] and ( [ a2 ] ) again , we have since both and are critical points of , lemma [ lem:3 ] implies that both of them are global minimizers , which is impossible .therefore , .\(ii ) : if , then the first order necessary condition ( [ con:1n ] ) implies that either or . since can not be a global minimizer , the latter case is eliminated and thus . to prove ( [ relat ] ) , it is sufficient to show that .suppose that , then the second order necessary condition ( [ con:2 ] ) implies that by corollary [ cor:1 ] ( i ) , is convex and hence the local maximizer does not exist , which causes a contradiction to the setting of the theorem .if , then for any local minimizer or maximizer . therefore , and are two solutions to the following equation : from the proofs of theorem [ theorem1 ] and theorem [ theorem2 ] , we have since is strictly convex , it has two distinct solutions satisfying the above first order conditions only when this completes the proof .according to corollaries [ cor:0 ] and [ cor:00 ] , the local , non - global minimizer and the local maximizer of the simplified version of ( [ d - well:4 ] ) , if they exist , are closely related to the convex secular function over different intervals .the convex secular function is a convenient substitute for the first order necessary condition , while the intervals capturing the root of reflect the second order necessary condition .the sign of the first derivative of at the root provides necessary and sufficient conditions for the type of a local extremum , namely , positive sign for the local , but non - global , minimizer ; negative sign for the local maximizer .the necessary and sufficient condition for the global minimum in theorem [ thm : suf - nec ] can also be expressed in terms of the secular function . from ( [ glob : s1 ] ) , we have . if , ( [ glob : s2 ] ) implies that .moreover , by ( [ a4 ] ) , it implies that is monotonically decreasing on and the unique root must recover . otherwise ,if , the secular function is singular at and there could be multiple global minimum solutions . in this case , let be the index such that .the first order necessary condition can be solved by letting , where denotes the moore - penrose generalized inverse ; are free parameters and is the -th column of .then , we can establish the following _ generalized secular equation _ from which we try to find solution(s ) .since the vector is perpendicular to each vector of , we have if and , there are exactly two global optimal solutions .if and , there are infinitely many global solutions which form a -dimensional sphere .the result coincides with theorem 1 of part i. if , the optimal solution set degenerates to a singleton . in summary, we provide three algorithms for finding the global minimizers , local non - global minimizer and local maximizer , respectively .[ alg1 ] ( finding global minimizers ) + * solve the equation of one variable ^{-1 } \psi \|^2-t=0,~ t\in ( 2\nu-2\alpha_1,\infty).\ ] ] if there is a solution , stop ! the unique global minimizer of the double well potential problem ( [ d - well:4 ] ) is ^{-1}\psi.\ ] ] otherwise , go to step 2 . *if and =1 , solve equation ( [ secular - eq : root ] ) for at most two solutions : if , the double well potential problem ( [ d - well:4 ] ) has exactly two global minimizers of the form if , is the unique global minimizer . 
* if , the double well potential problem ([d-well:4]) has one or infinitely many global minimizers: , where are obtained by solving ([secular-eq:root]). if , is the unique optimal solution. otherwise, the global optimal solutions form a sphere centered at with radius . [alg2] (finding the local non-global minimizer) solve the equation \| [\,\cdot\,]^{-1}\psi \|^2 - t = 0, ~ t \in (\max\{2\nu-2\alpha_2, 0\},\, 2\nu-2\alpha_1). if there is a solution such that , the unique local non-global minimizer of the double well potential problem ([d-well:4]) is [\,\cdot\,]^{-1}\psi. otherwise, declare that there is no local non-global minimizer. [alg3] (finding the local maximizer) * if , declare that there is no local maximizer. if and , then is the unique local maximizer. otherwise, go to step 2. * solve the equation \| [\,\cdot\,]^{+}\psi \|^2 - t = 0, ~ t \in [0,\, 2\nu-2\alpha_n). if there is a solution such that , then the unique local maximizer of the double well potential problem ([d-well:4]) is [\,\cdot\,]^{+}\psi. otherwise, declare that there is no local maximizer. notice that each of the above three algorithms runs in polynomial time, since the main computation involved is to solve a secular equation in one variable (a generic root-finding sketch is given after example 2 below). to illustrate their numerical behavior, we use the same data sets as in the three examples of part i of this paper and apply the space reduction of section 2 to convert the testing problems into the format of ([d-well:4]). (example 1 of part i:) let ; the double well potential problem becomes , and the corresponding function is shown in figure 2. in this example, there are one global minimizer, one local non-global minimizer and one local maximizer. the secular function is shown in figure 3. by finding the root of ([m:1]) in , algorithm [alg1] provides a solution , and we find the global minimizer with the value of . for the local non-global minimizer, we apply algorithm [alg2] to find the root of ([m:1]) in . algorithm [alg2] returned with , which shows that the local non-global minimizer is with the value of . as for the local maximizer, by finding the root of ([m:1]) in , algorithm [alg3] returned with . this led to the local maximizer with the value of . notice that the signs of the two minimizers, and , are different, which demonstrates corollary [cor:3]. the numerical results also show that the local maximizer lies between the two minimizers, as claimed by theorem [theorem4].

[figure 2 (fig:1): the double well potential function of example 1. figure 3 (fig:1:2): its secular function.]

(example 2 of part i:) applying the space reduction technique, we obtain the double well potential problem in the format of ([d-well:4]) with the data and \psi = \left[\begin{array}{c} -22.0487\\ -502.0209 \end{array}\right]. the corresponding function and its contour are shown in figure 4. its secular function becomes (shown in figure 5). finding the root of ([m:2]) on , algorithm [alg1] gives the global minimizer with the value of . notice that the signs of the first components of the two minimizers are different, which demonstrates corollary [cor:3]. finally, since , algorithm [alg3] says that there is no local maximizer for this example.
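all three algorithms reduce to finding the root of a one-variable secular function on a known interval, so a safeguarded bisection suffices. the concrete secular function below is only a stand-in: the form \|(\operatorname{diag}(\alpha)+(t/2-\nu)i)^{-1}\psi\|^2 - t is suggested by the interval bounds above and should be read as an assumption, not as the paper's exact formula:

    import numpy as np

    def bisect_root(g, lo, hi, tol=1e-12, max_iter=200):
        # plain bisection for continuous g with g(lo) > 0 > g(hi)
        for _ in range(max_iter):
            mid = 0.5 * (lo + hi)
            if hi - lo < tol:
                break
            if g(mid) > 0.0:
                lo = mid
            else:
                hi = mid
        return 0.5 * (lo + hi)

    alpha = np.array([-2.0, 1.0, 3.0])   # illustrative material constants, sorted
    psi = np.array([1.5, -0.7, 2.0])     # illustrative first-order coefficients
    nu = 0.5

    def h(t):
        # assumed secular function shape (see lead-in)
        d = alpha + (t / 2.0 - nu)
        return np.sum((psi / d) ** 2) - t

    lo = 2 * nu - 2 * alpha[0] + 1e-9    # algorithm 1 interval: (2nu - 2alpha_1, inf)
    hi = lo + 1.0
    while h(hi) > 0.0:                   # expand until the root is bracketed
        hi *= 2.0
    t_star = bisect_root(h, lo, hi)
    w_star = psi / (alpha + (t_star / 2.0 - nu))   # candidate global minimizer

near the left endpoint the function blows up to +infinity and for large t it tends to -infinity, so the bracket always exists in the global-minimizer case.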
[figure 4 (fig:2): the double well potential function of example 2 and its contour. figure 5 (fig:2:2): its secular function.]

(mexican hat example) in this example, the problem is already in the format of ([d-well:4]) with , and \psi = \left[\begin{array}{c} 0\\ 0 \end{array}\right]. the graph of the mexican hat function and its contour are shown in figure 6. since and , the secular function has a unique solution, and it becomes singular at . algorithm [alg1] stopped at step 3 and claimed that is the set of global optimal solutions, with the optimal value of . since , algorithm [alg2] returned the answer that there is no local non-global minimizer. it is clear that ([m:3]) has a unique root on , and . since , algorithm [alg3] returned the unique local maximizer .

[figure 6 (fig:3): the mexican hat function of example 3 and its contour.]

in this paper we have characterized the local minimizers and maximizers of the double well potential problem. by analyzing the first and second order necessary conditions, and through the study of the corresponding secular functions, we are able to estimate the number of local optimizers and to locate each of them. moreover, the convex secular functions (equations) are used to characterize sufficient and necessary conditions for all types of optimizers, with explicit computational algorithms developed for finding them. the (dwp) problem is a special case of the more general quadratic programming problem with one quadratic constraint (qp1qc). we expect that the analytical techniques developed in this paper can be extended to study (qp1qc) and other quadratic programming problems. this research was undertaken while y. xia visited the national cheng kung university, tainan, taiwan. sheu's research work was sponsored partially by taiwan nsc 98-2115-m-006-010-my2 and by the national center for theoretical sciences (the southern branch). xia's research work was supported by the national natural science foundation of china under grants 11001006 and 91130019/a011702, and by the fund of the state key laboratory of software development environment under grant sklsde-2013zx-13. xing's research work was supported by nsfc no.
in contrast to taking the dual approach for finding a global minimum solution of a double well potential function, in part ii of the paper we characterize the local minimizer, local maximizer, and global minimizer directly from the primal side. it is proven that, for a "nonsingular" double well function, there exists at most one local, but non-global, minimizer and at most one local maximizer. moreover, when it exists, the local maximizer is "surrounded" by local minimizers in the sense that the norm of the local maximizer is strictly less than that of any local minimizer. we also establish necessary and sufficient optimality conditions for the global minimizer, the local non-global minimizer and the local maximizer by studying a convex secular function over specific intervals. these conditions lead to three algorithms for identifying the different types of critical points of a given double well function. * keywords *: double well potential, local minimizer, local maximizer, global minimum.
with xml being the de facto standard for business and web data representation and exchange, the storage and querying of large xml data collections is recognized as an important and challenging research problem. a number of xml databases have been developed to serve as solutions to this problem. while xml databases can employ various storage models, such as the relational model or a native xml tree model, they support the standard xml query languages xpath and xquery. in general, an xml query specifies which nodes in an xml tree need to be retrieved. once an xml tree is stored in an xml database, a query over this tree usually requires two steps: (1) finding the specified nodes, if any, in the xml tree and (2) reconstructing and returning the xml subtrees rooted at the found nodes as the query result. the second step is called xml subtree reconstruction and may have a significant impact on query response time. one approach to minimizing xml subtree reconstruction time is to cache xml subtrees rooted at frequently accessed nodes, as illustrated in the following example. consider the xml tree in figure [fig:xmltree] that describes a sample bookstore inventory. the tree nodes correspond to xml elements, e.g., _bookstore_ and _book_, and data values, e.g., _"arthur"_ and _"bernstein"_, and the edges represent parent-child relationships among nodes; e.g., all the _book_ elements are children of _bookstore_. in addition, each element node is assigned a unique identifier that is shown next to the node in the figure. as an example, in figure [fig:relation], we show how this xml tree can be stored in a single table in an rdbms using the edge approach. the edge table stores each xml element as a separate tuple that includes the element id, the id of its parent, the element name, and the element data content. a sample query over this xml tree that retrieves books with the title "database systems" can be expressed in xpath as: . this query can be translated into relational algebra or sql over the edge table to retrieve the ids of the _book_ elements that satisfy the condition: , where , , and are aliases of table . for the edge table in figure [fig:relation], the relational algebra query returns id "2", which uniquely identifies the first _book_ element in the tree. however, to retrieve useful information about the book, the query evaluator must further retrieve all the descendants of the _book_ node and reconstruct their parent-child relationships into an xml subtree rooted at this node; this requires additional self-joins of the edge table and a reconstruction algorithm, such as the one proposed in . instead, to avoid expensive xml subtree reconstruction, the subtree can be explicitly stored in the database as an xml reconstruction view (see figure [fig:view]). this materialized view can be used for the above xpath query or any other query that needs to reconstruct and return the _book_ node (with id "2") or its descendants. in this work, we study the problem of selecting xml reconstruction views to materialize: given a set of xml elements from an xml database, their access frequencies (aka workload), a set of ancestor-descendant relationships among these elements, and a storage capacity , find a set of elements from whose xml subtrees should be materialized as reconstruction views, such that their combined size is no larger than .
to the best of our knowledge, our solution to this problem is the first one proposed in the literature. our main contributions and the paper organization are as follows. in section [sec:rw], we discuss related work. in section [sec:problem], we formally define the xml reconstruction view selection problem. in sections [sec:complexity] and [sec:fptas], we prove that the problem is np-hard and describe a fully polynomial-time approximation scheme (fptas) for the problem. we conclude the paper and list future work directions in section [sec:conclude]. we studied the xml subtree reconstruction problem in the context of a relational storage of xml documents in , where several algorithms were proposed. given an xml element returned by an xml query, our algorithms retrieve all its descendants from a database and reconstruct their relationships into an xml subtree that is returned as the query result. to the best of our knowledge, there has been no previous work on materializing reconstruction views or on xml reconstruction view selection. materialized views have been successfully used for query optimization in xml databases. these research works rewrite an xml query such that it can be answered either using only the available materialized views, if possible, or by accessing both the database and the materialized views. view maintenance in xml databases has been studied in . there has been only one recent work on materialized view selection in the context of xml databases. in , the problem is defined as: find views over xml data, given xml databases, storage space, and a set of queries, such that the combined view size does not exceed the storage space. the proposed solution produces minimal xml views as candidates for the given query workload, organizes them into a graph, and uses two view selection strategies to choose views to materialize. this approach makes the assumption that views are used to answer xml queries completely (not partially), without accessing the underlying xml database. the xml reconstruction view problem studied in our work focuses on a different aspect of xml query processing: it finds views to materialize based on how frequently an xml element needs to be reconstructed. however, xml reconstruction views can be used in a complementary way for query answering, if desired. finally, the materialized view selection problem has been extensively studied in data warehouses and distributed databases. these research results are hardly applicable to xml tree structures, and in particular to subtree reconstruction, which is not required for data warehouses or relational databases. in this section, we formally define the xml reconstruction view selection problem addressed in our work. * problem formulation. * given xml elements , , and an ancestor-descendant relationship over such that if , then is an ancestor of , let be the access cost of accessing an unmaterialized , and let be the access cost of accessing a materialized . we have , since the reconstruction of takes time. we use to denote the memory capacity required to store a materialized xml element , with for any . we are also given a workload characterized by , representing the access frequency of . the _xml reconstruction view selection problem_ is to select a set of elements from to be materialized so as to minimize the total access cost under the disk capacity constraint , where if or for some ancestor of , and otherwise.
here, denotes the available memory capacity, . next, let denote the cost saving achieved by materialization; then one can show that the function is minimized if and only if the following function is maximized under the disk capacity constraint , where represents all the materialized xml elements and their descendant elements in ; it is defined as . in this section, we prove that the xml reconstruction view selection problem is np-hard. first, the maximization problem is changed into the equivalent decision problem. * equivalent decision problem. * given , , , , and as defined in section [sec:problem], let denote the cost saving goal, . is there a subset such that and , where represents all the materialized xml elements and their descendant elements in , defined as ? in order to study this problem in a convenient model, we consider the following simplified version. the input is a tree , in which every node has a size and a profit , and a size limitation . the target is to find a set of subtrees rooted at nodes such that , and is maximal. furthermore, there is no overlap between any two subtrees selected in the solution. we prove that the decision problem of xml reconstruction view selection is np-hard, by constructing a polynomial-time reduction from _knapsack_ to it. [theoremnpcomplete] the decision problem of xml reconstruction view selection is np-complete. it is straightforward to verify that the problem is in np. we restrict the problem to the well-known np-complete problem _knapsack_ by allowing only problem instances in which: . assume that a knapsack problem has input and parameters and . we need to determine a subset such that and . build a binary tree with exactly leaves. let leaf have profit and size . furthermore, each internal (non-leaf) node has size and profit . clearly, no solution can contain any internal node, due to the size limitation; we can only select a subset of leaves. this is equivalent to the knapsack problem. finally, we state the np-hardness of the xml reconstruction view selection problem. the xml reconstruction view selection problem is np-hard. this follows from theorem [theoremnpcomplete], since the equivalent decision problem is np-complete. we assume that each parameter is an integer. the input is xml elements, represented by a tree , where each edge in shows the relationship between a pair of parent and child nodes. we follow a divide and conquer approach to develop a fully polynomial-time approximation scheme. given a tree with root , it has subtrees derived from the children of . we find a set of approximate solutions among and another set of approximate solutions among . we merge the two sets of approximate solutions to obtain the solution for the union of subtrees. we then add one more solution, derived by selecting the root of . we group those solutions into parts such that each part contains all solutions with similar , and prune them by selecting from each part the one with the least size. this reduces the number of solutions to be bounded by a polynomial. we will use a list to represent the selection of elements from . for a list of elements , define , and . define to be the largest product of the node degrees along a path from the root to a leaf in the tree . assume that is a small constant with . we need an approximation. we maintain a list of solutions , where is a list of elements in . let with . let and . partition the interval ] and ], where . two lists and are in the same region if there exist such that both and are .
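as a ground-truth check for small instances of the simplified problem above (useful later for validating the approximation scheme), one can enumerate all feasible selections of non-overlapping subtree roots directly; all names and the toy instance are illustrative:

    from itertools import combinations

    def descendants(children, v):
        # all nodes in the subtree rooted at v (including v)
        out = {v}
        for c in children.get(v, []):
            out |= descendants(children, c)
        return out

    def brute_force(children, profit, size, capacity):
        nodes = list(profit)
        sub = {v: descendants(children, v) for v in nodes}
        tot_p = {v: sum(profit[u] for u in sub[v]) for v in nodes}
        tot_s = {v: sum(size[u] for u in sub[v]) for v in nodes}
        best = 0
        for r in range(len(nodes) + 1):
            for roots in combinations(nodes, r):
                # feasibility: subtrees must be disjoint and fit the capacity
                if any(a != b and a in sub[b] for a in roots for b in roots):
                    continue
                if sum(tot_s[v] for v in roots) > capacity:
                    continue
                best = max(best, sum(tot_p[v] for v in roots))
        return best

    # toy tree: node 1 is the root with children 2 and 3
    children = {1: [2, 3]}
    profit = {1: 1, 2: 4, 3: 5}
    size = {1: 1, 2: 3, 3: 3}
    print(brute_force(children, profit, size, capacity=6))   # 9: subtrees at 2 and 3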
for two lists of partial solutions and , their link is .

* prune * ( ) input: is a list of partial solutions; partition into parts such that two lists and are in the same part if and are in the same region. for each , select such that is the least among all in . * end of prune *

* merge * ( ) input: and are two lists of solutions. let ; for each and each , append their link to ; return . * end of merge *

* union * ( ) input: are lists of solutions. if , then return ; otherwise return prune(merge(union , union )). * end of union *

* sketch * ( ) input: is a set of elements organized according to their tree. if only contains one element, return the list with two solutions. partition the list into subtrees according to the children of the root. let be the list that only contains the solution . for to , let = sketch( ); return union . * end of sketch *

for a list of elements of the tree , the combined solution ] is the link \circ \cdots \circ p[j_k] ] and . let , and let be the solution in the same region as that has the least . therefore, since and , we also have . [lemma-1] assume that is an arbitrary solution for the problem with tree . for ( ), there exists a solution in the list such that and . we prove this by induction. the basis at is trivial. we assume that the claim is true for all . now assume that and that has children which induce subtrees . let ] and ))$]. let . by lemma [lemma-union], there exists such that

\begin{aligned}
&\le f^{\log \chi(j)}\,\lambda(p[j_1,\cdots,j_{k}]) = f^{\log \chi(j)}\,\lambda(p)
\end{aligned}

and

\begin{aligned}
&= \mu(p).
\end{aligned}

[time-lemma] assume that . then the computational time for sketch( ) is , where is the number of nodes in . the number of intervals is ; therefore the list for each ( ) is of length . let be the time for union ; it satisfies the recursion , which yields the stated solution. let be the computational time for prune( ), and denote by the number of edges in . we prove by induction that for some constant . we select a constant large enough so that merging two lists takes steps. with this, the claimed bound follows, which completes the argument.
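to make the merge-and-prune mechanics concrete, here is a small self-contained sketch over (profit, size) pairs; the geometric profit grid with ratio (1 + epsilon) stands in for the paper's interval partition, and all names are illustrative:

    import math

    def merge(list_a, list_b):
        # link every partial solution in list_a with every one in list_b
        return [(pa + pb, sa + sb) for (pa, sa) in list_a for (pb, sb) in list_b]

    def prune(solutions, eps):
        # keep, for each geometric profit interval [(1+eps)^k, (1+eps)^(k+1)),
        # only the solution of least size
        best = {}
        for profit, sz in solutions:
            key = -1 if profit <= 0 else int(math.log(profit, 1.0 + eps))
            if key not in best or sz < best[key][1]:
                best[key] = (profit, sz)
        return list(best.values())

    def union(lists, eps):
        acc = [(0, 0)]   # the empty selection
        for lst in lists:
            acc = prune(merge(acc, lst), eps)
        return acc

    def totals(children, profit, size, v):
        # profit and size of the entire subtree rooted at v
        p, s = profit[v], size[v]
        for c in children.get(v, []):
            cp, cs = totals(children, profit, size, c)
            p, s = p + cp, s + cs
        return p, s

    def sketch(children, profit, size, v, eps):
        # solutions for the subtree rooted at v: either recurse into the
        # children independently, or materialize the whole subtree at v
        kids = children.get(v, [])
        if not kids:
            return [(0, 0), (profit[v], size[v])]
        combined = union([sketch(children, profit, size, c, eps) for c in kids], eps)
        return prune(combined + [totals(children, profit, size, v)], eps)

    def approximate(children, profit, size, root, capacity, eps):
        feas = [p for (p, s) in sketch(children, profit, size, root, eps) if s <= capacity]
        return max(feas, default=0)

on the toy instance from the brute-force sketch above, approximate(children, profit, size, 1, 6, 0.1) returns 9, matching the exact answer; the pruning only loses a (1 + eps) factor per tree level, mirroring the lemmas.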
furthermore , .the computational time follows from lemma [ time - lemma ] .it is easy to see that .we have the following corollary .for any instance of of an tree with elements , there exists an time approximation scheme , where .in this section , we show an approximation scheme for the problem with an input of multiple trees .the input is a series of trees .for any instance of of an tree with elements , there exists an time approximation scheme , where and is a tree via connecting all into a single tree under a common root .build a new tree with a new node such that are the subtrees under .apply the algorithm in in theorem [ main - theorem ] .in this work , we studied the problem of xml reconstruction view selection that promises to improve query evaluation in xml databases .we were first to formally define this problem : given a set of xml elements from an xml database , their access frequencies ( aka workload ) , a set of ancestor - descendant relationships among these elements , and a storage capacity , find a set of elements from , whose xml subtrees should be materialized as reconstruction views , such that their combined size is no larger than .next , we showed that the xml reconstruction view selection problem is np - hard . finally , we proposed a fully polynomial - time approximation scheme ( fptas ) that can be used to solve the problem in practice .future work for our research includes two main directions : ( 1 ) an extension of the proposed solution to support multiple xml trees and ( 2 ) an implementation and performance study of our framework in an existing xml database .
query evaluation in an xml database requires reconstructing xml subtrees rooted at nodes found by an xml query . since xml subtree reconstruction can be expensive , one approach to improving query response time is to use reconstruction views , i.e. , materialized xml subtrees of an xml document whose nodes are frequently accessed by xml queries . for this approach to be efficient , the principal requirement is a framework for view selection . in this work , we are the first to formalize and study the problem of xml reconstruction view selection . the input is a tree , in which every node has a size and a profit , and a size limitation . the target is to find a subset of subtrees rooted at nodes respectively such that , and is maximal . furthermore , there is no overlap between any two subtrees selected in the solution . we prove that this problem is np-hard and present a fully polynomial-time approximation scheme ( fptas ) as a solution .
advances in detector technology have led to the construction of time-of-flight ( tof ) positron emission tomography ( pet ) scanners , resulting in an enhanced signal-to-noise ratio . the goal of this work is to improve the time resolution of scintillation detectors for pet , through a better understanding of photon propagation inside scintillators and of the influence of the cherenkov effect . the time resolution of scintillation detectors depends on several factors . one can distinguish between the time resolution of the scintillator , the photon detector and the readout electronics . the magnitude of the time resolution of the scintillator itself has two major origins : statistical processes of the scintillation ( ) and the photon propagation from the point of interaction to the photon detector ( ) . the statistical processes are in principle influenced by scintillation rise- and decay-times and the light yield . the photon propagation process is influenced by factors such as the refractive index , the scintillator geometry and the surface finishing . cherenkov photons in scintillators are emitted by electrons , ionised by incident 511kev photons , that propagate faster than the speed of light in the scintillator , where _n_ denotes the refractive index . making use of the cherenkov effect is very promising for tof-pet detectors , as the time spread of this process is smaller than that of scintillation in inorganic materials . the direction of the cherenkov photons can be described as a cone relative to the electron motion . the opening angle is determined by the electron velocity and the refractive index of the crystal . since the electron can be scattered in any direction , the direction of the photons is quasi-random . the number of emitted cherenkov photons for lso : ce is about 15 per 511kev photon . monte carlo simulations were performed to investigate the impact of photon propagation and the cherenkov effect inside scintillators on the time resolution . these simulations are described in the following sections . for the simulations , geant4 was used . lso : ce was chosen as the scintillator material , since this is a common scintillating material for pet . the optical properties , such as the refractive index and the transmission spectrum , were taken from . for the light yield , decay and rise time , typical values of photons / mev , 40ns and 100ps were chosen , respectively . in order to evaluate only the effect of photon production and photon propagation , the detection efficiency of the photon detectors was set to 100% over all wavelengths . instead of simulating a positron source , photons with energies of 511kev were generated in a point source and emitted into a defined direction . the photon detectors recorded the arrival time , wavelength and creation process of the photons . compton scattered events were discriminated . besides photon statistics , the coincidence time resolution ( ctr ) of pet-like detector systems is influenced by variations of the depth of interaction ( doi ) in the opposing scintillators , resulting in differences of the photon propagation lengths to the photon detectors . this effect can be reduced by shortening the crystals , which , however , leads to a decreasing detection efficiency for 511kev photons .
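the cherenkov kinematics stated above can be checked with a few lines of python ; the refractive index n = 1.82 used here is a typical literature value for lso : ce assumed for illustration , not a number taken from this work .

```python
# cherenkov threshold and cone angle for electrons in an lso-like medium:
# emission requires beta > 1/n, and cos(theta_c) = 1/(n*beta).
import math

M_E_KEV = 511.0      # electron rest energy in kev
N_LSO = 1.82         # assumed refractive index of lso:ce

def beta(kinetic_kev):
    gamma = 1.0 + kinetic_kev / M_E_KEV
    return math.sqrt(1.0 - 1.0 / gamma**2)

def cherenkov_angle_deg(kinetic_kev, n=N_LSO):
    b = beta(kinetic_kev)
    if n * b <= 1.0:
        return None                      # below threshold: no emission
    return math.degrees(math.acos(1.0 / (n * b)))

# threshold kinetic energy: beta = 1/n  =>  gamma = 1/sqrt(1 - 1/n**2)
gamma_thr = 1.0 / math.sqrt(1.0 - 1.0 / N_LSO**2)
print("threshold ~ %.0f kev" % ((gamma_thr - 1.0) * M_E_KEV))
print("cone angle at 400 kev ~ %.1f deg" % cherenkov_angle_deg(400.0))
```

with these assumptions the threshold comes out near 100 kev , well below the energies of photoelectrons from 511kev photons , which is why cherenkov emission is available at all in this setting .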
to evaluate the coincidence time resolution , a basic coincidence setup with two finger-like scintillators , each connected to one photon detector , was simulated , see figure [ fig : coincidencesetup ] ( a ) . the crystal sizes were with z ranging from 1 mm to 30 mm . in figure [ fig : ctr_lso ] , the simulated ctr is shown for various crystal lengths . two curves are plotted , one representing the detection of scintillation photons only , the other showing the improvement in time resolution if the detection of cherenkov photons is included . for both lines the dependency on the crystal length is clearly visible . for crystal lengths from 1 mm to 30 mm the ctr ranges from 32ps to 144ps fwhm for scintillation and from 12ps to 125ps fwhm when the cherenkov effect is included , respectively . the reason for this behaviour is the decreasing localisation of the 511kev photon interaction inside both of the crystals , simply due to the increasing size of the crystals . this uncertainty of localisation causes an increasing time spread as a function of the crystal length . the impact of the cherenkov effect on the ctr is clearly visible in figure [ fig : ctr_lso ] . although the time resolution improves with decreasing crystal length , longer crystals , providing reasonable sensitivity to the 511kev photons , are used for real pet systems . the simulated detection efficiency of coincidences ranges from 2% , for mm , to above 50% for mm . for real tof-pet scanners a trade-off between time resolution and sensitivity has to be made . in the following , the impact of the doi on the photon arrival rates and the photon output of the scintillator at the photon detector will be discussed . for these simulations , the setup of figure [ fig : coincidencesetup ] ( b ) was used . the simulated crystal , of size , was connected to a photon detector . a source of 511kev photons was placed at the side of the scintillator and the distance _d_ of the source relative to the photon detector was varied over the whole crystal length from 0 mm to 30 mm . since the distance _d_ is known , the doi is determined . the simulated photon arrival rates at the photon detector are shown for three distances _d_ in figure [ fig : arrivaltimes ] . for the photon arrival rates coming from the scintillation process , a fast rise can be seen at early times , followed by an intermediate plateau , until a second , smaller and slower increase of the photon rate becomes visible . the width of the plateau is directly related to the doi of the penetrating 511kev photons and vanishes for dois reaching the length of the scintillator . the reason originates in the isotropic emission of scintillation photons . the photons emitted towards the photon detector form the first rise in the number of scintillation photons and the consecutive plateau . the second rise is caused by photons emitted away from the photon detector , which are reflected at the end of the scintillator and reach the photon detector with a delay depending on their travel path . for the cherenkov photons this effect is more obvious , since the duration of the cherenkov process is shorter compared to the scintillation process . the cherenkov photons form two subsequent peaks with a distance and width proportional to the effective travel path inside the scintillator . compared to the scintillation photons , the rates of cherenkov photons form sharp peaks , providing accurate time information on the interactions of the 511kev photons .
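the qualitative dependence of the ctr on the crystal length can be reproduced with a toy monte carlo that is much simpler than the geant4 study above : the 511 kev photon interacts at an exponentially distributed depth , and the timing photon then travels the remaining distance to the detector at speed c/n . the attenuation length of 11.4 mm and the index 1.82 are assumed literature-style values for lso , and all other detail ( photon statistics , surfaces , electronics ) is deliberately ignored .

```python
# toy estimate of ctr vs crystal length from doi variance alone.
import random, statistics

C_MM_PER_PS = 0.2998          # speed of light in mm/ps
N_LSO, MU = 1.82, 1.0 / 11.4  # assumed index and attenuation (1/mm) for lso

def arrival_time_ps(length_mm):
    while True:
        doi = random.expovariate(MU)            # depth of interaction
        if doi <= length_mm:
            break                               # event detected in this crystal
    # gamma flight to the doi, then optical propagation to the far-end detector
    return doi / C_MM_PER_PS + (length_mm - doi) * N_LSO / C_MM_PER_PS

def ctr_fwhm_ps(length_mm, trials=20000):
    diffs = [arrival_time_ps(length_mm) - arrival_time_ps(length_mm)
             for _ in range(trials)]
    return 2.355 * statistics.pstdev(diffs)     # fwhm of the time difference

for length in (1, 10, 20, 30):
    print(length, "mm ->", round(ctr_fwhm_ps(length)), "ps fwhm (toy model)")
```

the toy numbers are not expected to match the simulated 32-144 ps values , but they show the same monotone growth of the ctr with crystal length driven purely by the doi spread .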
from the simulations , the mean trigger times at the single photon level were determined for various dois , see figure [ fig : meanarrivaltime ] . up to large dois , the trigger times depend linearly on the doi , but deviate at dois mm . the reason is that for low dois , photons emitted towards the photon detector trigger the detector , while the photons emitted into the opposite direction arrive at the detector with a significant delay . with increasing doi , the time delay between these two consecutive peaks decreases until they merge at high dois , resulting in a higher photon rate at the photon detector and , therefore , in an earlier trigger probability . furthermore , the number of detected photons depends on the doi , see figure [ fig : nbofphotons ] . it is clearly visible that the light output of the crystal decreases with increasing doi . since cherenkov photons provide a very fast response to the photon interaction , for good time resolution it is beneficial to detect as many cherenkov photons as possible . unfortunately , many of them are lost in real detector systems due to the low quantum efficiencies of photon detectors in the blue and uv range and the cut-off frequencies of photon transmission in scintillators . analyzing scintillation pulse shapes and detecting the first and second rise of photon arrivals can provide information about the doi , and can help reduce parallax errors of pet systems . due to the limited time resolution of state-of-the-art photon detectors it is difficult to discriminate the first rise , the width of the plateau and the second rise of the photon arrival rate . nevertheless , this effect should be observable as a variation of the rise time of scintillation pulses . therefore , by measuring the rise time or the number of detected photons , not only can parallax errors be reduced by estimating the doi , but the time resolution can also be improved by determining a corrected time stamp for the interactions of the 511kev photons inside the scintillator . a dependency of scintillation rise times and the number of detected photons on the doi has been measured by . for investigations on the time resolution of a scintillator , a setup similar to that in figure [ fig : coincidencesetup ] ( b ) was used , with the only difference that the photons were emitted from the top , towards the photon detector . the time stamps were determined by triggering on the first arriving photon ( scintillation and cherenkov effect ) . contrary to the simulation in section [ sec : photontransport ] , the doi is unknown . the resulting photon arrival times can be seen in figure [ fig : scatter ] ( a ) . figure [ fig : scatter ] ( b ) shows a scatter plot of the number of optical photons per 511kev photon and the time when the first photon arrives at the detector . compton scattered events are discriminated with a discriminator threshold of 1600 photons per event . 92% of the 511kev photons were detected , and 48% of them were compton events . in figure [ fig : scatter ] ( b ) the time walk is visible .
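the two-peak structure discussed above follows from elementary geometry , as the small sketch below shows : for a source at distance d from the photon detector , a photon emitted towards the detector travels a path d , while a photon emitted away is reflected at the far end and travels 2l - d ; the crystal length l = 30 mm and the index n = 1.82 are assumptions for illustration .

```python
# direct vs reflected optical peak times as a function of the doi.
N_LSO, C = 1.82, 0.2998      # refractive index, speed of light in mm/ps
L = 30.0                     # assumed crystal length in mm

def peak_times_ps(d_mm, length_mm=L):
    direct = d_mm * N_LSO / C               # emitted towards the detector
    reflected = (2.0 * length_mm - d_mm) * N_LSO / C   # reflected at far end
    return direct, reflected

for d in (5.0, 15.0, 25.0):
    t1, t2 = peak_times_ps(d)
    print("doi %4.1f mm: peaks at %6.1f ps and %6.1f ps (gap %5.1f ps)"
          % (d, t1, t2, t2 - t1))
```

the gap between the peaks , 2(l - d)n/c , shrinks linearly as the doi approaches the crystal length , which is exactly the merging of the peaks described in the text .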
by calculating the mean photon arrival times for increasing amplitudes , a time walk correction was applied ( a sketch of this procedure is given in the code below ) . the corrected arrival time spectra can be seen in figure [ fig : scatter ] ( c ) and ( d ) . the total time resolution of the scintillator is . the corrected data of figure [ fig : scatter ] ( c ) gives . using results in . the corrected time resolution still includes the standard deviation of the scintillation process , the contribution of the cherenkov process and a contribution of photon propagation . note that the names of the variables do not imply normal distributions ; they are measures of the standard deviation . simulations for the determination of the time resolution of lso : ce crystals were performed . the influence of the crystal size on the ctr for pet-like detector systems was shown , and the ctr ranged from 32ps to 144ps fwhm for crystal lengths of 1 mm to 30 mm . including the detection of cherenkov photons showed a significant improvement of the ctr ( 12 to 125ps fwhm ) . it was shown that the light output and the photon rate distribution at the photon detector depend on the doi , and that applying a time walk correction significantly improves the time resolution from to . the contribution of the time walk was . by measuring the pulse amplitude or the rise time of a scintillation pulse , the doi can be estimated . information on the doi allows one to reduce parallax errors for pet , and the determination of accurate time stamps of photon interactions results in an improvement of tof for pet . as the development of readout electronics proceeds quickly , extracting amplitude and rise-time information is realistic also for full tof-pet systems . for real pet systems , photon detectors with high quantum efficiency in the blue and uv range and scintillators with increased transmission in these wavelength bands would help to improve tof for pet by making use of the cherenkov effect . however , for the time resolutions of state-of-the-art photon detectors the benefit from the cherenkov effect is small , but it becomes increasingly important as the time resolution of photon detectors approaches the time resolution of the scintillators . this work was partly funded by eu-project hadronphysics3 ( project 283286 ) .
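the amplitude-binned time walk correction referred to above can be sketched in a few lines : events are grouped by amplitude ( here , number of detected photons ) , the mean first-photon time of each group is taken as that group's walk , and the walk is subtracted event by event . the synthetic 1/amplitude walk and the noise level are assumptions chosen only to exercise the routine .

```python
# minimal amplitude-binned time walk correction on synthetic events.
import random, statistics

def time_walk_correct(events, n_bins=20):
    # events: list of (amplitude, time) pairs
    amps = [a for a, _ in events]
    lo, hi = min(amps), max(amps)
    width = (hi - lo) / n_bins or 1.0
    bins = {}
    for a, t in events:
        bins.setdefault(min(int((a - lo) / width), n_bins - 1),
                        []).append((a, t))
    corrected = []
    for group in bins.values():
        walk = statistics.mean(t for _, t in group)   # mean arrival per bin
        corrected += [(a, t - walk) for a, t in group]
    return corrected

# synthetic events: larger amplitudes trigger earlier (a crude 1/amp walk)
events = [(a, 100.0 / a + random.gauss(0.0, 5.0))
          for a in (random.uniform(1.0, 10.0) for _ in range(5000))]
before = statistics.pstdev(t for _, t in events)
after = statistics.pstdev(t for _, t in time_walk_correct(events))
print("sigma before %.1f ps, after %.1f ps" % (before, after))
```

on this synthetic input the corrected spread collapses towards the injected 5 ps noise floor , mirroring the improvement from to reported for the simulated data .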
we present results of simulations on the influence of photon propagation and the cherenkov effect on the time resolution of lso : ce scintillators . the influence of the scintillator length on the coincidence time resolution is shown . furthermore , the impact of the depth of interaction on the time resolution , the light output and the arrival time distribution at the photon detector is simulated , and it is shown how this information can be used for time walk correction . positron emission tomography ( pet ) , time-of-flight ( tof ) , time resolution , cherenkov effect , depth of interaction ( doi ) , time walk
clustering is a commonly encountered problem in many areas such as marketing , engineering , and biology , among others . in a typical clustering problem , the goal is to group entities together according to a certain similarity measure . such a measure can be defined in many different ways , and it determines the complexity of solving the relevant clustering problem . clustering problems with a similarity measure defined by regression errors are especially challenging because clustering is coupled with regression . consider a retailer that needs to forecast sales at the stock keeping unit ( sku ) level for different promotional plans and mechanisms ( e.g. , 30% off the selling price ) using a linear regression model . a sku is a unique identifying number that refers to a specific item in inventory . each sku is often used to identify the product , product size , product type , and the manufacturer . the usable data for each sku is limited compared to the possible number of parameters to estimate , among which the seasonality dummies compose a large proportion . seasonality is an important predictor and is modeled using an indicator dummy input variable for each season , with the length of one season being one week . more significant and useful statistical results can be obtained by clustering skus with similar seasonal effects from promotions together , and estimating seasonality dummies for a cluster instead of a single sku . however , the seasonal effects of skus correspond to regression coefficients , which can only be obtained after grouping skus with similar seasonality . a two-stage method can be used to solve such difficult clustering problems that are intertwined with regression . in the first stage , entities are clustered based on certain approximate measures of their regression coefficients . in the second stage , regressions are performed over the resultant clusters to obtain estimates of the regression coefficients for each cluster . however , good approximate measures are difficult to obtain a priori , before carrying out the regressions . a better alternative is to perform clustering and regression simultaneously , which can be achieved through cluster-wise linear regression ( clr ) , also referred to as `` regression clustering '' in the literature . other application areas of clr include marketing , pavement condition prediction , and spatial modeling and analysis . more details about these other application areas can be found in openshaw , desarbo and cron , desarbo , and luo and chou . the clr problem bears connection to the minimum sum-of-squares clustering ( mssc ) problem , the objective of which is to find clusters that minimize the sum of squared distances from each entity to the centroid of the cluster it belongs to . contrary to clustering entities directly based on distances , clr generates clusters according to the effects that some independent variables have on the response variable of a preset regression model . each entity is represented by a set of observations of a response variable and the associated predictors . clr is to group entities with similar regression effects into a given number of clusters such that the overall sum of squared residuals within clusters is minimal . although the mssc problem has been extensively studied by researchers from various fields ( e.g.
, statistics , optimization , and data mining ) , the work on the clr problem is limited , most of which concerns adapting the lloyd's-algorithm-based heuristics of the mssc problem to the clr problem . lloyd's algorithm starts from some random initial partition into clusters , then calculates the centroids of the clusters , and assigns entities to their closest centroids until converging to a local minimum ( a short code sketch of such an adapted procedure for generalized clr is given below ) . recently , several exact approaches have been proposed by carbonneau et al . , which are discussed in detail in sections 1.1 and 2 . we tackle the problem of clustering entities based on their regression coefficients by modeling it as a generalized clr problem , in which we allow each entity to have more than one observation . we propose both a mixed integer quadratic program formulation and a set partitioning formulation for generalized clr . our mixed integer quadratic program formulation is more general than the one proposed by bertsimas and shioda , which cannot be directly applied to the sku clustering problem since they assume each clustering entity to have only one observation , and this assumption does not hold for the sku clustering problem . we identify a connection between the generalized clr and mssc problems , through which we prove np-hardness of the generalized clr problem . column generation is an algorithmic framework for solving large-scale linear and integer programs . vanderbeck and wolsey and barnhart _et al._ overview column generation for solving large integer programs . we design a column generation ( cg ) algorithm for the generalized clr problem using its set partitioning formulation . the corresponding pricing problem is a mixed integer quadratic program , which we show to be np-hard . to handle larger instances in the column generation framework , we also propose a heuristic algorithm , referred to as the cg heuristic algorithm . this heuristic algorithm , inspired by bertsimas and shioda , first clusters entities into a small number of groups and then performs our column generation algorithm on these groups of entities . in addition , we propose a metaheuristic algorithm , named the ga-lloyd algorithm , which uses an adapted lloyd's clustering algorithm to find locally optimal partitions and relies on the genetic algorithm ( ga ) to escape local optima . furthermore , we introduce a two-stage approach , used frequently in practice due to its simplicity , which performs clustering first and regression second . we test our algorithms using real-world data from a large retail chain . we compare the performance of the ga-lloyd , the cg heuristic , and the two-stage algorithms on two larger instances with 66 and 337 skus , corresponding to two representative subcategories under the retailer's product hierarchy . we observe that the ga-lloyd algorithm performs much better than the two-stage algorithm . the cg heuristic algorithm is able to produce slightly better results than the ga-lloyd algorithm for smaller instances , but at the cost of much longer running time . the ga-lloyd algorithm performs the best and identifies distinctive and meaningful seasonal patterns for the tested subcategories . in addition , we find that the column generation algorithm is able to solve the sku clustering problem with at most 20 skus to optimality within reasonable computation time . we benchmark the performance of the ga-lloyd and cg heuristic algorithms against the optimal solutions obtained by the column generation algorithm to find that both algorithms obtain
close-to-optimal solutions . the contributions of our work are as follows . 1 . we are the first to model and solve the sku clustering problem , commonly encountered in retail predictive modeling , through generalized clr . 2 . we propose four heuristic algorithms for the generalized clr problem , including the cg heuristic algorithm , the ga-lloyd algorithm , the two-stage approach , and a variant of the spth algorithm . 3 . we propose an exact column generation algorithm that enables us to evaluate the performance of the heuristic algorithms . 4 . we prove np-hardness of the generalized clr problem and np-completeness of the pricing problem of the column generation algorithm . note that the number of clusters is a parameter of the generalized clr problem that needs to be decided by the user beforehand or by enumeration . although we provide a comparison of models with different numbers of clusters for real-world data in section [ subsec_compare_seasonality ] , it is not straightforward to develop a universal rule for deciding the number of clusters . this is also a hard task for mssc and clr . the aic and bic criteria did not give a reasonable number of clusters for the data set we used for the experiment . they gave more than three times the hand-picked number of clusters that works in practice . hence , in this paper , we assume that the target number of clusters is given in advance . in figure [ fig : compare ] , we summarize and compare the terms in the clr , generalized clr and sku clustering problems . while clr only has entities , generalized clr allows multiple observations per entity . the clr problem can be thought of as generalized clr with one observation per entity . note that entity and observation in generalized clr correspond to the sku and its transactions in the sku clustering problem , respectively . the rest of the paper is organized as follows . in section [ sec : formulations ] , we introduce both the mixed integer quadratic program and the set partitioning formulations of the generalized clr problem . we draw the connection between the generalized clr and mssc problems , and prove np-hardness of the former through this connection . in section [ sec : algorithms ] , we present the exact column generation algorithm , the cg heuristic algorithm , the ga-lloyd heuristic algorithm , the two-stage algorithm , and a variant of the spth algorithm . the pricing problem of the column generation algorithm is shown to be np-complete . in section [ sec : experiments ] , we present numerical experiments to test the performance of all proposed algorithms . the literature review is discussed next . to the best of the authors' knowledge , no previous work has been conducted that comprehensively tackles the generalized clr problem . however , an extensive collection of methods has been proposed for the typical clr problem , which can potentially be adapted to tackle the generalized clr problem . the algorithms proposed for the typical clr problem are mainly heuristics bearing close similarity to the algorithms for the mssc problem . for example , spth proposes an exchange algorithm which , starting from some initial clusters , exchanges two items between two clusters if a cost reduction is observed in the objective function . desarbo presents a simulated annealing method to escape local minima . used a self-organizing map to perform clusterwise regression .
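before continuing the review , the lloyd's-style heuristic mentioned above can be sketched as follows for generalized clr : alternately ( i ) fit one least-squares regression per cluster and ( ii ) reassign each entity , with all of its observations at once , to the cluster whose coefficients give it the smallest sum of squared residuals . minimum-size constraints and the ga wrapper discussed later are deliberately omitted , so this is an illustrative sketch rather than the paper's exact algorithm .

```python
# lloyd's-style alternating heuristic for generalized clr.
import numpy as np

def clr_lloyd(entities, k, iters=50, seed=0):
    # entities: list of (X_i, y_i) with X_i of shape (L_i, J)
    rng = np.random.default_rng(seed)
    assign = rng.integers(0, k, size=len(entities))
    betas = []
    for _ in range(iters):
        betas = []
        for c in range(k):
            xs = [x for (x, _), a in zip(entities, assign) if a == c]
            ys = [y for (_, y), a in zip(entities, assign) if a == c]
            if not xs:                          # empty cluster: keep zeros
                betas.append(np.zeros(entities[0][0].shape[1]))
                continue
            X, y = np.vstack(xs), np.concatenate(ys)
            betas.append(np.linalg.lstsq(X, y, rcond=None)[0])
        new = np.array([int(np.argmin([np.sum((y - x @ b) ** 2)
                                       for b in betas]))
                        for x, y in entities])
        if np.array_equal(new, assign):
            break                               # converged to a local minimum
        assign = new
    return assign, betas
```

as with k-means , the objective decreases monotonically at every step , so the procedure converges , but only to a local minimum that depends on the random initial partition.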
on mathematical programming-based heuristics , lau _et al._ propose a nonlinear programming formulation that is solved approximately using commercial solvers , with no guarantee of finding a global optimum . their algorithm's performance depends heavily on the initial clusters . this initial-cluster dependency is overcome by the k-harmonic means clustering algorithm proposed by zhang . moreover , bertsimas and shioda introduce a compact mixed-integer linear formulation for a slight variation of the clr problem with the sum of absolute errors as the objective . their algorithm first divides entities into a small number of clusters , and then feeds these clusters into their mixed integer program . for exact approaches to clr , carbonneau _et al._ proposed a mixed logical-quadratic programming formulation by replacing big m constraints with the logical implication of the constraints . proposed an iterative algorithm based on sequencing the data and repetitive use of a branch and bound algorithm . proposed a column generation based algorithm based on and . there are two key differences between these works and the one we propose in this paper . first , we provide both a quadratic mixed-integer program formulation and a set partitioning formulation of the generalized clr problem . the former is a generalization of the formulation in , and the latter is the set partitioning formulation for generalized clr ( recently , carbonneau _et al._ have proposed a set partitioning formulation for clr ) . second , we propose two new heuristics , namely the cg heuristic algorithm and the ga-lloyd algorithm , for the generalized clr problem . there is another stream of research on the clr problem that assumes a distribution function for the regression errors , where each entity is assigned to each cluster with a certain probability , i.e. , using `` soft '' assignments . for example , desarbo and cron propose a finite conditional mixture maximum likelihood methodology , which assumes a normal distribution for the regression errors and is solved through the expectation maximization algorithm . since then , a large number of mixture regression models have been developed , including probit and logit mixture regression models as examples . compare the performance of the expectation maximization algorithms with their nonlinear programming-based algorithm . hennig investigates identifiability of model-based clusterwise linear regression for consistent estimation of parameters . durso _et al._ proposed to integrate fuzzy clustering and fuzzy regression . a recent work of ingrassia _et al._ uses linear cluster-weighted models for clustering regression . these model-based approaches allow residual variances to differ between clusters , which the least squares approaches do not allow . in the soft assignment setting , an entity can be assigned to the cluster of highest probability . we restrict the scope of our review and comparison to least squares approaches because the objective functions are different . the reader is referred to wedel and desarbo and hennig for reviews . the algorithms for the mssc problem are instructive for solving the clr problem . there is an abundance of papers on solving the mssc problem . hansen and jaumard survey various forms of clustering problems and their solution methods , including mssc , from a mathematical programming point of view .
in their survey , solution methods for the mssc problem include dynamic programming , branch-and-bound , cutting planes , and column generation methods . none of these algorithms scales well to large instances or to higher dimensional spaces . heuristics are also considered , including lloyd's-like algorithms ( e.g. , k-means and h-means ) and metaheuristics such as simulated annealing , tabu search , genetic algorithms and variable neighborhood search . with respect to mathematical programming approaches , du merle _et al._ propose an interior point algorithm to exactly solve the mssc problem . improve the algorithm of du merle _et al._ by exploiting the geometric characteristics of clusters , which enables them to solve much larger instances . we first provide a mixed integer quadratic formulation for the generalized clr problem . this formulation reveals a close connection between the generalized clr and mssc problems , which enables us to show that the generalized clr problem is np-hard . consider a set of entities . each entity has observations of the dependent variable , and independent variables with for any . observation is associated with independent variables . note that vectors are represented in bold symbols . we want to divide these entities into a partition of clusters where , for any , and $\bigcup_{k \in [k]} c_k = [i]$ , where denotes the cardinality of cluster . note that the number of observations pertaining to a cluster is at least . the minimum size constraints are imposed to ensure that there are enough observations for each cluster . further , in order to avoid regression models with zero error , we require . we also require such that there is always a feasible solution . the generalized clr problem is formulated as follows :

$$\begin{aligned}
\min\ & \sum_{i=1}^{i}\sum_{l=1}^{l} t_{il}^2 & & \nonumber\\
\text{s.t.}\quad t_{il} - ( y_{il} - \sum_{j=1}^j\beta_{kj}x_{ijl} ) + m(1 - z_{ik}) & \geq 0 \quad & & i \in [i]\text{ , } k \in [k]\text{ , } l \in [l] \label{constraint:quadratic:regression1} \\
t_{il} + ( y_{il} - \sum_{j=1}^j\beta_{kj}x_{ijl} ) + m(1 - z_{ik}) & \geq 0 \quad & & i \in [i]\text{ , } k \in [k]\text{ , } l \in [l] \label{constraint:quadratic:regression2} \\
\sum_{k=1}^k z_{ik} & = 1 \quad & & i \in [i] \label{constraint:quadratic:assignment} \\
\sum_{i=1}^i z_{ik} & \geq n \quad & & k \in [k] \label{constraint:quadratic:minimumsize} \\
z_{ik} & \in \{0,1\} \quad & & i \in [i]\text{ , } k \in [k] \nonumber \\
t_{il} & \geq 0 \quad & & i \in [i]\text{ , } l \in [l] \nonumber \\
\beta_{kj} & \text{ unconstrained} \quad & & k \in [k]\text{ , } j \in [j] , \nonumber
\end{aligned}$$

where is a binary variable , which is equal to one if and only if entity is assigned to cluster . the value , referred to as big in the optimization literature , is a large positive constant . due to constraints and , is equal to the absolute error for the corresponding observation in the optimal solution , and are the regression coefficients for cluster , which are decision variables . the role of is to enforce constraints and only when they are needed ( entity is assigned to cluster ) . in detail , if , then we have , and , which implies because we are minimizing the sum of . if , constraints and require to be greater than a negative number , which holds trivially due to the nonnegativity constraint on . constraint requires that every entity is assigned to one cluster , and imposes the limit on the cardinality of each cluster .
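for concreteness , the big-m formulation can be rendered with the pulp modelling library as below . because pulp only handles linear models , the sketch minimizes the sum of the absolute residuals ( the bertsimas-and-shioda variant mentioned later ) rather than the sum of squares , and the values of big_m and n_min are assumptions to be tuned per instance , since , as discussed below , a provably valid big m is hard to obtain for clr .

```python
# big-m mip for generalized clr (absolute-error variant) with pulp.
import numpy as np
import pulp

def clr_big_m_mip(entities, k, big_m=1e4, n_min=1):
    # entities: list of (X_i, y_i) with X_i of shape (L_i, J)
    J, I = entities[0][0].shape[1], len(entities)
    prob = pulp.LpProblem("generalized_clr", pulp.LpMinimize)
    z = {(i, c): pulp.LpVariable(f"z_{i}_{c}", cat="Binary")
         for i in range(I) for c in range(k)}
    beta = {(c, j): pulp.LpVariable(f"beta_{c}_{j}")
            for c in range(k) for j in range(J)}
    t = {}
    for i, (X, y) in enumerate(entities):
        for l in range(X.shape[0]):
            t[i, l] = pulp.LpVariable(f"t_{i}_{l}", lowBound=0)
            for c in range(k):
                resid = float(y[l]) - pulp.lpSum(
                    beta[c, j] * float(X[l, j]) for j in range(J))
                # t bounds |residual| only when entity i is in cluster c
                prob += t[i, l] - resid + big_m * (1 - z[i, c]) >= 0
                prob += t[i, l] + resid + big_m * (1 - z[i, c]) >= 0
    for i in range(I):                       # each entity in exactly one cluster
        prob += pulp.lpSum(z[i, c] for c in range(k)) == 1
    for c in range(k):                       # minimum cluster cardinality
        prob += pulp.lpSum(z[i, c] for i in range(I)) >= n_min
    prob += pulp.lpSum(t.values())           # objective: total absolute error
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    return prob
```

the sketch also makes the symmetry drawback tangible : any relabelling of the k cluster indices gives an equivalent solution with different variable values , which is exactly what the set partitioning formulation below removes .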
unlike the clr problem , generalized clr allows each entity to have more than one observation , which implies that can be greater than one . the mixed integer linear program formulation for the clr problem in bertsimas and shioda has equal to one , and does not have the cluster cardinality constraint . besides , their objective function is the sum of the absolute errors while ours is the sum of squared errors . our sku clustering problem based on seasonal effects can be modeled as a generalized clr problem . the entities to cluster are skus . the response variable corresponds to a vector of weekly sales for sku . the independent variables include promotional predictors such as promotion mechanisms , percentage discount , and seasonal dummies for sku . aloise _et al._ showed np-hardness of the mssc problem in a general dimension when the number of clusters is two . general dimension means that the size of the vectors to be clustered is not a constant but part of the input data . a similar statement can be made for the generalized clr problem , with the proof available in appendix [ appendix_proof_theorems ] . [ theorem : clr ] the generalized clr problem with two clusters in a general dimension is np-hard . with the formulation presented by , we can solve the generalized clr problem using any commercial optimization software that can handle quadratic mixed integer programs . however , this formulation suffers from two drawbacks , which make it intractable for large instances . the first one relates to big . optimality of the solution and the efficiency of integer programming solvers depend on a tight value of . unlike multiple linear regression , where obtaining a valid value of is possible , it is not trivial to calculate a valid value of in and for generalized clr or clr . when , the s are not from the cluster that entity belongs to , and the residual can be arbitrarily large . provide an empirical result that a big m based mip formulation for clr sometimes fails to guarantee optimality of clr for the data sets they consider . the second one involves the symmetry of feasible solutions . any permutation of clusters yields the same solution , yet it corresponds to different decision variables . symmetry unnecessarily increases the search space , and renders the solution process inefficient . to overcome the symmetry problem , we propose a set partitioning formulation , which has already been used for the clr problem in . let denote the set of all clusters of entities with cardinality equal to or greater than , i.e. , $\{ s : |s| \geq n \}$ . we start the algorithm with small candidate clusters rather than , the set of all possible subsets of , to obtain regression coefficients for such that , and let if , and otherwise . to obtain the new master problem , we need to replace with and with in the master problem . in addition , the range of constraints changes to , to indicate whether group is selected in the cluster with the minimum reduced cost .
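a symmetry-free rendering of the set partitioning idea is sketched below , again with pulp : each candidate cluster s carries the optimal least-squares error of a regression fitted over its entities , and the master problem picks k disjoint clusters covering every entity . true column generation would add clusters on the fly from the pricing problem ; here the pool is enumerated up to a small max_size , which is only viable for very small instances and is purely illustrative .

```python
# set partitioning master over an enumerated pool of candidate clusters.
import itertools
import numpy as np
import pulp

def cluster_cost(entities, s):
    # optimal sum of squared residuals of one regression over cluster s
    X = np.vstack([entities[i][0] for i in s])
    y = np.concatenate([entities[i][1] for i in s])
    b = np.linalg.lstsq(X, y, rcond=None)[0]
    return float(np.sum((y - X @ b) ** 2))

def set_partition_master(entities, k, n_min=1, max_size=4):
    idx = range(len(entities))
    pool = [s for r in range(n_min, max_size + 1)
            for s in itertools.combinations(idx, r)]
    prob = pulp.LpProblem("clr_master", pulp.LpMinimize)
    z = {s: pulp.LpVariable("z_" + "_".join(map(str, s)), cat="Binary")
         for s in pool}
    prob += pulp.lpSum(cluster_cost(entities, s) * z[s] for s in pool)
    for i in idx:                            # cover each entity exactly once
        prob += pulp.lpSum(z[s] for s in pool if i in s) == 1
    prob += pulp.lpSum(z.values()) == k      # exactly k clusters
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    return [s for s in pool if z[s].value() and z[s].value() > 0.5]
```

because a cluster is identified by its entity set rather than by an index , permuting clusters no longer produces distinct solutions , which is the point of the reformulation .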
to obtain the new pricing problem , we need to replace the s with the s in the pricing problem . constraint is changed to , and the range in constraints and now becomes $i \in g_r$ , $l \in [l]$ . for each in : create by randomly generating clusters ; randomly select parent chromosomes and using roulette wheel selection ; create child chromosomes and by performing crossover on and mutation on ; obtain and based on lloyd's algorithm ; calculate and replace $\min_{h \in [h]} \gamma(h)$ . the generation procedure is based on a uniform random number and is the same as the one in line 1 of algorithm 1 . any randomly generated partition has to satisfy the constraint that for . we require this random integer to be no more than , so that there is at least one gene positioned to the right of it . the portions of the chromosome lying to the right of this gene position are exchanged to produce two child chromosomes and , encoded by and . the crossover operation is illustrated in figure [ figure : crossover ] . third , in line 5 , we perform mutation on these two child chromosomes . the mutation is performed on a child chromosome with a fixed probability , where is a parameter . a gene position with value is randomly picked from the child chromosome using a uniform random number . after mutation , it is changed to with equal probability if is not zero . here is a random number with uniform distribution between zero and one . otherwise , when is zero , it is changed to with equal probability . in this way , the regression coefficients can take any real values after a sufficient number of iterations . next , in line 6 , we need to decode these two mutated child chromosomes to get the partitions and of clusters they represent . to decode a child chromosome , we assign entity to cluster for . then , we perform regression over each cluster of and , and update the encoding of these two child chromosomes and with the resultant regression coefficients . fitnesses and are calculated for the child chromosomes using . in lines 6-7 , we replace the chromosome in the population with the smallest fitness by the child chromosome with the smaller fitness if it improves on $\min_{h \in [h]} \gamma(h)$ . let be the index set of entities in cluster ; for each , build a regression model using the entities in . in this section , we describe our two-stage heuristic algorithm for the sku clustering problem . recall that we are given . this gives a justification to evaluate algorithms for larger instances based on the gap from the target solution . however , in our experiment for larger instances , we observed that most of the proposed algorithms give a better objective function value than the target solution . hence , we did not use the target solution to evaluate the algorithms in the paper . however , the target solutions are available on the website stated in section [ sec : experiments ] . in this section , we present a stabilized version of the master problem - , referred to as the restricted master problem , obtained by applying the technique of du merle _et al._ . for iteration , the restricted master problem augments the assignment constraints of the master problem ( label [ constraints : restrictedpartition : assignment ] ) with bounded perturbation vectors , subject to

$$0 \leq \boldsymbol{q}^- \leq \boldsymbol{\xi}^{(k)} , \qquad 0 \leq \boldsymbol{q}^+ \leq \boldsymbol{\xi}^{(k)} , \qquad z_s \in \{0,1\} \quad s \in \mathscr{s} .$$
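returning to the ga-lloyd chromosome operations described above , the crossover and mutation steps can be sketched as follows ; a chromosome is taken to be the k-by-j matrix of cluster regression coefficients , and the exact rescaling rule used in the mutation ( multiply or divide a nonzero gene by 1 + u , replace a zero gene by plus or minus u ) is an assumption standing in for the elided formulas in the text . the decode step would reuse the lloyd-style routine sketched earlier.

```python
# chromosome crossover and mutation for the ga-lloyd heuristic (illustrative).
import numpy as np

def crossover(mom, dad, rng):
    flat_m, flat_d = mom.ravel().copy(), dad.ravel().copy()
    cut = rng.integers(1, flat_m.size)       # at least one gene on each side
    flat_m[cut:], flat_d[cut:] = flat_d[cut:].copy(), flat_m[cut:].copy()
    return flat_m.reshape(mom.shape), flat_d.reshape(dad.shape)

def mutate(child, rng, p_mut=0.1):
    out = child.copy()
    if rng.random() < p_mut:
        pos = rng.integers(out.size)
        g = out.ravel()[pos]
        u = rng.random()                     # uniform in (0, 1)
        if g != 0:
            # assumed rule: rescale the gene up or down with equal probability
            out.ravel()[pos] = g * (1 + u) if rng.random() < 0.5 else g / (1 + u)
        else:
            # assumed rule: a zero gene becomes +u or -u with equal probability
            out.ravel()[pos] = u if rng.random() < 0.5 else -u
    return out
```

after decoding, the fitness of each child would be the sum of squared residuals of the regressions implied by its partition , and the worst member of the population is replaced when the better child improves on it .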
cluster-wise linear regression ( clr ) , a clustering problem intertwined with regression , is to find clusters of entities such that the overall sum of squared errors from regressions performed over these clusters is minimized , where each cluster may have a different variance . we generalize the clr problem by allowing each entity to have more than one observation , and refer to it as generalized clr . we propose an exact mathematical programming based approach relying on column generation , a column generation based heuristic algorithm that clusters predefined groups of entities , a metaheuristic genetic algorithm with an adapted lloyd's algorithm for k-means clustering , a two-stage approach , and a modified algorithm of spth for solving generalized clr . we examine the performance of our algorithms on a stock keeping unit ( sku ) clustering problem employed in forecasting halo and cannibalization effects in promotions , using real-world retail data from a large supermarket chain . in the sku clustering problem , the retailer needs to cluster skus based on their seasonal effects in response to promotions . the seasonal effects are the results of regressions with predictors being promotion mechanisms and seasonal dummies , performed over the clusters generated . we compare the performance of all proposed algorithms for the sku problem with real-world and synthetic data .
a major challenge in biology is to determine how proliferating cells in developing and adult tissues behave _in vivo_ . a powerful technique for solving this problem is clonal analysis , the labelling of a sample of cells within the tissue to enable their fate , and that of their progeny , to be tracked . this approach gives access to information on proliferation , migration , differentiation ( into other cell types ) , and cell death ( apoptosis ) of the labelled cell population . the most reliable method of labelling is through genetic modification leading to the expression of a reporter gene in a random sample of cells . recently it has become possible to activate genetic labelling at a defined time in transgenic mice , enabling the kinetics of labelled cells to be studied with single-cell resolution _in vivo_ . from a theoretical perspective , the analysis of clonal fate data presents a challenging `` inverse problem '' in population dynamics : while it is straightforward to predict the time-evolution of a population distribution according to a set of growth rules , the analysis of the inverse problem is more challenging , open to ambiguity and potential misinterpretation . these principles are exemplified by the mechanism of murine epidermal homeostasis : mammalian epidermis is organised into hair follicles interspersed with _interfollicular_ epidermis ( ife ) , which consists of layers of specialised cells known as keratinocytes ( see fig. [ fig : skincrosssection ] ( a ) ) . proliferating cells are confined to the basal epidermal layer . as they differentiate into specialised skin cells , the basal cells withdraw from the cycle of cell proliferation and then leave the basal layer , migrating towards the epidermal surface from which they are ultimately shed . to maintain the integrity of the tissue , new cells must be generated to replace those lost through shedding . for many years , it has been thought that interfollicular epidermis is maintained by two distinct progenitor cell populations in the basal layer . these comprise long-lived stem cells ( s ) with the capacity to self-renew , and their progeny , known as transit-amplifying cells ( ta ) , which go on to differentiate and exit the basal layer after several rounds of cell division . stem cells are also found in the hair follicles , but whilst they have the potential to generate epidermis in circumstances such as wounding , they do not appear to contribute to maintaining normal epidermis . the prevailing model of interfollicular homeostasis posits that the tissue is organised into regularly sized `` epidermal proliferative units '' or epus , in which a central stem cell supports a surrounding , clonal , population of transit-amplifying cells , which in turn generate a column of overlying differentiated cells . several experimental approaches have been used to attempt to demonstrate the existence of epus , but conclusive evidence for their existence is lacking . the epu model predicts that slowly-cycling stem cells should be found in a patterned array in the ife ; cell labelling studies have failed to demonstrate such a pattern .
in chimaeric mice , the epu model predicts that the boundaries of mosaicism in the ife should run along the boundaries of epus ; instead , boundaries were found to be highly irregular . genetic labelling studies using viral infection or mutation to activate expression of a reporter gene in epidermal cells have demonstrated the existence of long-lived , cohesive clusters of labelled cells in the epidermis , but these clusters do not conform to the predicted size distribution of the epu . thus , until recently , the means by which homeostasis of ife is achieved has been unclear . however , by exploiting inducible genetic labelling , recent studies have allowed the fate of a representative sample of progenitor cells and their progeny to be tracked _in vivo_ . as well as undermining the basis of the stem / ta cell hypothesis , the range of clone fate data provides the means to infer the true mechanism of epidermal homeostasis . in particular , these investigations indicate that the maintenance of ife in the adult system conforms to a remarkably simple birth-death process involving a single progenitor cell compartment . expanding upon the preliminary theoretical findings of ref. , the aim of this paper is to elucidate in full the evidence for , and the properties of , the model of epidermal maintenance , and to describe the potential of the system as a method to explore early signatures of carcinogenic mutations . to organise our discussion , we begin with an overview of the experimental arrangement , referring to ref. for technical details of the experimental system . to generate data on the fate of individual labelled cells and their progeny , hereafter referred to as clonal fate data , inducible genetic marking was used to label a sample of cells and their progeny in the epidermis of transgenic mice . the enhanced yellow fluorescent protein ( eyfp ) label was then detected by confocal microscopy , which enables 3d imaging of entire sheets of epidermis . low-frequency labelling of approximately 1 in 600 basal-layer epidermal cells at a defined time was achieved by using two drugs to mediate a genetic event which resulted in expression of the eyfp gene in a cohort of mice . this low efficiency labelling ensures that clones are unlikely to merge ( see discussion in section [ sec : model ] ) . by analysing samples of mice at different timepoints it was possible to follow the fate of labelled clones at single cell resolution _in vivo_ for times up to one year post-labelling in the epidermis ( see , for example , fig. [ fig : skincrosssection ] ( b ) ) . [ figure caption ( fig. [ fig : tree ] ) : lineage trees over weeks post-labelling of ( a ) a detached clone in which all cells have undergone a transition to terminal differentiation by week , and ( b ) a persisting clone in which some of the cells maintain a proliferative capacity , according to model ( [ modelratelaws ] ) . circles indicate progenitor cells ( p ) , differentiated cells ( d ) , and suprabasal cells ( sb ) . note that , because the birth-death process ( [ modelratelaws ] ) is markovian , the lifetime of cells is drawn from a poisson distribution with no strict minimum or maximum lifetime .
the statistics of such lineage trees do not change significantly when we account for a latency period between divisions that is much shorter than the mean cell lifetime ( see discussion in section [ sec : stochastic ] ) . _bottom :_ the total number of proliferating , differentiated and supra-basal cells for the two clones as a function of time . ] with the gradual accumulation of eyfp levels , the early time data ( less than two weeks ) reveal a small increase in the number of labelled clones containing one or two cells . at longer times , clones increase in size while cells within clones begin to migrate through the suprabasal layers , forming relatively cohesive , irregular columns ( see fig. [ fig : skincrosssection ] ( a ) ) . the loss of nuclei in the cornified layer ( fig. [ fig : skincrosssection ] ) makes the determination of the number of cornified layer cells in larger clones by microscopy unreliable . therefore , to identify a manageable population , attention was focused on the population of basal cells in `` persisting clones '' , defined as those labelled clones which retain at least one basal layer cell , such as is exemplified in the theoretical lineage maps in fig. [ fig : tree ] . after two weeks , the density of persisting clones was seen to decrease monotonically , indicating that the entire cell population within such clones had become differentiated and the clones had detached from the basal layer ( shown schematically in figs. [ fig : skincrosssection ] ( a ) and [ fig : tree ] ( a ) ) . however , the population of persisting clones showed a steady increase in size throughout the entire duration of the experiment .
indeed , the application of population dynamics to the problem of cell kinetics has a long history ( see , e.g. , refs . ) with studies of epidermal cell proliferation addressed in several papers .however , even in the adult system , where cell kinetics may be expected to conform to a `` steady - state '' behaviour , it is far from clear whether the cell dynamics can be modelled as a simple stochastic process .regulation due to environmental conditions could lead to a highly nonlinear or even non - local dependence of cell division rates .indeed , _ a priori _ , it is far from clear whether the cell kinetics can be considered as markovian , i.e. that cell division is both random and independent of the past history of the cell . therefore , instead of trying to formulate a complex theory of cell division , taking account of the potential underlying biochemical pathways and regulation networks , we will follow a different strategy looking for signatures of steady - state behaviour in the experimental data and evidence for a simple underlying mechanism for cell fate .intriguingly , such evidence is to be found in the scaling properties of the clone size distribution . to identify scaling characteristics ,it is necessary to focus on the basal layer clone size distribution , , which describes the probability that a labelled progenitor cell develops into a clone with a total of basal layer cells at a time after labelling .( note that , in general , the total number of cells in the supra - basal layers of a clone may greatly exceed the number of basal layer cells . ) with this definition , describes the `` extinction '' probability of a clone , i.e. the probability that _ all _ of the cells within a labelled clone have migrated into the supra - basal layers . to make contact with the experimental data ,it is necessary to eliminate from the statistical ensemble the extinct clone population ( which are difficult to monitor experimentally ) and single - cell clones ( whose contribution to the total ensemble is compromised by the seemingly unknown relative labelling efficiency of proliferating and post - mitotic cells at induction ) , leading to a reduced distribution for persisting " clones , then , to consolidate the data and minimise fluctuations due to counting statistics , it is further convenient to _ bin _ the distribution in increasing powers of 2 , i.e. describes the probability of having two cells per clone , describes the probability of having 3 - 4 cells per clone , and so on . referring to fig .[ fig : scaling ] , one may see that , after an initial transient behaviour , the clone size distribution asymptotes in time to the simple scaling form , this striking observation brings with it a number of important consequences : as well as reinforcing the inapplicability of the stem cell / ta cell hypothesis , such behaviour suggests that epidermal maintenance must conform to a simple model of cell division .the absence of further characteristic time - scales , beyond that of an overall proliferation rate , motivates the consideration of a simple kinetics in which _ only one process dictates the long - time characteristics of clonal evolution_. moreover , from the scaling observation one can also deduce two additional constraints : firstly , in the long - time limit , the average number of basal layer cells within a _ persisting _ clone _ increases linearly with time _ , viz . 
secondly , if we assume that labelled progenitor cells are representative of _ all _ progenitor cells in the epidermis , and that the population of clones with only one basal layer cell is not `` extensive '' ( i.e. ) , this means that , in the long - time limit , the clone persistence probability must scale as such that where the constant , , is given by the fraction of proliferating cells in the basal layer . without this condition ,one is lead to conclude that the labelled population of basal layer cells either grows or diminishes , a behaviour incompatible with the ( observed ) steady - state character of the adult system ., plotted as a function of the rescaled time coordinate .the data points show measurements ( extracted from data such as shown in fig .[ fig : avgsize](inset ) , given fully in ref . ) , while the solid curves show the probability distributions associated with the non - equilibrium process ( [ modelratelaws ] ) for the basal - layer clone population as obtained by a numerical solution of the master equation ( [ pmastereqn ] ) .( error bars refer to standard error of the mean ) . at long times ,the data converge onto a universal curve ( dashed line ) , which one may identify with the form given in eq .[ pk_exact_all_cells ] .the rescaling compresses the time axis for larger clones , so that the large - clone distributions appear to converge much earlier onto the universal curve.,width=316 ] although the manifestation of scaling behaviour in the clone size distributions gives some confidence that the mechanism of cell fate in ife conforms to a simple non - equilibrium process , it is nevertheless possible to conceive of complicated , multi - component , models which could asymptote to the same long - time evolution . to further constrain the possible theories ,it is helpful to draw on additional experimental observations : firstly , immunostaining of clones with a total of two cells ( using the proliferation marker ki67 and , separately , the replication licensing factor cdc6 ) reveals that a single cell division may generate either one proliferating and one non - proliferating daughter through asymmetric division , or two proliferating daughters , or two non - proliferating daughters ( cf . ) .secondly , three - dimensional imaging of the epidermis reveals that only of mitotic spindles lie perpendicular to the basal layer indicating that divisions may be considered to be confined to the basal layer , confirming the results of earlier work that indicates a dividing basal cell generates two basal layer cells .this completes our preliminary discussion of the experimental background and phenomenology . in summary ,the clone fate data reveal a behaviour wholely incompatible with any model based on the concept of long - lived self - renewing stem cells .the observation of long - time scaling behaviour motivates the consideration of a simple model based on a stochastic non - equilibrium process and is indicative of the labelled cells being both a representative ( i.e. self - sustaining ) population and in steady - state . 
in the following, we will develop a theory of epidermal maintenance which encompasses all of these observations .taken together , the range of clonal fate data and the observation of symmetric and asymmetric division are consistent with a remarkably simple model of epidermal homeostasis involving only one proliferating cell compartment and engaging just three adjustable parameters : the overall cell division rate , ; the proportion of cell divisions that are symmetric , ; and the rate of transfer , , of non - proliferating cells from the basal to the supra - basal layers . to maintain the total proliferating cell population , a constraint imposed by the steady - state assumption , we have used the fact that the division rates associated with the two channels of symmetric cell division must be equal . denoting the proliferating cells as type a , differentiated basal layer cells as type b , and supra - basal layer cells as type c , the model describes the non - equilibrium process , finally , the experimental observation that the total basal layer cell density remains approximately constant over the time course of the experiment leads to the additional constraint that reducing the number of adjustable parameters to just two . by ignoring processes involving the shedding of cells from the surface of the epidermis ,the applicability of the model to the consideration of the _ total _ clone size distribution is limited to appropriately short time scales ( up to six weeks post - labelling ) . however ,if we focus only on the clone size distribution associated with those cells which occupy the basal layer , the model can be applied up to arbitrary times . in this case , the transfer process must be replaced by one in which . in either case , if we treat all instances of cell division and cell transfer as independent stochastic events , a point that we shall revisit later , then the time evolution associated with the process ( [ modelratelaws ] ) can be cast in the form of a master equation .defining as the probability of finding type a cells and type b cells in a given clone after some time , the probability distribution evolves according to the master equation : \nonumber\\ & & \qquad\qquad + r\lambda[(n_{\rm a}+1)p_{n_{\rm a}+1,n_{\rm b}-2}-n_{\rm a } p_{n_{\rm a},n_{\rm b}}]\nonumber\\ & & \qquad\qquad + ( 1 - 2r)\lambda[n_{\rm a } p_{n_{\rm a},n_{\rm b}-1}-n_{\rm a } p_{n_{\rm a},n_{\rm b}}]\nonumber\\ & & \qquad\qquad + \gamma[(n_{\rm b}+1)p_{n_{\rm a},n_{\rm b}+1}-n_{\rm b } p_{n_{\rm a},n_{\rm b}}]\,.\end{aligned}\ ] ] if we suppose that the basal layer cells label in proportion to their population , the latter must be solved subject to the boundary condition . later , in section[ sec : eyfp ] , we will argue that the clone size distribution is compatible with a labelling efficiency which favours a over b type cells . either way , by excluding single cell clones from the distribution , this source of ambiguity may be safely eliminated .although the master equation ( and its total cell number generalisation ) is not amenable to exact analytic solution , its properties can be inferred from the consideration of the a cell population alone for which an explicit solution may be derived . when considered alone ,a type cells conform to a simple set of rate laws , an example of a galton - watson process , long known to statisticians ( see , e.g. , ref . ) . 
in this case, the probability distribution , which is related to that of the two - component model through the relation , can be solved analytically .( here , we have used a lower case to discriminate the probability distribution from its two - component counterpart . ) for an initial distribution it may be shown that , from this system and its associated dynamics , one can draw several key implications : starting with a single labelled cell , the galton - watson process predicts that the persistance probability of the resulting clone ( i.e. , in this case , the probability that the clone retains at least one proliferating cell ) , is given by i.e. as with the experiment , the persistance probability of a clone decays monotonically , asymptoting to the form at time scales , the time scale for symmetric division .applied to the experimental system , this suggests that labelled clones continue to detach from the basal layer indefinitely . at the same time , defining as the size distribution of _ persisting _ clones , the mean number of basal layer cells in a persisting clone grows steadily as such that the overall cell population remains constant , viz . , i.e. the continual extinction of clones is compensated by the steady growth of persisting clones such that the average number of proliferating cells remains constant : given enough time , all cells would derive from the same common ancestor , the hallmark of the galton - watson process .this linear increase in clone size may lead one to worry about neighbouring clones coalescing .fortunately , the continual extinction of clones ensures that the fraction of clones conjoined with their neighbours remains small and of same order as the initial labelling density . ] .the fact that this fraction is constant is again indicative of the steady - state condition maintained throughout the experiment .if , at some instant , a clone is seen to have , say , proliferating cells then , after a further time , its size will fluctuate as thus clones ( as defined by the a cell population ) will maintain an approximately stable number of cells providing . for larger clonesthis time may exceed the lifetime of the system . at the limit where macroscopic sections of the basal layer are considered ,the statistical fluctuations are small .the increased stability of larger clones also explains the surprising prediction that , given enough time , all clones eventually become extinct ( viz . ) .calculated explicity , the extinction probability for a clone of size scales as approaching unity at long times .however , because this extinction probability is small when , a large enough clone may easily persist beyond the lifetime of the system . at asymptotically long times, one may show ) , we treat as continuous variables in eq .( [ pmastereqn ] ) ( a good approximation at large values of ) . then , making the ansatz that the b - cell population remains slave to the a - cell population viz . , the master equation simplifies to the approximate form which is solved by , leading to eq .[ p_exact_all_cells ] . ] that the full probability distribution for finding cells within a persisting clone scales in proportion to , viz .\ , , \ ] ] and so - \exp\left[-2^k\frac{\rho}{r\lambda t}\right]\ , , \ ] ] i.e. the probability distribution acquires the scaling form found empirically . 
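Because the A-cell process alone is a critical birth-death process (birth and loss both at rate rλ per cell), the quoted asymptotics are easy to verify numerically. The sketch below compares a direct Monte Carlo simulation against the standard closed-form results for a critical Galton-Watson process started from one cell, namely a survival probability 1/(1 + rλt) and a mean persisting clone size 1 + rλt; the rate values are illustrative assumptions.

```python
# Sketch: Monte Carlo check of the critical Galton-Watson asymptotics for the
# A-cell population.  Birth (A -> A+A) and loss (A -> B+B) both occur at rate
# a = r*lam per cell, so starting from one cell the survival probability is
# 1/(1 + a*t) and the mean size of surviving clones is 1 + a*t.
import numpy as np

rng = np.random.default_rng(0)
r, lam = 0.08, 1.1                  # illustrative rates (per week)
a = r * lam                         # critical birth-death rate
T, trials = 50.0, 20000

surviving_sizes = []
for _ in range(trials):
    n, t = 1, 0.0
    while n > 0:
        t += rng.exponential(1.0 / (2 * a * n))   # waiting time to next event
        if t > T:
            break
        n += 1 if rng.random() < 0.5 else -1      # equal branching ratios
    if n > 0:
        surviving_sizes.append(n)

print(f"simulated survival probability : {len(surviving_sizes)/trials:.4f}")
print(f"exact 1/(1 + a*T)              : {1/(1 + a*T):.4f}")
print(f"mean size of surviving clones  : {np.mean(surviving_sizes):.2f} "
      f"(exact 1 + a*T = {1 + a*T:.2f})")
```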
referring to eq .( [ scaling ] ) , we can therefore deduce the form of the scaling function , -\exp[-\rho x / r\lambda]\,.\ ] ] as a result , at long times , the average basal layer population of persisting clones becomes proportional to the average number of proliferating cells per clone , , a behaviour consistent with that seen in experiment ( see fig . [fig : avgsize ] ) . in fitting the model to the data ( see below ), we will find that the rates and at which differentiated cells are created and then transferred into the super - basal region are significantly larger than the rate of symmetric division , which dictates the long - time behaviour of the clone size distribution . in this case , at early times ( ) , the clone size distributions are dominated by the differentiation and transfer rates , which remain prominent until the population of labelled differentiated cells associated with each proliferating cell reaches its steady - state value of .one may therefore infer that , at short times , the mean number of basal layer cells in clones arising from proliferating cells is given by and that the early - time clone size distribution is poisson - distributed , viz . with these insights it is now possible to attempt a fit of the model to the data . referring to fig .[ fig : fit2data ] , one may infer the rate of cell division from the short - time data , and the symmetric division rate from the long - time scaling data .in particular , taking the fraction of proliferating cells in the basal layer to be , a figure obtained experimentally by immunostaining using ki67 , a fit of eq .( [ eqn : shorttimep ] ) to the short - time data ( fig .[ fig : fit2data](a ) ) is consistent with a transfer rate of /week which , in turn , implies a rate of cell division of /week .furthermore , by plotting the long - time , large- , size - distributions in terms of the `` inverse '' to the scaling function , \right)^{-1}\\ & = & \left ( 2\ln\left[(1-(1-\mathcal{p}^{\rm pers.}_k(t))^{1/2})/2\right ] \right)^{-1},\end{aligned}\ ] ] the data converge onto a linear plot ( fig .[ fig : fit2data](b ) ) .the resulting slope takes the value , from which we may infer the symmetric division rate /week , and .these figures compare well with an optimal fit of the _ entire _ basal layer clone size distribution ( fig .[ fig : scaling ] ) , obtained by numerically integrating the master equation ( [ pmastereqn ] ) .the fitting procedure is shown in fig .[ fig : fit2data](c ) ( solid curves ) , where the likelihood of the model is evaluated for a range of values of and , as assessed from a test of the model solution .one may see that the likelihood is maximised with an overall division rate of /week and a symmetric division rate in the range /week , thus confirming the validity of the asymptotic fits . moreover , the corresponding fit of both the basal layer distribution and the _ total _ clone size distribution , including both basal and supra - basal cells , is equally favourable ( fig .[ fig : fit2data](c ) , dashed ) .thus , in the following sections we shall use the asymptotically fitted value of , however any choice of the parameter in the range gives similar results .although the comparison of the experimental data with the model leaves little doubt in its validity , it is important to question how discerning is the fit . 
by itself , the observed increase in the size of persisting clones is sufficient to rule out any model based on long - lived self - renewing stem cells , the basis of the orthodox epu model . however , could one construct a more complicated model , which would still yield a similar fit ? certainly , providing the long - time evolution is controlled by a single rate - determining process , the incorporation of further short - lived proliferating cell compartments ( viz . transit - amplifying cells ) would not affect the observed long - time scaling behaviour . however , it seems unlikely that such generalisations would provide an equally good fit to the short - time data . more importantly , it is crucial to emphasize that the current experimental arrangement would be insensitive to the presence of a small , quiescent , long - lived stem cell population . yet , such a population could play a crucial role in _ non_-steady state dynamics such as that associated with wound healing or development . we are therefore led to conclude that the range of clone fate data for normal adult ife are consistent with a simple ( indeed , the simplest ) non - equilibrium process involving just a single progenitor cell compartment . at this stage , it is useful to reflect upon the sensitivity of the model to the stochasticity assumption applied to the process of cell division . clearly , the scaling behaviour ( eq . [ p_exact_all_cells ] ) depends critically on the statistical independence of successive cell divisions ; each cell division results in symmetric / asymmetric cell fate with relative probabilities as detailed in ( [ modelratelaws2 ] ) . but , to what extent would the findings above be compromised if the cell _ cycle - time _ , i.e. the time between consecutive cell divisions , were not determined by an independent stochastic process ? this question may have important ramifications , because the assumption of independent cell division , used in formulating the master equation ( [ pmastereqn ] ) , introduces a manifestly unphysical behaviour by allowing cells to have arbitrarily short cycle times . moreover , although a wide distribution of cell cycle - times has been observed for human keratinocytes _ in vitro _ , it is possible that keratinocytes _ in vivo _ may divide in _ synchrony _ , giving a cell cycle - time distribution narrowly centered about the mean ( ) . in the following , we shall address both of these points : firstly , we shall show that , up to some potential latency period ( the time delay before a newly - divided cell is able to divide again ) , consecutive cell divisions occur independently as an asynchronous , poisson process . secondly , while the data is insufficient to detect a latency period of hours or less between consecutive cell divisions , the data does discriminate against a period lasting longer than hours .
to investigate the degree to which the model is sensitive to the particular cell cycle - time distribution , let us revisit the original model of independent cell division with several variations : firstly , we introduce a latency period of immediately following cell division , in which daughter cells can not divide .this biologically - motivated constraint renders a more complicated yet more realistic model of cell division than the idealised system studied in the previous section .motivated by observations of the minimal cycle - time of ( human ) keratinocytes , where a latency period of hours was observed _ in vitro _ , we shall here consider the a range latency periods of up to hours .secondly , we compare the empirical clone size distributions with a model where all progenitor cells have a cycle - time of exactly , i.e. where cells within each clone divide in perfect synchrony .finally , we shall investigate a range of intermediate models with different distributions of progenitor cell cycle - time ( see fig .[ fig : stochastic](a ) ) .technically , the resulting clone size distributions may be evaluated through monte carlo simulations of the non - equilibrium process ( [ modelratelaws ] ) with the cycle - time of each proliferating cell selected at random from a gamma distribution of the form where is the average time to division following the initial latency period , and is the `` shape parameter '' of the gamma distribution . in particular, the choice of shape parameter corresponds to the exponential distribution which characterises the independent cell cycle - time distribution , whereas describes the case in which all a - cells have an exact cycle - time of ( see fig .[ fig : stochastic](a ) ) .then , to reflect the assumption that initially - labelled , spatially separated , progenitor cells have uncorrelated cell cycles , the time to the initial division event post - labelling is adjusted by a random time $ ] . finally , for an unbiased comparison of the models, we optimise the value of for each model separately against the empirical data , whilst keeping to ensure an optimal fit of the long - time data , as discussed below . 
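A minimal event-driven version of this Monte Carlo scheme is sketched below: each proliferating cell draws its cycle time as a fixed latency plus a gamma-distributed delay, so that a shape parameter of 1 recovers independent (exponential) division, while a very large shape approaches perfectly synchronous division. The 12-hour latency, the mean cycle time and the branching ratio are illustrative assumptions, not the optimised values of the fit.

```python
# Event-driven sketch of the non-Markovian variant: each proliferating cell
# draws its cycle time as a fixed latency plus a gamma-distributed delay.
# shape = 1 recovers independent (exponential) division; a very large shape
# approaches synchronous division.  All numerical values are illustrative.
import heapq
import numpy as np

rng = np.random.default_rng(1)
r = 0.08                      # fraction of symmetric divisions (assumed)
mean_cycle = 1.0              # mean cycle time, in weeks (assumed)
latency = 12.0 / (7 * 24)     # 12-hour latency expressed in weeks

def simulate_clone(shape, T):
    """Number of proliferating cells in one clone at time T."""
    def next_division(now):
        return now + latency + rng.gamma(shape, (mean_cycle - latency) / shape)
    # desynchronise the first division of the initially labelled cell
    events = [rng.uniform(0.0, mean_cycle)]      # pending division times
    while events and events[0] <= T:
        t = heapq.heappop(events)
        u = rng.random()
        if u < r:                                # A -> A + A
            heapq.heappush(events, next_division(t))
            heapq.heappush(events, next_division(t))
        elif u < 2 * r:                          # A -> B + B (both differentiate)
            pass
        else:                                    # A -> A + B
            heapq.heappush(events, next_division(t))
    return len(events)                           # surviving proliferating cells

for shape in (1, 4, 1000):                       # exponential ... near-synchronous
    sizes = [simulate_clone(shape, T=6.0) for _ in range(5000)]
    alive = [s for s in sizes if s > 0]
    print(f"shape = {shape:4d}: persistence = {len(alive)/5000:.3f}, "
          f"mean size of persisting clones = {np.mean(alive):.2f}")
```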
the resulting clone size distributions are shown in fig .[ fig : stochastic](b ) , where the case of independent division following a 12-hour latency ( ) and the exact cycle - time case ( ) are compared to the empirical _ total _ clone size distribution , which includes both basal and supra - basal ( type c ) cells , over the first 6 weeks post - labelling .two intermediate cases are also shown for comparison ( ) .focusing first on the results for the case , which bears closest resemblance to the markovian model analysed using the master equation ( [ pmastereqn ] ) , one may see by inspection that the quality of the fit to the data remains good even when the effects of a latency period between cell divisions is taken into account .more rigorously , a likelihood analysis reveals that the two cases are statistically indistinguishable ( see fig .[ fig : fit2data](c ) , inset ) , which indicates that the duration of a latency period of hours is beyond the current empirical resolution .however , referring to fig .[ fig : fit2data]c ( inset ) , a similar analysis of longer latency periods reveals that for periods of hours , the fit to the data is significantly poorer .turning next to the predicted basal - layer clone size distributions at late times ( ) ( not shown ) , one may see that all of the proposed distributions asymptotically converge : starting with exactly one cell , then the moment - generating function associated with the a cell population distribution after cell cycles satisfies the recursion relation : which asymptotes to the continuous master equation with the relative magnitude of the leading - order correction dropping off as . butwith , this equation is simply the master equation for the moment - generating function associated with the original model , eq .[ p_exact_acells ] , and so the two models converge . one may therefore conclude that , beyond the first several weeks of the experiment ( ) , the fit to the data is sensitive only to the average cycle time of progenitor cells . with this in mind , we note that for the case of perfectly synchronous cell division , an optimal ( albeit poor ) numerical fit was obtained when /week , a figure that compares well with the fit for the independent case .it appears therefore that _ the predicted average cell division rate ( ) is insensitive to the shape of the cell cycle distribution_. finally , let us turn to the early time behaviour ( ) , where the predicted distributions are distinct . referring to fig .[ fig : stochastic](b ) , one may see , at 2 - 4 weeks post - labelling , that relatively large clones ( cells ) appear earlier than expected by a model assuming synchronous division , and that , compared with the same model , a sizeable proportion of small clones ( e.g. , cells ) lingers on for far longer than expected . the same behaviour is observed for the basal layer clone size distribution ( not shown ) .one may therefore infer that cell division conforms to a model of _ independent _ rather than _ synchronous _ division , allowing for some progenitor cells to divide unusually early , and for others to remain quiescent for an unusually long period of time . 
in summary, we have established that , following division , progenitor cells do not divide for a period that is likely to last up to hours , and not more than hours .after this latency period , the data is consistent with cells switching to a mode of independent , asynchronous , cell division .these results shed light on why the simple model of independent cell division presented in section [ sec : model ] succeeds in producing such a remarkable fit to the data .although the integrity of the fit of the model to the data provides some confidence in its applicability to the experimental system , its viability as a model of epidermal homeostasis rests on the labelled clone population being representative of all cells in the ife .already , we have seen that the model , and by inference , the labelled clone population , has the capacity to self - renew .however , the slow accumulation of eyfp after induction , together with the question of the relative labelling efficiency of the two basal layer cell types , leaves open the question of the very short - time behaviour . accepting the validity of the model , we are now in a position to address this regime . in doing so , it is particularly useful to refer to the time evolution of clone size as measured by the average number of basal cells in a persisting clone . as expected from the scaling analysis discussed in section [ sec : scaling ], a comparison of the experimental data with that predicted by the proposed cell kinetic model shows a good agreement at long times ( fig .[ fig : avgsize ] ) .however , comparison of the data at intermediate time - scales provides significant new insight .in particular , if we assume _ equal _ labelling efficiency of progenitor and differentiated cells , i.e. that both cell types label in proportion to their steady - state population ( shown as the lower ( red ) curve in the fig . [fig : avgsize ] ) , then there is a substantial departure of the predicted curve from the experimental data for times of between two and six weeks .intriguingly , if we assume that differentiated cells simply do nt label , then the agreement of the data with theory is excellent from two weeks on ! we are therefore lead to conclude that , at least from two weeks , all labelled clones derive from progenitor cells labelled at induction . with this in mind, we may now turn to the average clone size as inferred from the data at two days and one week .here one finds that the model appears to substantially over - estimate the clone size .indeed , fig .[ fig : avgsize ] suggests that the average clone size is pinned near unity until beyond the first week post - labelling , i.e. the relative population of single - cell clones is significantly _ larger _ than expected at one week , yet falls dramatically to the theoretical value at two weeks . referring again to the slow accumulation of eyfp , can one explain the over - representation of single - cell clones at one week post - labelling ?at one week , two - cell clones are observed soon after cell division , and thus express lower concentrations of eyfp compared to single - cell clones . as a resultthey may be under - represented . at later times , all labelled clones become visible as eyfp concentration grows , explaining the coincidence of experiment and theory at two weeks .it follows , of course , that the size distributions at later time points would be unaffected by slow eyfp accumulation. 
however , a full explanation of this effect warrants further experimental investigation , and is beyond the scope of this paper .having elucidated the mechanism of normal skin maintenance , it is interesting to address its potential as a predictive tool in clonal analysis . conceptually, the action of mutations , drug treatments or other environmental changes to the tissue can effect the non - equilibrium dynamics in a variety of ways : firstly , a revision of cell division rates or `` branching ratios '' ( i.e. symmetric vs. asymmetric ) of _ all _ cells may drive the system towards either a new non - equilibrium steady - state or towards a non - steady state evolution resulting in atrofication or unconstrained growth of the tissue . (the development of closed non steady - state behaviour in the form of limit cycles seems infeasible in the context of cellular structures . )secondly , the stochastic revision of cell division rates or branching ratios of _ individual _ cells may lead to cancerous growth or extinction of a sub - population of clones .the former may be referred to as a `` global perturbation '' of the cell division process while the second can be referred to as `` local '' . in both cases, one may expect clonal analysis to provide a precise diagnostic tool in accessing cell kinetics . to target our discussion to the current experimental system , in the following we will focus on the action of a local perturbation in the form of a carcinogenic mutation , reserving discussion of a global perturbation , and its ramifications for the study of drug treatment , to a separate publication .let us then consider the action of a local perturbation involving the activation of a cancer gene in a small number of epidermal cells , which leads to the eventual formation of tumours . in the experimental system , one can envisage the treatment coinciding with label induction , for example by simultaneously activating the eyfp and the cancer gene . in this case , clonal fate data should simply reflect a modified model of cell proliferation leading to the eventual failure of the steady - state model of tissue maintenance . to quantify the process of cancer onset , we start by establishing the simplest possible changes to process ( [ modelratelaws ] ) which may be associated with tumour growth .cancer is widely held to be a disease caused by genetic instability that is thought to arise when a progenitor cell undergoes a series of mutations . as a result , cells within the mutant clone prefer to proliferate , on average , over processes leading to terminal differentiation or death . in this investigationwe shall consider a `` simple '' cancer resulting from _ two _ rate - limiting mutations : referring to our proposed labelling experiment , the controlled induction of a cancer - causing mutation during label induction defines the first mutation ; a second , rate - limiting step then occurs with the stochastic occurrence of a second cancer causing mutation .examples of the first type of mutation may be genes that affect the ability of a cell to respond to genetic changes of the cell , e.g. 
_ p53 _, whilst the second mutation may be of a gene that affects clone fate such as the _ ras _ oncogene .we may therefore distinguish between `` stage one '' mutated cells , which maintain the steady - state , and `` stage two '' cells , which have the capacity for tumour formation .the resulting process of cell proliferation is set by three parameters : the overall rate of mutation from a stage one a cell into a cancerous stage two cell ; the division rate of the stage two cells ; and the degree of imbalance between their stochastic rate of proliferation and differentiation . in summary , focusing on the proliferating cell compartment only , and denoting the stage two mutated cells as type a , then the revised cell proliferation model includes the additional non - equilibrium processes the rate may be interpreted as the mean rate with which a stage - one cell acquires an additional mutation necessary to activate a second oncogene .the mutated cells then give rise , on average , to an exponentially growing cell lineage with growth rate .this nonequilibrium process was originally addressed by kendall , who predicted the distribution in the number of tumours detected at time after mutation .his focus on tumour statistics may reflect the experimental limitations in clonal analysis at the time : until recently it was not possible to reliably detect clones at all , let alone to count the number of cells per clone .experimentally , however , the clone size distributions are a more efficient measure of cell kinetics than the tumour number distributions , because they result in a far richer data set , and are accessible within weeks rather than months. we shall therefore extend kendall s approach to predict the clone size distributions at times far earlier than tumour appearance . to familiarise ourselves with the modified model ,consider the evolution of the average clone size with time . focusing on the proliferating cell compartment with type a cells and type a cells in a clone , the relevant mean - field equations are which give the expected shift from linear growth of clones in normal skin to that of exponential growth , . more interestingly , referring to the master equation below, one may show that the variance in clone size also changes qualitatively : whereas for normal skin the rms variance in clone size grows as , here the variance in the long - time limit is _ finite _ , that is , the relative broadening of the clone size distribution observed in normal skin is halted by the introduction of an exponentially growing cell population .these observations may already provide a crude method for identifying carcinogenesis through clonal analysis . to do better ,it becomes necessary to solve for the full size distribution by extending the master equation ( [ pmastereqn ] ) to include process ( [ processeqn : cancer ] ) .if we neglect the fate of differentiated cells , then the master equation now describes the evolution of the probability for finding type a cells and type a cells in a clone , + r\lambda\left[(n+1)p_{n+1,n^*}-n p_{n , n^ * } \right]+ \nu\left[(n+1)p_{n+1,n^*-1}-n p_{n , n^ * } \right]\nonumber\\ & & \qquad\qquad + \frac{1+\delta}{2}\mu\left[(n^*-1)p_{n , n^*-1 } - n^ * p_{n , n^ * } \right]+ \frac{1-\delta}{2}\mu\left[(n^*+1)p_{n , n^*+1 } - n^ * p_{n , n^ * } \right]\ , , \nonumber\end{aligned}\ ] ] subject to the experimental boundary condition corresponding to exactly one `` stage one '' cell per clone at . 
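To make the extended process concrete, the following sketch runs a Gillespie simulation of the (n, n*) dynamics defined by the master equation above; asymmetric divisions drop out because they leave (n, n*) unchanged. The parameter values are illustrative assumptions, and once n* is large the lineage is simply declared established, since the extinction probability of a supercritical birth-death population decays geometrically with its size.

```python
# Gillespie sketch of the two-stage process: stage-one A cells keep the
# critical dynamics of normal skin and mutate at rate nu into stage-two A*
# cells, which divide at rate (1+delta)/2*mu and die at rate (1-delta)/2*mu.
# Asymmetric divisions leave (n, n*) unchanged and are omitted.  Parameter
# values are illustrative; small nu corresponds to the monoclonal regime.
import numpy as np

rng = np.random.default_rng(2)
r, lam = 0.08, 1.1
nu, mu, delta = 0.01, 1.0, 0.2

def run_clone(T, cap=200):
    n, n_star, t = 1, 0, 0.0
    while t < T and n + n_star > 0 and n_star < cap:
        rates = np.array([
            r * lam * n,                        # A  -> A + A
            r * lam * n,                        # A  -> B + B  (A lost)
            nu * n,                             # A  -> A*     (second mutation)
            0.5 * (1 + delta) * mu * n_star,    # A* -> A* + A*
            0.5 * (1 - delta) * mu * n_star,    # A* death
        ])
        total = rates.sum()
        t += rng.exponential(1.0 / total)
        if t > T:
            break
        k = np.searchsorted(np.cumsum(rates), rng.random() * total)
        if k == 0:
            n += 1
        elif k == 1:
            n -= 1
        elif k == 2:
            n, n_star = n - 1, n_star + 1
        elif k == 3:
            n_star += 1
        else:
            n_star -= 1
    return n, n_star      # n_star >= cap is treated as an established lineage

clones = [run_clone(T=60.0) for _ in range(4000)]
tumour_fraction = np.mean([ns > 0 for _, ns in clones])
print(f"fraction of clones with a surviving A* lineage: {tumour_fraction:.3f}")
# for comparison, a single A* lineage survives with probability
# 2*delta/(1 + delta) -- the standard supercritical birth-death result
print(f"per-lineage survival 2*delta/(1+delta) = {2*delta/(1+delta):.3f}")
```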
as for the case of normal skin , we shall later be interested in the distribution of persistent clones , defined as , while it is not possible to solve eq .( [ eqn : cancermaster ] ) analytically , progress may be made when we allow for the widely - accepted view that tumours are _ monoclonal _ , that is they arise from a single `` stage two '' mutated cell .this assumption conveniently limits us to the parameter space , for which an approximate long - time solution for the full clone size distribution may be found . referring to the appendix for details, we find that the binned clone size distribution takes the long - time asymptotic scaling form , \,,\end{aligned}\ ] ] where , , , and . despite its apparent complexity, this distribution is characterised by a simple scaling behaviour : referring to fig .[ fig : cancer](a ) , the predicted clone size distributions are plotted using the scaling appropriate to the normal ( unperturbed ) system ( cf .[ fig : scaling ] ) . in this case, it is apparent that the scaling fails .by contrast , from the expression for , it is clear that the size distributions should scale according to the time translation , as confirmed by the results shown in fig . [fig : cancer](b ) . further consideration of the size distribution exposes several additional features , which may provide further access to the new model parameters : + * the long - time distribution decays with a rate : expanding for small gives us the asymptotic form of the universal decay curve . for ,consistent with the monoclonicity requirement , we find where denotes the gamma function .this expression allows us to estimate from the rescaled clone size distributions , providing access to the cell division and mutation parameters of the observed cells . *the probability of tumour formation is finite : this is a well - known feature of the simple non - critical birth - death process ( [ processeqn : cancer ] ) . referring to the appendix , we find that the probability for any given clone to survive and form a tumour is finite , as a result , the onset of cancer will halt the steady decrease in the density of labelled clones that is a hallmark of the unperturbed system .these properties , and especially the change in scaling behaviour , allow the onset of early - stage cancer to be identified from observations of clones less than one hundred cells in size .this may provide a dramatic improvement both in speed and accuracy over current experimental models , which rely on much later observations of tumours ( or hyperplasias ) in order to deduce the cell kinetics at early - stages .to summarize , we have shown that the range of clone fate data obtained from measurements of murine tail epidermis are consistent with a remarkably simple stochastic model of cell division and differentiation involving just one proliferating cell compartment .these findings overturn a long - standing paradigm of epidermal fate which places emphasis on a stem cell supported epidermal proliferative unit . as well as providing significant new insight into the mechanism of epidermal homeostasis, these results suggest the utility of inducible genetic labelleling as a means to resolve the mechanism of cell fate in other tissue types , and as a means to explore quantitatively the effects of drug treatment and mutation . 
to conclude, we note that the analysis above has focused on the dynamics of the clonal population without regard to the spatial characteristics .indeed , we have implicitly assumed that any model capable of describing the cell size distributions will also succeed in maintaining the near - uniform areal cell density observed in the basal layer . however , it is known that , when augmented by spatial diffusion , a simple galton - watson birth - death process leads to `` cluster '' formation in the two - dimensional system whereupon local cell densities diverge logarithmically .significantly , these divergences can not be regulated through a density - dependent mobility .understanding how the galton - watson process emerges from a two - dimensional reaction - diffusion type process represents a significant future challenge . from a practical perspective, there is also the significant question of how the cell kinetic model might be generalised to describe other forms of epidermis . in particular , it is not feasible to repeat these experiments _ in vivo _ in human epidermis , a system of obvious medical significance . therefore , it may be of great interest to determine , in future studies , the extent to which our results compare with the behaviour found in other systems .lastly , our analysis of the cancer system referred to the relatively simple case of a two - stage mutation .it is , of course , well - known that tumour formation is usually the result of multiple mutations .understanding whether clonal fate data can be used to probe the kinetics of _ multi - stage _ mutation remains an interesting future challenge .acknowledgements : we are grateful to sam edwards , martin evans , bill harris , marc kirschner , and gunter schtz for useful discussions .the experimental aspect of this work was funded by the medical research council , association for international cancer research and cancer research uk .to derive the clone size distribution given in eq .( [ cancersummarysoln ] ) , we start by quoting the known result for the probability distribution of finding stage - two a cells at time starting from a single a cell at time , when , this distribution asymptotes to the form from the value of we see that even when a cell has mutated , it is not guaranteed to result in a tumour : this will only occur with a probability of .the value of plays an important role in determining the statistics of tumour formation , as will be seen below .we now make two approximations : first , we take the long - time clone size distribution to be dominated by the statistics of a cells . this is a safe assumption at times and , as may be seen by considering the behaviour of the mean - field equations in section [ sec : cancer : soln ] .this approximation allows us to focus on the size distribution of a cells only , , which is related to the full clone size distribution by the sum .secondly , we assume that the entire population of type a cells in each clone arises from the first mutated cell that gives rise to a stable , exponentially growing lineage of cells .this corresponds to the condition , as discussed in the main text . 
with these two approximations ,the probability of finding a labelled clone containing mutated cells is given by the population distribution of the _ first surviving cell lineage _ of a cells , where is some normalisation constant , and we have introduced the probability for the -th lineage of mutated cells within a given clone to be created during the interval through the mutation process .the weight factor gives the probability that the first cell lineages of a cells within a clone will become extinct a situation necessary to make the -th cell line relevant to the distribution according to the monoclonal approximation .the rates may be accessed by considering the probability that a clone containing type a cells at time also contains _ independent lineages _ of mutated a cells , each arising from a separate mutation event .( later we shall treat the evolution of each of these cell lines post - creation ) . to solve for must introduce its moment - generating function , which evolves ( from eq .[ eqn : cancermaster ] ) according to the dynamical equation , \partial_qg\,.\ ] ] solving this equation subject to the initial condition of one `` stage - one '' ( a ) cell per clone , we find the solution with .( [ g1soln ] ) describes the evolution of a single a cell as it proliferates and eventually gives rise to a set of internal lines of mutated cells . before we proceed to find , note that setting in eq .( [ g1soln ] ) gives us the result ( quoted in the main text ) for the asymptotic fraction of clones in which all mutated cell lines become extinct , on the other hand , setting only in eq .( [ g1soln ] ) gives the moment - generating function for ( yet another ) distribution of a clone containing independent lines of a cells irrespective of the number of normal cells in the clone . finally , noting that then gives : which we may substitute into eq .( [ eqn : case2pdefn ] ) to find ( for ) from this expression , simplified by the large- approximation , we obtain the final form of the binned size distribution given in eq .( [ cancersummarysoln ] ) .
the rules governing cell division and differentiation are central to understanding the mechanisms of development , aging and cancer . by utilising inducible genetic labelling , recent studies have shown that the clonal population in transgenic mouse epidermis can be tracked _ in vivo_. drawing on these results , we explain how clonal fate data may be used to infer the rules of cell division and differentiation underlying the maintenance of adult murine tail - skin . we show that the rates of cell division and differentiation may be evaluated by considering the long - time and short - time clone fate data , and that the data is consistent with cells dividing independently rather than synchronously . motivated by these findings , we consider a mechanism for cancer onset based closely on the model for normal adult skin . by analysing the expected changes to clonal fate in cancer emerging from a simple two - stage mutation , we propose that clonal fate data may provide a novel method for studying the earliest stages of the disease .
-0.1 cm -0.1 cm [ fig : introduction ] -0.5 cm lossy compression ( jpeg , webp and hevc - msp ) is one class of data encoding methods that uses inexact approximations for representing the encoded content . in this age of information explosion ,lossy compression is indispensable and inevitable for companies ( _ twitter _ and _ facebook _ ) to save bandwidth and storage space .however , compression in its nature will introduce undesired complex artifacts , which will severely reduce the user experience ( figure [ fig : introduction ] ) .all these artifacts not only decrease perceptual visual quality , but also adversely affect various low - level image processing routines that take compressed images as input , contrast enhancement , super - resolution , and edge detection . however , under such a huge demand , effective compression artifacts reduction remains an open problem .we take jpeg compression as an example to explain compression artifacts .jpeg compression scheme divides an image into 8 pixel blocks and applies block discrete cosine transformation ( dct ) on each block individually .quantization is then applied on the dct coefficients to save storage space .this step will cause a complex combination of different artifacts , as depicted in figure [ fig : introductiona ] . _ blocking artifacts _ arise when each block is encoded without considering the correlation with the adjacent blocks , resulting in discontinuities at the 8 borders . _ ringing effects _ along the edges occur due to the coarse quantization of the high - frequency components ( also known as gibbs phenomenon ) ._ blurring _ happens due to the loss of high - frequency components . to cope with the various compression artifacts ,different approaches have been proposed , some of which can only deal with certain types of artifacts .for instance , deblocking oriented approaches perform filtering along the block boundaries to reduce only blocking artifacts .liew and foi use thresholding by wavelet transform and shape - adaptive dct transform , respectively .these approaches are good at removing blocking and ringing artifacts , but tend to produce blurred output .jung propose restoration method based on sparse representation .they produce sharpened images but accompanied with noisy edges and unnatural smooth regions .to date , deep learning has shown impressive results on both high - level and low - level vision problems .in particular , the srcnn proposed by dong shows the great potential of an end - to - end dcn in image super - resolution .the study also points out that conventional sparse - coding - based image restoration model can be equally seen as a deep model .however , we find that the three - layer network is not well suited in restoring the compressed images , especially in dealing with blocking artifacts and handling smooth regions . as various artifacts are coupled together , features extracted by the first layer is noisy , causing undesirable noisy patterns in reconstruction . to eliminate the undesired artifacts , we improve the srcnn by embedding one or more `` feature enhancement '' layers after the first layer to clean the noisy features .experiments show that the improved model , namely `` artifacts reduction convolutional neural networks ( ar - cnn ) '' , is exceptionally effective in suppressing blocking artifacts while retaining edge patterns and sharp details ( see figure [ fig : introduction ] ) . 
however , we are met with training difficulties in training a deeper dcn .`` deeper is better '' is widely observed in high - level vision problems , but not in low - level vision tasks .specifically , `` deeper is not better '' has been pointed out in super - resolution , where training a five - layer network becomes a bottleneck .the difficulty of training is partially due to the sub - optimal initialization settings .the aforementioned difficulty motivates us to investigate a better way to train a deeper model for low - level vision problems .we find that this can be effectively solved by transferring the features learned in a shallow network to a deeper one and fine - tuning simultaneously .this strategy has also been proven successful in learning a deeper cnn for image classification . following a similar general intuitive idea , _ easy to hard _, we discover other interesting transfer settings in this low - level vision task : ( 1 ) we transfer the features learned in a high - quality compression model ( easier ) to a low - quality one ( harder ) , and find that it converges faster than random initialization .( 2 ) in the real use case , companies tend to apply different compression strategies ( including re - scaling ) according to their purposes ( figure [ fig : introductionb ] ) .we transfer the features learned in a standard compression model ( easier ) to a real use case ( harder ) , and find that it performs better than learning from scratch .the contributions of this study are three - fold : ( 1 ) we formulate a new deep convolutional network for efficient reduction of various compression artifacts .extensive experiments , including that on real use cases , demonstrate the effectiveness of our method over state - of - the - art methods both perceptually and quantitatively .( 2 ) we verify that reusing the features in shallow networks is helpful in learning a deeper model for compression artifact reduction . under the same intuitive idea _ easy to hard _ , we reveal a number of interesting and practical transfer settings .our study is the first attempt to show the effectiveness of feature transfer in a low - level vision problem .( 3 ) we show the effectiveness of ar - cnn in facilitating other low - level vision routines ( super - resolution and contrast enhancement ) , when they take jpeg images as input .existing algorithms can be classified into deblocking oriented and restoration oriented methods .the deblocking oriented methods focus on removing blocking and ringing artifacts . in the spatial domain ,different kinds of filters have been proposed to adaptively deal with blocking artifacts in specific regions ( , edge , texture , and smooth regions ) . 
in the frequency domain ,liew utilize wavelet transform and derive thresholds at different wavelet scales for denoising .the most successful deblocking oriented method is perhaps the pointwise shape - adaptive dct ( sa - dct ) , which is widely acknowledged as the state - of - the - art approach .however , as most deblocking oriented methods , sa - dct could not reproduce sharp edges , and tend to overly smooth texture regions .the restoration oriented methods regard the compression operation as distortion and propose restoration algorithms .they include projection on convex sets based method ( pocs ) , solving an map problem ( foe ) , sparse - coding - based method and the regression tree fields based method ( rtf ) , which is the new state - of - the art method .the rtf takes the results of sa - dct as bases and produces globally consistent image reconstructions with a regression tree field model .it could also be optimized for any differentiable loss functions ( ssim ) , but often at the cost of other evaluation metrics . super - resolution convolutional neural network ( srcnn ) is closely related to our work . in the study ,independent steps in the sparse - coding - based method are formulated as different convolutional layers and optimized in a unified network .it shows the potential of deep model in low - level vision problems like super - resolution .however , the model of compression is different from super - resolution in that it consists of different kinds of artifacts .designing a deep model for compression restoration requires a deep understanding into the different artifacts .we show that directly applying the srcnn architecture for compression restoration will result in undesired noisy patterns in the reconstructed image .transfer learning in deep neural networks becomes popular since the success of deep learning in image classification .the features learned from the imagenet show good generalization ability and become a powerful tool for several high - level vision problems , such as pascal voc image classification and object detection .yosinski have also tried to quantify the degree to which a particular layer is general or specific .overall , transfer learning has been systematically investigated in high - level vision problems , but not in low - level vision tasks . in this study , we explore several transfer settings on compression artifacts reduction and show the effectiveness of transfer learning in low - level vision problems .our proposed approach is based on the current successful low - level vision model srcnn . to have a better understanding of our work, we first give a brief overview of srcnn .then we explain the insights that lead to a deeper network and present our new model .subsequently , we explore three types of transfer learning strategies that help in training a deeper and better network .the srcnn aims at learning an end - to - end mapping , which takes the low - resolution image ( after interpolation ) as input and directly outputs the high - resolution one .the network contains three convolutional layers , each of which is responsible for a specific task .specifically , the first layer performs * patch extraction and representation * , which extracts overlapping patches from the input image and represents each patch as a high - dimensional vector. then the * non - linear mapping * layer maps each high - dimensional vector of the first layer to another high - dimensional vector , which is conceptually the representation of a high - resolution patch . 
at last , the * reconstruction * layer aggregates the patch - wise representations to generate the final output .the network can be expressed as : where and represent the filters and biases of the layer respectively , is the output feature maps and denotes the convolution operation .the contains filters of support , where is the spatial support of a filter , is the number of filters , and is the number of channels in the input image . note that there is no pooling or full - connected layers in srcnn , so the final output is of the same size as the input image .rectified linear unit ( relu , ) is applied on the filter responses .these three steps are analogous to the basic operations in the sparse - coding - based super - resolution methods , and this close relationship lays theoretical foundation for its successful application in super - resolution .details can be found in the paper .* insights . * in sparse - coding - based methods and srcnn , the first step feature extraction determines what should be emphasized and restored in the following stages .however , as various compression artifacts are coupled together , the extracted features are usually noisy and ambiguous for accurate mapping . in the experiments of reducing jpeg compression artifacts ( see section [ sec : exp_srcnn ] ) , we find that some quantization noises coupled with high frequency details are further enhanced , bringing unexpected noisy patterns around sharp edges .moreover , blocking artifacts in flat areas are misrecognized as normal edges , causing abrupt intensity changes in smooth regions .inspired by the feature enhancement step in super - resolution , we introduce a feature enhancement layer after the feature extraction layer in srcnn to form a new and deeper network ar - cnn .this layer maps the `` noisy '' features to a relatively `` cleaner '' feature space , which is equivalent to denoising the feature maps . ** the overview of the new network ar - cnn is shown in figure [ fig : framework ] .the three layers of srcnn remain unchanged in the new model .we also use the same annotations as in section [ sec : srcnn ] . to conduct feature enhancement, we extract new features from the feature maps of the first layer , and combine them to form another set of feature maps .this operation can also be formulated as a convolutional layer : where corresponds to filters with size . is an -dimensional bias vector , and the output consists of feature maps .overall , the ar - cnn consists of four layers , namely the feature extraction , feature enhancement , mapping and reconstruction layer .it is worth noticing that ar - cnn is not equal to a deeper srcnn that contains more than one non - linear mapping layers . 
rather than imposing more non - linearity in the mapping stage, ar - cnn improves the mapping accuracy by enhancing the extracted low - level features .experimental results of ar - cnn , srcnn and deeper srcnn will be shown in section [ sec : exp_srcnn ] given a set of ground truth images and their corresponding compressed images , we use mean squared error ( mse ) as the loss function : where , is the number of training samples .the loss is minimized using stochastic gradient descent with the standard backpropagation .we adopt a batch - mode learning method with a batch size of 128 .second row : the 5-layer ar - cnn targeted at _ dataa_- .third row : the ar - cnn targeted at _ dataa_- .fourth row : the ar - cnn targeted at _ twitter _ data .green boxes indicate the transferred features from the base network , and gray boxes represent random initialization .the ellipsoidal bars between weight vectors represent the activation functions.,title="fig : " ] -3.9 cm transfer learning in deep models provides an effective way of initialization .in fact , conventional initialization strategies ( randomly drawn from gaussian distributions with fixed standard deviations ) are found not suitable for training a very deep model , as reported in . to address this issue, he derive a robust initialization method for rectifier nonlinearities , simonyan propose to use the pre - trained features on a shallow network for initialization . in low - level vision problems ( super resolution ), it is observed that training a network beyond 4 layers would encounter the problem of convergence , even that a large number of training images ( imagenet ) are provided .we are also met with this difficulty during the training process of ar - cnn . to this end, we systematically investigate several transfer settings in training a low - level vision network following an intuitive idea of `` easy - hard transfer '' .specifically , we attempt to reuse the features learned in a relatively easier task to initialize a deeper or harder network .interestingly , the concept `` easy - hard transfer '' has already been pointed out in neuro - computation study , where the prior training on an easy discrimination can help learn a second harder one .formally , we define the base ( or source ) task as _ a _ and the target tasks as _ _ , .as shown in figure [ fig : easy - hard1 ] , the base network _basea _ is a four - layer ar - cnn trained on a large dataset _of which images are compressed using a standard compression scheme with the compression quality _ qa_. all layers in _ basea _ are randomly initialized from a gaussian distribution .we will transfer one or two layers of _ basea _ to different target tasks ( see figure [ fig : easy - hard1 ] ) .such transfers can be described as follows .* transfer shallow to deeper model . * as indicated by , a five - layer network is sensitive to the initialization parameters and learning rate .thus we transfer the first two layers of _ basea _ to a five - layer network _target. then we randomly initialize its remaining layers _ , and _. ] and train all layers toward the same dataset _dataa_. 
this is conceptually similar to that applied in image classification , but this approach has never been validated in low - level vision problems .* transfer high to low quality .* images of low compression quality contain more complex artifacts .here we use the features learned from high compression quality images as a starting point to help learn more complicated features in the dcn .specifically , the first layer of _target _ are copied from _basea _ and trained on images that are compressed with a lower compression quality _* transfer standard to real use case .* we then explore whether the features learned under a standard compression scheme can be generalized to other real use cases , which often contain more complex artifacts due to different levels of re - scaling and compression .we transfer the first layer of _ basea _ to the network _target _ , and train all layers on the new dataset . * discussion .* why the features learned from relatively easy tasks are helpful ?first , the features from a well - trained network can provide a good starting point .then the rest of a deeper model can be regarded as shallow one , which is easier to converge .second , features learned in different tasks always have a lot in common .for instance , figure [ fig : features ] shows the features learned under different jpeg compression qualities .obviously , filters of high quality are very similar to filters of low quality .this kind of features can be reused or improved during fine - tuning , making the convergence faster and more stable .furthermore , a deep network for a hard problem can be seen as an insufficiently biased learner with overly large hypothesis space to search , and therefore is prone to overfitting .these few transfer settings we investigate introduce good bias to enable the learner to acquire a concept with greater generality .experimental results in section [ sec : transfer ] validate the above analysis .-0.2 cm -0.16 cm [ fig : features ] -0.4 cmwe use the bsds500 database as our base training set .specifically , its disjoint training set ( 200 images ) and test set ( 200 images ) are all used for training , and its validation set ( 100 images ) is used for validation . as in other compression artifacts reduction methods ( rtf ) ,we apply the standard jpeg compression scheme , and use the jpeg quality settings ( mid quality ) and ( low quality ) in matlab jpeg encoder .we only focus on the restoration of the luminance channel ( in ycrcb space ) in this paper .the training image pairs are prepared as follows images in the training set are decomposed into sub - images . then the compressed samples are generated from the training samples with matlab jpeg encoder .the sub - images are extracted from the ground truth images with a stride of 10 .thus the 400 training images could provide 537,600 training samples . to avoid the border effects caused by convolution , ar - cnn produces a output given a input . hence , the loss ( eqn .( [ eqn : loss ] ) ) was computed by comparing against the center pixels of the ground truth sub - image . in the training phase ,we follow and use a smaller learning rate ( ) in the last layer and a comparably larger one ( ) in the remaining layers .we use the live1 dataset ( 29 images ) as test set to evaluate both the quantitative and qualitative performance .the live1 dataset contains images with diverse properties .it is widely used in image quality assessment as well as in super - resolution . 
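For readers who wish to experiment, a schematic re-implementation of the four-layer AR-CNN and one training step is sketched below in PyTorch (a modern stand-in; it is not the authors' original code). The 9-7-1-5 filter sizes follow the text, while the channel widths (64/32/16), the learning rates, the JPEG quality and the random toy image are illustrative assumptions rather than the paper's exact settings.

```python
# Schematic PyTorch sketch of the four-layer AR-CNN (9-7-1-5) and a single
# MSE training step with a smaller learning rate on the last layer.  Channel
# widths, learning rates, JPEG quality and the toy image are assumptions.
import io
import numpy as np
import torch
import torch.nn as nn
from PIL import Image

class ARCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 64, 9), nn.ReLU(),   # feature extraction
            nn.Conv2d(64, 32, 7), nn.ReLU(),  # feature enhancement
            nn.Conv2d(32, 16, 1), nn.ReLU(),  # non-linear mapping
            nn.Conv2d(16, 1, 5),              # reconstruction
        )

    def forward(self, x):                     # valid convolutions: 32 -> 14
        return self.net(x)

def jpeg_pair(clean, quality=10):
    """JPEG-compress a [0,1] grayscale array; return (compressed, clean)."""
    buf = io.BytesIO()
    Image.fromarray((clean * 255).astype(np.uint8)).save(
        buf, format="JPEG", quality=quality)
    comp = np.asarray(Image.open(buf), dtype=np.float32) / 255.0
    return comp, clean

model = ARCNN()
params = list(model.net.parameters())
# smaller learning rate on the last (reconstruction) layer, as in the text;
# the concrete values here are assumed, since the paper's are not quoted
opt = torch.optim.SGD(
    [{"params": params[:-2], "lr": 1e-4},
     {"params": params[-2:], "lr": 1e-5}], momentum=0.9)

x_np, y_np = jpeg_pair(np.random.rand(32, 32).astype(np.float32))
x = torch.from_numpy(x_np)[None, None]
y = torch.from_numpy(y_np)[None, None, 9:-9, 9:-9]  # crop to valid 14x14 output
loss = nn.functional.mse_loss(model(x), y)
opt.zero_grad()
loss.backward()
opt.step()
print(f"one SGD step done, mse = {loss.item():.5f}")
```

The same loss and cropping logic carries over unchanged when the first layers are initialised from a pre-trained base network instead of at random.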
to have a comprehensive qualitative evaluation, we apply the psnr , structural similarity ( ssim ) windows as in .] , and psnr - b for quality assessment .we want to emphasize the use of psnr - b .it is designed specifically to assess blocky and deblocked images , thus is more sensitive to blocking artifacts than the perceptual - aware ssim index .the network settings are , , , , , , and , denoted as ar - cnn ( 9 - 7 - 1 - 5 ) or simply ar - cnn .a specific network is trained for each jpeg quality .parameters are randomly initialized from a gaussian distribution with a standard deviation of 0.001 ..the average results of psnr ( db ) , ssim , psnr - b ( db ) on the live1 dataset . [ cols="^,^,^,^,^",options="header " , ] -0.15 cm in table [ tab : transfer ] , we denote a deeper ( five - layer ) ar - cnn as `` 9 - 7 - 3 - 1 - 5 '' , which contains another feature enhancement layer ( and ) . results in figure [ fig : transfer1 ] show that the transferred features from a four - layer network enable us to train a five - layer network successfully .note that directly training a five - layer network using conventional initialization ways is unreliable .specifically , we have exhaustively tried different groups of learning rates , but still have not observed convergence .furthermore , the `` transfer deeper '' converges faster and achieves better performance than using he s method , which is also very effective in training a deep model .we have also conducted comparative experiments with the structure `` 9 - 7 - 1 - 1 - 5 '' and observed the same trend . -0.15 cm-0.15 cm results are shown in figure [ fig : transfer2 ] .obviously , the two networks with transferred features converge faster than that training from scratch .for example , to reach an average psnr of 27.77db , the `` transfer 1 layer '' takes only backprops , which are roughly a half of that for `` base - q10 '' .moreover , the `` transfer 1 layer '' also outperforms the ` base - q10' by a slight margin throughout the training phase .one reason for this is that only initializing the first layer provides the network with more flexibility in adapting to a new dataset .this also indicates that a good starting point could help train a better network with higher convergence speed .online social media like _ twitter _ are popular platforms for message posting .however , _ twitter _ will compress the uploaded images on the server - side .for instance , a typical 8 mega - pixel ( mp ) image ( ) will result in a compressed and re - scaled version with a fixed resolution of .such re - scaling and compression will introduce very complex artifacts , making restoration difficult for existing deblocking algorithms ( sa - dct ) .however , ar - cnn can fit to the new data easily .further , we want to show that features learned under standard compression schemes could also facilitate training on a completely different dataset .we use 40 photos of resolution taken by mobile phones ( totally 335,209 training subimages ) and their _ twitter_-compressed version to train three networks with initialization settings listed in table [ tab : transfer ] . -0.2 cm [ fig : last - fig ] -1.3 cm from figure [ fig : transfer3 ], we observe that the `` transfer '' and `` transfer '' networks converge much faster than the `` base - twitter '' trained from scratch .specifically , the `` transfer '' takes backprops to achieve 25.1db , while the `` base - twitter '' uses backprops . 
despite the fast convergence , the transferred features also lead to higher psnr values compared with `` base - twitter '' . this observation suggests that features learned under standard compression schemes are also transferable to real use case problems . some restoration results are shown in the figure . we can see that both networks achieve satisfactory quality improvements over the compressed version . in real applications , many image processing routines are affected when they take jpeg images as input : blocking artifacts can be either super - resolved or enhanced , causing a significant performance decrease . in this section , we show the potential of ar - cnn in facilitating other low - level vision tasks , namely super - resolution and contrast enhancement . to illustrate this , we use srcnn for super - resolution and tone - curve adjustment for contrast enhancement , and show example results when the input is a jpeg image , an sa - dct deblocked image , and an ar - cnn restored image . from the results shown in figure [ fig : application ] , we can see that jpeg compression artifacts greatly distort the visual quality in super - resolution and contrast enhancement . nevertheless , with the help of ar - cnn , these effects are largely eliminated . moreover , ar - cnn achieves much better results than sa - dct , and the differences between them are more evident after these low - level vision processing steps . applying deep models to low - level vision problems requires a deep understanding of the problem itself . in this paper , we carefully study the compression process and propose a four - layer convolutional network , ar - cnn , which is extremely effective in dealing with various compression artifacts . we further systematically investigate several _ easy - to - hard _ transfer settings that can facilitate training a deeper or better network , and verify the effectiveness of transfer learning in low - level vision problems . as discussed for srcnn , we find that larger filter sizes also help improve the performance ; we leave this to future work .
lossy compression introduces complex compression artifacts , particularly blocking artifacts , ringing effects and blurring . existing algorithms either focus on removing blocking artifacts and produce blurred output , or restore sharpened images that are accompanied by ringing effects . inspired by the deep convolutional networks ( dcn ) for super - resolution , we formulate a compact and efficient network for seamless attenuation of different compression artifacts . we also demonstrate that a deeper model can be effectively trained with the features learned in a shallow network . following a similar `` easy to hard '' idea , we systematically investigate several practical transfer settings and show the effectiveness of transfer learning in low - level vision problems . our method shows performance superior to the state of the art both on benchmark datasets and in a real - world use case ( _ twitter _ ) . in addition , we show that our method can be applied as pre - processing to facilitate other low - level vision routines when they take compressed images as input .
in weakly chaotic hamiltonian systems , regions of regular ( periodic and quasi - periodic ) and chaotic motion typically coexist in the phase - space . in high dimensions , due to arnold diffusion , all initial conditions leading to chaotic motion are connected in the phase - space , building a single chaotic component . even if the volume of the regular regions becomes vanishingly small , as expected for high - dimensional nonlinear systems , the dynamics inside the chaotic component of the phase - space is strongly affected by such regions . this happens because trajectories approaching non - hyperbolic regions of regular motion remain a long time close to them before visiting again other parts of the chaotic component of the phase - space . this signature of weak mixing ( or weak chaos ) is known as stickiness . since chirikov - shepelyansky , the main quantification of stickiness in hamiltonian systems has been through the fat - tailed distribution of poincaré recurrence times ( see , _ e.g. _ , ) . an alternative approach is to use finite - time lyapunov exponents ( ftles ) , with recent applications using large deviation techniques and the cumulants of the ftle distribution . in area - preserving maps , stickiness generically occurs at the border of kolmogorov - arnold - moser ( kam ) islands , _ i.e. _ , at the bounding tori . the recurrence time is a measure of the time the trajectory spends around such structures before returning to the chaotic sea ( stickiness happens also for one - parameter families of parabolic orbits and even for isolated parabolic fixed points ) . near the non - hyperbolic structures , the local instability of chaotic trajectories is reduced , so that ftles can be used to characterize phase - space regions of interest . stickiness has been studied also in higher - dimensional systems , where long recurrence times can be due to different non - hyperbolic regions and tori of different dimensionalities . an improved characterization of stickiness events ( long recurrence times ) thus requires measuring the number of stable and unstable directions along the trajectory during such an event . froeschlé conjectured that lower - dimensional tori could not exist . in early studies in the 1980s , events of stickiness to lower - dimensional tori were reported in some systems but were not found in other examples . even if invariant tori do not exist , a small local lyapunov exponent could effectively act as a lower - dimensional trap . this is similar to almost invariant sets , which are regions in phase - space where typical trajectories stay ( on average ) for long periods of time . in this paper we introduce a methodology that uses time series of local lyapunov exponents to define regimes of ordered , semi - ordered and totally chaotic motion and to obtain an improved characterization of stickiness in high - dimensional hamiltonian systems . we illustrate this general procedure in a chain of coupled standard maps and confirm that stickiness events of different time lengths are dominated by trajectories with different ftles . a significant improvement in the characterization of sticky motion in high - dimensional systems is found . we also characterize the ftles for small couplings and compare them to the expected universal properties of fully chaotic systems . the method proposed here is general and can be used to investigate hamiltonian systems in any dimension . the paper is organized as follows . in sec . [ mod ] we describe the hamiltonian model we use to illustrate our method .
in sec . [ methods ] we introduce our method to compute and analyze time series of local lyapunov exponents . this methodology is then applied to the symplectic model of coupled standard maps in sec . [ nr ] . section [ cc ] summarizes the main results of the paper . we use a time - discrete hamiltonian system obtained as the composition of independent one - step iterations of symplectic maps and a symplectic coupling . as a representative example we choose for our numerical investigation the standard map , and for the coupling a symplectic all - to - all perturbation . the motivation for working with this system is that , in the limit of small coupling , it can be understood by looking at the dynamics of the uncoupled maps . this system was studied in refs . using the recurrence time distribution , which allows us to critically compare the benefits of our methodology . in all numerical simulations we used fixed control parameters for the individual maps . in this section we describe the method proposed in this work . to be illustrative , we present numerical simulations for the system defined in sec . [ mod ] . consider a chaotic trajectory in a closed hamiltonian system which , after reducing the phase - space dimension due to global invariants of motion , has a given number of degrees of freedom . for long times the trajectory ergodically fills the whole chaotic component of the phase - space , which is characterized by a spectrum of lyapunov exponents . ( we consider only the largest lyapunov exponents because , due to the symplectic character of hamiltonian systems , the other exponents are simply their negatives . ) the central ingredient of our analysis is the spectrum of ftles computed along a trajectory during a window of given size , from which we obtain a time - dependent spectrum . the window size has to be sufficiently small to guarantee a good resolution of the temporal variation of the exponents , but sufficiently large in order to have a reliable estimation ( see refs . ) . the probability density function of the ftles has been extensively studied . here we go beyond the study of the probability density function and explore temporal properties of the time series . figures [ lyapsloct](a ) and ( b ) show the time series of the exponents . the sharp transitions towards zero motivate the classification in regimes of motion as ( a ) _ ordered _ , ( b ) _ semi - ordered or semi - chaotic _ , and ( c ) _ chaotic _ . for a system with a given number of degrees of freedom we will say that the trajectory is in a regime of a given type if it has the corresponding number of local lyapunov exponents below small thresholds ; the extreme cases are the ordered and chaotic regimes , respectively . whenever there is no ambiguity , we will drop the superscript to have a simpler notation . practical implementations of the general method described above require the choice of a few parameters and conventions . first of all , the window size and the threshold directly affect the classification in regimes . they can be thought of as the phase - space resolution of the analysis and should be chosen so that they provide maximal information about the regions of interest . unless stated otherwise , we use a fixed window size and a threshold proportional to the time - averaged largest exponent ( even though the _ classification _ in regimes is strongly threshold - dependent , our _ conclusions _ are not sensitively affected by variations around the chosen values ) .
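the classification into regimes can be implemented directly on the matrix of ftle time series by counting , in each window , how many exponents fall below the threshold . a minimal sketch , assuming `ftle` has shape ( number of windows , number of retained exponents ) and a scalar threshold `delta` :

```python
import numpy as np

def classify_regimes(ftle, delta):
    # regime index k = number of local exponents below the threshold;
    # k = 0 is fully chaotic, k = ftle.shape[1] is ordered motion
    return np.sum(ftle < delta, axis=1)
```

a threshold proportional to the mean of the largest exponent , as adopted above , is one natural choice , e.g. `delta = 0.1 * ftle[:, 0].mean()` as an assumed convention .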
another important choice is the method for the computation of the ftles . we use benettin s algorithm , which includes the gram - schmidt re - orthonormalization procedure . the decreasing order of the exponents is valid on average , but inversions of the order may happen at some times , and we have chosen to impose the decreasing order at all times . finally , it is possible to decide how to sample the time series . while the ftles are defined for all times , there is a trivial correlation between the values of ftles inside a window because they are computed using the same points of the trajectory . in order to avoid this trivial correlation , the series can be computed using non - overlapping windows , _ i.e. _ , plotting only every window length of time steps ( a choice we adopt in our simulations ) . in order to understand the properties of the time series it is useful to consider the phase - space regions associated to each regime . we denote the phase - space volume ( liouville measure ) of a region in the bounded phase - space accordingly . the most important distinction is between the regions of regular and chaotic motion . in principle , the regular region can be subdivided according to the dimensionality of the tori . however , according to froeschlé s conjecture , only tori of the maximal dimension have positive measure . the chaotic region is expected to form a single ergodic component because lower - dimensional tori do not partition the high - dimensional phase - space into different regions , and therefore any chaotic trajectory eventually explores ( through arnold diffusion ) the whole chaotic component . our interest is not to test the froeschlé conjecture or arnold diffusion , but to show the insights about the chaotic dynamics we can obtain using the time series together with the definition of the regimes . one application is to use the regimes to split the chaotic component of the phase - space into meaningful components . this is done by considering the set of points in the phase - space leading to each regime , collected along a trajectory of given total length . figure [ ps ] shows numerical estimates of the phase - space regions obtained for each regime in the chain of coupled maps defined in sec . [ mod ] . the ordered regime is associated with a region localized close to the border of the kam island of the uncoupled case ( compare to fig . ) . points which belong to this regime are closer to the center of the torus of the uncoupled case . this suggests that when trajectories are inside the region related to this regime , they are more likely to penetrate close to the torus of the uncoupled case . in the chaotic sea both remaining regimes are visible . these results are naturally understood in the perturbative limit ( small coupling ) . the ordered regime corresponds to small exponents for every map , which is expected when the trajectory is stuck close to the tori built as the product of the tori of the uncoupled maps . in contrast , a semi - ordered regime implies that at least one ftle is not small , and therefore the trajectory projected on a single map can be either in the chaotic or in the regular region ( _ e.g. _ , the same regime can be obtained from different combinations of the individual maps ) . altogether , these observations confirm that our method allows for a meaningful division of the chaotic component of the phase - space and can thus be used to identify regions of interesting dynamics .
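the ftle computation underlying all of the above follows benettin s algorithm : propagate a set of tangent vectors with the jacobian of the map and re - orthonormalize them every step ( here via a qr decomposition , which is equivalent to gram - schmidt ) , accumulating the logarithms of the diagonal of r over the window . a self - contained sketch for a generic map with known jacobian :

```python
import numpy as np

def ftle_window(x0, step, jacobian, nu, n_exp):
    # finite-time lyapunov spectrum over a window of nu iterations,
    # following benettin's algorithm with qr re-orthonormalization
    dim = x0.size
    x = x0.copy()
    q = np.eye(dim)[:, :n_exp]          # initial orthonormal tangent vectors
    logs = np.zeros(n_exp)
    for _ in range(nu):
        q = jacobian(x) @ q             # evolve the tangent vectors
        q, r = np.linalg.qr(q)
        logs += np.log(np.abs(np.diag(r)))
        x = step(x)                     # evolve the trajectory itself
    return np.sort(logs / nu)[::-1]     # decreasing order, as imposed in the text
```

for the chain of sec . [ mod ] , `step` and `jacobian` would assemble the full tangent map of the coupled standard maps ; this interface is an assumption of the sketch , not the original implementation .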
in the case that partial barriers exist inside the chaotic component , such as in area - preserving maps with mixed phase space , we expect the regions obtained through our method to depend weakly on the threshold and to coincide with those obtained from the partial barriers . in this section we apply the lyapunov time - series methodology described in sec . [ methods ] to the system defined in sec . [ mod ] . we compute and interpret four basic properties of the method : the total time spent in each regime ( residence time ) , the transitions between regimes , the consecutive time in each regime , and the scaling of lyapunov exponents . the first and most basic quantity we measure is the probability of finding the trajectory in each regime , defined as the fraction of the total time spent in that regime . [ figure : residence probabilities as a function of the coupling strength for different numbers of maps ; in ( a ) the values obtained with the reference threshold are compared ( gray curves ) with results for other thresholds ; estimations for each coupling are based on a single long trajectory . ] figure [ commot ] shows these probabilities as a function of the coupling strength . we now explain the behavior with the coupling by discussing its effect on the phase - space regions associated to each regime , as defined in sec . [ ph ] . by ergodicity , each probability corresponds to the ( normalized ) volume of the region related to the corresponding regime in the phase - space . the results of fig . [ commot ] show that the chaotic region is the largest region in phase - space for any coupling , while the relative volumes of the remaining regions depend on the coupling ; for larger couplings we see oscillations with a local maximum . we now interpret the dependence observed in fig . [ commot ] by arguing how the different terms in eq . ( [ eq.smu ] ) vary with the coupling . we denote the measure of tori for each map in the uncoupled case accordingly ( which we assume to be approximately equal to the measure of the kam islands ) . for small coupling we expect most tori of the uncoupled maps to survive and therefore : * the ordered - regime measure is the product of the island measures , which in the simple case of equal measures reduces to a power . * the border region corresponds to a small volume around the islands . * for the semi - ordered regimes , some maps are in their corresponding kam islands ( with the island probability ) and the remaining maps are in the chaotic area ( with the complementary probability ) . for example , for three maps and a single trapped map we have that $$\mu(s_1) \propto \mu(u_1)[1-\mu(u_2)][1-\mu(u_3)] + \mu(u_2)[1-\mu(u_1)][1-\mu(u_3)] + \mu(u_3)[1-\mu(u_1)][1-\mu(u_2)] .$$ in general this leads to a sum over all choices of trapped maps , which in the simple case of equal measures reduces to a binomial weight . we now consider the effect of growing coupling . in the spirit of the kam theorem , the tori of the coupled maps ( generated as the product of the maps ) are expected to be robust to small couplings , which act as a perturbation . this explains why the curves in fig . [ commot ] are essentially flat for small coupling . increasing the coupling further , the nonlinearity of the system increases and therefore the measure of the tori is expected to decrease . this reduction of the tori leads to an increase in the denominator of eq . ( [ eq.smu ] ) and explains the observed tendency of reduction of the probabilities for all regions related to stickiness . indeed , for large couplings no signature of tori or stickiness was detected numerically . the nontrivial dependencies in fig . [ commot ] appear at couplings close to the values for which the last tori disappear ( see also the corresponding figure in ref . ) .
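in the simple case of equal island measures , the perturbative estimate above reduces to a binomial weight that can be tabulated and compared with the measured residence probabilities . a sketch , with the normalization in eq . ( [ eq.smu ] ) left implicit :

```python
from math import comb

def regime_measure(mu, n_maps):
    # perturbative (small-coupling) estimate: regime r occurs when exactly r
    # of the n uncoupled maps sit inside their kam island, each with measure mu
    return [comb(n_maps, r) * mu**r * (1 - mu)**(n_maps - r)
            for r in range(n_maps + 1)]
```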
in this regime close to the disappearance of the last tori , the volume of the tori is already negligible but stickiness is still effective ( notice that even zero - measure non - hyperbolic sets can lead to stickiness ) . the denominator in eq . ( [ eq.smu ] ) is therefore not significantly affected by further increases of the coupling , and therefore does not drive the reduction of the probabilities . small variations of a control parameter of the system ( in this case the coupling ) are known to lead to a sensitive creation and destruction of tori , with non - trivial dependencies of the stickiness . we can thus expect that , close to the disappearance of the tori , the small volume of the stickiness regions fluctuates with the coupling , leading even to an increase with it . it is interesting to note that this non - trivial increase appears in fig . [ commot ] precisely when the curves show a sharp decreasing fluctuation . this suggests an exchange of measure between different sticky regions associated to the regimes , without interference of the much larger fully chaotic component . [ figure : ( a ) transition probability and ( b - d ) conditional probability , defined in eq . ( [ nullmodel ] ) , of moving to a regime given the current regime , as a function of the coupling strength ; estimations are based on a single long trajectory . ] we now focus on the transitions between regimes . the simplest analysis corresponds to the two - time ( joint ) probability , computed as the fraction of the total trajectory time spent in one regime at one time and in another regime one window later . the probabilities considered in the previous section can be obtained from it by summation . figure [ prob1](a ) shows the dependence of the joint probability on the coupling for our model . we notice that it is symmetric under exchange of the two regimes ; this is expected considering that the system is ergodic , volume preserving , and time - reversible . the dependence on the coupling follows a pattern similar to that observed in fig . [ commot ] . more information is obtained from the conditional probability , which quantifies the probability that trajectories in one regime will move to another . the results shown in fig . [ prob1](b - d ) show in all cases that ( i ) persistence in the same regime is dominant and ( ii ) the most likely transitions occur between neighboring regimes . the only ( slight ) deviations from this picture happen for large values of the coupling , close to the disappearance of the kam islands . altogether , these results confirm that in the perturbative regime stickiness happens by approaching the regions of regular motion of the different maps one after the other ( as opposed to a direct approach from the fully chaotic to the fully ordered regime ) . the results of the previous section confirm that residence in the same regime is the dominant behavior . this motivates us to study the time spent consecutively in a regime ( _ i.e. _ , the time between two consecutive transitions between different regimes , the first into and the second out of the regime ) . in a trajectory of given length we collect a series of such times . we are mainly interested in the probability distribution ( or , equivalently , its cumulative ) for the different regimes in the limit of long trajectories . these distributions should be compared to the distribution of recurrence times , defined as the time between two successive entries to a pre - defined recurrence region ( usually taken in the fully chaotic component of the phase - space ) . events in the tails of the recurrence - time distribution are associated with times for which the trajectory is stuck to the non - hyperbolic components of the phase - space , and this is the traditional method to quantify stickiness in hamiltonian systems .
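both the transition statistics and the consecutive - time statistics discussed above reduce to simple counting on the symbolic regime series ; a minimal sketch :

```python
import numpy as np

def transition_probabilities(regimes, n_regimes):
    # counts[a, b] = number of times regime a is immediately followed by b
    counts = np.zeros((n_regimes, n_regimes))
    for a, b in zip(regimes[:-1], regimes[1:]):
        counts[a, b] += 1
    joint = counts / counts.sum()                              # p(a, b)
    conditional = counts / counts.sum(axis=1, keepdims=True)   # p(b | a); nan rows for unvisited regimes
    return joint, conditional

def consecutive_times(regimes, k):
    # run lengths: time spent consecutively in regime k before leaving it
    lengths, run = [], 0
    for r in regimes:
        if r == k:
            run += 1
        elif run:
            lengths.append(run)
            run = 0
    if run:
        lengths.append(run)
    return np.array(lengths)

def cumulative(lengths):
    # empirical p(t >= t_i), to be plotted on doubly logarithmic axes
    t = np.sort(lengths)
    return t, 1.0 - np.arange(t.size) / t.size
```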
[ figure : the distribution of consecutive times is shown for each regime in panels ( a ) and ( b ) , together with the cumulative distribution of recurrence times to a region in the chaotic component of the phase - space in panels ( c ) and ( d ) ; for comparison , panels ( c ) and ( d ) also show the suitably normalized combinations of the curves of panels ( a ) and ( b ) ; results were obtained using the maps of sec . [ mod ] . ] the numerical simulations in fig . [ pcum ] confirm that the distribution obtained by summing over the ordered and semi - ordered regimes is equivalent to the cumulative distribution obtained using recurrences . this is in agreement with the association of long consecutive times in regimes of ordered and semi - ordered motion with long recurrence times . looking at the individual distributions provides valuable additional information on the sticky motion . for semi - ordered motion we observe an exponential tail after an intermediate decay ; this behavior confirms the interpretation given in ref . . more interestingly , the ordered case shows an asymptotic algebraic decay , which characterizes stickiness . while the scaling is compatible with the results obtained using recurrence times , the distribution obtained with our methodology provides a better characterization of the scaling ( over several orders of magnitude ) and allows for an independent analysis of the different regimes . these properties are essential when dealing with high - dimensional systems ( which may contain different pre - asymptotic regimes ) and for an accurate estimation of the stickiness exponent . finally , panel ( a ) in fig . [ pcum ] shows that all decays discussed above remain ( qualitatively ) the same for different choices of the threshold , with the curve for the ordered regime showing the largest sensitivity ( as in fig . [ commot](a ) ) . so far we have focused on the temporal properties of the time series of ftles and how they change with the coupling strength . we now consider how the values of the lyapunov exponents respond to an external perturbation , which in our case is the coupling to the other maps . it is known that the largest exponent is extremely sensitive to perturbations . more specifically , daido s relation states that for small couplings to another chaotic system a universal logarithmic singularity is observed , involving the unperturbed lyapunov exponents and a constant prefactor . this relation is valid for totally chaotic systems and for a mismatch between the lyapunov exponents of the uncoupled systems that is small compared to their fluctuations . here we investigate the relation as a function of the coupling , for distinct window sizes and different regimes .
to this end we compute the temporal averages of the ftles restricted to times in a given regime . [ figure : difference between the exponents of the coupled and uncoupled maps as a function of the coupling strength , for different time windows ; black dashed lines indicate the linear behaviour expected from eq . ( [ daido ] ) ; panels in the left column were computed for the full time series , while in the right column only ftles in the chaotic regime were used , with different colors corresponding to different threshold choices . ] our numerical simulations reported in fig . [ scal ] show that small window sizes lead to a situation in which the scaling changes qualitatively at a finite value of the coupling ( figs . [ scal](b ) and ( d ) ) , while larger window sizes lead to situations in which the same behaviour holds for any coupling . these results depend crucially on our choice to impose the order of the exponents at all times , as discussed in sec . [ regions ] . this makes the average over the trajectory time window - dependent and different from the average over the lyapunov time . applying the analysis without the division in regimes leads to strongly fluctuating results ( figs . [ scal](a , c , e ) ) . much smoother results ( figs . [ scal](b , d , f ) ) are obtained when we apply our method and compute the averages only for times in the fully chaotic regime . looking at these smoother results we observe that the difference in the lyapunov exponents scales as expected , but that even here the sticky motion leads to a deviation from daido s relation ( [ daido ] ) ( the curves are shifted vertically ) . in summary , we have proposed a method to characterize the dynamics of hamiltonian systems with mixed phase - space based on time series of finite - time lyapunov exponents . using this method it is possible to define and study with high accuracy the time evolution of regimes of ordered , semi - ordered , and totally chaotic motion . this allows for an individualized characterization of the different stickiness mechanisms , improving alternative methods based on the statistics of recurrence times or on the distribution of finite - time lyapunov exponents . we applied our method to a chain of coupled standard maps and showed how the frequency of the different regimes and the transition probabilities between them are related to the volumes of the corresponding phase - space regions . using the consecutive time in distinct regimes we have reproduced previous results obtained using recurrence times and showed that our method allows for a significant improvement in the characterization of the sticky motion
( _ e.g. _ , in the determination of the scaling exponents ) . this indicates that our method can be used to characterize stickiness in general high - dimensional systems and is particularly suited for cases in which different regions of sticky motion coexist . we have also shown that the dependence of the largest lyapunov exponents on the coupling strength , after conveniently applying our procedure , tends to follow only qualitatively the universal properties of fully chaotic systems . results obtained in a simple chain of standard maps confirm that our methodology can be applied to high - dimensional systems and problems of current interest , such as controlling fermi acceleration , galactic models , and plasma physics . another example of application is to associate each regime with effective hamiltonian functions , a procedure used to reproduce the complicated dynamics of kicked electrons or the high harmonic generation in laser - assisted collisions . cm and rms thank cnpq , capes and fapesc , and mwb thanks cnpq for financial support and mpipks in the framework of the advanced study group on optical rare events . cm also thanks eduardo g. altmann for the financial support and hospitality at the mpipks . ega thanks d. paz for suggesting the analysis performed in sec .
we investigate chaos in mixed - phase - space hamiltonian systems using time series of the finite - time lyapunov exponents . the methodology we propose uses the number of lyapunov exponents close to zero to define regimes of ordered ( sticky ) , semi - ordered ( or semi - chaotic ) , and strongly chaotic motion . the dynamics is then investigated by looking at the consecutive time spent in each regime , the transitions between different regimes , and the regions in the phase - space associated with them . applying our methodology to a chain of coupled standard maps we obtain : ( i ) that it allows for an improved numerical characterization of stickiness in high - dimensional hamiltonian systems , when compared to previous analyses based on the distribution of recurrence times ; ( ii ) that the transition probabilities between different regimes are determined by the phase - space volumes associated with the corresponding regions ; ( iii ) the dependence of the lyapunov exponents on the coupling strength .
the instability of stably stratified shear flow is one of the main problems in fluid dynamics , astrophysical fluid dynamics , oceanography , meteorology , etc . although both pure shear instability without stratification and static stratification instability without shear have been well studied , the instability of stably stratified shear flow is still a mystery . on the one hand , after a long history of investigation , shear instability is known as the instability of a vorticity maximum . it is recognized that resonant waves with a special velocity interact with the concentrated vortex to produce the shear instability ; other velocity profiles are stable in homogeneous fluid without stratification . on the other hand , it was proved that buoyancy is a stabilizing effect in the static case . thus , it is naturally believed that stable stratification favors stability , which finally results in the well - known miles - howard theorem . according to this theorem , the flow is stable to perturbations when the richardson number ( the ratio of stratification to shear ) exceeds a critical value everywhere . in three - dimensional stratified flow there is a corresponding criterion ; however , the stabilization effect of buoyancy is an illusion . in a less - known paper , it was shown with several special examples that stratification effects can be destabilizing due to the vorticity generated by non - homogeneity , and that the instability depends on the details of the velocity and density profiles . one such instability is the holmboe instability . three main points were then stated from the examples without any further proof . ( a ) stratification may shift the band of unstable wave numbers so that some which are stable in the homogeneous case become unstable . ( b ) conditions ensuring stability in homogeneous flow ( such as the absence of a vorticity maximum ) do not necessarily carry over to the stratified case , so that static stability can destabilize . ( c ) new physical mechanisms brought in by the stratification may lead to instability in the form of a pair of growing and propagating waves where in the homogeneous case one had a stationary wave . recalling these points , there is a big gap between rayleigh s criterion and the miles - howard criterion ; it was even written that `` miles criterion for stability is not the natural generalization of rayleigh s well - known sufficient condition for the stability of a homogeneous fluid in shear flow '' . the mystery of the instability thus remains . following this framework , the present study is an attempt to clear the confusion in the theories . we find that the flow instability is due to the competition of the kinetic energy with the potential energy , which is dominated by the total froude number , and that an unexpected assumption in the miles - howard theorem leads to the contradiction with other theories . the miles - howard criterion is one of the most important theorems for stably stratified shear flow .
according to this theorem , the flow is stable to perturbations when the richardson number exceeds a critical value . this criterion is widely used in fluid dynamics , astrophysical fluid dynamics , oceanography , meteorology , etc . specifically , it is the most important criterion for turbulence genesis in ocean modelling . using arnold s method , a corresponding criterion for three - dimensional stratified flow can be obtained . consequently , it is widely believed that stable stratification favors stability , and that all perturbations should decay when the criterion is met . however , the experiments indicate otherwise : in fact , the flow might be unstable at very large richardson numbers . thus , to dispel the contradiction between the experiments and the miles - howard criterion , different explanations were postulated in the literature : `` this interval separates two different turbulent regimes : strong mixing and weak mixing rather than the turbulent and the laminar regimes , as the classical concept states '' , or , `` the richardson number criteria is not , in general , a necessary and sufficient condition '' . in fact , the miles - howard criterion is also in contradiction with other criteria for neutrally stratified ( homogeneous ) fluid , e.g. , the rayleigh - kuo criterion , fjørtoft s criterion , arnold s criteria and sun s criterion . this contradiction was also noted a long time ago , with the remark that `` miles criterion for stability is not the natural generalization of rayleigh s well - known sufficient condition for the stability of a homogeneous fluid in shear flow '' , and an attempt at a generalization was made . following this framework , the present study is an attempt to clear the confusion in the theories and to build a bridge between the laboratory experiments and the theories . the taylor - goldstein equation for stratified inviscid flow is employed , which is the vorticity equation of the disturbance . we consider a flow with a velocity profile and a density field , with the corresponding stability parameter ( the brunt - väisälä frequency ) , where the single prime denotes a cross - stream derivative and a positive value denotes stable stratification . the vorticity is conserved along pathlines . the streamfunction perturbation satisfies the taylor - goldstein equation $$\phi '' + \left [ \frac{n^2}{(u - c)^2 } - \frac{u''}{u - c } - k^2 \right ] \phi = 0 , \label{eq : stable_stratifiedflow_taylorgoldsteineq}$$ where is the real wavenumber , is the complex phase speed , and the double prime denotes the second derivative . for a real wavenumber , the problem is called the temporal stability problem . the real part of the complex phase speed is the wave phase speed , and the imaginary part gives the growth rate of the wave . this equation is subject to homogeneous boundary conditions . it is obvious that the criterion for stability is a vanishing imaginary part , given that the complex conjugate quantities are also physical solutions of eq . ( [ eq : stable_stratifiedflow_taylorgoldsteineq ] ) and eq . ( [ eq : stable_parallelflow_rayleighbc ] ) .
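the temporal eigenvalue problem can be checked numerically : multiplying the taylor - goldstein equation by the squared velocity deficit turns it into a quadratic eigenvalue problem in the phase speed , which can be linearized and solved with a dense eigensolver . a minimal finite - difference sketch with homogeneous dirichlet conditions ; the profiles are illustrative assumptions , not those analysed in the text :

```python
import numpy as np
from scipy.linalg import eig

def taylor_goldstein_spectrum(u, upp, n2, k, dy):
    # interior-point grid; l = d^2/dy^2 - k^2 with phi = 0 at both walls
    m = len(u)
    lap = (np.diag(np.full(m - 1, 1.0), -1) - 2.0 * np.eye(m)
           + np.diag(np.full(m - 1, 1.0), 1)) / dy**2
    l = lap - k**2 * np.eye(m)
    # (u-c)^2 (phi'' - k^2 phi) - u'' (u-c) phi + n^2 phi = 0
    # => (a0 + c a1 + c^2 a2) phi = 0
    a0 = np.diag(u**2) @ l - np.diag(upp * u) + np.diag(n2)
    a1 = -2.0 * np.diag(u) @ l + np.diag(upp)
    a2 = l
    # companion linearization of the quadratic eigenvalue problem
    zero, eye = np.zeros((m, m)), np.eye(m)
    lhs = np.block([[zero, eye], [-a0, -a1]])
    rhs = np.block([[eye, zero], [zero, a2]])
    c, _ = eig(lhs, rhs)
    return c  # complex phase speeds; any positive imaginary part signals instability

# illustrative shear layer: u = tanh(y), weak constant stable stratification
y = np.linspace(-5.0, 5.0, 201)[1:-1]
u = np.tanh(y)
upp = -2.0 * np.tanh(y) / np.cosh(y)**2
c = taylor_goldstein_spectrum(u, upp, 0.05 * np.ones_like(y), k=0.5,
                              dy=y[1] - y[0])
print(np.max(c.imag))
```

note that discrete approximations of the continuous spectrum appear as eigenvalues near the range of the velocity profile ; only well - converged modes with positive imaginary part should be read as instabilities .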
multiplying eq . ( [ eq : stable_stratifiedflow_taylorgoldsteineq ] ) by the complex conjugate of the streamfunction and integrating over the domain , we get the following equations for the real and imaginary parts : $$\int_{a}^{b } \left [ |\phi '|^2 + k^2 |\phi|^2 + \frac{u '' ( u - c_r)}{|u - c|^2 } |\phi|^2 \right ] dy = \int_{a}^{b } \frac{(u - c_r)^2 - c_i^2}{|u - c|^4 } n^2 |\phi|^2 \, dy \label{eq : stable_stratifiedflow_taylorgoldsteineq_int_rea}$$ and $$c_i \int_{a}^{b } \left [ \frac{u''}{|u - c|^2 } - \frac{2 ( u - c_r ) n^2}{|u - c|^4 } \right ] |\phi|^2 \, dy = 0 . \label{eq : stable_stratifiedflow_taylorgoldsteineq_int_img}$$ in the unstratified case , eq . ( [ eq : stable_stratifiedflow_taylorgoldsteineq_int_img ] ) was used to prove that a necessary condition for inviscid instability is the existence of an inflection point . using eq . ( [ eq : stable_stratifiedflow_taylorgoldsteineq_int_img ] ) , it was also pointed out that a necessary condition for instability is that the bracketed expression should change sign , but such a condition is of little use as there are two unknown parameters . as a first step in our investigation , we need to estimate the ratio of the gradient integral to the amplitude integral . this is known as poincaré s problem , whose eigenvalue is positive definite for any domain ; the smallest eigenvalue can be estimated from the domain size . in departure from previous investigations , we shall investigate the stability of the flow by using eq . ( [ eq : stable_stratifiedflow_taylorgoldsteineq_int_rea ] ) and eq . ( [ eq : stable_parallelflow_poincare ] ) . as the ratio is estimated with the boundary conditions , the criterion is global . we will also adopt a different methodology : if the velocity profile is unstable , then the equations under the hypothesis of neutral stability should result in contradictions in some cases . following this , a sufficient condition for instability can be obtained . firstly , substituting eq . ( [ eq : stable_parallelflow_poincare ] ) into eq . ( [ eq : stable_stratifiedflow_taylorgoldsteineq_int_rea ] ) , we obtain an inequality controlled by the function $$h(y ) = ( \mu + k^2 ) ( u - c_r)^2 + u '' ( u - c_r ) - n^2 , \label{eq : stable_stratifiedflow_hygy}$$ where denotes the smallest poincaré eigenvalue , and the accompanying weight is non - negative . then instability follows if throughout the domain for a proper choice of the phase speed and wavenumber . obviously , is a monotone function of the wavenumber : the smaller the wavenumber is , the smaller is . when the wavenumber tends to zero , attains its smallest value , eq . ( [ eq : stable_stratifiedflow_hyfroude ] ) . if we define shear , parallel and rossby froude numbers , where the shear froude number is a dimensionless ratio of kinetic energy to potential energy and , since the gradient of planetary vorticity plays the same role in the rossby wave , the rossby froude number is a dimensionless ratio of rossby wave kinetic energy to potential energy , then the criterion can be expressed through the total froude number . thus a general theorem for instability can be obtained from the above notation . theorem 1 : if the velocity and the stable stratification satisfy the froude - number condition throughout the domain for a certain wavenumber , the flow is unstable . physically , a total froude number below the critical value implies that the total kinetic energy is smaller than the total potential energy ; the potential energy might then transfer to the kinetic energy after being disturbed , and the flow becomes unstable . on the other hand , when the potential energy is smaller than the kinetic energy , the flow is stable because no potential energy could transfer to the kinetic energy . mathematically , we need to derive some useful formulas for applications , since the phase speed is still unknown in the above equations . to this aim , we rewrite eq . ( [ eq : stable_stratifiedflow_hyfroude ] ) and assume that the minimum and maximum values of the velocity within the domain are known . it follows from eq . ( [ eq : stable_stratifiedflow_hy ] ) that the smallest value of is attained there . thus a second general theorem for instability can be obtained . theorem 2 : if the velocity and the stable stratification satisfy the corresponding condition throughout the domain for a certain wavenumber , there must be a growing mode and the flow is unstable .
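the sufficient condition of theorem 1 can be scanned numerically using the reconstruction of given above . a sketch under stated assumptions : the poincaré constant is taken as the dirichlet value for the interval , and the admissible range of trial phase speeds is supplied by the user :

```python
import numpy as np

def h(u, upp, n2, mu, k, cr):
    # h(y) = (mu + k^2)(u - cr)^2 + u''(u - cr) - n^2, cf. the reconstruction above
    return (mu + k**2) * (u - cr)**2 + upp * (u - cr) - n2

def sufficient_instability(u, upp, n2, k, cr_grid, a, b):
    # mu = (pi / (b - a))**2 is the smallest dirichlet eigenvalue of -phi'' = mu phi,
    # an assumed realization of the poincare estimate used in the text
    mu = (np.pi / (b - a)) ** 2
    # theorem 1: instability follows if h < 0 throughout the domain
    # for some admissible trial phase speed cr
    return any(np.all(h(u, upp, n2, mu, k, cr) < 0.0) for cr in cr_grid)
```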
it follows from eq . ( [ eq : stable_stratifiedflow_hy ] ) that instability requires the froude number to be less than a critical value : the stronger the stratification is , the smaller the froude number is . so stable stratification provides a destabilization mechanism in shear flow . this conclusion is new , as former theoretical studies always took static stable stratification as a stabilizing effect for shear flows . according to eq . ( [ eq : stable_stratifiedflow_hy ] ) , the bigger the kinetic - energy contribution is , the more stable the flow is ; it is obvious that this contribution is bigger for one velocity profile than for the other , so the flow is more stable with the former profile . although theorem 1 gives a sufficient condition for instability , the complicated expression makes it difficult to apply . in the following sections we will derive simple and useful criteria . [ fig : hy : sketches of for the two intersection cases . ] the simplest flow is the inviscid shear flow without stratification , for which we seek a sufficient condition for instability . to find such a condition , we rewrite in eq . ( [ eq : stable_stratifiedflow_hy ] ) in factored form . then there may be three cases , two of which have intersecting zero at some point ( fig . [ fig : hy ] ) . the first case is that the function remains positive somewhere , as shown in fig . [ fig : hy]a . the second case is that negativity holds in the whole domain , as shown in fig . [ fig : hy]b ; in this case , the flow might be unstable . the sufficient condition for instability can be found from eq . ( [ eq : rayleigh - hy ] ) as shown in fig . [ fig : hy]b . if the condition is always satisfied , negativity holds within the domain . corollary 1.1 : if the velocity profile satisfies the corresponding condition within the domain , the flow is unstable . since a sufficient condition for stability was obtained previously , the above condition for instability is nearly marginal . the last case is that the function changes sign within the domain , as shown in fig . [ fig : tg - hy]a . [ fig : tg - hy : sketches of in cases 3 and 4 . ] if the static stratification is stable , then the buoyancy contribution is positive . the flow is unstable if is negative definite within the domain for some admissible phase speed . we rewrite as the product of two factors , $$h(y ) = \mu \left [ u + \frac{1}{2\mu } \left ( u '' - \sqrt{u''^2 + 4 \mu n^2 } \right ) - c_r \right ] \left [ u + \frac{1}{2\mu } \left ( u '' + \sqrt{u''^2 + 4 \mu n^2 } \right ) - c_r \right ] , \label{eq : taylorgoldsteineq - hy}$$ where here absorbs the wavenumber contribution . the value of can be classified into 4 cases . the first and the second ones are similar to those discussed above ; for such cases , we have a sufficient condition for instability , which can be derived directly from eq . ( [ eq : stable_stratifiedflow_sun_int_rea ] ) , similar to corollary 1.1 . the first sufficient condition for instability is due to the shear instability , and the unstable criterion is eq . ( [ eq : taylorgoldsteineq - sic1 ] ) . corollary 1.2 : if the velocity profile satisfies the corresponding condition within the domain , the flow is unstable . the third case is also similar to the case in fig . [ fig : tg - hy]a , and the flow is stable . the last one is the unstable flow shown in fig . [ fig : tg - hy]b , where both factors keep their signs throughout . in this last case , the maximum of the first factor must be smaller than the minimum of the second factor , so that a proper phase speed between them can be used for the unstable waves . although the exact criterion cannot be obtained , as the required maximum and minimum cannot be given explicitly , the approach is very straightforward . nevertheless , we can also obtain an approximate criterion for the fourth case : it follows from eq . ( [ eq : stable_stratifiedflow_hy ] ) that instability follows if the minimax of the first factor is less than the minimum of the second .
as the minimax value is attained at a particular phase speed , we obtain a new criterion according to eq . ( [ eq : stable_stratifiedflow_froude ] ) , labeled eq . ( [ eq : stable_stratifiedflow - fs2 ] ) . thus a sufficient ( but not necessary ) condition for instability is that this relation holds . from the above corollaries , the flow might be unstable if the static stable stratification is strong enough . the stable stratification destabilizes the flow , which is a new instability mechanism . the above corollary contradicts previous results , but it agrees well with recent theory , experiments and simulations . again , we point out here that the flow is unstable due to the transfer of potential energy to kinetic energy under the stated froude - number condition . this conclusion is new because it is quite different from previous theorems , in which static stable stratification plays the role of a stabilizing factor for shear flows . in the above investigation , it was found that stable stratification is a destabilization mechanism for the flow . such a finding is not surprising if one notes the terms in eq . ( [ eq : stable_stratifiedflow_taylorgoldsteineq ] ) : mathematically , the sum of the terms in square brackets should be negative for a wave solution , and both the stratification and the curvature terms favor this condition . this is why the unstable solutions always occur at intermediate wavenumbers in shear flow , and here strong stratification might lead to instability . physically , the perturbation waves are truncated in neutrally stratified flow , but stable stratification allows a wide range of waves in the perturbation . such waves might interact with each other , as was illustrated previously . as theorem 1 is the only sufficient condition , it is hypothesized that the criterion is not only a sufficient but also a necessary condition for instability in stably stratified flow . this hypothesis might be criticized in that the flow might be unstable if the bracketed expression changes sign within the interval ( fig . [ fig : tg - hy]a ) , where a properly chosen phase speed would let the right - hand side of eq . ( [ eq : stable_stratifiedflow_sun_int_rea ] ) become negative . however , this criticism is not valid for the case in fig . [ fig : tg - hy]a : it follows from the well - known criteria ( e.g. , rayleigh s inflexion - point theorem ) that the properly chosen phase speed always lets the right - hand side of eq . ( [ eq : stable_stratifiedflow_sun_int_rea ] ) vanish . it seems that the flow tends to be stable , or that the perturbations have an a priori tendency to remain neutral ; the flow becomes unstable only when every choice of the phase speed would make the right - hand side of eq . ( [ eq : stable_stratifiedflow_sun_int_rea ] ) negative . in this situation , we hypothesize that theorem 1 fully solves the stability problem . in inviscid shear flows , it has been recognized that very short - wave perturbations are dynamically stable under neutral stratification , and the dynamic instability is due to the larger wavelengths . it should be noted that rayleigh s case reduces to the kelvin - helmholtz vortex sheet model in the long - wave limit . we have shown that this can be extended to shear flows , and that the growth rate is proportional to the wavenumber in that limit . such a conclusion can be simply generalized to stratified shear flows , as can be seen from eq . ( [ eq : stable_stratifiedflow_hygy ] ) : if the wavenumber is larger than a critical value , the sufficient condition in theorem 1 cannot be satisfied and the flow is stable .
for short waves the function is always larger than for long waves . the long - wave instability in stratified shear flow was also noted previously , where a likelihood of instability at small wavenumbers was shown . the long - wave instability theory can explain the results of numerical simulations , where the unstable perturbations are long waves . in the above investigations , a parameter representing the ratio of two integrations with boundaries is used , so the criteria are global . on the other hand , we can also investigate the local balance without boundary conditions . for example , consider the flow within a thick layer : the velocity and stratification profiles define the kinetic and potential energies , into which the thickness of the layer enters , and a local froude number can be formed . the instability criterion in eq . ( [ eq : stable_stratifiedflow - criterion ] ) then becomes local : if the local gradient richardson number exceeds a critical value , the local disturbances are unstable . however , the flow might be stable if the global total froude number is large . this criterion is opposite to the miles - howard theorem ; we will show why the miles - howard theorem is not correct from its derivation . in inviscid shear flow , the linear theories , e.g. , the rayleigh - kuo criterion , fjørtoft s criterion and sun s criterion , are equivalent to arnold s nonlinear stability criteria : arnold s first stability theorem corresponds to fjørtoft s criterion , and arnold s second nonlinear theorem corresponds to sun s criterion . it is obvious that the present theory , especially corollary 1.1 , is a natural generalization of the inviscid theories . in the stratified flow , a transform was applied to eq . ( [ eq : stable_stratifiedflow_taylorgoldsteineq ] ) , which allows different kinds of perturbations ; this gives miles s theory and howard s semicircle theorem . considering this transform , eq . ( [ eq : stable_stratifiedflow_taylorgoldsteineq_int_rea ] ) becomes an integral relation whose integral must vanish , eq . ( [ eq : stable_stratifiedflow_taylorgoldsteineq_miles_rea ] ) . it follows from eq . ( [ eq : stable_stratifiedflow_taylorgoldsteineq_miles_rea ] ) that all inviscid flows ( no matter what the velocity profile is ) must be temporally unstable if the wavenumber is real . this contradicts the criteria ( both linear and nonlinear ones ) for inviscid shear flow , so the wavenumber in eq . ( [ eq : stable_stratifiedflow_taylorgoldsteineq_miles_rea ] ) should be complex . besides , from eq . ( [ eq : stable_stratifiedflow_hy ] ) , eq . ( [ eq : taylorgoldsteineq - hy ] ) and fig . [ fig : tg - hy]b , the unstable phase speed might lie either within or beyond the range of the velocity . this also contradicts howard s semicircle theorem for the stratified flow . it implies that the transform is not suitable for the temporal stability problem . taking the transformed variable , howard extracted a new equation from the taylor - goldstein equation , $$\left [ ( u - c ) f ' \right ] ' - \left [ k^2 ( u - c ) + \frac{u''}{2 } + \left ( \tfrac{1}{4 } u'^2 - n^2 \right ) / ( u - c ) \right ] f = 0 . \label{eq : stable_stratifiedflow_taylorgoldsteineq_howard}$$ after multiplying the above equation by the complex conjugate of and integrating over the flow regime , the imaginary part of the expression is $$c_i \int_{a}^{b } \left [ |f '|^2 + k^2 |f|^2 + \frac{n^2 - \tfrac{1}{4 } u'^2}{|u - c|^2 } |f|^2 \right ] dy = 0 .$$ the miles - howard theorem concludes that if the stratification term dominates the shear term everywhere , the growth rate vanishes , so that instability requires the richardson number to fall below the critical value somewhere . however , the transform requires a complex function , even though the quantities entering it are real .
indeed , the transform factor might be complex somewhere , as the phase speed is complex . consequently , the wavenumber in eq . ( [ eq : stable_stratifiedflow_taylorgoldsteineq_howard ] ) is effectively a complex number and no longer a real number as assumed in the taylor - goldstein equation . the complex wavenumber leads to a spatial stability problem , not the temporal stability problem investigated in this study . the assumption made in the transform implies that the flow is unstable beforehand ; however , howard ignored this in his derivation . that is why the miles - howard theorem leads to contradictions with the present study . although the transform leads to some contradictions with rayleigh s criterion and the present results , it might be useful for viscous flows . in these flows , the spatial rather than the temporal stability problem is dominant , and the wavenumber is complex . it is well known that plane couette flow is viscously unstable at finite reynolds number in experiments but viscously stable according to the orr - sommerfeld equation . if the transform is applied , all inviscid flows must be unstable ; thus plane couette flow might be stable only due to the stabilization of viscosity . it is argued that the taylor - goldstein equation represents temporal instability , while the transform represents spatial instability , in that the perturbation is then seen moving along with the flow at the local speed . the transform also turns the real wavenumber into a complex one , which implies spatial growth . the assumption of a real wavenumber after the transform will lead to contradictions with the results derived from the taylor - goldstein equation . so previous investigators could hardly generalize their results from homogeneous fluids to stratified fluids . in summary , stable stratification is a destabilization mechanism , and the flow instability is due to the competition of the kinetic energy with the potential energy . globally , the flow is always unstable when the total froude number is small , in which case the larger potential energy might transfer to the kinetic energy after being disturbed . locally , the flow is unstable when the gradient richardson number exceeds a critical value . the approach is very straightforward and can be used for similar analyses . in inviscid stratified flow , the unstable perturbation must be of long - wave scale . this result extends rayleigh s , fjørtoft s , sun s and arnold s criteria for the inviscid homogeneous fluid , but contradicts the well - known miles and howard theorems . it is argued here that the transform is not suitable for the temporal stability problem , and that it will lead to contradictions with the results derived from the taylor - goldstein equation . the author thanks dr . yue p - t at virginia tech , prof . yin x - y at ustc , prof . wang w. at ouc and prof . huang r - x at whoi for their encouragement . this work is supported by the national basic research program of china ( no . 2012cb417402 ) , and the knowledge innovation program of the chinese academy of sciences ( no . kzcx2-yw - qn514 ) .
the temporal instability of stably stratified flow is investigated by analyzing the taylor - goldstein equation theoretically . according to this analysis , stable stratification provides a destabilization mechanism , and the flow instability is due to the competition of the kinetic energy with the potential energy , which is dominated by the total froude number . globally , a small total froude number implies that the total kinetic energy is smaller than the total potential energy ; the potential energy might then transfer to the kinetic energy after being disturbed , and the flow becomes unstable . on the other hand , when the potential energy is smaller than the kinetic energy , the flow is stable because no potential energy could transfer to the kinetic energy . the flow is more stable with one velocity profile than with the other . besides , the unstable perturbation must be of long - wave scale . locally , the flow is unstable when the gradient richardson number exceeds a critical value . these results extend rayleigh s , fjørtoft s , sun s and arnold s criteria for the inviscid homogeneous fluid , but they contradict the well - known miles - howard theorem . it is argued here that the transform is not suitable for the temporal stability problem , and that it will lead to contradictions with the results derived from the taylor - goldstein equation . however , such a transform might be useful for the study of the orr - sommerfeld equation in viscous flows .
there is significant interest in modelling shallow water flows at the mesoscopic level , either utilising the mesoscopic information or directly simulating at the mesoscopic scale . for the first approach , the gas - kinetic scheme ( gks ) was proposed , in which a boltzmann - type equation with the bhatnagar - gross - krook ( bgk ) collision operator is used as an intermediate step to supplement the flux term of the shallow water equations ( swes ) . in the literature , the gks has been applied to a range of flows including strong shocks and dam - breaking problems . for the second direction , the main approach is the lattice boltzmann method ( lbm ) . the lbm can be considered as a special case of the discrete velocity method ( dvm ) , for which a minimal discrete velocity set is sought and tied to the discretisation in space . in this way , the scheme is sufficiently simple yet powerful enough to handle complex flow problems ; hence , the scheme has also been extended to the modelling of the swes . a critical challenge for the lbm is to simulate supercritical shallow water flows characterised by a high froude number . for this purpose , an asymmetric model was proposed for simulating one - dimensional flows , and a multi - speed scheme has been proposed recently by matching hydrodynamic moments , applied to both one - dimensional and two - dimensional supercritical flows . since the discrete velocities are not integers in general , a finite difference scheme has to be used , so that the scheme is best classified as a discrete boltzmann model . in this work , we derive a hierarchy of discrete boltzmann models with polynomial equilibria ( dbmpe ) from the continuous boltzmann - bgk type equation using the hermite expansion approach . from this framework , we can have both integer and non - integer discrete velocity sets , and we are then able to choose either a particle - hopping - like scheme or a general finite difference scheme . the order of expansion may be easily tuned according to the requirements of the flow to be simulated . if the depth scale is much less than the horizontal length scale , the water flows are dominated by the nearly horizontal motion and are referred to as shallow water flows . for these flows , the swes can be employed to simplify the modelling ; they can be written as $$\frac{\partial h}{\partial t } + \nabla \cdot ( h \bm{v } ) = 0 , \qquad \frac{\partial ( h \bm{v})}{\partial t } + \nabla \cdot ( h \bm{v } \bm{v } + p \bm{i } ) = h \bm{a } + \nabla \cdot \bm{\tau } , \qquad \nabla \cdot \bm{\tau } \thickapprox h \nu \triangle \bm{v } , \label{eq : swe}$$ where is the unit tensor . the equations describe the evolution of the depth and the depth - averaged velocity . the flows are often driven by a body force , which mathematically represents not only the effects of an actual body force , including the geostrophic force and tide - raising forces , but also those of wind stress , surface slope and atmospheric pressure gradient . the pressure originates from the hydrostatic assumption , where the state equation involves the gravitational acceleration . if necessary , the viscous term may also be considered , where is the depth - averaged kinematic viscosity . as has been shown , the swes are mathematically analogous to the 2d compressible flow equations . in fact , they may be considered to describe an ideal gas flow with the state equation eq . ( [ eq : state ] ) , a ratio of specific heats of two , and a `` sound speed '' given by the gravity wave speed ( cf .
chapter 2.1 in ) . a boltzmann - bgk type equation can then be constructed for the swes , $$\frac{\partial f}{\partial t } + \bm{\xi } \cdot \nabla f + \bm{a } \cdot \nabla_{\bm{\xi } } f = -\frac{1}{\tau } \left ( f - f^{eq } \right ) , \qquad f^{eq } = \frac{1}{\pi g } \exp \left ( - \frac{ ( \bm{\xi } - \bm{v})^2}{g h } \right ) , \label{eq : swefeq}$$ where the equilibrium is written in the form consistent with the state equation above . at the mesoscopic scale , the evolutionary variable becomes the distribution function , which represents the number of particles in a volume centred at a given position with velocities within a small neighbourhood of a given particle velocity at a given time . the macroscopic quantities can be obtained by integrating over the whole particle velocity space . moreover , the right - hand side ( rhs ) term of eq . ( [ eq : bkgswe ] ) obeys the conservation property of the collision integral as shown in eqs . ( [ eq : h ] ) and ( [ eq : v ] ) . similar to a real ideal gas , the relation between the relaxation time and the kinematic viscosity can be obtained by using the chapman - enskog expansion . in principle , this kinetic equation may be solved directly by using a regular numerical discretisation for both physical space and particle velocity , i.e. , the so - called discrete velocity method . compared with the direct discretisation of the swes , there are two more degrees of freedom for the particle velocity , which need to be treated carefully for both accuracy and efficiency . among various schemes , the gauss - type quadrature can provide a very efficient yet simple - to - implement discretisation if the equilibrium function eq . ( [ eq : swefeq ] ) is properly truncated . we shall derive a hierarchy of discrete boltzmann models using a hermite expansion . however , it is convenient to first introduce a non - dimensional system , where the hat symbol denotes the non - dimensional variables . the reference depth can be the characteristic depth of the system , such as the depth at the inlet ( e.g. , in fig . [ fig : ill ] ) , while the reference viscosity and the reference pressure are chosen accordingly . by using these non - dimensional variables , eqs . ( [ eq : bkgswe ] ) and ( [ eq : swefeq ] ) become their non - dimensional counterparts , eq . ( [ eq : nonswefeq ] ) , while there is no need to change the form of eqs . ( [ eq : h ] ) - ( [ eq : p ] ) . we may also define a `` knudsen '' number , and the local froude number is formed from the velocity magnitude . using the `` knudsen '' number and considering the fact that the viscosity is often taken as constant , the actual relaxation time appearing in the rhs term changes locally in time . hereinafter , we shall use the non - dimensional version of the quantities and equations by default , and the hat symbol will be omitted for clarity ; it is also convenient to use a shorthand symbol for the collision term in writing the equations . first of all , the equilibrium distribution will be expanded on the basis of the hermite orthogonal polynomials in particle velocity space ( see ref . for details ) , where terms up to a chosen order are retained and the coefficients are given by projection onto the hermite basis . by using a gauss - hermite quadrature with weights and abscissae , the integration in eq . ( [ eq : coeff ] ) can be converted into a summation . the first few coefficients are given by eqs . ( [ eq : a0 ] ) to ( [ eq : a4 ] ) , where the tensor products are written in index notation . it is easy to verify that , with an appropriate quadrature , the conservation property of the collision integral will be satisfied automatically using an expansion higher than the first order , which simplifies the algorithm . the body force term can also be approximated by an expansion with corresponding coefficients for the distribution function . the first two coefficients are the same as those of the equilibrium , while the higher - order terms can be related to the stress and the heat flux .
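the moment constraints of eqs . ( [ eq : h ] ) - ( [ eq : p ] ) can be verified by brute - force quadrature for the equilibrium written above ; a sketch , assuming ( as in the reconstruction ) a gaussian with variance g h / 2 per velocity component , so that the pressure moment integrates to the hydrostatic value g h^2 / 2 :

```python
import numpy as np

def swe_maxwellian(xi_x, xi_y, h, vx, vy, g=9.81):
    # 2d equilibrium with "temperature" gh/2 (an assumed reconstruction)
    return (1.0 / (np.pi * g)) * np.exp(-((xi_x - vx)**2 + (xi_y - vy)**2) / (g * h))

# brute-force quadrature over a truncated velocity plane
h, vx, vy, g = 2.0, 0.3, -0.1, 9.81
s = np.linspace(-30.0, 30.0, 801)
xx, yy = np.meshgrid(s, s)
w = (s[1] - s[0]) ** 2
feq = swe_maxwellian(xx, yy, h, vx, vy, g)
print(np.sum(feq) * w)                   # ~ h
print(np.sum(xx * feq) * w / h)          # ~ vx
print(np.sum((xx - vx)**2 * feq) * w)    # ~ g h^2 / 2
```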
through the expansion, the kinetic equation ([eq:nondimeq]) can be rewritten in its truncated form, i.e., ; thus, we will be solving an approximation of the original kinetic equation. the second step is to discretise eq. ([truncatedeq]) in particle velocity space. the gauss-hermite quadrature is the natural choice. in one dimension, the discrete velocities are just the roots of the hermite polynomials, and the corresponding weights are determined by eq. ([weight]). given one-dimensional velocity sets, those of a higher dimension can be constructed using the production formulae (a numerical sketch is given below). once the discrete velocity set is chosen, the governing equation is discretised as , where , and . according to eqs. ([eq:a0])–([eq:a4]) and ([approxfeq]), the explicit form of the fourth-order equilibrium, , is (with the zeroth- to second-order terms lost in extraction)
\begin{aligned}
\cdots & + \underline{\frac{c_{i}v_{i}}{6}[(c_{i}v_{i})^{2}-3v_{i}v_{i}+3(h-1)(c_{i}c_{i}-4)]} \\
& + \underline{\underline{\frac{1}{24}[(c_{i}v_{i})^{4}-6(v_{i}c_{i})^{2}v_{j}v_{j}+3(v_{j}v_{j})^{2}]}} \label{eq:discretefeq} \\
& + \underline{\underline{\frac{h-1}{4}[(c_{i}c_{i}-4)((v_{i}c_{i})^{2}-v_{i}v_{i})-2(v_{i}c_{i})^{2}]}} \\
& + \underline{\underline{\frac{(h-1)^{2}}{8}\left[(c_{i}c_{i})^{2}-8c_{i}c_{i}+8\right]}} \},
\end{aligned}
where the underlined terms are of third order, those double-underlined are of fourth order, and the others comprise the zeroth-, first- and second-order terms. we have now constructed a kind of dbmpe. in order to conduct a numerical simulation, the physical space and the time are yet to be discretised. for this purpose, any available scheme may be utilised according to the properties of the flow, such as an upwind scheme for the space discretisation. in particular, if using abscissae consisting of integers, the lattice boltzmann scheme \( \cdots + \frac{\tau\mathcal{f}_{\alpha}dt}{\tau+0.5dt} \) ([eq:scheme]) can be constructed by introducing , which allows a kind of particle-hopping-like feature. due to this unique property, the lattice boltzmann method has attracted significant interest in broad areas including shallow water simulations. however, if a high-order expansion is used, the abscissae will not be integers in general, so that more sophisticated schemes will be required to solve the dbmpe. for instance, we will use an implicit-explicit (imex) scheme below for testing. we are now ready to discuss a few aspects of the accuracy of the dbmpe. as previously shown, errors are introduced in truncating the equilibrium function and discretising the particle velocity, i.e., from eq. ([eq:nondimeq]) to eq. ([truncatedeq]), and from eq. ([truncatedeq]) to eq. ([lbgk]). in principle, if the order of the hermite expansion is sufficiently high, eq. ([truncatedeq]) is expected to accurately recover eq. ([eq:nondimeq]). in practice, however, only a few orders may be affordable in terms of computational cost, and the approximation accuracy will be determined by the expansion order. in general, the accuracy will be related to the froude number, i.e., higher froude numbers may need more expansion terms, cf. eq. ([eq:nonswefeq]) and eq. ([eq:fr]). for instance, similar to gas dynamics, see ref. , using a first-order expansion leads to a linear equation which is suitable when . specifically, we may introduce a norm to measure the displacement of from over the whole particle velocity space.
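the construction of discrete velocity sets just described is easy to carry out numerically. the sketch below uses numpy's probabilists' gauss-hermite rule and the tensor ("production") construction for two dimensions; whether the weight normalisation matches eq. ([weight]) exactly is an assumption:

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegauss

def velocity_set_2d(n):
    """Build a 2D discrete velocity set from an n-point 1D Gauss-Hermite rule.

    hermegauss uses the probabilists' weight exp(-x^2/2); dividing by
    sqrt(2*pi) normalises the 1D weights to sum to one.  Whether this
    matches the paper's normalisation is an assumption.
    """
    c1, w1 = hermegauss(n)
    w1 = w1 / np.sqrt(2.0 * np.pi)
    # tensor ("production") construction: n*n velocities in 2D
    cx, cy = np.meshgrid(c1, c1, indexing="ij")
    c = np.stack([cx.ravel(), cy.ravel()], axis=1)   # shape (n*n, 2)
    w = np.outer(w1, w1).ravel()                     # shape (n*n,)
    return c, w

c, w = velocity_set_2d(4)   # 16 non-integer velocities, as for supercritical flow
print(c.shape, w.sum())     # (16, 2) 1.0
```

for n = 3 the one-dimensional abscissae are the integers (after scaling) underlying the common nine-point two-dimensional set mentioned below, while for n = 4 they are irrational, which is why a finite difference scheme is then needed.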
in fig. [fig:et], the density plots of are shown over the macroscopic velocity range of and . for comparison, we consider both the second-order and the fourth-order expansion of the equilibrium distribution function. from the plots, we clearly see that the error grows with increasing macroscopic velocities. in particular, with a smaller , the error may become relatively large (note that represents an accumulation of errors from all discrete velocities) even at relatively small macroscopic velocities. hence, simulations may be more prone to error and instability if there is a sharp horizontal gradient of depth, which often induces fast water flow. interestingly, it appears that a higher-order expansion does not necessarily mean smaller errors (see the plots for ). this reflects the complex convergence behaviour of the hermite expansion. however, since the conservation laws are retained naturally even with a limited expansion order, the dbmpe may be sufficient for a broad range of hydrodynamic problems, particularly if the diffusion term plays a negligible role. the second kind of error comes from the chosen quadrature. according to the chapman-enskog expansion, when the "knudsen" number is small, the first-order asymptotic solution of eq. ([truncatedeq]) may be written as , where the zeroth-order solution, , is just the truncated equilibrium function, . from eq. ([eq:1stsol]), will include a polynomial of of one order higher than that in . therefore, to numerically evaluate the integrals in eqs. ([eq:h])–([eq:p]), we need a quadrature with degree of precision to calculate the th-order moment (e.g., for the zeroth-order moment, ). specifically, if using a th-order hermite expansion and aiming to get the stress right ( ), we need at least a quadrature with degree of precision , which may need four discrete velocities. in the following, we test the scheme by simulating the transient evolution of an initial discontinuity of the water level, i.e., the classical 1d dam-breaking problem. by utilising eq. ([approxfeq]), the hermite abscissae and eq. ([weight]), it is straightforward to construct various discrete velocity sets. for instance, to capture a strong shock with a high froude number, we may need to use a fourth-order expansion for and a quadrature of at least th degree of precision. in contrast, if the flow is subcritical, it may be enough to employ the commonly used abscissae of nine points and a second-order expansion for . we will test both scenarios. when the fourth-order expansion terms are employed for , we use a set of 16 discrete velocities obtained from the roots of the fourth-order hermite polynomial, see eq. ([weight]). these 16 discrete velocities are not integers and require a finite difference scheme. for this purpose, we employ an imex scheme, which is particularly suitable if a solution at the euler limit is pursued, when is typically small, i.e., , where , and the force term is ignored here since it is not involved in the test case. to cope with the strong discontinuity, the flux term is discretised as (cf. ) \( \cdots (\sigma_{\alpha,i}-\sigma_{\alpha,i-1}) \) ([spacediscrete]), where and the van leer slope limiter is used. for simplicity, only the discretisation in the direction is given here. the choice of the subscript in eq. ([spacediscrete]) also depends on the sign: if ( ), the information is acquired from the grid point ( ). the mesh size is denoted by and .
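since parts of eq. ([spacediscrete]) were lost above, the following is only a sketch of a limited upwind flux of the kind described, for a single positive discrete velocity; the function names and the boundary treatment are hypothetical:

```python
import numpy as np

def van_leer(r):
    """van Leer slope limiter: phi(r) = (r + |r|) / (1 + |r|)."""
    return (r + np.abs(r)) / (1.0 + np.abs(r))

def flux_divergence(sigma, c, dx):
    """MUSCL-type upwind flux divergence c * d(sigma)/dx for c > 0.

    A sketch consistent in spirit with eq. (spacediscrete): the upwind
    difference is corrected by a van Leer limited slope; the mirrored
    c < 0 branch and the boundary cells are omitted for brevity.
    """
    s = np.asarray(sigma, dtype=float)
    d = np.diff(s)                                   # s[i+1] - s[i]
    r = d[:-1] / np.where(np.abs(d[1:]) > 1e-12, d[1:], 1e-12)
    face = s[1:-1] + 0.5 * van_leer(r) * d[1:]       # value at face i+1/2
    flux = c * face
    out = np.zeros_like(s)
    out[2:-1] = (flux[1:] - flux[:-1]) / dx          # interior cells only
    return out
```

near a discontinuity the limiter returns zero slope, so the scheme falls back to first-order upwinding there, which is what suppresses oscillations at the dam-break front.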
in simulations, we set the whole domain length to be in the non-dimensional system (i.e., in the dimensional system, since the initial depth at the inlet is chosen as the reference length). an initial discontinuity is placed at the middle (see fig. [fig:ill]). for supercritical flow, the initial water depth is at the left half of the domain (i.e., the reference length scale) and at the right half, while for the subcritical case the ratio is . for supercritical flow, the parameters for the calculation are chosen as . the results at and are shown in fig. [fig:1000vs1]. excellent agreement with the analytical solution is found for both the water depth and the velocity (see ref. for details of the analytical solution), although a numerical error is found in the profile of the local froude number at the position of the discontinuity, due to the phase lag between water depth and velocity caused by numerical diffusion. the simulations confirm the capability of modelling supercritical flows with a fourth-order expansion and the corresponding quadrature. by contrast, we also tested the combination of a second-order expansion and 16 discrete velocities, but found that the simulation is unstable for the same problem setup. for the subcritical flow, we use the stream-collision scheme ([eq:scheme]) and the parameters are chosen as . while the stream-collision scheme is simpler, it requires a smaller time step for stable simulations due to the stiffness of the right-hand collision term when is small. moreover, since the spatial mesh is tied to the time step, this requirement may cause a quadratic increase of the computational cost in 2d. the above parameters present an economical combination which results in very efficient simulations. the numerical results are shown in fig. [fig:4vs1], where excellent agreement with the analytical solutions can be found. moreover, there appears to be almost no numerical diffusion. however, possibly due to the low diffusion, the scheme limits the maximum local froude number. using the same expansion order and quadrature, the scheme ([eq:fluxlimiter]) can simulate higher froude numbers in general, in particular for supercritical flow. to summarise, we have derived a hierarchy of discrete boltzmann models with polynomial equilibria for modelling shallow water flows. utilising the hermite expansion and gauss-hermite quadrature, the scheme automatically satisfies the conservation property of the collision integral, which greatly simplifies the algorithm. we also discussed theoretically how to choose the expansion order and quadrature according to the requirements of the problem (e.g., ), although the hermite expansion shows complex convergence behaviour. in particular, if the quadrature consists of integer abscissae, we can obtain the very simple and efficient lattice boltzmann scheme. the derived models are tested using the classical one-dimensional dam-breaking problem, and excellent agreement with analytical solutions is found for both supercritical and subcritical flows. in the future, we will further investigate the application to more complicated problems, e.g.
, the treatment of force terms. the authors from daresbury laboratory would like to thank the engineering and physical sciences research council (epsrc) for its support of collaborative computational project 5 (ccp5) and the uk consortium on mesoscale engineering sciences (ukcomes, grant no. ep/l00030x/1). the authors from sichuan university would like to thank the national natural science foundation of china for its support under grant numbers 51409183, 51579166 and 5151101425.
a hierarchy of discrete boltzmann models is proposed for simulating shallow water flows. by using the hermite expansion and gauss-hermite quadrature, the conservation laws are automatically satisfied without extra effort. moreover, the expansion order and quadrature can be chosen flexibly according to the problem, to strike a balance between accuracy and efficiency. the models are then tested using the classical one-dimensional dam-breaking problem, and success is found for both supercritical and subcritical flows.
gravitational wave bursts are short (less than one second) transients of gravitational radiation with poorly known waveforms. they are one of the distinct types of signals that ligo and other interferometric and resonant mass detectors are pursuing. a search for such signals is generally a time-frequency search, since the time and frequency information of candidate events is essential for a multi-detector coincidence analysis as well as for relating events to astrophysical sources. one of the well established tools for such an analysis is the short time fourier transform (also known as the windowed fourier transform) developed by dennis gabor in 1946. this approach applies standard fourier transforms to short windowed segments of the original time series to create a uniform time-frequency map with a time resolution and frequency resolution that depend upon the window's duration. however, it is impossible to pick a window duration appropriate for all frequencies. low frequency signals require long duration windows to accumulate sufficient frequency information, while high frequency signals are better isolated in time using short duration windows. time and frequency localization of transient signals is therefore not easily achieved with the short time fourier transform. here we have also assumed that burst waveforms are limited in ; that is, that the number of oscillations falls within a particular range regardless of frequency. in this introductory methods paper we explore multiresolution techniques for the detection of transients that rely on the dyadic wavelet transform and the constant transform. both methods search for a statistically significant excess of signal power in the time-frequency plane. however, contrary to existing excess power searches, we implement a bank of filters that produces a logarithmic tiling of the time-frequency plane. this provides increasing time resolution at higher frequencies and thus naturally addresses the time-frequency localization problem. this paper outlines these new methods and is organized as follows. section [sec:bursts] defines a representation-independent parameterization of gravitational-wave bursts and defines an optimal signal to noise ratio, which motivates our use of a multiresolution approach. section [sec:wavelet] presents the basics of the continuous and discrete wavelet transforms, while section [sec:q] presents the continuous and discrete transform as a modification of the short time fourier transform. section [sec:lpef] demonstrates how linear prediction can be used to whiten background noise prior to transform analysis. section [sec:events] describes methods for the selection of significant events from the output of the proposed algorithms, and section [sec:efficiency] presents preliminary detection efficiencies of our search algorithms applied to gaussian and sine-gaussian bursts injected into simulated ligo detector noise. tuning and application of the proposed algorithms to search for a broader set of signals in data from the first ligo science runs is currently in progress. to further motivate the development of multiresolution techniques, we first consider the parameterization and detectability of gravitational-wave bursts that are well localized in both time and frequency. the more general case of non-localized bursts can be treated as a superposition of localized bursts. an arbitrary gravitational wave burst may be represented in both the time-domain and the frequency-
domain by the fourier transform pair and . for bursts that are square-integrable, we may also define a representation-independent characteristic squared amplitude, . for bursts that are well localized in both time and frequency, it is then meaningful to define a central time , central frequency , duration and bandwidth : . it can be shown that the duration and bandwidth, when defined in this way, obey an uncertainty relation of the form , which we will also take as an approximate criterion for bursts that are well localized in the time-frequency plane. finally, we define the dimensionless quality factor of a burst, which for well localized bursts is simply a measure of the burst's aspect ratio in the time-frequency plane. in general, a search algorithm for gravitational-wave bursts projects the data under test onto a basis constructed to span the space of plausible bursts. the optimal measurement occurs when a member of this basis exactly matches a gravitational-wave burst. in this case we achieve a signal to noise ratio, , which for narrowband bursts or a flat detector noise spectrum is simply the ratio of the total energy content of the signal, , to the in-band one-sided power spectral density, , of the detector noise: . thus , which has units of strain per square root hz, is a convenient quantity for evaluating the detectability of narrowband bursts, since it is directly comparable to the amplitude spectral density of the detector noise. it is important to note, however, that the above signal to noise ratio is only achieved when a member of the measurement basis closely matches the signal in the time-frequency plane. otherwise, the measurement encompasses either too little signal or too much noise, resulting in a loss in the measured signal to noise ratio. since bursts are naturally characterized by their , we therefore choose to tile the time-frequency plane by selecting a measurement basis which directly targets bursts within a finite range of . this naturally leads to a logarithmic tiling of the time-frequency plane in which the individual measurement pixels are well localized signals, all with the same . in the preceding section, we motivated a search for bursts which projects the data under test onto a basis of well localized bursts of constant . the wavelet transform by construction satisfies these requirements, and is defined by the integral , where the wavelet, , is a time-localized function of zero average. the coefficients are evaluated continuously over times, , and scales, . our ability to resolve in time and frequency is then determined by the properties assumes at each scale. at large scale, is highly dilated, yielding improved frequency resolution at the expense of time resolution. at small scale, we achieve good time resolution with large uncertainty in frequency. however, since the number of oscillations is fixed, the resulting is constant over all scales. for the case of discrete data, a computationally efficient algorithm exists for calculating wavelet coefficients over scales that vary as powers of two: . this is the dyadic wavelet transform, which can be implemented for a limited family of wavelets using conjugate mirror filters. the filters consist of a high pass filter, , and a low pass filter, , which can be applied in a cascade to obtain the wavelet coefficients.
beginning with the original time series, , of length , two sequences of length are obtained by application of the high pass and low pass filters followed by down-sampling. the sequence of detail coefficients, , and approximation coefficients, , are defined at each level, , of the decomposition by . the detail coefficients for scale , where , calculated in this manner are the same as the wavelet coefficients obtained from equation [eqn:cwt]. if is a power of two, so that , the final approximation sequence will be . the simplest dyadic wavelet is the haar function: the corresponding high pass and low pass filters are \hat{h}^{\mbox{\scriptsize haar}} = [ +1, -1 ] / \sqrt{2} and \hat{l}^{\mbox{\scriptsize haar}} = [ +1, +1 ] / \sqrt{2}, from which we see that the detail coefficients are related to the differences of each pair of points in the parent series, while the approximation coefficients are related to the averages of each pair (a minimal code sketch of this cascade is given below). a technique called wavelet packets can be used to further decompose the detail coefficients into smaller frequency subbands of higher . the _waveburst_ search method currently implements a full wavelet packet decomposition, which results in a uniform time-frequency plane with time-frequency cell sizes equivalent to those of the highest scale in the dyadic decomposition. it is possible, however, to choose a different subset of the wavelet packet coefficients which approximates a more constant time-frequency plane, by further decomposing the detail coefficients only a constant number of times at each scale. this would give much more flexibility to detect high- signals which do not match the mother wavelet well. the transform is a modification of the standard short time fourier transform in which the analysis window duration varies inversely with frequency. it is similar in design to the continuous wavelet transform and also permits efficient computation for the case of discrete data. however, since reconstruction of the data sequence is not a concern, we permit violation of the zero mean requirement. in addition, the discrete transform is not constrained to frequencies which are related by powers of two. we begin by projecting the sequence under test, , onto windowed complex sinusoids of frequency : . here is a time-domain window centered on time with a duration that is inversely proportional to the frequency under consideration. the transform may also be expressed in an alternative form which allows for efficient computation: . thus, the transform at a specific frequency is obtained by a standard fourier transform of the original time series, a shift in frequency, multiplication by the appropriate frequency domain window function, and an inverse fourier transform. the benefit of equation [eqn:qtransform:alt] is that the fourier transform of the original time series need only be computed once. we then perform the inverse fourier transform only for the logarithmically spaced frequencies that we are interested in. we may also adapt the transform to the case of discrete data.
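before specialising the transform to discrete data, we pause to give the minimal sketch of the dyadic haar cascade promised above; the sign convention of the high pass filter is the reconstruction noted earlier:

```python
import numpy as np

def haar_dwt(x, levels):
    """Dyadic Haar wavelet transform by the conjugate mirror filter cascade.

    At each level the approximation sequence is split into detail
    coefficients (pairwise differences / sqrt(2)) and approximation
    coefficients (pairwise averages / sqrt(2)), i.e. filtering followed
    by down-sampling by two.  len(x) should be divisible by 2**levels.
    """
    a = np.asarray(x, dtype=float)
    details = []
    for _ in range(levels):
        d = (a[0::2] - a[1::2]) / np.sqrt(2.0)   # high pass + downsample
        a = (a[0::2] + a[1::2]) / np.sqrt(2.0)   # low pass + downsample
        details.append(d)
    return details, a

# an impulse produces a detail coefficient near its location at every scale
x = np.zeros(16)
x[5] = 1.0
details, approx = haar_dwt(x, 3)
print([np.round(d, 3) for d in details], np.round(approx, 3))
```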
in this case, equation [eqn:qtransform] takes the form: = \sum_{n = 0}^{n - 1} x[n] w[n - m, k] e^{-n k / n}. as was the case for the continuous transform, we may express the discrete transform in the alternative form: = \sum_{l = 0}^{n - 1} \tilde{x}[l + k] \tilde{w}^{*}[l, k] e^{+ i 2 \pi m l / n}. this allows us to take advantage of the computational efficiency of the fast fourier transform for both the initial transform and the subsequent inverse transforms for each desired value of . due to the implementation of the discrete transform in equation [eqn:dqt:alt], the window function is most conveniently defined in the frequency domain. in particular, we choose a frequency domain hanning window, which has finite support in the frequency domain while still providing good time-frequency localization. in this section, we present linear prediction as a tool for whitening gravitational-wave data prior to analysis by both the dyadic wavelet transform and the discrete transform. for both transforms, the statistical distribution of transform coefficients is well known for the case of white noise input. whitened data therefore permit a single predefined test for significance, which greatly simplifies the subsequent identification of candidate bursts. we also note that linear predictive whitening is a necessary component of current cross-correlation and time-domain searches for gravitational-wave bursts in ligo data. linear prediction assumes that the sample of a sequence is well modeled by a linear combination of the previous samples. we thus define the predicted sequence = \sum_{m=1}^{m} c[m] x[n - m], and the corresponding prediction error sequence = x[n] - \hat{x}[n]. the coefficients c[m] are chosen to minimize the mean squared prediction error; if x[n] is a stationary stochastic process, this results in the well known yule-walker equations \sum_{j=1}^{m} r[k - j]\, c[j] = r[k] \qquad \mbox{for} \qquad 1 \leq k \leq m, where r[k] denotes the autocorrelation of x[n] evaluated at lag k. in practice, we choose the biased autocorrelation estimate = \frac{1}{n} \sum_{n = |k|}^{n} x[n] x[n - |k|], which guarantees the existence of a solution to the yule-walker equations. when this process is applied to gravitational-wave data, the resulting prediction error sequence is composed primarily of sample-to-sample uncorrelated white noise. however, it also contains any unpredictable transient signals which were present in the original data sequence. thus, the prediction error sequence is the whitened data sequence which we desire to subject to further analysis by the algorithms proposed in this paper. we therefore define the linear prediction error filter as the th order finite impulse response filter which, when applied to a data sequence x[n], yields the prediction error sequence: = \sum_{m=0}^{m} b[m] x[n - m]. the choice of the filter order, , is then determined by the measurement resolution of the subsequent analysis. for a given filter order, the resulting prediction error sequence will be uncorrelated on time scales shorter than samples. in the frequency domain, this corresponds to a spectrum which is white on frequency scales greater than a characteristic bandwidth , where is the sample frequency of the data. thus, we can whiten to any desired frequency resolution by selecting the appropriate filter order, . in practice, we simply choose the minimum filter order which is sufficient for the subsequent analysis. for the training length, , we choose to train over times which are long compared to the typical gravitational-wave bursts we are searching for.
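a minimal sketch of this whitening procedure, solving the yule-walker equations with the biased autocorrelation estimate, is given below; the training and filtering conventions (training on the full segment, "valid" convolution) are illustrative choices:

```python
import numpy as np
from scipy.linalg import toeplitz

def lpef(x, order):
    """Train a linear prediction error filter of the given order on x.

    Solves the Yule-Walker equations with the biased autocorrelation
    estimate, and returns b = [1, -c[1], ..., -c[M]] so that filtering
    x with b yields the prediction error (whitened) sequence.
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    r = np.array([np.dot(x[k:], x[:n - k]) / n for k in range(order + 1)])
    c = np.linalg.solve(toeplitz(r[:-1]), r[1:])   # Yule-Walker solve
    return np.r_[1.0, -c]

rng = np.random.default_rng(0)
w = rng.standard_normal(4096)
x = np.convolve(w, [1.0, 0.8, 0.4])[:4096]   # coloured noise
b = lpef(x, order=16)
e = np.convolve(x, b, mode="valid")          # approximately white
print(np.std(e), np.std(x))
```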
[figure [fig:lpef]: whitening of sample gravitational-wave data with a zero-phase linear prediction error filter. the right hand plot shows the effect of the same linear prediction error filter when applied to a simulated gravitational-wave burst injected into the data; in this case, we inject a 235 hz sine-gaussian with a of 12.7 and a signal to noise ratio of 20, as described in section [sec:efficiency]. note the absence of a time delay between the whitened and unwhitened burst.]

in order to avoid timing errors due to the unspecified phase response of linear prediction error filters, we construct a new filter from the unnormalized autocorrelation of the original filter coefficients: = \sum_{m = |k|}^{m} b[m] b[m - |k|]. while this new filter has the desired property of zero phase response, its magnitude response has been squared. in practice, the construction of the zero-phase filter is carried out in the frequency domain, where it is also possible to take the square root of the resulting frequency domain filter before transforming back to the time domain. an example application of zero-phase linear prediction to sample gravitational-wave data and a simulated gravitational-wave burst is shown in figure [fig:lpef]. we now seek to identify statistically significant events in the resulting dyadic wavelet transform and discrete transform coefficients. we assume that linear predictive whitening (section [sec:lpef]) has been applied to the data prior to the dyadic wavelet transform. then, according to the central limit theorem, for a sufficiently large scale, , the wavelet coefficients, , within the scale will approach a zero-mean gaussian distribution with standard deviation . we therefore define the sequence of squared normalized coefficients, or normalized pixel energies, at scale , which is chi-squared distributed with one degree of freedom (figure [fig:histogram]). it is then simply a matter of thresholding on the energy of individual pixels, , to identify statistical outliers. we may also choose to cluster nearby pixels to better detect bursts which deviate from the dyadic wavelet tiling of the time-frequency plane. for a randomly selected cluster of pixels, we define the total cluster energy, , which is also chi-squared distributed, but with degrees of freedom. it is therefore possible to select a threshold energy to achieve a desired white noise false event rate. alternatively, sharp transients may also be detected by simply searching for vertical clusters of pixels within the top percent of pixel energies in each scale.

[figure [fig:histogram]: histograms of dyadic haar wavelet transform (left) and discrete transform (right) pixel energies in the scale and frequency bin encompassing 600 hz for the data of figure [fig:lpef]. the distributions before (dashed) and after (solid) linear predictive whitening are compared to the theoretically expected distributions for ideal white noise (shaded). the whitened data show good agreement with theory except at high pixel energy, where the unpredictable non-stationary behavior (glitches) of the gravitational-wave data becomes apparent. for the discrete transform, the inconsistency of the unwhitened data with theoretical white noise is due to the coherent 600 hz power line harmonic, which results in a rician rather than exponential distribution of pixel energies. for the discrete haar wavelet transform, the presence of a coherent signal results in an apparent, but false, white noise distribution. this is due to the use of a real valued wavelet, which is sensitive to the relative phase between the signal and the wavelet. this effect actually masks true non-stationarities, which are only exposed after linear predictive whitening removes the coherent signal content.]

for the discrete transform, we also make use of the central limit theorem by assuming that the data have first been whitened using linear prediction. for pixels of sufficiently long duration, the pixel energies, = | x[m, k] |^2, at a particular frequency, , will then approach an exponential distribution (figure [fig:histogram]). thus for each pixel we have a measure of significance: ) = \exp ( -\epsilon[m, k] / \mu_k ), where is the mean pixel energy at frequency . as with the discrete wavelet transform, we may also consider clusters of pixels and select a threshold energy in order to achieve a desired white noise false event rate; in this case, however, the cluster energy is chi-squared distributed with degrees of freedom. to efficiently detect bursts over a range of , we perform multiple transforms, each of which produces a time-frequency plane tiled for a particular within our range of interest (a minimal numerical sketch of these pixel energies and their significance follows below). to best estimate the parameters of candidate bursts, we then select the most significant non-overlapping pixels among all planes using a simple exclusion algorithm. pixels are considered in decreasing order of significance, and any pixel which overlaps in time or frequency with a more significant pixel is discarded. thus, for each localized candidate burst, only the single pixel which best represents the burst parameters is reported. this does not exclude the possibility of clustering pixels from non-localized bursts, since the pixels which pass both the initial significance threshold and the exclusion algorithm will represent the strong localized features of such bursts.
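the sketch below, forward-referenced above, illustrates the fft form of the discrete transform at a single frequency together with the exponential significance of the resulting pixel energies. the test injection, the window normalisation and the bandwidth choice f0/q are illustrative assumptions, not the tuned values of the actual search:

```python
import numpy as np

def q_transform_row(x, fs, f0, q):
    """Discrete Q transform coefficients at one frequency f0 (sketch).

    Following the FFT form derived above: transform once, shift the
    spectrum down by f0, apply a frequency-domain window whose width
    scales as f0 / q, and inverse transform.  A frequency-domain Hann
    window is used, as in the text; the normalisation is an assumption.
    """
    n = len(x)
    xf = np.fft.fft(x)
    df = fs / n
    k0 = int(round(f0 / df))                    # bin index of f0
    half = max(1, int(round(f0 / (q * df))))    # half-width in bins
    w = np.zeros(n)
    idx = np.arange(-half, half + 1)
    w[idx % n] = 0.5 * (1 + np.cos(np.pi * idx / half))  # Hann about 0
    return np.fft.ifft(np.roll(xf, -k0) * w)    # complex coefficients

fs = 1024.0
t = np.arange(4096) / fs
x = np.random.default_rng(1).standard_normal(t.size)
x += 5 * np.exp(-0.5 * ((t - 2) * 2 * np.pi * 100 / 12.7) ** 2) \
       * np.sin(2 * np.pi * 100 * t)            # sine-Gaussian, q ~ 12.7
c = q_transform_row(x, fs, f0=100.0, q=12.7)
energy = np.abs(c) ** 2
signif = np.exp(-energy / energy.mean())        # smaller = more significant
print(t[np.argmax(energy)])                     # should peak near t ~ 2 s
```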
reducing a non-localized burst to its strong, localized features has the additional benefit of a much tighter time-frequency coincidence criterion when multiple detectors are used. finally, we evaluate the detection efficiencies of the proposed algorithms over simulated ligo noise. the simulated data are composed of line sources and gaussian distributed white noise passed through shaping filters to produce a power spectrum which matches the typical h1 detector noise spectrum during the second ligo science run. assuming optimal orientation, we evaluate the detection efficiency for each waveform at various by injecting 256 such bursts at random times into the simulated data (see figure 3). the simulated data are first filtered by a 6th order 64 hz butterworth high pass filter followed by a linear prediction error filter with a whitening resolution of 16 hz. however, we do not model the non-stationary behavior of real ligo noise, which largely determines the false event rate of existing search algorithms. the algorithms considered in this paper are equally applicable to the identification of statistically significant transient events in auxiliary interferometer or environmental data, so they may also prove useful for identifying potential vetoes.

[figure: detection efficiencies of the dyadic wavelet transform (left) and the transform (right). the detection efficiency of the dyadic wavelet transform is shown for three durations of gaussian bursts with s of 0.35, 0.71, and 1.41 milliseconds, shown from left to right. the detection efficiency of the transform is shown for a sine-gaussian burst with a of 12.7 and central frequency, , of 275 hz.]

the low- haar wavelet is best suited to broadband signals, and for a representative broadband waveform we use gaussian injections of the form , for s of 0.35, 0.71, and 1.41 ms. the filtered data are passed through the discrete wavelet transform and event selection algorithm as described in section [sec:events:wavelet] for a false event rate of 0.5 hz. we allow clusters to be composed of a single pixel, or a combination of two pixels adjacent or overlapping in time and adjacent in scale, for scales, , of 5, 6, and 7.
for s of 0.35, 0.71, and 1.41 ms, we obtain 50% efficiencies for equal to , , and hz, which correspond to signal to noise ratios, , of 3.6, 3.7, and 3.5. since the discrete transform targets well localized bursts, we demonstrate its efficiency for sine-gaussian bursts of the form , with a of 12.7 and central frequency, , of 275 hz. the discrete transform is applied to the whitened data with approximately 75 percent pixel overlap, to search for single pixel bursts within the frequency band 64–4096 hz and with s in the vicinity of 10.6, 14.1, and 17.7. candidate events are then identified as described in section [sec:events:dqt] in order to obtain a 1 hz white noise false rate. the resulting 50% detection efficiency occurs at an of hz, corresponding to a of 3.0. we have described two transient-finding algorithms using a logarithmic tiling of the time-frequency plane and examined their applicability to the detection of gravitational wave bursts. the methods work more efficiently when the data are whitened, and for this we have introduced a data conditioning algorithm based on linear prediction. as a first step in quantifying their performance, we have used simulated noise data according to the ligo noise spectra from its second science run. moreover, _ad hoc_ waveform morphologies were used to simulate the presence of a burst signal over the noise. this allowed us to measure the algorithms' efficiency at a fixed false alarm rate of o(1 hz). the results are encouraging, and we are continuing the computation for mapping the efficiency of the algorithms as a function of their false alarm rate. needless to say, even if the shaped ligo noise with some line features that we used for our simulation is a step forward, it does remain unrealistic. real interferometric data have a richer line structure and a non-stationary, non-gaussian character that may be far from our assumptions; at the same time, though, such data are extremely hard to simulate. all of this makes the performance estimates obtained using our simulation model optimistic. the use, though, of mathematically well described noise allows the straightforward reproduction and comparison of our results with other methods. an end-to-end pipeline using the aforementioned methods for whitening and transient-finding on data collected by the ligo instruments is currently in development. this will provide the ultimate test of the performance of the algorithms and will be reported separately within the context of the ligo scientific collaboration (lsc) data analysis ongoing research. this work was supported by the us national science foundation under cooperative agreement no.
we present two search algorithms that implement a logarithmic tiling of the time-frequency plane in order to efficiently detect astrophysically unmodeled bursts of gravitational radiation. the first is a straightforward application of the dyadic wavelet transform. the second is a modification of the windowed fourier transform which tiles the time-frequency plane for a specific . in addition, we demonstrate adaptive whitening by linear prediction, which greatly simplifies our statistical analysis. this is a methodology paper that aims to describe the techniques for identifying significant events, as well as the necessary pre-processing that is required in order to improve their performance. for this reason we use simulated ligo noise in order to illustrate the methods and to present their preliminary performance.
the modelling and control of quantum linear systems is an important emerging application area, motivated by the fact that quantum mechanical features emerge as the systems being controlled approach sub-nanometer scales and as the required levels of accuracy in control and estimation approach quantum noise limits. in recent years, there has been considerable interest in the feedback control and modeling of linear quantum systems; e.g., see . such linear quantum systems commonly arise in the area of quantum optics; e.g., see . the feedback control of quantum optical systems has applications in areas such as quantum communications, quantum teleportation, and gravity wave detection. in particular, the papers have been concerned with a class of linear quantum systems in which the system can be defined in terms of a set of linear quantum stochastic differential equations (qsdes) expressed purely in terms of annihilation operators. such linear quantum systems correspond to optical systems made up of passive optical components such as optical cavities, beam-splitters, and phase shifters. the main results of this paper apply to this class of linear quantum systems for the square case, in which the number of outputs is equal to the number of inputs. this paper is concerned with the use of singular perturbation approximations in order to obtain reduced dimension models for the class of linear quantum systems under consideration. the singular perturbation approximation is widely used for obtaining reduced dimension models of classical systems; e.g., see . in the case of quantum systems, a reduced dimension model may be desired for a quantum plant to be controlled, in order to simplify the controller design process, which can be very complicated using existing quantum controller design methods such as the quantum lqg method of . another application of model reduction for linear quantum systems arises in the case of controller reduction, where a reduced dimension controller is obtained from a high order synthesized controller. in the case of coherent quantum control, such as considered in , the controller is required to be a quantum system itself, and thus the reduced dimension system must be physically realizable. in the physics literature, a commonly used technique in the modeling of quantum systems is the method of adiabatic elimination, which is closely connected to the singular perturbation method in linear systems theory; e.g., see . the papers also consider the issue of convergence of these singular perturbation approximations. in this paper, we consider the properties of the singular perturbation approximation to a linear quantum system from a linear systems point of view; e.g., see for a detailed description of singular perturbation methods in linear systems theory, including error characterization in both the time and frequency domains. in particular, we are concerned with the physical realizability properties of the singular perturbation approximation to a linear quantum system. the issue of physical realizability for linear quantum systems was considered in the papers . this notion relates to whether a given qsde model represents a physical quantum system which obeys the laws of quantum mechanics. in particular, the results of the papers show that the notion of physical realizability enables a direct connection between results in quantum linear systems theory and linear systems theory.
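since physical realizability for this class of systems is equivalent to the lossless bounded real property discussed below (system matrix hurwitz, transfer function unitary at all frequencies), it can be checked numerically from a state space model (f, g, h, k). the following sketch performs a finite-grid test of the two conditions; the single-cavity matrices used in the example are standard, but their sign conventions are an assumption:

```python
import numpy as np

def is_lossless_bounded_real(F, G, H, K, tol=1e-8, n_freq=200):
    """Numerically test the lossless bounded real property.

    Checks the two defining conditions: F is Hurwitz, and the transfer
    function  K + H (i w I - F)^{-1} G  is unitary on a grid of
    frequencies.  This is a finite-grid check, not a proof.
    """
    F, G, H, K = map(np.atleast_2d, (F, G, H, K))
    if np.max(np.linalg.eigvals(F).real) >= 0:
        return False
    I = np.eye(F.shape[0])
    for w in np.linspace(-100.0, 100.0, n_freq):
        T = K + H @ np.linalg.solve(1j * w * I - F, G)
        if np.linalg.norm(T @ T.conj().T - np.eye(K.shape[0])) > tol:
            return False
    return True

# single optical cavity: F = -gamma/2, G = -sqrt(gamma), H = sqrt(gamma), K = 1
gamma = 2.0
print(is_lossless_bounded_real([[-gamma / 2]], [[-np.sqrt(gamma)]],
                               [[np.sqrt(gamma)]], [[1.0]]))   # True
```

here the cavity transfer function is (iw - gamma/2) / (iw + gamma/2), which has unit modulus at every frequency, so the check returns true.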
in applying singular perturbation methods to obtain approximate models of quantum systems, it is important that the model obtained is a physically realizable quantum system, so that it retains the essential features of a quantum system. also, if the approximate model of a quantum plant is to be used for controller synthesis, the controller synthesis procedure may need to exploit the physical realizability of the plant model. in addition, if model order reduction is applied to a coherent feedback controller which is to be implemented as a quantum system, then this reduced order controller model must be physically realizable. in the paper , the notion of physical realizability is shown to be equivalent to the lossless bounded real property for the class of square linear quantum systems under consideration. this property requires that the system matrix is hurwitz and that the system transfer function is unitary at all frequencies. the main result of this paper shows that if a singularly perturbed linear quantum system is physically realizable for all values of the singular perturbation parameter, then the corresponding reduced dimension approximate system has the property that all of its poles are in the closed left half of the complex plane and its transfer function is unitary at all frequencies. these properties indicate that, in all but pathological cases, the singular perturbation approximation method will yield a physically realizable reduced dimension system. in addition, an example is given showing one such pathological system, in which the singular perturbation approximation is not strictly hurwitz. the paper also presents a result for a special case of the singularly perturbed linear quantum systems considered in this paper. this special case corresponds to singular perturbations which arise physically from a perturbation in the system hamiltonian. in this case, the result shows that the corresponding reduced dimension approximate system is always physically realizable. this result can in fact be derived from the nonlinear quantum system results presented in the papers . however, we have included this result, along with a straightforward proof, for the sake of completeness. we have also included an example of a singularly perturbed linear quantum optical system which fits into the subclass of singularly perturbed quantum systems for which this result applies. this example illustrates how such singularly perturbed quantum systems can arise naturally in physical quantum optical systems. the remainder of this paper proceeds as follows. in section [sec:systems], we define the class of linear quantum systems under consideration and recall some preliminary results on the physical realizability of such systems. in section [sec:sing_pert], we consider the singular perturbation approximation to a linear quantum system. we first present a result for the class of singularly perturbed linear quantum systems under consideration which relates to the lossless bounded real property. we then consider a special class of singular perturbations which is related to corresponding perturbations of the quantum system coupling operator and hamiltonian operator. we present a result which relates to this class of singular perturbations and shows that the corresponding approximate reduced dimension system is guaranteed to be physically realizable. in section [sec:example], we present a simple example from the field of quantum optics to illustrate the proposed theory.
in section [ sec : conclusions ] , we present some conclusions .we consider a class of linear quantum systems described in terms of the annihilation operator by the following quantum stochastic differential equations ( qsdes ) : where , , and ; e.g. , see . here^t ] to be from this , it follows using the routh - hurwitz criterion that the matrix is hurwitz for all .furthermore , it is straightforward to verify that the matrix > 0 ] and ] which is not hurwitz . also , , , and thus , the reduced order system is not minimal .this example shows that a stronger result than theorem [ t3 ] , which guarantees minimality and hurwitzness of the approximate system , can not be obtained in the general case . in the next subsection, we consider a special class of singular perturbations for which the physical realizability of the reduced dimension system can be guaranteed .we now consider a special class of singularly perturbed physically realizable quantum systems of the form ( [ sys1 ] ) defined in terms of the matrices , and in definition [ phys_real ] . indeed, we consider the case in which , ;~ m = \left[\begin{array}{cc } m_{11 } & \frac{1}{\sqrt{\epsilon}}m_{12}\\ \frac{1}{\sqrt{\epsilon}}m_{12}^\dagger & \frac{1}{\epsilon}m_{22 } \end{array}\right]\ ] ] for all where , and and are hermitian matrices .then , substituting these values into ( [ harmonic ] ) , we obtain the following linear quantum system of the form ( [ sys ] ) : if we make the change of variables , this leads to the following singularly perturbed quantum system of the form ( [ sys1 ] ) : that even though we formally let in the singular perturbation approximation , this state space transformation can be applied for each fixed . then , for the singularly perturbed linear quantum system ( [ sys5 ] ) , we can obtain the corresponding reduced dimension approximate system according to equations ( [ sys3 ] ) , ( [ slow_matrices ] ) .the following result is obtained for singularly perturbed linear quantum systems of the form ( [ sys5 ] ) .this result can also be derived from the general nonlinear results presented in the papers . however , this result for the linear case is included here for the sake of completeness .[ t4 ] consider a singularly perturbed linear quantum system ( [ sys5 ] ) which is physically realizable for all and suppose that the matrix is nonsingular. then the corresponding reduced dimension approximate system defined by equations ( [ sys3 ] ) , ( [ slow_matrices ] ) is physically realizable .the proof of this theorem is given in the appendix .we consider an example from quantum optics involving the interconnection of two optical cavities as shown in figure [ f1 ] .each optical cavity consists of two partially reflective mirrors which are spaced at a specified distance to give a cavity resonant frequency which corresponds to the frequency of the driving laser ; e.g. , see . in practice, optical isolators would also need to be included in the optical connections between the cavities to ensure that the light traveled only in one direction . 
here and are the coupling parameters of the first cavity and is the coupling parameter of the second cavity .these parameters are determined by the physical characteristics of each cavity including the mirror reflectivities .the qsde of the form ( [ sys ] ) describing this quantum system is as follows : & = & \left[\begin{array}{cc}-\frac{k_1+k_2}{2 } -\sqrt{k_1 k_2 } & -\sqrt{k_1 \tilde \gamma } \\-\sqrt{k_2 \tilde \gamma } & -\frac{\tilde \gamma}{2}\end{array}\right ] \left[\begin{array}{c}a \\\tilde a\end{array}\right]dt -\left[\begin{array}{c}\sqrt{k_1}+\sqrt{k_2 } \\ \sqrt{\tilde \gamma } \end{array}\right ] du_2 ; \nonumber \\ dy_1 & = & \left[\begin{array}{cc}\sqrt{k_1}+\sqrt{k_2 } & \sqrt { \tilde \gamma } \end{array}\right]\left[\begin{array}{c}a \\ \tildea\end{array}\right]dt + du_2.\end{aligned}\ ] ] we wish to consider the reduced dimension approximation to this system which is obtained by letting .this corresponds to the case in which the mirrors in cavity 2 are perfectly reflecting and so there is a direct optical feedback from the output of cavity 1 into the input of cavity 1 .if we let , it is straightforward to verify that this system is a system of the form ( [ sys4 ] ) with , , , , , , . with the change of variables ,the system becomes & = & \left[\begin{array}{cc}-\frac{k_1+k_2}{2 } -\sqrt{k_1 k_2 } & -\sqrt{k_1 } \\-\sqrt{k_2 } & -\frac{1}{2}\end{array}\right ] \left[\begin{array}{c}a \\ \tilde a\end{array}\right]dt-\left[\begin{array}{c}\sqrt{k_1}+\sqrt{k_2 } \\ 1 \end{array}\right ] du_2 ; \nonumber \\ dy_1 & = & \left[\begin{array}{cc}\sqrt{k_1}+\sqrt{k_2 } & 1 \end{array}\right]\left[\begin{array}{c}a \\\tilde a\end{array}\right]dt + du_2\end{aligned}\ ] ] which is a singularly perturbed quantum system of the form ( [ sys2 ] ) .hence , the corresponding reduced dimension slow subsystem ( [ sys3 ] ) , ( [ slow_matrices ] ) is given by since the system ( [ sys8 ] ) satisfies the conditions of theorem [ t4 ] , it follows from this theorem that the system ( [ sys9 ] ) will be physically realizable .this can also be verified directly by noting that the system ( [ sys9 ] ) satisfies the conditions of theorem [ t1 ] with .note that for this example , if , then the reduced dimension quantum system is uncontrollable , unobservable and has a pole at the origin .in this paper , we have considered the physical realizability properties of the singular perturbation approximation to a class of singularly perturbed linear quantum systems. these results may be useful in the modeling of linear quantum systems such as gravity wave detectors where a simplified model is required without sacrificing physical realizability .the author is wishes to acknowledge useful research discussions with matthew james , elanor huntington , michele heurs and hendra nurdin .if the singularly perturbed quantum system ( [ sys2 ] ) is physically realizable for all , then it follows from theorem [ t1 ] that for all , there exists a matrix such that the matrices ;~ g_\epsilon = \left[\begin{array}{c } g_{1}\\ \frac{1}{\epsilon}g_{2}\end{array}\right];\nonumber \\ h & = & \left[\begin{array}{cc } h_{1 } & h_{2 } \end{array}\right];~k\end{aligned}\ ] ] satisfy the conditions ( [ physrea ] ) .hence , it follows from the first of these equalities and fact 12.21.3 of that matrix has all of its eigenvalues in the closed left half of the complex plane for all .then , using a standard result on singularly perturbed linear systems ( e.g. 
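the reduction in this example can be reproduced numerically with the standard schur-complement slow-subsystem formulas; that these correspond exactly to eqs. ([sys3]), ([slow_matrices]) is an assumption, though the resulting reduced cavity model does satisfy the annihilation-operator realizability identity f0 + f0^dagger + g0 g0^dagger = 0:

```python
import numpy as np

def slow_subsystem(F11, F12, F21, F22, G1, G2, H1, H2, K):
    """Standard singular perturbation (Schur complement) reduction.

    These are the usual slow-subsystem formulas, assumed to correspond
    to eqs. (sys3), (slow_matrices) in the text.
    """
    F22i = np.linalg.inv(F22)
    F0 = F11 - F12 @ F22i @ F21
    G0 = G1 - F12 @ F22i @ G2
    H0 = H1 - H2 @ F22i @ F21
    K0 = K - H2 @ F22i @ G2
    return F0, G0, H0, K0

# two-cavity example after the change of variables (k1, k2 coupling rates)
k1, k2 = 1.0, 4.0
s1, s2 = np.sqrt(k1), np.sqrt(k2)
F0, G0, H0, K0 = slow_subsystem(
    np.array([[-(k1 + k2) / 2 - s1 * s2]]), np.array([[-s1]]),
    np.array([[-s2]]), np.array([[-0.5]]),
    np.array([[-(s1 + s2)]]), np.array([[-1.0]]),
    np.array([[s1 + s2]]), np.array([[1.0]]), np.array([[1.0]]))
print(F0, G0, H0, K0)            # F0 = -(s1 - s2)**2 / 2, K0 = -1, ...
print(F0 + F0.T + G0 @ G0.T)     # [[0.]]  -> realizability identity holds
```

with k1 = k2 the reduced matrices f0, g0 and h0 all vanish, reproducing the uncontrollable, unobservable pole at the origin noted above.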
, see theorem 3.1 on page 57 of ) it follows that the matrix has all of its eigenvalues in the closed left half of the complex plane . with the matrices , , and defined as above ,it follows by a straightforward but tedious calculation that we can write the transfer function in the form : ^{-1}\nonumber \\ & & \times \left(g_0+f_{12}\left(i-\frac{f_{22}}{\epsilon s}\right)^{-1}g_2\right)\nonumber \\ & & + k_0+h_2f_{22}^{-1}\left(i-\frac{f_{22}}{\epsilon s}\right)^{-1}g_2\end{aligned}\ ] ] where the matrices , , , are defined as in ( [ slow_matrices ] ) .now for small values of , we can approximate the term in the above expression as follows : from this , it follows that we can write ^{-1 } \nonumber \\ & & \times \left(g_0-\epsilon s f_{12}f_{22}^{-1}g_2\right)\nonumber \\ & & + k_0- \epsilon sh_2f_{22}^{-2}g_2 + o(\epsilon^2).\end{aligned}\ ] ] from this and some further straightforward manipulations and simplifications , we can obtain now using the fact that the matrices , , and satisfy the conditions ( [ physrea ] ) , we will show that transfer function matrix unitary at all frequencies . indeed using ( [ physrea ] ), we have for all \left(-i\omega i - f_\epsilon^\dagger\right)^{-1}h^\dagger\nonumber \\ & & = h\theta\left(-i\omega i - f_\epsilon^\dagger\right)^{-1}h^\dagger + h\left(i\omega i - f_\epsilon\right)^{-1}\theta h^\dagger \nonumber \\ & & = -kg_\epsilon^\dagger\left(-i\omega i - f_\epsilon^\dagger\right)^{-1}h^\dagger - h\left(i\omega i - f_\epsilon\right)^{-1}g_\epsilon k^\dagger . \end{aligned}\ ] ] now using the third equation of ( [ physrea ] ) and the fact that is square , we have for all therefore , since is square we have for all .hence , it follows from ( [ tf_pert ] ) and the fact that ( [ unitary_e ] ) holds for all that we must have for all .this completes the proof of the theorem . for the singularly perturbed linear quantum system ( [ sys5 ] ) , it is straightforward but tedious to verify that the corresponding reduced dimension slow subsystem ( [ sys3 ] ) , ( [ slow_matrices ] ) is given by where furthermore , it is straightforward to verify that and the matrix is hermitian . hence , it follows from definition [ phys_real ] that the system ( [ sys6 ] ) is physically realizable with the matrices , , defined as above and with .this completes the proof of the theorem . m. yanagisawa and h. kimura , `` transfer function approach to quantum control - part i : dynamics of quantum feedback systems , '' _ ieee transactions on automatic control _ ,48 , no . 12 , pp . 21072120 , 2003 .v. p. belavkin and s. c. edwards , `` quantum filtering and optimal control , '' in _ quantum stochastics and information : statistics , filtering and control ( university of nottingham , uk , 15 - 22 july 2006 ) _ , v. p. belavkin and m. guta , eds.1em plus 0.5em minus 0.4emsingapore : world scientific , 2008 . j. gough and r. van handel , `` singular perturbation of quantum stochastic differential equations with coupling through an oscillator mode , '' _ journal of statistical physics _ , vol .127 , no . 3 , pp .575607 , 2007 .l. bouten , r. van handel , and a. silberfalb , `` approximation and limit theorems for quantum stochastic models with unbounded coefficients , '' _ journal of functional analysis _ , vol .254 , pp . 31233147 , 2008 .j. gough , h. nurdin , and s. wildfeuer , `` commutativity of the adiabatic elimination limit of fast oscillatory components and the instantaneous feedback limit in quantum feedback networks , '' _ journal of mathematical physics _51 , no . 
12, pp. 123518.1–123518.25, 2010.
this paper considers the use of singular perturbation approximations for a class of linear quantum systems arising in the area of linear quantum optics. the paper presents results on the physical realizability properties of the approximate systems obtained from singular perturbation model reduction.
quantum error correcting codes are a central weapon in the battle to overcome the effects of the environmental noise associated with attempts to control quantum mechanical systems as they evolve in time. it is thus important to develop techniques that assist in determining whether one code is better than another for a given noise model. in this paper we make a contribution to this study by introducing a notion of entropy for quantum error correcting codes. no single quantity can be expected to hold all information on a code, its entropy included. nevertheless, the entropy of a code is one way in which the amount of effort required to recover a code can be quantified. in the extremal case, a code has zero entropy if and only if it can be recovered with a single unitary operation. this is the simplest of all correction operations, in that a measurement is not required as part of the correction process. these codes have recently been coined _unitarily correctable_, and include _decoherence-free subspaces_ in the case that the recovery is the trivial identity operation. thus, more generally, the entropy can be regarded as a measure of how close a code is to being unitarily correctable, or decoherence-free in some cases. in the next section we introduce the entropy of a code, along with the required nomenclature. we also consider an example motivated by the stabilizer formalism and discuss an extension of code entropy to operator quantum error correcting subsystem codes. we then consider in detail an illustrative class of quantum operations for which the code structure has recently been characterised, the class of _binary unitary channels_. let denote a quantum state: a hermitian, positive operator satisfying the trace normalisation condition tr . a linear quantum operation (or channel) , which sends a density operator of size into its image of the same size, may be described in the choi-kraus form . the kraus operators can be chosen to be orthogonal, so that the non-negative weights become eigenvalues of the dynamical (choi) matrix, . we refer to the rank of the choi matrix as the _choi rank_ of . observe that the choi rank of is equal to the minimal number of kraus operators required to describe as in ([kraus]); hence, in the canonical form the number of non-zero kraus operators does not exceed . due to the theorem of choi, the map is completely positive (cp) if and only if the corresponding dynamical matrix is positive, if and only if has a form as in ([kraus]). the map is _trace preserving_, , if and only if , where we assume some are zero if is less than . the family of operators is not unique. however, if is another family of operators that determines as in ([kraus]), then there is a scalar unitary matrix such that for all . we refer to this as the _unitary invariance_ of choi-kraus decompositions. to characterise the information missing in a quantum state one uses its _von neumann_ entropy, . we will use the convention that refers to the logarithm base two, as this provides a cleaner operational qubit definition in the context of quantum information. in order to describe the action of a cp map , represented in the canonical choi-kraus form ([kraus]), for an initial state we may compare its entropy with the entropy of the image .
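as a quick aside, the von neumann entropy used throughout is straightforward to evaluate numerically from the eigenvalues of a density matrix; a minimal sketch:

```python
import numpy as np

def von_neumann_entropy(rho, tol=1e-12):
    """S(rho) = -sum_i lam_i log2(lam_i) over the eigenvalues of rho."""
    lam = np.linalg.eigvalsh(rho)
    lam = lam[lam > tol]                 # convention: 0 log 0 = 0
    return float(-np.sum(lam * np.log2(lam)))

# a pure state has zero entropy; the maximally mixed qubit has one bit
pure = np.array([[1.0, 0.0], [0.0, 0.0]])
mixed = np.eye(2) / 2
print(von_neumann_entropy(pure), von_neumann_entropy(mixed))  # 0.0 1.0
```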
to obtain a bound for such an entropy change we can define an operator acting on an extended hilbert space , if the map is stochastic , the operator is positive definite and normalised so it represents a density operator in its own right , ( specifically , it is an initially pure environment evolved by the unitary dilation of ) .observe that for any unitary map , , the form ( [ kraus ] ) consists of a single term only .hence in this case the operator reduces to a single number equal to unity and its entropy vanishes , .the auxiliary state acting in an extended hilbert space was used by lindblad to derive bounds for the entropy of an image of any initial state .the bounds of lindblad , are obtained by defining another density matrix in the composite hilbert space , where and forms an orthonormal basis in .thus , is simply the system and an initially pure environment evolved by the unitary dilation of .computing partial traces one finds that tr and tr .it is possible to verify that , so making use of the subadditivity of entropy and the triangle inequality one arrives at ( [ boundent ] ) .if the initial state is pure , that is if , we find from ( [ boundent ] ) that the final state has entropy . for this reason called the _ entropy exchange _ of the operation by shumacher . in that workan alternative formula for the entropy exchange was proven , where is an _arbitrary _ purification of the mixed state , tr .thus , the entropy exchange is invariant under purification of the initial state and remains a function only of the initial density operator and the map . a quantum operation allows for an error correction scheme in the standard framework for quantum error correction if there exists a subspace such that for some set of complex scalars the corresponding projection operator satisfies specifically , this is equivalent to the existence of a quantum _ recovery operation _ such that where is the map .the subspace related to determines a _ quantum error correcting code _ ( qecc ) for the map .a special class of codes are the _ unitarily correctable codes _ ( ucc ) , which are characterised by the existence of a unitary recovery operation .these codes include _ decoherence - free subspaces _ ( dfs ) in the case that is the identity map , .it can be shown that the matrix is hermitian and positive , and is in fact a density matrix , so it can be considered as an auxiliary state acting in an extended hilbert space of size at most .it is easy to obtain a more refined global upper bound on the rank of in terms of the map .[ upperbound ]let be the matrix determined by a code for a quantum map .then the rank of is bounded above by the choi rank of .* proof . * without loss of generality assume the choi matrix is diagonal .we have for all , and the result follows from the positivity of and . ' '' ''unitarily correctable codes are typically highly _ degenerate _ codes , as the map collapses to a single unitary operation when restricted to the code subspace .in particular , the unitary invariance of choi - kraus representations implies the restricted operators are all scalar multiples of a single unitary .more generally , one can see that a code is degenerate for precisely when the choi rank of is strictly greater than the rank of .indeed , the choi rank counts the minimal number of kraus operators required to implement via ( 2.1 ) , and satisfaction of this strict inequality means there is redundancy in the description of by the operators . 
thus , for these reasons we shall refer to codes as _ non - degenerate _ if the choi rank of coincides with the rank of and if the spectrum of is equally balanced that is to say that non - degenerate codes correspond to the maximally degenerate error correction matrix , proportional to the identity matrix assume now that an error correcting code exists and all conditions ( [ compression ] ) are satisfied .if a quantum state is supported on the code then and calculation of the entropy exchange ( [ osigma ] ) simplifies , in this way we have shown that the error correction matrix is _ equal _ to the auxiliary matrix of lindblad , provided the initial state belongs to the code subspace . from another direction ,given an error correcting code for a map , in it was shown that there is a quantum state and an isometry such that for all , the result , which can be seen as a consequence of the decoupling condition of , gives an explicit way to `` see '' a code at the output stage of a quantum process for which the code is correctable . the result ( and its subsystem generalization see below ) may also be viewed as a formalisation of the _ subsystem principle _ for preserving quantum information . from the proof of this resultone can see directly that the entropy of satisfies .this equality follows also from the fact that and can be interpreted as the states obtained by partial trace of an initially pure state with respect to two different subsystems .thus , from multiple perspectives we find motivation for the following : [ entdefn ] given a quantum operation with kraus operators and a code with matrix given by ( [ compression ] ) , we call the von neumann entropy the _ entropy of _ _ relative to _ .the entropy of a code depends only on the map and the subspace defined by , not on any particular state in the code subspace .thus , the entropy exchange will be the same for all initial states supported on the code subspace and is therefore a property of the code itself . 
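The definition just given is directly computable from a set of Kraus operators and a code projection. Below is a hedged sketch, assuming the standard Knill-Laflamme form P K_a^dag K_b P = Lambda_ab P of the error correction conditions; the helper names are ours:

```python
import numpy as np

def code_matrix(kraus, P, tol=1e-10):
    """Return Lambda with P K_a^dag K_b P = Lambda_ab P if P projects onto a
    correctable code for the given Kraus operators, and None otherwise."""
    k = int(round(np.trace(P).real))          # rank of the code projection
    m = len(kraus)
    lam = np.zeros((m, m), dtype=complex)
    for a in range(m):
        for b in range(m):
            M = P @ kraus[a].conj().T @ kraus[b] @ P
            lam[a, b] = np.trace(M) / k
            if np.linalg.norm(M - lam[a, b] * P) > tol:
                return None                   # condition violated: not a code
    return lam

def code_entropy(kraus, P):
    """Von Neumann entropy of the error correction matrix Lambda."""
    lam = code_matrix(kraus, P)
    w = np.linalg.eigvalsh(lam)
    w = w[w > 1e-12]
    return float(-(w * np.log2(w)).sum())

P = np.eye(2, dtype=complex)
print(code_entropy([np.eye(2, dtype=complex)], P))   # 0.0: identity map, trivial code
```

For a trace-preserving map, Lambda is hermitian, positive, and of unit trace, and its rank is bounded by the Choi rank, in line with the proposition above.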
in the following resultwe determine what possible values the code entropy can take , and we derive a characterisation of the extremal cases in terms of both the code and the map .[ ucccase ] let be a quantum operation and let be a code with matrix given by ( [ compression ] ) .then belongs to the closed interval ] .it is clear that the problem of finding an error correcting code subspace for the above map is equivalent to the case where .the number of kraus operators is equal to , with and .thus the error correction matrix is of size two and reads where is a solution of the _ compression problem _ for the set of solutions to this problem can be phrased in terms of the _ higher - rank numerical range _ of the matrix .the rank- numerical range of is defined as given a dimension- that defines the size of the desired correctable code , each in corresponds to a particular correctable code defined by the associated projection that solves ( [ comp1 ] ) .the following is a straightforward application of ( [ compression ] ) .[ correctable - if - numrange ] given a binary unitary channel , there exists a rank- correctable code for if and only if the rank- numerical range of is non - empty .thus , the problem of finding the correctable codes for a given binary unitary channel can be reduced to the problem of finding the higher - rank numerical range of .this problem has recently been solved in its entirety .most succinctly , in terms of the eigenvalues of , the numerical range of is the convex subset of the unit disk given by where is the set of linear combinations such that and .figures 1.a and 1.b depict the case of a generic two - qubit unitary ( ) with , while figure 1.c shows the case of a generic two - qutrit unitary noise ( ) with . for unitary matrices describing a bi - unitary channel : two qubit system , a ) example 2 with , b ) case with for which code entropy is smaller , c ) two qutrit case with chosen to maximize its modulus and to minimize the code entropy . ] with a particular in - hand , straightforward algebra provides us with the spectrum of the matrix ( [ lambda1 ] ) , which allows us to calculate the entropy of the code , we are then led to the following : [ entropy - biunitary ] a minimum entropy rank- code for a binary unitary channel corresponds to any code for which the magnitude of the compression values is closest to unity , while the maximum entropy corresponds to closest to zero. moreover , a code with minimal entropy can be constructively obtained .* proof . *the first statement follows directly from an application of first and second derivative tests on ( [ lambda5 ] ) , constrained to the unit disk .a minimal entropy code can be explicitly constructed based on the analysis of higher - rank numerical ranges . 
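Since the explicit spectrum of the 2x2 error correction matrix was garbled in this copy, the following sketch reconstructs it under the usual choice of Kraus operators sqrt(p) U and sqrt(1-p) 1: the eigenvalues then work out to (1 +/- sqrt(1 - 4 p (1-p) (1 - |z|^2)))/2, which reproduces both extreme cases of theorem [entropy-biunitary] (maximal entropy at z = 0, zero entropy at |z| = 1). Treat the closed form as our reconstruction rather than a quotation from the text:

```python
import numpy as np

def binary_code_entropy(p, z):
    """Entropy of a code with compression value z (PUP = zP) for the channel
    rho -> p U rho U^dag + (1 - p) rho; the spectrum formula is reconstructed,
    not quoted."""
    s = np.sqrt(1.0 - 4.0 * p * (1.0 - p) * (1.0 - abs(z) ** 2))
    lam = np.array([(1.0 + s) / 2.0, (1.0 - s) / 2.0])
    lam = lam[lam > 1e-15]
    return float(-(lam * np.log2(lam)).sum())

print(binary_code_entropy(0.5, 0.0))   # 1.0: maximal entropy at z = 0
print(binary_code_entropy(0.5, 1.0))   # 0.0: |z| = 1 gives a zero-entropy code
```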
' '' '' as an illustration of the code construction in the simplest possible case ( ) , let be the unitary with spectrum depicted in figure 1.a , .let be the associated eigenstates , .in this case we have so , and one can check directly that a single qubit correctable code for is given by , where for a concrete example , in the case that , ( [ lambda5 ] ) yields a code entropy of .the general case requires a more delicate construction , nevertheless it can be done .the `` eigenstate grouping '' procedure used above can be applied whenever divides .for instance , in the generic and case depicted in figure 1.c , a single qutrit code can be constructed for all in the region .the states , , can be constructed in an analogous manner by grouping the nine eigenstates for into three groups of three , and writing in three different ways as a linear combination of the associated unimodular eigenvalues , .however , without going into the details of this construction we can still analyze the corresponding code entropies . for simplicity assume the nine eigenvalues are distributed evenly around the unit circle with . by theorem [ entropy - biunitary ]we know that the entropy will be minimized for any that gives the minimum distance from to the unit circle .an elementary calculation shows that one such , given by the intersection of the lines through the first and seventh , and sixth and ninth eigenvalues ( counting counterclockwise ) is approximately . with the probability , the corresponding error correction matrix has spectrum .thus , ( [ lambda3 ] ) and ( [ lambda5 ] ) yield the minimal qutrit code entropy for this channel as on the other hand , as belongs to , by theorem [ entropy - biunitary ] and ( [ lambda1 ] ) we also see the maximal entropy for occurs for any code with . in such cases we have spectrum , andhence the maximal entropy is .changing focus briefly , if we fix an arbitrary unitary , then we could consider the family of channels determined by varying the probability .it follows from ( [ lambda5 ] ) that the channel with the correctable codes of maximal entropy corresponds to the channel , and the channels whose correctable codes possess minimal entropy correspond to and .indeed , the value of depends on but not , thus a given will solve ( [ comp1 ] ) for any and so can be chosen independently using the above theorem .the result once again follows from application of first and second derivative tests with between and .the following results show that the entropy of a code for a binary unitary channel can be regarded as a measure of how close the code is to a decoherence - free subspace .[ uccdfs ] if is a binary unitary channel , then the sets of unitarily correctable subspaces and decoherence - free subspaces coincide .* proof . *as proved in , for a bistochastic ( unital ) map , a condition satisfied by every binary unitary channel , the unitarily correctable codes ( respectively the decoherence - free subspaces ) for are imbedded in the fixed point algebra for the map ( respectively ) , where is the hilbert - schmidt dual map of . in particular , it follows from this fact that the former set is given by the set of operators that commute with and , whereas the latter is the set of operators that commute with . by the spectral theoremthese two sets coincide . 
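The eigenstate-grouping construction described above can be checked numerically. The sketch below (our own illustration, working in the eigenbasis of the unitary) draws four unimodular eigenvalues in cyclic order, computes z as the crossing of the two diagonals of the resulting quadrangle, builds the two code vectors by pairing opposite eigenvectors, and verifies the compression relation PUP = zP:

```python
import numpy as np

rng = np.random.default_rng(5)
theta = np.sort(rng.uniform(0, 2 * np.pi, 4))
w = np.exp(1j * theta)                      # eigenvalues, in cyclic order

# z = crossing of the diagonals w0-w2 and w1-w3 (a real 2x2 linear system)
A = np.array([[(w[2] - w[0]).real, -(w[3] - w[1]).real],
              [(w[2] - w[0]).imag, -(w[3] - w[1]).imag]])
b = np.array([(w[1] - w[0]).real, (w[1] - w[0]).imag])
t, s = np.linalg.solve(A, b)
z = w[0] + t * (w[2] - w[0])

# pair opposite eigenvectors: |psi1> from {e0, e2}, |psi2> from {e1, e3}
e = np.eye(4, dtype=complex)                # eigenbasis of U
psi1 = np.sqrt(1 - t) * e[0] + np.sqrt(t) * e[2]
psi2 = np.sqrt(1 - s) * e[1] + np.sqrt(s) * e[3]
P = np.outer(psi1, psi1.conj()) + np.outer(psi2, psi2.conj())
U = np.diag(w)
print(np.linalg.norm(P @ U @ P - z * P))    # ~0: the code satisfies PUP = zP
```

Because the four eigenvalues lie in convex position on the unit circle, the diagonals cross inside the disk and the mixing weights t and s lie in (0, 1), so the square roots are well defined.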
' '' '' [ dfsbuc ] let be a binary unitary channel .then there is a rank- code of zero entropy , , for if and only if there is a -dimensional decoherence - free subspace for if and only if there exists .* a -dimensional decoherence - free subspace for corresponds to an eigenvalue of with multiplicity at least ( see and the references therein ) ; that is , .the rest follows from the lemma and previous theorem . ' '' '' in order to further illustrate these results , consider again the case of an arbitrary two - qubit system ( ) .the correctable codes with largest entropy are those with and so the spectrum of reads in the two - qubit case , the complex number is given by the point inside the unit circle at which two diagonals of the quadrangle formed by the spectrum of cross ( see figures 1.a and 1.b ) .consider a special case of the problem where has a doubly degenerated eigenvalue , so that . for example , could be any ( non - identity ) element of the two - qubit pauli group .then the spectrum of consists of which implies ( despite having been chosen for the largest entropy correctable codes ) .hence is pure and there exists a decoherence free subspace the one spanned by the degenerated eigenvalues of . in general , for binary unitary channels one may use the entropy ( [ lambda5 ] ) as a measure quantifying to what extend a given error correction code is close to a decoherence - free subspace .for instance , any channel ( [ biunitary2 ] ) acting on a two qubit system and described by unitary matrix of size may be characterized by the radius of the point in which two diagonals of the quadrangle of the spectrum cross .the larger , the smaller entropy , and the closer the error correction code is to a decoherence - free space .the code entropy can be also used to classify codes designed for a binary unitary channel acting on larger systems .for instance in the case of two qutrits , , one can find a subspace supported on dimensional subspace .the solution is by far not unique and can be parametrized by complex numbers belonging to an intersection of triangles , which forms a convex set of a positive measure . 
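The degenerate-eigenvalue case discussed above is equally easy to verify: for a Pauli-type unitary with a doubly degenerate eigenvalue, the compression value lies on the unit circle, so the code entropy vanishes and the code is a decoherence-free subspace. A minimal check, with our own choice of example:

```python
import numpy as np

# a two-qubit Pauli-type unitary with doubly degenerate spectrum {1, 1, -1, -1}
U = np.diag([1, -1, -1, 1]).astype(complex)        # Z tensor Z
e = np.eye(4, dtype=complex)
P = np.outer(e[0], e[0]) + np.outer(e[3], e[3])    # span of the two +1 eigenvectors
print(np.linalg.norm(P @ U @ P - P))               # 0.0, i.e. PUP = zP with z = 1
```

Combined with the entropy sketch given earlier, a compression value with |z| = 1 yields zero code entropy for every mixing probability p, exactly as the lemma and its corollary require.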
from this setone can thus select a concrete solution providing a code , such that is the largest , which implies that the code entropy , , is the smallest see figure 1.c .such an error correction code is distinguished by being as close to a decoherence - free subspace as possible .we have investigated a notion of entropy for quantum error correcting codes and quantum operations .the entropy has multiple natural realisations through fundamental results in the theory of quantum error correction .we showed how the extremal cases are characterised by unitarily correctable codes and decoherence - free subspaces on the one hand , and the non - degenerate case determined by the choi matrix of the map on the other .we considered examples from the stabilizer formalism , and conducted a detailed analysis in the illustrative case of binary unitary channels .recently developed techniques on higher - rank numerical ranges were used to give a complete geometrical description of code entropies for binary unitary channels ; in particular , the structure of these subsets of the complex plane can be used to visually determine how close a code is to a decoherence - free subspace .we also introduced an extension of code entropy to subsystem codes , and left a deeper investigation of this notion for elsewhere .it could be interesting to explore further applications of the code entropy in quantum error correction .for instance , although quantum error correction codes were originally designed for models of discrete time evolution in the form of a quantum operation , generalizations to the case of continuous evolution in time have been investigated .further , we have investigated perfect correction codes only , for which the error recovery operation brings the quantum state corrupted by the noise back to the initial state with fidelity equal one .such perfect correction codes may be treated as a special case of more general approximate error correction codes .another recent investigation includes analysis that suggests the measurement component of recovery may prove to be problematic in quantum error correction , and hence may motivate further investigation of unitarily correctable codes .we thank the referee for helpful comments .d.w.k . was partially supported by nserc grant 400160 , by nserc discovery accelerator supplement 400233 , and by ontario early researcher award 48142 .was partially supported by an ontario graduate scholarship . k. .acknowledges support of an european research project scala and the special grant number dfg - sfb/38/2007 of polish ministry of science .d. gottesman .quantum error correction and fault tolerance . in j .-francoise , g.l .naber , and s.t .tsou , editors , _ encyclopedia of mathematical physics _ , volume 4 , page 196 .oxford , elsevier , 2006 , quant - ph/0507174 .
We define and investigate a notion of entropy for quantum error correcting codes. The entropy of a code for a given quantum channel has a number of equivalent realisations, such as through the coefficients associated with the Knill-Laflamme conditions and the entropy exchange computed with respect to any initial state supported on the code. In general the entropy of a code can be viewed as a measure of how close it is to the minimal entropy case, which is given by unitarily correctable codes (including decoherence-free subspaces), or the maximal entropy case, which from dynamical Choi matrix considerations corresponds to non-degenerate codes. We consider several examples, including a detailed analysis in the case of binary unitary channels, and we discuss an extension of the entropy to operator quantum error correcting subsystem codes.
many systems of scientific or practical interest are decomposable into subsystems with strong internal and relatively weak external interactions ; for example , there are groups of friends or collaborators in social networks , sets of topically related documents in hypertexts , or blocs of interlocked countries in international trade .if systems are modeled as networks , with the system elements as vertices and their interactions as edges , then each subsystem corresponds to a so - called _ community _ , a set of vertices with dense internal connections but sparse connections to the remaining network .two widely used representations of networks are _ layouts _ , which assign the vertices to positions in a metric space , and _clusterings _ , which partition the vertex set into disjoint subsets . both representations can group densely connected vertices , by placing them at nearby positions or in the same cluster , and separate sparsely connected vertices , by placing them at distant positions or in different clusters , and can thus naturally reflect the community structure .requirements like the grouping of densely connected vertices are often formalized as mathematical functions called _ quality measures _ , and the optimization of quality measures is a common strategy for the computation of both layouts and clusterings . despite these commonalities , andalthough layouts and clusterings are often used together as complementary representations of the same network , there is no coherent understanding of layout quality and clustering quality .this paper unifies newman and girvan s modularity , a popular quality measure for clusterings , with energy models of pairwise attraction and repulsion between vertices ( e.g. , ) , a widely used class of quality measures for layouts . after an introduction of the quality measures in sec .[ s : def ] , sec .[ s : density ] shows that layouts with optimal energy and clusterings with optimal modularity represent the community structure similarly , and sec .[ s : unification ] demonstrates that modularity actually _ is _ an energy model of pairwise attraction and repulsion , if clusterings are considered as restricted layouts .section [ s : appl ] discusses the application of these results for computing consistent clusterings and layouts .quality measures for representations of networks formalize what is considered as a _ good _ representation , and allow to compute good representations automatically using optimization algorithms .mathematically , a quality measure maps network representations to real numbers , such that larger ( or smaller ) numbers are assigned to better representations , and the best representations correspond to maxima ( or minima ) of the measure .this section introduces two widely used quality measures , namely energy models based on pairwise attraction and repulsion for layouts , and newman and girvan s modularity measure for clusterings . to obtain uniform and general formulations ,both measures are defined for _ weighted _ networks . in a weighted network ,each vertex has a nonnegative real _ vertex weight _ , and each unordered vertex pair ( including ) has a nonnegative real _ edge weight _ . intuitively , a vertex ( or edge ) of weight can be thought of as a chunk of vertices ( or edges ) of weight . 
the commonly studiedun_weighted networks correspond to the special case where the edge weights are either ( no edge ) or , and the vertex weights are .-dimensional layout _ of a network maps each vertex to a position in ; it thereby assigns a _ distance _ to each vertex pair , namely the euclidean distance between the respective vertex positions .so - called _ energy models _ are an important class of quality measures for layouts . in general , _ smaller _ energy indicates better layouts . because force is the negative gradient of energy , energy models can also be represented as force systems , and energy minima correspond to force equilibria . for introductions to energy - based or force - directed layout ,see refs . .the most popular energy models for general undirected networks are either similar to stress functions of multidimensional scaling , or represent force systems of pairwise attraction and repulsion between vertices .models of the former type ( e.g. , ) enforce that the distance of each vertex pair in the layout approximates some prespecified distance , most commonly the length of the shortest edge path between the vertices .they will not be further discussed , because their layouts reflect these path lengths rather than the community structure . in models of the latter type ,adjacent vertices attract , which tends to group densely connected vertices , and all pairs of vertices repulse , which tends to separate sparsely connected vertices .the strengths of the forces are often chosen to be proportional to some power of the distance .formally , for a layout and two vertices with , the attractive force exerted on by is and the repulsive force exerted on by is where is the distance between and , is the unit - length vector pointing from to , and and are real constants with .the condition ensures that the attractive force between connected vertices grows faster than the repulsive force , and thus prevents infinite distances except between unconnected components .for most practical force models holds and , i.e. , the attractive force is non - decreasing and the repulsive force is non - increasing with growing distance . in the widely used force model of fruchterman and reingold , and . by exploiting that force is the negative gradient of energy , the force model can be transformed into an energy model , such that force equilibria correspond to ( local ) energy minima . for a layout and constants with , the _ -energy _ is where must be read as ( because is the derivative of ) .the -energy model has been proposed by davidson and harel , and the -energy model is known as linlog model . a _ clustering _ of a network partitions the vertex set into disjoint subsets called _ clusters _ , and thereby maps each vertex to a cluster .proposals of quality measures for clusterings are numerous and scattered over the literature of diverse research fields ; surveys , though non - exhaustive , are provided by refs . .one of the most widely used quality measures was introduced by newman and girvan , and is called modularity .it was originally defined for the special case where the edge weights are either or and the weight of each vertex is its degree , and was later extended to networks with arbitrary edge weights .( the _ degree _ of a vertex is the total weight of its incident edges , with the edge weight from the vertex to itself counted twice . 
)generalized to arbitrary vertex weights , the _ modularity _ of a clustering is where is the set of all vertices in the network , and is the set of clusters ; the weight functions are naturally extended to sets of vertices or edges : is the total edge weight within the cluster , and is the total weight of the vertices in . intuitively , the first term of the modularity measure is the _ actual _ fraction of intra - cluster edge weight . in itself , it is not a good measure of clustering quality , because it takes the maximum value for the trivial clustering where one cluster contains all vertices .this is corrected by subtracting a second term , which specifies the _ expected _ fraction of intra - cluster edge weight in a network with uniform density .thus modularity takes positive values for clusterings where the total edge weight within clusters is larger than would be expected if the network had no community structure .finding a minimum - energy layout or a maximum - modularity clustering of a given network is computationally hard ; in particular , modularity maximization was recently shown to be np - complete . in practice, energy and modularity are almost exclusively optimized with heuristic algorithms that do not guarantee to find optimal or near - optimal solutions .an extensive experimental comparison of energy minimization algorithms for network layout was performed by hachul and jnger ; however , most of the examined algorithms make fairly restrictive assumptions about the optimized energy model .more general and reasonably efficient is the force calculation algorithm by barnes and hut , whose runtime is per iteration for a network with edges ( with nonzero weight ) and vertices ( assuming that the number of dimensions is small and the vertex distances are not extremely nonuniform ) .the number of iterations required for convergence typically grows sublinearly with .clustering algorithms for networks are surveyed in refs. .a relatively fast yet very effective heuristic for modularity maximization is agglomeration by iteratively merging clusters ( starting from singletons ) , combined with single - level or multi - level refinement by iteratively moving vertices ; an efficient implementation requires a runtime of ( assuming hierarchy levels in agglomeration and iterations through all vertices per level in refinement ) .a set of vertices is called a _ community _ if the density within the set is significantly larger than the density between the set and the remaining network .density between _ two disjoint sets of vertices and is intuitively the quotient of the actual edge weight and the potential edge weight between and ; formally , it is defined as , where is the total weight of the vertices in , and is the total edge weight between and .similarly , the _ density within _ a vertex set is .( this generalizes standard definitions of density from graph theory to weighted networks with self - edges . )existing theoretical results , which will be summarized and extended in this section , already show that the community structure of a network is reflected in layouts with optimal -energy ( for certain values of and ) and in clusterings with optimal modularity .what has previously escaped notice is the striking analogy : _ the separation of communities in an optimal layout is inversely proportional to ( some power of ) the density between them , and the separation of communities in an optimal clustering reflects whether the density between them is smaller than a certain threshold . 
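The generalized modularity just defined is straightforward to compute for a weighted network. A minimal sketch, following the verbal definition above (actual fraction of intra-cluster edge weight minus squared fraction of vertex weight); the data layout is our own:

```python
def modularity(edge_weight, vertex_weight, cluster):
    """Generalized modularity of a clustering of a weighted network.
    edge_weight: {(u, v): w} with each unordered pair listed once;
    vertex_weight: {u: w} (e.g. the degree); cluster: {u: cluster id}."""
    w_E = sum(edge_weight.values())
    w_V = sum(vertex_weight.values())
    intra, mass = {}, {}
    for (u, v), w in edge_weight.items():
        if cluster[u] == cluster[v]:
            intra[cluster[u]] = intra.get(cluster[u], 0.0) + w
    for u, w in vertex_weight.items():
        mass[cluster[u]] = mass.get(cluster[u], 0.0) + w
    return sum(intra.get(c, 0.0) / w_E - (m / w_V) ** 2 for c, m in mass.items())

# two triangles joined by a single edge: the natural 2-clustering scores well
ew = {(0, 1): 1, (0, 2): 1, (1, 2): 1, (3, 4): 1, (3, 5): 1, (4, 5): 1, (2, 3): 1}
vw = {u: sum(w for pair, w in ew.items() if u in pair) for u in range(6)}
print(modularity(ew, vw, {0: 'a', 1: 'a', 2: 'a', 3: 'b', 4: 'b', 5: 'b'}))  # ~0.357
```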
_ as an important limitation , the result for layouts will be derived only for two communities , and can not be expected to hold precisely for more communities .therefore , the consistency of -energy layouts and modularity clusterings will be revisited in sec .[ s : appl ] , after further evidence has been presented in sec .[ s : unification ] . in what appears to be the only previous work that formally relates energy - based layout to modularity clustering , we did not established similarities between optimal layouts and optimal clusterings , but only noted that the modularity measure is mathematically similar to the density ( called normalized cut in ) , as both normalize the actual edge weight with a potential or expected edge weight .this subsection discusses how the distances in a layout with optimal -energy can be interpreted in terms of the community structure of the network , and how this interpretation depends on the parameters and . for the simple case of a network with two vertices ,the minimum - energy layouts can be computed analytically ( theorem 3 in ) .if the vertices and have the distance , the -energy is the derivative of this function is at its minimum , thus thus the distance of the two vertices in a layout with optimal -energy is the power of the density between the vertices . in particular , the distance is the inverse density if , and the distance is almost independent of the density if .this impact of on the representation of the community structure is illustrated for a larger network in fig .[ f : random ] .layouts with small linlog energy ( ) and with small fruchterman - reingold energy ( ) of a pseudo - random network with eight clusters ( intra - cluster density , expected inter - cluster density ).,title="fig:",width=147 ] layouts with small linlog energy ( ) and with small fruchterman - reingold energy ( ) of a pseudo - random network with eight clusters ( intra - cluster density , expected inter - cluster density ).,title="fig:",width=147 ] replacing the edge with two edges and , where is a new vertex with weight , increases the optimal distance between and by a factor of . because the -energy is only defined for , the factor is if , and greater than if .this result has a significant implication , given that the addition of increases the path length between and ( from to edges ) without changing the density : the optimal distance of and depends only on the density , and not on the path length , if ( as in the linlog energy model ) , and increases with the path length if .the results for networks with two or three vertices can be generalized , at least as approximations , to larger networks . in a network with clear communities , for example ,the density within the communities is ( by definition ) much greater than the density between the communities , and thus the intra - community distances in an optimal layout are much smaller than the inter - community distances ( unless is very large ) .this can be approximated by assuming that the vertices of each community have the same position , and thus by considering each community as one big vertex . for networks with more than two communities , eq .( [ e : distdens ] ) can not be expected to hold precisely for all pairs of communities , because this would often imply distances that violate the triangle inequality .nevertheless , the qualitative reasoning generalizes : distances are less dependent on densities for large , and less dependent on path lengths for small . 
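The two-vertex calculation above can be confirmed numerically: minimising the (a, r)-energy of a single pair gives an optimal distance equal to the inverse density raised to the power 1/(a - r), so the LinLog choice (a, r) = (0, -1) yields exactly the inverse density. A sketch, where the closed form on the last line is our reading of eq. ( [ e : distdens ] ):

```python
import numpy as np
from scipy.optimize import minimize_scalar

def energy(d, w_uv, w_u, w_v, a, r):
    """(a, r)-energy of a single vertex pair; d**0/0 is read as ln d."""
    att = np.log(d) if a == -1 else d ** (a + 1) / (a + 1)
    rep = np.log(d) if r == -1 else d ** (r + 1) / (r + 1)
    return w_uv * att - w_u * w_v * rep

w_uv, w_u, w_v, a, r = 0.25, 1.0, 1.0, 0.0, -1.0     # LinLog, density 1/4
res = minimize_scalar(energy, bounds=(1e-6, 1e3), args=(w_uv, w_u, w_v, a, r),
                      method='bounded')
print(res.x)                                          # ~4.0 by numerical search
print((w_u * w_v / w_uv) ** (1.0 / (a - r)))          # closed form: inverse density = 4
```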
[ cols="^,^,^ " , ] as motivated in the previous subsection , the parameters of the energy model are set to and , with for networks with very nonuniform density ( ) , and for networks with fairly uniform density ( ) .the variation of improves the readability by ensuring that vertices are not placed too closely , but otherwise does not affect the grouping of the vertices .because the exact optimization of -energy and modularity is computationally hard , the presented layouts and clusterings are not guaranteed to be optimal ( except for the clustering of the book co - purchase network ) , but are the best known representations .the java program used for generating these representations is freely available .it employs the barnes - hut algorithm for energy minimization , and agglomeration with multi - level refinement for modularity maximization ( see sec .[ ss : algs ] ) . in the karate club network ( fig .[ f : karate ] ) , each vertex represents a member of a karate club , and the edge weight of each vertex pair specifies the number of contexts ( like university classes , bars , or karate tournaments ) in which the two members interacted .the main vertex groups in the -energy layout coincide with the four clusters of the modularity clustering , and the layout correctly indicates that joining triangles and circles into a single cluster is almost as good as separating them ( modularity vs. ) .the clustering and the layout both segregate the members who left the club after the instructor was fired ( gray boxes ) , with the exception of one member who followed the instructor mainly to preserve his chance for the black belt .-energy layout and modularity clustering ( represented by shapes ) of the karate club network .the modularity of the clustering is 0.445 .gray boxes represent members who left the club after the instructor was fired.,width=325 ] in the book co - purchase network ( fig .[ f : polbools ] ) , the vertices represent books on us politics , and edges of weight connect books that were frequently purchased together . the clusters are generally well - separated in the layout ; a few members of the smaller central clusters are placed closely to one of the two large clusters , which correctly indicates that they are densely connected with parts of these large clusters , and their assignment to a smaller cluster is a close decision .the clustering and the layout , especially their two main groups , conform well to newman s classification of the books as liberal ( light gray ) , neutral ( dark gray ) , or conservative ( black ) ; the layout is more suitable to represent the liberal - to - conservative ordering of the books .-energy layout and modularity clustering of the book co - purchase network .the modularity is 0.527 .shades represent the classification as liberal ( light gray ) , neutral ( dark gray ) , or conservative ( black).,width=325 ] the food classification network ( fig .[ f : food ] ) represents the categorizations of 45 foods by 38 subjects of a psychological experiment , who were asked to sort the foods into as many categories as they wished based on perceived similarity .each vertex represents a food , and the edge weight of each vertex pair is the number of subjects who assigned the corresponding foods to the same category .the clusters correspond well to groups in the layout , but the layout also indicates that the borders between some clusters are rather fuzzy ( e.g. , between snacks and sweets ) , that some clusters could be split into subclusters ( e.g. 
, fruits and vegetables ) , and that some foods can not be clearly assigned to a single cluster ( e.g. , water , spaghetti ) .the grouping in both the clustering and the layout largely conforms to common food categories . -energy layout and modularity clustering of the food classification network .the modularity of the clustering is 0.402 .( the edges are elided to avoid clutter.),width=325 ] the world trade network ( fig .[ f : trade ] ) models the trade between 66 countries in the year 1999 . the vertices represent countries , and the edge weight of each vertex pair specifies the trade volume between the corresponding countries in us dollar .the clustering and the layout both group the countries of the three major economic areas ( east asia / australia , america , and europe ) .the layout also reflects that countries like irn and egy can not be clearly assigned to either the east asian or the european group , and shows many smaller groups of closely interlocked countries like chn and hkg , aus and nzl , gbr and irl , and the nordic countries .-energy layout and modularity clustering of the world trade network .the modularity of the clustering is 0.275 .( the edges are elided to avoid clutter.),width=325 ]as representations for the community structure of networks , layouts subsume clusterings , thus quality measures for layouts subsume quality measures for clusterings , and in fact prominent existing quality measures for layouts namely , energy models based on the pairwise attraction and repulsion of vertices subsume a prominent existing quality measure for clusterings namely , the modularity measure of newman and girvan .this result has implications for the entire lifecycle of quality measures : * design : new and existing quality measures for layouts may be applied to clusterings and vice versa .for example , recent extensions of the modularity measure to directed networks and bipartite networks can be directly generalized to energy models for layouts .* evaluation : the evaluation of quality measures for clusterings and layouts can be partly unified , i.e. , performed without distinguishing between clusterings and layouts .this has been demonstrated in with a computation of the expected measurement value for networks with uniform expected density , a particularly important analysis technique . * optimization : components of clustering algorithms may be reused in layout algorithms and vice versa , for example the agglomeration ( coarsening ) phase of multi - level heuristics .moreover , energy - based layout algorithms might serve as initial stage of clustering algorithms , similarly to eigenvector - based layout algorithms in existing approaches ( see sec .[ ss : urel ] ) . *application : unified quality measures help to ensure the consistency of clusterings and layouts ( see sec . [ s : appl ] ) , which is crucial because both representations are often used together .
Two natural and widely used representations for the community structure of networks are clusterings, which partition the vertex set into disjoint subsets, and layouts, which assign the vertices to positions in a metric space. This paper unifies prominent characterizations of layout quality and clustering quality by showing that energy models of pairwise attraction and repulsion subsume Newman and Girvan's modularity measure. Layouts with optimal energy are relaxations of, and are thus consistent with, clusterings with optimal modularity, which is of practical relevance because both representations are complementary and often used together.
although probabilistic ideas are at the heart of genetics , and have been used since the earliest days of the subject , it was in the study of genetic drift in the 1920s and 1930s that the notion of stochastic processes first played a major part in the theory of genetics . in the simplest case , a diploid population of a fixed size , , was considered , and attention was focused on two alleles at a particular locus . if selective differences between the two alleles , as well as the chances of mutation , are ignored , then the changes in the allele frequencies are purely due to genetic drift . assuming discrete non - overlapping generations, one can ask : what is the probability that of the genes in the population alive at time are of a given type ?this formulation of genetic drift is usually referred to as the fisher - wright model , being used implicitly by fisher , and explicitly by wright .although fisher and wright did not use this terminology , the stochastic process they defined through this model is a markov chain , since the probabilities of genetic change from one generation to the next do not depend on the changes made in previous generations .however the use of markov chains becomes cumbersome when the effects of mutation and selection are introduced , and for this reason there was a move away from the description of the process in terms of a discrete number of alleles , and discrete generations , to a process of diffusion where the fraction of alleles of one type is a real random variable in the interval ] and the are polynomials , described below .the function ( [ soln_2 ] ) satisfies both the initial and boundary conditions and holds for all in the interval , that is , excluding the boundaries at and . for not too small values of ,the solution is well approximated by keeping only the first few terms in . in these casesthe polynomials have a simple form , since is a polynomial in of order .for instance , the natural variable for these polynomials is , so that they are defined on the interval . for general are the jacobi polynomials .we plot the time development of the solution ( [ soln_2 ] ) in fig .[ neut_2a ] . beginning from a delta function at the initial point ,the distribution initially spreads out until the boundaries are reached .the distribution is soon nearly flat , and subsides slowly as probability escapes to the boundaries .note that the probability deposited on the boundaries is not shown here ( but will be discussed later ) .time evolution of the pdf for a biallelic system as determined from the analytic solution with no mutation and initial condition . ] for general , the solution may be written in the form as discussed in section [ soln ] . the solution is written in terms of the variables since the problem is separable and so the eigenfunctions are separable . 
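The behaviour plotted in fig. [neut_2a] can also be reproduced directly from the underlying discrete Fisher-Wright model, which is a useful cross-check on the diffusion solution. A minimal sketch (the parameter values are arbitrary):

```python
import numpy as np

def wright_fisher(N, x0, generations, reps=10000, seed=1):
    """Neutral Fisher-Wright drift for two alleles: a pool of 2N genes is
    binomially resampled each generation, in `reps` independent populations."""
    rng = np.random.default_rng(seed)
    x = np.full(reps, x0)
    for _ in range(generations):
        x = rng.binomial(2 * N, x) / (2 * N)
    return x

x = wright_fisher(N=50, x0=0.5, generations=100)
print((x == 0).mean(), (x == 1).mean())   # probability mass already fixed at the boundaries
print(x[(x > 0) & (x < 1)].std())         # spread of the interior distribution
```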
the function is given by and depends on a set of non - negative integers according to this means that the sum over in eq .( [ soln_m_first ] ) is in fact an sum .the property , which we have already remarked upon , that the system with alleles is nested in that with , manifests itself here in the fact that the functions are very closely related to the functions which already appear in the two allele solution ( [ soln_2 ] ) .first of all , since the problem is separable in this coordinate system , they may be written as each of the factors in this product separately satisfies the same differential equation .we shall derive this equation in section [ soln ] , from which we will learn that depends only on and on the integers and .specifically , ^{1/2 } \left ( 1 - u_{i } \right)^{l_{i-1 } } p^{(1,2l_{i-1}+1)}_{l_i } ( 1 - 2u_i)\ , .\label{soln_m_third}\ ] ] here the are the analogue of the that appeared in eq .( [ soln_2 ] ) , but in this case we have included them in the function , so that it satisfies the simple orthogonality relation the explicit form of these constants is the , now with , are again jacobi polynomials .the general solution for has previously only been found for the case .we have checked that our solution agrees with the published result in this case , although the labels on the alleles are permuted between kimura s solution and ours ( ) due to a different change of variable being made .we plot this solution for a triallelic system in fig .[ neut_3a ] at several different times . just as in the two allele system ,the distribution initially spreads out and forms a bell shape , which quickly collapses , and becomes nearly flat , then subsides slowly as probability escapes to the boundaries .note that the probability deposited on the boundaries is not shown here .notice also the triangular shape of the region over which the distribution is supported .time evolution of the pdf for a triallelic system as determined from the analytic solution with no mutation and initial condition . ] when mutation is present , the solution is again well known for the case .again , no change of variable is required : . then the constant is given by while , and the parameter .the are jacobi polynomials .unlike the case without mutations , , and they can not be written in terms of simpler polynomials as in case ( i ) .the function ( [ soln_2_m ] ) satisfies both the initial and boundary conditions and holds for all in the interval ] .this corresponds to the extinction event , where .hence the mean time to this extinction , the first moment of the distribution , is given by the denominator being unity , since . however , from eqs .( [ fixtime ] ) and ( [ fixtime_result ] ) , this gives an expression for the function which we wish to evaluate in eq .( [ tau_r(m ) ] ) . 
changing the summation variable from to , and so identifying with gives \ln \left [ x_{i_1,0 } + \cdots + x_{i_s,0 } \right]\,.\ ] ] let be the probability that in the evolution allele becomes extinct first , followed by allele , and so on , ending with fixation of allele .littler found the result given in eq .( [ qp ] ) , but we give a derivation here .we define as the probability that this sequence of extinctions has occurred by time .so for example , in the biallelic system ( ) , .just as one can show that obeys the backward kolmogorov equation ( by setting equal to its value on the boundary and relating to ) , then in general one can show that satisfies the backward kolmogorov equation ( [ bke ] ) the function is the stationary ( i.e. , ) solution of this equation that satisfies the following boundary conditions .first , for any that corresponds to any allele other than already being extinct .that is , for any that has for any . on the hyperplane , we must have that equal to the probability of the subsequent extinctions taking place in the desired order by time . taking the limit , this boundary condition implies we have chosen this particular order of extinctions to demonstrate the result as it corresponds to the ordering implied in our definition of . in the variables ,the backward kolmogorov operator reads we conjecture a solution clearly , annihilates this product , so it is a stationary solution of ( [ qfp ] ) .the boundary condition that for , is also obviously satisfied .furthermore , if , hence the recursion ( [ recur ] ) is satisfied . it is easily established that by finding the stationary solution of the backward fokker - planck equation ( [ qfp ] ) with and imposing the boundary conditions and .therefore by induction , ( [ qm_u ] ) is the solution required .rewriting in terms of the variables , the probability for an arbitrary sequence of extinctions : allele to go extinct first , followed by leaving only allele can be determined by permuting the indices in ( [ qm ] ) .this then gives us ( [ qp ] ) .we earlier obtained the time - dependent pdf valid when mutation rates are nonzero by imposing reflecting boundary conditions .these have the effect of preventing currents at the boundaries , which in turn ensure that the limiting solution is nontrivial and hence the complete stationary distribution .one can check that this is the case from ( [ soln_m_first_m])([soln_m_sixth_m ] ) .when all the integers are zero , the eigenvalue in ( [ lambda_and_l_m ] ) is also zero , indicating a stationary solution .the remaining eigenvalues are all positive , which relate to exponentially decaying contributions to the pdf .retaining then only the term with in ( [ soln_m_first_m ] ) we find immediately that it is easy to check that this distribution is properly normalised over the hypercube , . to change variable back to the allele frequencies we use the transformation ( [ trans_prob_m ] ) , and using the fact that , and , we arrive at the result quoted earlier , ( [ pstarmut ] ) . 
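The fixation and extinction-order probabilities derived earlier in this section lend themselves to direct Monte Carlo checks on the discrete model; for instance, the probability that allele i ultimately fixes should equal its initial frequency. A sketch (sample sizes are arbitrary; ties when two alleles vanish in the same generation are broken by index):

```python
import numpy as np

def extinction_order(x0, N, rng):
    """One neutral multi-allele Fisher-Wright run; returns the order in which
    alleles disappear, with the last entry the allele that fixes."""
    x = np.array(x0, dtype=float)
    alive = list(range(len(x0)))
    order = []
    while len(alive) > 1:
        counts = rng.multinomial(2 * N, x / x.sum())
        x = counts / (2 * N)
        for i in list(alive):
            if x[i] == 0.0:
                alive.remove(i)
                order.append(i)
    return order + alive

rng = np.random.default_rng(7)
runs = [tuple(extinction_order([0.5, 0.3, 0.2], N=100, rng=rng)) for _ in range(2000)]
fix_freq = {a: np.mean([r[-1] == a for r in runs]) for a in range(3)}
print(fix_freq)   # ~{0: 0.5, 1: 0.3, 2: 0.2}: fixation probability = initial frequency
```

Tabulating the full tuples in `runs` likewise gives empirical estimates of the extinction-order probabilities described by the closed form above.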
when the mutation process is suppressed , one finds from ( [ lambda_and_l ] ) that all the eigenvalues are nonzero , indicating that the distribution we have derived is zero everywhere in the limit .this means that the stationary solution comprises the accumulation of probability at boundary points , as stated in section [ calculations ] .one way to obtain the moments is to perform averages over the distribution .the mean and variance when mutation is present can easily be calculated in this way from ( [ soln_2_m ] ) .calculating the mean directly from the explicit formula for the probability distribution ( [ soln_2 ] ) when fixation can occur is tricky because one must include the full , time - dependent formula for the fixation probability at the right boundary and the integrals one has to evaluate are not particularly convenient .alternatively , we can exploit the kolmogorov equation written in the form of a continuity equation ( [ continuity ] ) .this leads to a differential equation for each moment which depends on lower moments .the first few moments can then be found iteratively in a relatively straightforward way .we demonstrate the method by giving the derivation of the moments of the distribution when two alleles are initially present .we then show that this method extends in a straightforward way to the case when alleles are present .specifically for we have where the current and where .when mutations in either direction are present ( i.e. , both and ) the mean of is given by the expression and so {x=0}^{1}\end{aligned}\ ] ] where we have used the continuity equation ( [ continuity_m2 ] ) and integrated by parts .we have already stated that there is no current of probability through the boundaries .in other words , here and so the last term in ( [ meanxbyparts ] ) is zero .when fixation is a possibility , one _ does _ have a current at the boundaries and , the probability that allele is fixed at time being . since the function excludes contributions from the boundary in this case , we must add these explicitly into the mean of .that is , taking the derivative of this expression and carrying out the integration by parts as in ( [ meanxbyparts ] ) , we obtain {x=0}^{1 } + \frac{\partial f_1(x_0 , t)}{\partial t } \;.\ ] ] these last two terms cancel , and so in either case we are left with inserting the expression ( [ current_m2 ] ) for the current we find \;.\ ] ] this reveals that moments of the distribution can be calculated iteratively .this equation is valid whether or not the are non - zero .the equation for the mean ( ) can be solved directly , and the results used to find ( ) and so on . when there are more than 2 alleles , a similar derivation gives the equation for the general moment : \langle x_i^{k_i-1 } \prod_{j\neq i}x_j^{k_j}\rangle - \right.\\ \left .[ \frac{1}{2}(\sum_lk_l-1)+r]\langle \prod_i x_i^{k_i}\rangle \right\}\,.\end{gathered}\ ] ] thus , again we can calculate any moment by iteration .for example , with , obeys the equation and using ( [ meanx_m ] ) we find the result given in ( [ covarx_m ] ) .in the last decade or two the ideas and concepts of population genetics have been carried over to a number of different fields : optimisation , economics , language change , among others . 
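The iterative moment equations can be cross-checked against simulation. A hedged sketch for two alleles with mutation, assuming mutation acts deterministically before binomial resampling each generation; under these dynamics the mean relaxes to the stationary value m2/(m1 + m2):

```python
import numpy as np

def wf_with_mutation(N, x0, m1, m2, generations, reps=5000, seed=3):
    """Two-allele Fisher-Wright model with per-generation mutation rates
    m1 (focal allele -> other) and m2 (other -> focal allele)."""
    rng = np.random.default_rng(seed)
    x = np.full(reps, x0)
    for _ in range(generations):
        p = x * (1 - m1) + (1 - x) * m2     # deterministic mutation step
        x = rng.binomial(2 * N, p) / (2 * N)
    return x

x = wf_with_mutation(N=200, x0=0.9, m1=0.002, m2=0.001, generations=4000)
print(x.mean())    # -> m2 / (m1 + m2) = 1/3, the stationary mean of the moment equation
```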
while the essentials of the subject such as the existence of alleles , and their mutation , selection and drift are usually present in these novel applications , other aspects may not have analogues in population genetics .furthermore , phenomena such as epistasis , linkage disequilibrium , the relationship between phenotypes and genotypes , which form a large part of the subject of population genetics , may be unimportant or irrelevant in these applications .our motivation for the work presented here has its roots in the mathematical modelling of a model of language change where it is quite plausible that the number of variants ( alleles ) is much larger than two .it was this which led us to systematically analyse the diffusion approximation applied to multi - allelic processes , having noted that many of the treatments in the context of biological evolution tend to be couched in terms of a pair of alleles , the `` wild type '' and the `` mutant '' . in this paper we have shown how the kolmogorov equation describing the stochastic process of genetic drift or the dynamics of mutation at a single locus may be solved exactly by separation of variables in which the number of alleles is arbitrary .the key to finding separable solutions is , of course , to find a transformation to a coordinate system where the equation is separable .the change of variable ( [ cofv ] ) we used is similar , but slightly different to one suggested by kimura which he showed achieved separability up to and including alleles .kimura was of the opinion that novel mathematical techniques would be needed to proceed to the general case of alleles .what we have shown here is that with our change of variables this generalisation is possible without the need to invoke any more mathematical machinery than was required in the standard textbook case of alleles .a large part of the reason for this is the way that the problem with alleles can be constructed in a straightforward way from the problem with alleles and that with alleles .this simple `` nesting '' of the -allele problem in the -allele one is responsible for many of the simplifications in the analysis and is at the heart of why the solution in the general case can be relatively simply presented .an illustration of the simple structure of this nesting is the fact that the general solutions , valid for an arbitrary number of alleles , with or without mutation , is made up of products of polynomials more specifically jacobi polynomials . although the higher order versions of these polynomials can be quite complex , even after relatively short times only those characterised by small integers are important .as mentioned in section [ u_variables ] , we have given the solutions in terms of the transformed variables , but their form in terms of the original variables of the problem can be found using eqs .( [ cofv ] ) and ( [ trans_prob_m ] ) .we have also presented the derivations of many other quantities of interest . in the interests of concisenesswe have given only a flavour of some of these : some are new , some have already been derived by other means and yet others are simple generalisations of the two - allele results . 
yetother results are more naturally studied in the context of the backward kolmogorov equation , and we also took the opportunity to gather together the most significant , but nevertheless little - known , ones here .we have thus provided a rather complete description of genetic drift and mutation at a single locus .nevertheless , there are some outstanding questions . for example , as discussed in appendix b , the transformation ( [ cofv ] ) does not render the kolmogorov equation separable for an arbitrary mutation matrix , only one where that rate of mutation of the alleles to alleles ( ) occurs at a rate independent of .this is the reason for this simplified choice for the mutation matrix a choice also made in all other research we are aware of .it might seem possible in principle to find another transformation which would allow a different form of to be studied , but this seems difficult for a number of reasons .for instance , the transformation must still ensure that the diffusion part of the kolmogorov equation is separable .furthermore , the matrix has entries , and the transformation only degrees of freedom so for only certain , restricted , forms of will a transformation to a separable equation be possible .other questions involve the study of selection mechanisms or interactions between loci using the results here as a basis on which to build .we are currently investigating these questions in the context of language models , but we hope that the work reported in this paper will encourage further investigations along these lines among population geneticists .gb acknowledges the support of the new zealand tertiary education commission .rab acknowledges support from the epsrc ( grant gr / r44768 ) under which this work was started , and a royal society of edinburgh personal research fellowship under which it has been continued .as we have mentioned several times in the main text , although it might be thought that adding the possibility that mutations occur would complicate the problem , in many ways it is the situation of pure random mating that is the richer mathematically .so , for example , the case with mutations has a conventional stationary pdf given by eq .( [ pstarmut ] ) , whereas without mutations the stationary pdf is a singular function which exists only on some of the boundaries .of course , it is clear from the nature of the system being modelled that this has to be the case , but what interests us in this appendix is the nature of the boundary conditions which have to be imposed on the eigenfunctions of the kolmogorov equation ( [ kol_m ] ) to obtain the correct mathematical description .the nature of the boundary conditions required when fixation can occur are discussed in the literature both from the standpoint of the kolmogorov equation in general and in the specific context of genetic drift . herewe will describe a more direct approach , which while not so general as many of these discussions , is easily understood and illustrates the essential points in an explicit way .it is sufficient to discuss the one - dimensional ( two allele ) case , since all the points we wish to make can be made with reference to this problem .the question then , can be simply put : what boundary conditions should be imposed on eq .( [ kol_2 ] ) ? on the one handone might feel that that these should be absorbing boundary conditions , because once a boundary is reached there should be no chance of re - entering the region . 
on the other hand, the diffusion coefficient ``naturally'' vanishes on the boundaries, which automatically guarantees absorption, so there would seem to be no need to further impose a vanishing pdf (as an absorbing boundary condition would require). in that case, what boundary conditions should be imposed? and is it even clear, given the fact that the diffusion coefficient becomes vanishingly small as the boundaries are approached, that the boundaries can be reached in finite time? to address these questions let us begin with the kolmogorov equation ([kol_2_mut]), which includes a deterministic component as well as a random component represented by the diffusion coefficient. although we are interested in the case when is not present, it will help in the interpretation of the results if we initially include it. we also make two notational changes: we put a subscript on, which will be explained below, and we will write . the function is real, since . therefore our starting equation is
\frac{\partial p}{\partial t} = -\frac{\partial}{\partial x}\left[ a(x)\, p \right] + \frac{1}{2}\frac{\partial^{2}}{\partial x^{2}}\left[ g^{2}(x)\, p \right]\,. \label{ito}
it turns out to be more useful for our purposes to write this in the form
\begin{aligned} \frac{\partial p}{\partial t} & = & -\frac{\partial}{\partial x}\left[ \left( a(x) - \frac{1}{2}\, g(x)\, g'(x) \right) p \right] + \frac{1}{2}\frac{\partial}{\partial x}\left[ g(x)\frac{\partial}{\partial x}\left( g(x)\, p \right)\right] \nonumber \\ & = & -\frac{\partial}{\partial x}\left[ a_{s}(x)\, p \right] + \frac{1}{2}\frac{\partial}{\partial x}\left[ g(x)\frac{\partial}{\partial x}\left( g(x)\, p \right)\right]\,, \label{strat}\end{aligned}
where a_{s}(x) = a(x) - \frac{1}{2}\, g(x)\, g'(x). equations ([ito]) and ([strat]) are known as the ito and stratonovich forms of the kolmogorov equation respectively. there is no need for us to explore the differences between these two formulations; the only relevant point here is that the stratonovich form is more convenient for our purposes. we now transform eq. ([strat]) by introducing the pdf, which is a function of the new variable defined by . the transformed equation reads
\frac{\partial q}{\partial t} = -\frac{\partial}{\partial y}\left[ \tilde{a}(y)\, q \right] + \frac{1}{2}\frac{\partial^{2}q}{\partial y^{2}}\,, \label{pdf_y}
where . the system with an -dependent diffusion coefficient has now been transformed to one with a state-independent diffusion coefficient, at the cost of adding an additional factor to the deterministic term. the problem of interest to us has and . from eq. ([x_to_y]) we find the required transformation to be or, with , and from eq. ([trans_a_defn]) we find that . one way to understand this process intuitively is through a mechanical analogy: if we set , then the kolmogorov equation can be thought of as describing the motion of an overdamped particle in the one-dimensional potential, subject to white noise with zero mean and unit strength. an examination of fig. [potentials] shows that the boundaries are reached in a finite time. more importantly, from the relation we see that imposing the absorbing boundary conditions only implies that the pdf must diverge with a power smaller than at the boundaries and . these results should be contrasted with the hypothetical situation where , but . in this situation we have that . they satisfy the following orthogonality relation in the variable: where . in the eigenfunctions calculated when mutations are present, the functions which make up the right eigenfunction as defined in eq. ([soln_m_second_m]) contain the factor which appears in eq. ([orthog]), as can be seen in eq. ([soln_m_third_m]).
on the other hand, the functions which make up the left eigenfunction as defined in eq. ([soln_m_fifth_m]) do not contain this factor, as can be seen in eq. ([soln_m_sixth_m]). the result is that the right and left eigenfunctions satisfy the simple orthogonality relation . in the derivation of the fixation probability in section [fixation] we make use of a relationship between jacobi and legendre polynomials, which can be found in ref. , as well as an identity for legendre polynomials, also found in ref. . g. b. golding, c. strobeck, 2-locus, 4th-order gene-frequency moments: implications for the variance of squared linkage disequilibrium and the variance of homozygosity, theoretical population biology 24 (1983) 173-191. r. k. p. zia, r. j. astalos, statistics of an age-structured population with two competing species: analytic and monte carlo studies, in: d. p. landau, s. p. lewis, h.-b. schuttler (eds.), computer simulation studies in condensed matter physics xiv, springer-verlag, berlin, 2002, pp. 235-254. m. kimura, diffusion models in population genetics with special reference to fixation time of molecular mutants under mutational pressure, in: t. ohta, k. aoki (eds.), population genetics and molecular evolution, japan sci. soc. press / springer-verlag, 1985, pp. 19-39. j. crow, m. kimura, some genetic problems in natural populations, in: j. neyman (ed.), proceedings of the third berkeley symposium on mathematical statistics and probability, volume 4, university of california press, berkeley, 1956, pp. 1-22.
we give an exact solution to the kolmogorov equation describing genetic drift for an arbitrary number of alleles at a given locus. this is achieved by finding a change of variable which makes the equation separable, and which therefore reduces the problem with an arbitrary number of alleles to the solution of a set of equations that are essentially no more complicated than that found in the two-allele case. the same change of variable also renders the kolmogorov equation with the effect of mutations added separable, as long as the mutation matrix has equal entries in each row. thus this case can also be solved exactly for an arbitrary number of alleles. the general solution, which is in the form of a probability distribution, is in agreement with the previously known results, which were for the cases of two and three alleles only. results are also given for a wide range of other quantities of interest, such as the probabilities of extinction of various numbers of alleles, the mean times to these extinctions, and the means and variances of the allele frequencies. to aid dissemination, these results are presented in two stages: first of all they are given without derivations or too much mathematical detail, and then subsequently derivations and a more technical discussion are provided. population genetics, diffusion, kolmogorov equation, genetic drift, single-locus.
alignment (ia) is a promising technique to mitigate interference in wireless communication systems. it was shown that dof is achievable per time, frequency or antenna dimension in a -user interference channel (ic). for a -user constant mimo ic, ia based schemes were introduced in - , where it was shown that more dof is achievable than with conventional schemes. for a constant cellular interfering network, it was shown in that their scheme provides a respectable gain for a 19 hexagonal wrap-around-cell layout. however, interference-free dof is only achievable for a two-cell layout. it was shown in that the optimal dof is achievable when for a two-cell mimo interfering broadcast channel, where each transmitter is equipped with antennas and each receiver is equipped with antennas. in this letter, we focus on a three-cell constant cellular interfering network by using a new property of alignment, i.e., _an ia solution obtained in a user-cooperation scenario can also be applied in a non-cooperation environment_. we assume that each base station (bs) is equipped with antennas and each mobile station (ms) is equipped with antennas, with , which is the most plausible setting in a practical environment. we also assume there are cell-edge users per cell, where , and each user sends streams to its served bs simultaneously. we show that a total of dof is achievable if and , or if and . numerical results show that more dof can be achieved compared with conventional schemes in most cases. for a three-cell constant cellular interfering network (an example is shown in fig. [fig1]), we assume that each bs is equipped with antennas, each ms is equipped with antennas, and there are cell-edge users per cell. for notational convenience, we refer to the -th user in the -th cell as user . the transmitted signal is then } = \textbf{w}^{[ij]} \textbf{s}^{[ij]}, where } ] , and satisfies an average power constraint, i.e., } \|^2 \right] \leq p ] . here } ] is the additive white gaussian noise, and }_{i} ] denotes the channel from user ] to the -th bs. the channel is assumed to be constant over time, and perfect channel state information (csi) is available at all bss and mss. the -th bs decodes the desired signal for user ] through } = \textbf{p}_j^{[i]\dag} \textbf{v}^{[i]\dag} \sum^3_{k=1} \sum^k_{j=1} \textbf{h}^{[kj]}_{i} \textbf{w}^{[kj]} \textbf{s}^{[kj]} + \widetilde{\textbf{n}}^{[ij]}, where } ] is the normalized inter-user interference (iui) elimination matrix, and } = \textbf{p}_j^{[i]\dag} \textbf{v}^{[i]\dag} \textbf{n}^{[i]} ] . we let ] denote the combined iui elimination matrix at the -th bs.
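the role of these projection matrices can be illustrated with a short numerical sketch. this is our own toy construction, not code from the letter: random complex gaussian matrices stand in for the channels, and the dimensions (a 9-dimensional aligned interference span inside a 16-antenna receive space, leaving 7 interference-free dimensions) are borrowed from the motivating example discussed below.

import numpy as np

rng = np.random.default_rng(0)
# placeholder for the aligned inter-cell interference seen at one bs:
# a 16 x 9 complex matrix whose columns span the aligned ici directions.
H_ici = rng.standard_normal((16, 9)) + 1j * rng.standard_normal((16, 9))

# the left null space of H_ici, obtained from the svd, gives a basis of
# the 7 interference-free dimensions; projecting received samples onto
# it removes the aligned interference.
u, s, vh = np.linalg.svd(H_ici)
V = u[:, 9:]                                              # 16 x 7 elimination basis
print(np.allclose(V.conj().T @ H_ici, 0.0, atol=1e-10))   # True

any receive filter built inside the span of V is then free of the aligned inter-cell interference, which is the property the two-phase scheme described below exploits.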
we define the dof region as the following : } , \dots , d^{[3k ] } ) \in \mathbb{r}^{3k}_+ | \forall ( \omega_{11 } , \dots , \omega_{3k } ) \in \mathbb{r}^{3k}_+ , \\\sum^3_{i=1 } \sum^k_{j=1 } \omega_{ij } d^{[ij ] } \leq \limsup_{\textrm{snr } \rightarrow \infty } \big [ \sup_{\textbf{r } \in \mathcal{c } } \frac{1}{\log \textrm{snr } } \sum^3_{i=1 } \sum^k_{j=1 } \omega_{ij } r^{[ij ] } \big]\bigg\},\end{gathered}\ ] ] where is the capacity region , , and } ] .let }\ ] ] be the total dof in the network .in this section , an ia based scheme is introduced for the three - cell constant cellular interfering network .a motivating example is given first ( as is shown in fig .[ fig1 ] ) , where , , , and .we show that totally 18 dof is achievable in this scenario .we divide our scheme into two phases .first , ici is aligned into a smaller vector space at each bs by joint design of all the precoding matrices .second , iui is eliminated through cascaded receive beamforming matrices at each bs . * phase i : ici alignment . * by applying the ia solution obtained in an mimo ic to a cellular environment , which is presented in fig. [ fig1 ] , we show that 12 ici streams at each bs can be aligned into a vector space of 9 dimensions simultaneously , i.e. , } \textbf{w}^{[21 ] } ~ \textbf{h}_1^{[22 ] } \textbf{w}^{[22 ] } ~\\ \textbf{h}_1^{[31 ] } \textbf{w}^{[31 ] } ~ \textbf{h}_1^{[32 ] } \textbf{w}^{[32 ] } ] \big ) \big\}= 9,\end{gathered}\ ] ] } \textbf{w}^{[11]}~ \textbf{h}_2^{[12 ] } \textbf{w}^{[12 ] } ~\\ \textbf{h}_2^{[31 ] } \textbf{w}^{[31 ] } ~ \textbf{h}_2^{[32 ] } \textbf{w}^{[32 ] } \big ] \big ) \big\ } = 9,\end{gathered}\ ] ] } \textbf{w}^{[11 ] } ~ \textbf{h}_3^{[12 ] } \textbf{w}^{[12 ] } ~\\ \textbf{h}_3^{[21 ] } \textbf{w}^{[21 ] } ~ \textbf{h}_3^{[22 ] } \textbf{w}^{[22 ] } \big ] \big ) \big\}= 9.\end{gathered}\ ] ] let } = \big[\textbf{w}^{[i1]\dag}~ \textbf{w}^{[i2]\dag } \big]^{\dag} ] represent the combined channel matrix .then the effective channel is a three - user mimo ic where each node is equipped with antennas . following the analysis in ,there exists a } ] and } ]. then }=[\textbf{w}_1^{[ij]}~\textbf{w}_2^{[ij]}~\textbf{w}_3^{[ij ] } ] ] , with probability 1 .[ fig55 ] shows rank distribution of } ] .there are 7 interference - free dimensions left at each bs , and each bs can decode 6 streams sent from its cell - edge users. then we choose } \subseteq \text{null}\big ( \big [ \textbf{h}_1^{[21 ] } \textbf{w}^{[21 ] } ~~ \textbf{h}_1^{[22 ] } \textbf{w}^{[22 ] } ~~ \textbf{h}_1^{[31 ] } \textbf{w}^{[31 ] } ~~ \textbf{h}_1^{[32 ] } \textbf{w}^{[32 ] } \big ] \big),\ ] ] } \subseteq \text{null}\big ( \big [ \textbf{h}_2^{[11 ] } \textbf{w}^{[11]}~~ \textbf{h}_2^{[12 ] } \textbf{w}^{[12 ] } ~~ \textbf{h}_2^{[31 ] } \textbf{w}^{[31 ] } ~~ \textbf{h}_2^{[32 ] } \textbf{w}^{[32 ] } \big ] \big),\ ] ] } \subseteq \text{null}\big ( \big [ \textbf{h}_3^{[11 ] } \textbf{w}^{[11 ] } ~~ \textbf{h}_3^{[12 ] } \textbf{w}^{[12 ] } ~~ \textbf{h}_3^{[21 ] } \textbf{w}^{[21 ] } ~~ \textbf{h}_3^{[22 ] } \textbf{w}^{[22 ] } \big ] \big).\ ] ] * phase ii : iui elimination .* we finally obtain an ici - free channel , i.e. 
, } = \textbf{v}^{[j]\dag } \textbf{h}_j^{[jk ] } \textbf{w}^{[jk]} ] , } = \text{null}(\overline{\textbf{h}}_j^{[j1]}) ] at user ] will be full rank with probability 1 .the dimension of the interference space at each bs is decreased to , then each bs needs a vector space of ] and iui elimination matrix }$ ] , , can be calculated accordingly .then totally streams can be sent simultaneously , i.e. , when and , dof is achievable .for example , when , , and , one stream can be sent from each user simultaneously .then is achievable , while if orthogonal schemes are used , at most 8 interference - free streams can be sent simultaneously .the comparison of dof achievable between our scheme and orthogonal schemes is presented in fig .[ fig33 ] when and where varies from to when .it is shown that our scheme can achieve more dof compared with orthogonal schemes in most cases . however , achievable dof is less than orthogonal schemes when , 13 , or 19 , as some dimensions are wasted at bss .if symbol extensions are allowed , even with constant channel , it is expected that more dof can be achieved than that of orthogonal schemes by using similar scheme in , and we leave it for future work .r. tresch , m. guilland , and e. riegler , on the achievability of interference alignment in the -user constant mimo interference channel , " in _ proc .ieee workshop stat . signal process _277 - 280 , cardiff , u.k .m. razaviyayn , g. lyubeznik , and z .- q .luo , on the degrees of freedom achievalbe through interference alignment in a mimo interference channel , " _ ieee trans .signal process .60 , no.2 , pp .812 - 821 , feb . 2012. w. shin , n. lee , j .- b .lim , c. shin , and k. jang , on the design of interference alignment scheme for two - cell mimo interfering broadcast channels , " _ ieee trans . on wireless communications _ ,437 - 442 , feb . 2011 .p. mohapatra , k. e. nissar , and c. r. murthy , interference alignment algorithms for the user constant mimo interference channel , " _ ieee trans .signal process .5499 - 5508 , nov .
for a three-cell constant cellular interfering network, a new property of alignment is identified, i.e., an interference alignment (ia) solution obtained in a user-cooperation scenario can also be applied in a non-cooperation environment. by using this property, an algorithm is proposed that jointly designs the transmit and receive beamforming matrices. analysis and numerical results show that more degrees of freedom (dof) can be achieved compared with conventional schemes in most cases. interference channel, interference alignment, degrees of freedom, multi-user mimo.
electron clouds formed in circular particle accelerators with positively charged beams are known to degrade the quality of the beam. they are a concern for future accelerator facilities such as the international linear collider (ilc) damping rings and superkekb, and also for upgrades of existing facilities such as the large hadron collider (lhc) and the fermilab main injector (mi). their study is also important for the optimum performance of spallation neutron sources. the detection of electron clouds has been a topic of study ever since their effects were first observed. the methods used for such detection have included retarding field analyzers, clearing electrodes, shielded pickup detectors, te waves, and the study of the response of the beam in the presence of an electron cloud. the te wave method involves transmitting microwaves through a section of the beam pipe and then studying the effect of the cloud on the microwave properties. the microwaves can be introduced either as traveling or standing waves within a section of the beam pipe. the method of using microwaves as a probe for investigating the presence of electron clouds was first proposed by f. caspers, and experiments based on this method were conducted at the sps at cern. the measurement technique involves measuring the height of the modulation side-bands off the carrier frequency of the microwave. the electron cloud, constituting a plasma, modifies the dispersion relationship of the microwave. the periodic production and clearing of the cloud, set by the bunch train passage frequency, leads to a modulation of the phase advance of the carrier wave as it travels. this modulation gives rise to side bands in the spectrum of the propagated wave. the side bands are spaced from the carrier frequency by integer multiples of the train passage frequency. the details of the distribution of the side band heights depend upon the nature of the build up and decay of the cloud. the study in this paper is restricted to the effect of wave dispersion from a static cloud under various conditions. a confirmation of the electron cloud induced modulation was shown in the pep ii low energy ring (ler). in this experiment, the wave was transmitted across a solenoidal section of the ring and the cloud density was controlled by adjusting the strength of the solenoidal magnetic field. the estimates of the electron cloud density in this experiment, based on the formulation given in , were reasonable when compared to earlier build up simulations. as shown in this paper, reflections within the beam pipe can alter the signal and thus misrepresent the cloud density in the region being sampled.
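the sideband mechanism just described is easy to reproduce with a toy model of our own: a carrier whose phase is modulated at the bunch-train repetition rate develops spectral lines spaced by multiples of that rate. all numbers below are illustrative, not machine parameters.

import numpy as np

fc, frep = 200.0, 5.0              # carrier and train repetition frequencies (arbitrary units)
dphi = 0.05                        # peak phase modulation depth, radians
dt = 1.0e-3
t = np.arange(0.0, 20.0, dt)

# phase-modulated carrier and its spectrum
signal = np.cos(2.0 * np.pi * fc * t + dphi * np.sin(2.0 * np.pi * frep * t))
spec = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(t.size, d=dt)

# carrier line and the first two sideband pairs
for f in (fc, fc - frep, fc + frep, fc - 2.0 * frep, fc + 2.0 * frep):
    k = np.argmin(np.abs(freqs - f))
    print(f"{freqs[k]:7.2f}: {spec[k]:10.2f}")

for small modulation depth the first sideband pair sits a factor of roughly dphi/2 below the carrier, so the sideband height is a direct handle on the modulated part of the phase advance.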
instead of transmitting the wave at one point and receiving it at another (traveling wave rf diagnostics), one could also trap the wave within a section (resonant wave rf diagnostics). this has the advantage that one can sample a known section of the beam pipe. this method would not be affected by waves reflected from other segments of the pipe coming back into the section of interest and thus compromising the precision of the measurement. another advantage of trapping is that there is an enhancement of the signal as long as the point of measurement is close to a peak of the standing wave. additionally, at resonance, as demonstrated in ref , there is improved matching of the signal transfer from the electrodes into the waveguide, which is the beam pipe. as discussed in ref , the modulation of the cloud density would result in a modulation of the resonant frequency, enabling one to relate the frequency modulation signal to an actual cloud density. the drawback of this technique is the need for reflectors at both ends of the desired section. the electron cloud induced phase shift is known to undergo an enhancement in the presence of an external magnetic field under certain conditions, due to a modification in the dispersion relationship. this occurs when the wave magnetic field has a component perpendicular to the external magnetic field, and the frequency of the wave is close to the electron cyclotron frequency corresponding to the value of the external magnetic field. this effect was demonstrated through simulations, and was later confirmed through experiments done at pep ii across a chicane. further measurements done at the same chicane, now installed in cesrta, also confirm this effect. while it is good to have an enhanced signal, the drawback of such a measurement is that there is no available formulation that relates the enhanced side-band amplitude to the expected electron cloud density. in addition, as discussed in this paper, the phase shift is progressively reduced as the electron cyclotron frequency exceeds the carrier wave frequency, and at very high external magnetic fields the signal may not be observable at all. one can suppress the effect of the external magnetic field by aligning the wave electric field parallel to the external magnetic field; however, it will be shown in this paper that this effect cannot be fully eliminated unless the waveguide is rectangular. the program vsim, previously vorpal, was used throughout to perform the simulations. the simulations used electromagnetic particle-in-cell (pic) algorithms, consisting of propagating waves through a conducting beam pipe. the end of the pipe had perfectly-matched layers (pmls) [7] meant to absorb any transmissions, and thus simulate a long, continuous beam pipe. electrons were uniformly distributed and set to initially have zero velocity (a cold plasma). waves were excited in the simulations with the help of a vertically pointing current density near one end of the beam pipe, which covered the full cross-sectional area and was two cells thick in the longitudinal dimension.
in later simulations the pmls and rf current source were replaced with port boundary conditions that simultaneously absorb rf energy at a single frequency at the ends of the simulation while also launching rf energy into the simulation domain to simulate traveling waves. the waves were propagated along the channel using the yee [5] algorithm for solving the electromagnetic field equations. the computational parameters used did not change much between the various simulations performed in this paper. all the simulations were three dimensional, using a cartesian grid. the grid sizes were around in each direction, and the time steps varied between . the number of macro-particles per cell used was typically 10, and they were loaded uniformly in position space with zero initial velocity, often referred to as a ``cold start''. the duration of the simulation was about 140 rf cycles. overall, the modeling effort related to measuring electron clouds using te waves has served as a useful guide toward better understanding of the physical phenomenon and proper interpretation of the measured data. this paper provides a comprehensive account of simulations performed for the various techniques currently under study. a derivation of the wave dispersion relationship for propagation through a beam pipe with a cold, uniform electron distribution in a field free region is given in the appendix. the first experiments using microwaves to assess the cloud densities involved simply transmitting the wave using a beam position monitor (bpm) and receiving the transmitted wave at another bpm downstream, i.e., the traveling wave method. as shown in ref , the electron cloud induced phase shift per unit length of transmission, in the absence of external magnetic fields and for a uniform electron distribution, can be related to the electron density as follows, where is the angular cutoff frequency for a waveguide in vacuum, is the angular plasma frequency, with the electron number density, the charge of the electron, the speed of light, the electron mass and the free space permittivity. in ref , this formula was validated through simulations for a square cross section beam pipe. the derivation of the phase shift given by eq [phaseshift] used the dispersion relationship given in ref , which was proved to be valid for circular waveguides free of external magnetic fields. the results shown in this paper validate the formula for a cesr beam pipe geometry. all these results collectively indicate that the formula is valid for any type of geometry. in the appendix, we provide an explicit proof that this is indeed the case. the cesr beam pipe geometry may be represented in the form of two circular arcs (radius 0.075 m) connected with flat side planes, about 0.090 m from side to side and 0.050 m between the apices of the arcs. the cutoff frequency of the lowest mode for this geometry is known to be around ghz from past experiments and calculations related to the beam pipe. figure [fig:vorpalpic] provides a snapshot of the simulation, showing the propagation of a wave through the cesr beam pipe. the calculation of the electron induced phase shift was performed through separate simulations of the wave transmission through a vacuum beam pipe and through a beam pipe with electrons, respectively.
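as a side note, the yee update cited above is simple enough to sketch in one dimension. the miniature below is our own, in normalized units with reflecting ends, whereas the real runs are three dimensional, include particles, and use pml or port boundaries; it only shows the leapfrog structure of the scheme.

import numpy as np

nz, nt = 200, 400
dz = 1.0
c = 1.0                                  # normalized units
dt = 0.5 * dz / c                        # courant-stable time step
ex = np.zeros(nz)                        # electric field at integer grid points
hy = np.zeros(nz - 1)                    # magnetic field at half-integer points

for n in range(nt):
    hy += (dt / dz) * (ex[1:] - ex[:-1])                 # faraday half-step
    ex[1:-1] += (c**2 * dt / dz) * (hy[1:] - hy[:-1])    # ampere half-step
    ex[nz // 2] += np.sin(2.0 * np.pi * 0.05 * n * dt)   # soft current source
print(np.max(np.abs(ex)))                # a bounded, propagating wave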
at a certain axial distance from the location at which the wave was launched, the variation of the voltage between the midpoints of the top and bottom boundaries of the beam pipe cross section was recorded as a function of time. after normalizing the amplitudes of the two waves to unity, their difference gives a sinusoidal wave with an amplitude equal to the phase shift between the waves. suppose that the angular frequency of the wave is and the phase shift is . the phase shift for nominal cloud densities is small enough that . hence we have . the amplitudes of all the waves were calculated from their respective numerical rms values. to confirm the relationship between the phase shift and the electron cloud density, simulations were done with a cesr beam pipe with a length of 0.5 m. figure [fig:phaseshift1](a) shows that the simulations agree well with the analytically predicted values given by eq [phaseshift]. this establishes the accuracy of the simulation method as well as the validity of eq [phaseshift] for any geometry. we see that the electron cloud induced phase shift increases as one approaches the cutoff frequency. while this is desirable because it amplifies the modulation side-bands relative to the carrier signal, one will encounter reduced transmission as the carrier frequency approaches the cutoff. due to the presence of several mechanical and electronic components all along the vacuum chamber, it is not very likely that one can perform phase shift experiments that are entirely free of internal reflections. as discussed in ref , in an experiment these internal reflections would potentially affect the value of the phase shift. a wave reflected from a device that lies beyond the segment being measured would be received by the detector when it comes back after reflection. at the same time, waves could get reflected back and forth within the segment before eventually being received at the detector. in both cases, the reflected wave would have sampled a length different from that meant to be sampled and would thereby contaminate the signal, because the phase shift is proportional to the length of transmission. in order to understand this effect, the simulations were altered to include two protruding conductors, which would reflect some of the transmitted wave. the protrusions were slabs in the transverse plane, extending from the bottom to 1 cm above the apex of the lower arc (see fig. [fig:protrusions]). they were spaced 0.4 meters apart, including the thickness of the protrusions, which was 1 mm.
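the two-run procedure and the differencing step described above can be put together in a few lines of numpy. this is our own sketch, not code from the paper: the cutoff frequency is a stand-in value, and the records are synthesized from the cold-plasma dispersion relation of the appendix rather than from a pic run.

import numpy as np

c = 2.998e8
e, me, eps0 = 1.602e-19, 9.109e-31, 8.854e-12
fc, f, L = 1.85e9, 2.1e9, 0.5           # cutoff (assumed), carrier (hz), length (m)
w, wc = 2.0 * np.pi * f, 2.0 * np.pi * fc
ne = 1e12                                # electron density, m^-3
wp2 = ne * e**2 / (eps0 * me)            # squared plasma frequency

k_vac = np.sqrt(w**2 - wc**2) / c        # vacuum waveguide wavenumber
k_ec = np.sqrt(w**2 - wc**2 - wp2) / c   # wavenumber with the cloud present

t = np.linspace(0.0, 100.0 / f, 20000)   # 100 rf cycles at the pickup
vac = np.sin(w * t - k_vac * L)          # normalized vacuum record
ec = np.sin(w * t - k_ec * L)            # normalized record with electrons

dphi_est = np.sqrt(2.0) * np.std(vac - ec)          # rms of difference -> amplitude
dphi_formula = L * wp2 / (2.0 * c * np.sqrt(w**2 - wc**2))
print(dphi_est, (k_vac - k_ec) * L, dphi_formula)   # all agree closely

the recovered phase shift, the exact dispersion difference and the small-density expression of eq [phaseshift] agree to a few parts in 1e4 here, and the shift scales linearly with the density, as in fig. [fig:phaseshift1](a).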
the frequencies used for this study, 2.41 ghz and 3.87 ghz, the same as those shown in fig. [fig:phaseshift1](a), correspond to resonant harmonics ( and , respectively) of a 0.4 meter ``resonant cavity''. this was done in order to maximize reflections. fig. [fig:phaseshift1](b) shows the resulting phase shifts from these calculations. the solid shapes represent the data for no reflections and are the same data that appear in fig. [fig:phaseshift1](a). the open shapes represent the phase shifts in the presence of reflection. these results clearly indicate that internal reflections modify the expected phase shift. the nature of the alteration of the phase shift depends upon the complexities of the transmission-reflection combination and the instrumentation used for the method. however, the results show that the linear relationship between phase shift and electron density is always preserved. while internal reflections may interfere with phase shift measurements, they can also be used to trap a wave. this trapped wave can be used to measure the electron cloud density, as discussed in ref . this section shows the results of numerical simulation of such an experiment. the geometries used were (a) the cesr beam pipe also used in the previous section and (b) a beam pipe with a circular cross section of radius 4.45 cm. both of them had conducting protrusions that were 1 mm thick and 1 cm high from the base, as shown in fig. [fig:protrusions]. the cutoff frequency for the circular beam pipe can be calculated from the analytic expression, and is 1.9755 ghz for the lowest ( ) mode. a trapped mode results when standing waves are induced between the reflectors. in order to test the effectiveness of inducing such a standing wave using partial reflectors, simulations were done with an empty waveguide over a large range of frequencies. it should be noted that, since only a partial reflection of the wave takes place, there is always a net transmission of energy across the segment between the protrusions. the wave energy that escapes the partial reflectors gets absorbed into the pml regions. identification of a resonance was done as follows. at each frequency and each time step, the wave energy flux was computed by integrating the poynting vector across a plane located at the midpoint between the two protrusions. this plane was oriented transverse to the axis and covered the entire cross section.
for each of these frequencies, the mean of the energy flux was calculated over the period of the simulation. it is expected that as the frequency approaches that of a standing wave, this averaged flux reaches a local minimum. this is because of increased ``back and forth'' transmission, which does not contribute to the average flux due to cancellation. figure [fig:poyntingflux] shows the average energy flux calculated for a variety of frequencies spanning several resonance points for (a) the cesr beam pipe and (b) the circular cross section beam pipe. the local minima seen on these plots correspond to standing wave modes. the length of the section, including the width of the protrusions, was 0.4 m for the cesr beam pipe. the length of the circular beam pipe, including the width of the protrusions, was 0.88 m. (figure [fig:f2vsn2] caption: showing the linear relationship between and according to eq ([vacstandwave]).) a standing wave occurs when the wavelength and the length of the segment are related such that for any integer , . the dispersion relationship of a waveguide with wave frequency and cutoff frequency is given by . expressing in terms of for the standing wave then gives this indicates the linear relationship between and , and also the relationship of with the slope and with the intercept of the straight line. equation ([vacstandwave]) was used to confirm that the local minima in fig. [fig:poyntingflux] correspond to resonance points. figure [fig:f2vsn2] shows the value of plotted as a function of the corresponding value of for (a) the cesr beam pipe and (b) the beam pipe with a circular cross section. performing a straight line fit on these points for case (a) yielded the relationship . this gives and . for case (b), a similar operation gives , giving and . these values are close enough to the expected ones and ascertain the accuracy of such a method in determining standing waves between partial reflectors. the presence of an electron cloud would result in a shift in the standing wave frequency. experimentally, it is possible to measure this in the form of frequency modulation side-bands associated with the periodic passage of a train of bunches creating electron clouds. using eq ([disprel]), we can show that the condition for standing waves given by eq ([vacstandwave]) is modified by an electron cloud as follows, where the wave frequency is now denoted by and is the plasma frequency. subtracting eq [vacstandwave] from eq [elecstandwave], in the limit of small frequency shifts, we get , where . on inserting the expression for , this then gives a simple expression relating the shift in resonant frequency to the electron density, in which is the classical electron radius. this shows that the frequency shift is proportional to the electron cloud density. an effort is underway at cesrta to use this method to measure the density of the electron cloud within the beam pipe section where the reflections are occurring. thus, it became necessary to test this phenomenon with simulations. simulation of the frequency shift of standing waves was done for a cesr beam pipe cross section as well as a beam pipe with a circular cross section. all the parameters were the same as before, except that the length of the section between the partial reflectors for the cesr beam pipe was modified from 0.4 m to 0.9 m, which was somewhat close to one of the setups under study at cesrta. the length of the circular cross section was retained at 0.88 m. since the frequency shift induced by electrons is very small, it is required that the resonant frequency be determined accurately. to do this, a parabolic fit was made to the averaged energy flux in the vicinity of the minimum point, using the available points obtained from simulation. the expression for the parabola may be obtained from a taylor expansion of the function around the minimum,
\bar{s}(f) \approx \bar{s}(f_{0}) + \frac{1}{2}\,\bar{s}''(f_{0})\,(f - f_{0})^{2}\,,
where \bar{s} is the averaged energy flux and \bar{s}''(f_{0}) is its second derivative evaluated at the minimum f_{0}; the first derivative at the minimum point vanishes. this gives the mean energy flux as a function of frequency near the minimum point, and using the computed coefficients of the parabola one can solve for f_{0}. figure [resshift] shows the frequency shift induced by electron clouds for the mode for (a) the cesr beam pipe and (b) the circular beam pipe. for case (a), the electron density used in this calculation was m , which is rather high, but helps validate eq ([freqshift]) with simulations more accurately. for these parameters, the resonance occurs at 2.0033 ghz and the expected frequency shift due to electrons is 2 mhz. simulations show a shift of 2.05 mhz. for case (b), we used two values of the electron density, m and m . the expected frequency shift for the m case is 2.013 mhz. the simulated frequency shift for this was 2.1 mhz, and for an electron density of m , it was 4.2 mhz. thus we were able to establish that reasonably accurate values of the frequency shift for such an experiment may be determined from simulations. it is interesting to note that the error obtained in the resonant frequency itself was around ; however, the shift induced by electron clouds always had reasonable agreement with eq [freqshift]. the agreement with theory provides confidence in estimating electron densities from measurements based on eq [freqshift]. typical electron cloud densities are of the order of , leading to frequency shifts about a factor of 100 smaller than those simulated here.
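the bookkeeping in eq ([vacstandwave]), and the straight-line fits used above, can be mimicked with synthetic data. this is our own sketch, with an arbitrary noise level: fitting the squared resonant frequencies against the squared mode numbers returns the section length from the slope and the cutoff from the intercept.

import numpy as np

c, L, fc = 2.998e8, 0.88, 1.9755e9       # circular-pipe example values
n = np.arange(10, 16)                    # mode numbers of the trapped waves
f = np.sqrt((c * n / (2.0 * L))**2 + fc**2)
f = f * (1.0 + 1e-5 * np.random.default_rng(1).standard_normal(n.size))

slope, intercept = np.polyfit(n**2, f**2, 1)
print("L  =", c / (2.0 * np.sqrt(slope)), "m")        # ~0.88
print("fc =", np.sqrt(intercept) / 1e9, "ghz")        # ~1.9755

with the noise set to zero the recovered values are exact, which is the consistency check quoted in the text.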
simulating frequency shifts at these realistic levels, roughly a hundred times smaller than those above, would in principle be possible by scaling all numerical parameters appropriately, but this would have been a far more intensive computational process. however, it is well known that in practice such small frequency modulations can be easily measured with standard spectrum analyzers. (figure [resshift] caption: frequency shift induced by electron clouds in (a) the cesr beam pipe and (b) the circular cross section beam pipe.) this section discusses the effect of external dipole and wiggler magnetic fields on the phase shift measurements. simulations done in the past have revealed that the phase shift is greatly amplified in the presence of an external magnetic field if the electron cyclotron frequency lies in the vicinity of the carrier frequency. following these results, the cyclotron resonance was soon confirmed in an experiment performed at the slac chicane. the slac chicane has since been transferred to cesrta, where these studies continue to be made. in the presence of an external magnetic field and electron clouds, the medium is no longer isotropic and the polarization of the transmitted microwave plays an important role in the outcome of the measurement. when the wave electric field is oriented perpendicular to the external magnetic field, the mode is referred to as an extraordinary wave, or simply x-wave. in this situation, if the external dipole field corresponds to an electron cyclotron frequency close to the carrier wave frequency, we see an enhanced phase shift. the phenomenon is well understood in the case of open boundaries. it is usually referred to as upper hybrid resonance. the dispersion relationship for the open boundary case is given as follows (see for example ref ). the quantity is the upper hybrid frequency, which is given by , being the electron cyclotron frequency for the given magnetic field. when , it is clear that . it can also be seen that as , i.e., for very high magnetic fields, the relationship between and approaches that of propagation through vacuum. in the case of electron clouds in beam pipes, the plasma frequency is of the order of a few 10 mhz while the carrier frequency is around 2 ghz. in this regime, it is reasonable to state that resonance occurs when . since the phase advance is the product of the wave vector and the length of propagation, we see that the electron cloud induced phase shift will theoretically go to infinity. equation ([openbdryres]) is not valid for waveguides, which have finite boundaries. nevertheless, simulations show that the same qualitative features are exhibited for propagation through waveguides. figure [xwave] shows the enhanced phase shift for three values of the cloud density when the cyclotron frequency approaches the carrier frequency. the beam pipe cross section was circular with a radius of 4.45 cm, which leads to a cutoff at 1.9755 ghz for the fundamental mode. these parameters match the beam pipe geometry of the pep ii / cesrta chicane section. the wave frequency used in the simulation was 2.17 ghz. the magnetic field corresponding to this cyclotron frequency is 0.077576 t. the wave was excited with the help of a sinusoidally varying electric field pointing perpendicular to the external magnetic field.
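the limits just quoted, divergence as the upper hybrid frequency crosses the carrier and a return to vacuum-like propagation at very high fields, can be traced numerically. the sketch below is ours and uses the textbook open-boundary x-wave dispersion, which we take to be the form of eq ([openbdryres]); the cloud density is an illustrative value.

import numpy as np

e, me, eps0 = 1.602e-19, 9.109e-31, 8.854e-12
f = 2.17e9                              # carrier used in the chicane study, hz
w = 2.0 * np.pi * f
ne = 1e12                               # illustrative density, m^-3
wp2 = ne * e**2 / (eps0 * me)           # squared plasma frequency

for B in (0.02, 0.05, 0.07, 0.0776, 0.085, 0.2, 1.0):   # tesla
    wce = e * B / me                    # electron cyclotron frequency, rad/s
    wh2 = wp2 + wce**2                  # squared upper hybrid frequency
    n2 = 1.0 - (wp2 / w**2) * (w**2 - wp2) / (w**2 - wh2)
    print(f"b = {B:6.4f} t : n^2 - 1 = {n2 - 1.0:+.3e}")

the deviation from vacuum blows up as the field approaches roughly 0.0776 t, where the cyclotron frequency equals the 2.17 ghz carrier, and falls back toward zero at much larger fields.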
when the wave is polarized with an electric field that is parallel to the external magnetic field, the mode is often referred to as the ordinary wave, or the o-wave. in this case one would not expect any effect created by the external magnetic field. this would be the case for a rectangular cross-section waveguide, or with open boundaries. however, with a circular cross section as in our study, the boundary conditions force a component perpendicular to the magnetic field in the wave electric field, even if the wave is launched with a purely parallel electric field. thus, it is inevitable that a weak component of a wave with an orthogonal polarization gets excited. additionally, the method employed in launching the wave in the simulations is similar to that used in experiments, where an electric field wave is excited along a particular direction over a surface area. the functions describing the wave for a cylindrical geometry are bessel functions involving the radial and azimuthal variables. unless care is taken to excite a wave having the given functional form, the wave is expected to couple itself to two orthogonal modes with varying degrees of intensity. additionally, the detection system would receive the effect of the two modes to varying degrees. disentangling this combination will involve more analysis, guided by simulations. due to these effects, we see a weak resonance effect even in a wave excited with a purely vertical electric field, i.e., one aligned with the external magnetic field as indicated. figure [owave] shows the presence of such a weak resonance, and this effect has been observed at cesrta as well. figure [dens_scan] shows the variation of the phase shift with electron cloud density at different settings of the external magnetic field. in these simulations, the wave was excited with an electric field perpendicular to the external magnetic field. these densities are typical of what is produced in cesrta. the plots show that the variation of phase shift with density remains linear even when one is close to resonance. this is expected to be true as long as the plasma frequency is much smaller than the wave frequency, regardless of how complex the dispersion relationship of the wave is. thus, one could easily amplify the signal with the help of an external magnetic field to monitor relative changes in cloud density, if not the absolute density. experiments have been done to study the phase shift across the damping wigglers at cesrta. these experiments correspond to various bunch currents and wiggler field settings. the wiggler field setting influences the measurement in more than one way. the wiggler field affects the motion of the electrons, which influences the secondary production of the cloud. the synchrotron radiation flux is determined by the strength of the wiggler field, and this in turn determines the photoemission rate of the cloud. both these effects determine the density of the cloud. the electron density is not uniform across the length of the wiggler, as shown in ref . as already shown in this paper, the external magnetic field, by itself, alters the phase shift for a given cloud density. given that the wiggler field is rather complex, along with a cloud density that is longitudinally nonuniform, simulations become particularly important to fully interpret results from such an experiment. in this paper, we examine just the effects of the nonuniform wiggler field on the phase shift.
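since everything that follows hinges on the structure of the field map, it is worth noting that the qualitative features described in the next paragraphs, a mostly vertical field with a longitudinal component between poles, are already captured by the standard maxwell-consistent planar wiggler model. the sketch below is our own stand-in for the computed field of the cited formulation; the period and peak field are assumed values, not cesrta numbers.

import numpy as np

B0 = 1.9                     # peak field, tesla (assumed)
period = 0.4                 # wiggler period, m (assumed)
kw = 2.0 * np.pi / period

def wiggler_field(y, z):
    """idealized planar wiggler: curl- and divergence-free by construction."""
    by = B0 * np.cosh(kw * y) * np.cos(kw * z)
    bz = -B0 * np.sinh(kw * y) * np.sin(kw * z)
    return by, bz

print(wiggler_field(0.00, 0.0))    # on the midplane: purely vertical
print(wiggler_field(0.01, 0.1))    # off-axis, between poles: b_y and b_z mix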
the wiggler chamber cross section in cesrta is close to that of a rectangle. the height is 5 cm and the width is 9 cm. the corners of the rectangle are chopped, so that the actual width of the base and top is 64.6 mm and the height of the side walls is 24.6 mm, which would only moderately alter the results obtained from using a perfect rectangle. thus, for the sake of simplicity, the simulations were done with a perfect rectangle with the above parameters. the length of the section simulated is 80 cm, which corresponds to half the length of the wiggler. this is sufficient to account for all the variations in the wiggler magnetic field. the computed wiggler magnetic field used was based on the formulation given in . figure [wigglerfield] shows the magnetic field in three dimensions. for most of the region the field is oriented vertically, while in the transition region between poles there is a longitudinal component to the field. there is almost no magnetic field in the horizontal direction. even before performing simulations with the full wiggler field turned on, some preliminary studies were done with just a constant dipole field within the same geometry. since the shape of the vacuum chamber cross section is rectangular, the cutoff frequency of the wave is determined by the polarization of the te wave. if the wave electric field is pointing in the vertical direction, the cutoff frequency is 1.66 ghz, and if the field is horizontal, the cutoff is 3 ghz. all simulations were done at a frequency 10% above the respective cutoff. figure [wiggdipvertwave] shows the phase shift associated with propagation of a vertically polarized wave under different conditions. the figure shows that when the wave electric field is polarized along the external magnetic field, the phase shift matches that predicted by eq ([phaseshift]). this result is expected to be true in the case of a rectangular cross section, where the wave electric field points parallel to the external magnetic field everywhere in the pipe, and is thus unaffected by the external magnetic field. as already shown, this would not be true in the case of a curvature in the cross-section boundary. the figure also shows that the phase shift is suppressed in the case of the wave electric field pointing perpendicular to the external field, referred to as the extraordinary wave. the cyclotron resonance in this case occurs when the magnetic field is equal to t, and the field here was set to a much higher value. in general, the wave electric field perturbs the electrons, causing them to oscillate and thereby alter the wave dispersion relation. when the external magnetic field is very high, the electrons tend to get locked against any motion transverse to the magnetic field. since the wave electric field is perpendicular to the external magnetic field, it will encounter electrons that tend to be ``frozen''.
as a result, the wave will undergo a reduced electron cloud induced phase shift at magnetic fields much higher than the field causing cyclotron resonance for the specific carrier frequency. figure [wiggdiphorwave] corresponds to a wave with the electric field pointing horizontally. this shows similar features to those in figure [wiggdipvertwave]. the wave frequency is 3.3 ghz and cyclotron resonance occurs at a field of 0.11 t, so the figure shows an enhanced phase shift at a magnetic field setting close to this value. exciting the chamber at this higher frequency could be less efficient due to poorer matching between the various hardware components. in addition, one can expect a mixing of modes to take place at higher frequencies because of the presence of various irregularities in a real beam pipe, as opposed to a simulated one. nevertheless, studying this mode is important because the wiggler magnetic field is mostly pointing in the vertical direction. one could amplify the signal by setting a wiggler field such that a cyclotron resonance is excited. this effect might prove useful in detecting the presence of very low density electrons in wiggler regions. the presence of low energy electrons in wigglers and undulators is of particular interest if the device is cryogenic, in which case the electrons are believed to contribute to the heat load of the system. the electron cloud in such systems would be produced by electron beams primarily through photoemission, and thus, if present, it will occur at very low densities, requiring greater sensitivity in the detection. finally, we look at the phase shift in the presence of the full wiggler field. figure [wiggfullfield] shows that the phase shift gets suppressed by about 20% when compared with that expected in the absence of any fields. in this case, the wave electric field is pointing in the vertical direction, which is a configuration that should have little effect on the phase shift, since the external magnetic field is largely vertical. a longitudinal magnetic field would alter the dispersion relationship, with the wave getting split into left and right circularly polarized components. this has been analyzed for guided wave propagation in cylindrical geometries in ref . while the 20% reduction in phase shift in our result can be attributed to such an effect, a detailed analysis of the same is beyond the scope of this paper. overall, it is clear that simulations are of prime importance to accurately interpret the observed electron cloud induced phase shifts across such wiggler fields. in this paper, we provide a comprehensive account of the simulation and analysis effort that has been carried out in conjunction with the experimental effort of using te waves to measure electron clouds at cesrta. these simulations helped confirm several physical phenomena, either quantitatively or qualitatively. for example, they helped validate eq [phaseshift], which relates the phase shift to the cloud density, for geometries like the cesrta beam pipe, which does not have a regular shape such as rectangular or circular. the effect of reflections on phase shift measurements has always been a concern, and simulations show that one must be especially careful of standing waves excited within the beam pipe by partial reflectors. the feasibility of using standing waves to measure the cloud density is clearly demonstrated by simulations and has provided valuable guidance to the experimental effort being carried out at cesrta.
in the process, we were able to determine a novel method of detecting the presence of standing waves in simulations, by averaging the total poynting vector flux across a surface over time for varying frequencies. simulations of phase shifts in the presence of external magnetic fields were modeled for a variety of cases. the nature of the results varies greatly based on the parameters present in the system. for example, the possibility of exciting cyclotron resonances is clearly shown in simulations, and these would be possible to produce in the dipole fields present in a chicane. however, the dipole fields used in bend regions of an accelerator are much higher, and they can suppress the electron induced phase shift, depending on the polarization of the wave. in the presence of curved boundaries, there is always a mixture of effects from ordinary and extraordinary wave propagation. since most accelerator vacuum chambers have a curvature, this effect is important to understand. a direct comparison with the analytic expression eq [disprel] would lead to an incorrect interpretation of the measured data. the results will always be hard to interpret when a system is near the cyclotron resonance, where the phase shift is theoretically infinite. simulations and experiments would never yield an infinity, and there may be poor agreement between the two in such regimes. however, an enhancement of the signal that is still predictable can always be obtained by moving reasonably close to a cyclotron resonance point. electron cloud formation in wiggler fields can be very important in machines such as positron damping rings. given the complexity of such a system, experiments determining the cloud density using te waves have to be accompanied by careful simulations before interpreting any results obtained from measurements. it may be noted that the relationship between the phase shift and the cloud density is always linear, regardless of how complex the system is. this will be true for very low cloud densities, which have low plasma frequencies. thus, this method can be of great utility if one is interested in relative changes in electron cloud densities, for example when a machine is undergoing conditioning. simulations can then be used to obtain a proportionality constant between cloud density and phase shift. the studies in this paper were always done with an electron distribution that was cold and uniform, both transversely and longitudinally. simulations have not indicated a dependence on the temperatures associated with typical electron cloud densities. for a transversely nonuniform distribution, there will be a slight enhancement in the phase shift if more electrons populate regions with high peak electric fields produced by the wave. as mentioned earlier, the longitudinal variation of the cloud density becomes important in wiggler fields because it couples with the longitudinal variation of the magnetic field. these additional complexities could be topics for future studies. the te wave method is an attractive technique for measuring electron cloud densities that can replace or complement other measurement methods.
besides cesrta, this method is being studied at other accelerator facilities. the measurement technique and its required instrumentation are simple, the process is noninvasive, and it can be kept in operation continuously. thus, it holds the promise of wide usage wherever it is useful to monitor the electron cloud properties continuously and at all locations of the accelerator. it is evident that a careful study toward understanding the physical phenomenon through analysis and simulation is very important for proper interpretation of the measured data. in this appendix, we provide a derivation of eq [disprel] which is not specific to the geometry of the cross-section of the waveguide. we also discuss the approximations and assumptions associated with the derivation of this equation. this dispersion relationship is specific to guided waves propagating through electron clouds in field free regions. the starting equations for such a system would include the fluid and maxwell equations. these are
\begin{aligned} & & m_{e}\left[\frac{\partial {\bf v}}{\partial t} + ({\bf v}\cdot\nabla)\,{\bf v}\right] + e({\bf e} + {\bf v}\times{\bf b}) = 0 \nonumber \\ & & \frac{\partial n_e}{\partial t} + \nabla\cdot( n_e {\bf v}) = 0 \nonumber \\ & & \nabla\cdot{\bf e} = -\frac{e n_e}{\epsilon_0} \nonumber \\ & & \nabla\cdot{\bf b} = 0 \nonumber \\ & & \nabla\times{\bf e} = -\frac{\partial{\bf b}}{\partial t} \nonumber \\ & & \nabla\times{\bf b} = \mu_0(-e n_e {\bf v} + {\bf j_{ext}}) + \mu_0\epsilon_0\frac{\partial{\bf e}}{\partial t} \label{unperturbeqs}\end{aligned}
where is the number density of the electrons, is the velocity of the fluid, and is an external current density. the other terms have their usual meanings. the continuity equation is not independent from the rest of the equations, as it can be obtained from maxwell's equations. we perturb all quantities about an equilibrium, so that we have , , , where the zeroth order quantities satisfy the steady state condition. as a result, we get . such an equilibrium state requires the particles to be confined in a steady state indefinitely. this is normally associated with neutral plasmas in which the ions may be considered immobile, or with charged particles trapped for a long period of time in confinement devices. in this paper, the electron cloud density is sustained for the duration of the bunch train passage. the above equilibrium condition may be considered valid as long as , where is the time of confinement of the charge and is the frequency of the perturbing wave. periodic changes in the state occurring over time scales greater than the wave periodicity manifest themselves as modulations of the output signal, while changes occurring over much smaller time scales remain unresolved by the carrier wave. thus, the spectrum of the output signal would depend on the variation of the electron cloud associated with its build up and decay. inserting the perturbation expansion into the original fluid and maxwell equations, imposing the above equilibrium conditions and ignoring terms of second and higher order, we get . these equations are linear and we can seek all perturbations of the following form . from the above, it is clear that this form is valid as long as the geometry along the longitudinal coordinate is uniform and infinite. this is not entirely true when there are partial reflectors.
in the case of perfect reflectors, one would obtain discrete values for , representing standing waves .thus one can expect that the above form to be more accurate when close to a resonance in the presence of partial reflectors .in general , the reflectors would be small enough so that the above form of solutions can be considered a valid approximation . for the sake of convenience, we drop the accent and the arguments in the functions given in eq ( [ wavesoln ] ) . to simplify the analysis ,we make two assumptions , ( 1 ) the fluid is cold and at rest .so , and ( 2 ) the density is uniform , which means = constant .we further assume that there is no external magnetic field , which means that . in the absence of any static external magnetic fields , the only contribution to would be the magnetic field produced by the beam .since the beam is highly relativistic , this would be confined along the length of the bunch .it is reasonable to disregard this when the gap between the bunches is much larger than the bunch length , in which case the wave would sample mostly a field free region .inserting the waveform solutions ( eq [ wavesoln ] ) into the perturbed momentum and continuity equations in eq ( [ perturbeqns ] ) , with and using the relationships of eq ( [ waveform ] ) , we have , combining these , we get combining this with the perturbed electrostatic field equation in eq ( [ perturbeqns ] ) gives , which means , up to the first order , there is no perturbation in the charge density due to the wave electric and magnetic fields .this gives us implying that the wave is purely electromagnetic . by combining the first ( momentum ) , and the last ( ampere s law ) equation of eq ( [ perturbeqns ] ) , and using the relationships of eq ( [ waveform ] ) , we get after some algebra , where .similarly , it is easy to see that , using eqs [ wavegauss ] , [ waveampere ] and [ wavefaraday ] , along with , and assuming that the boundary conditions are perfectly conducting , one can follow the steps given in ref . to get the constant must be nonnegative for oscillatory solutions , and will take on a set of discrete eigenvalues " , corresponding to the different modes associated with the geometry of the cross - section of the waveguide . combining eq ( [ eigenwaveeq ] ) with a similar relationship for a vacuum waveguide where , , one can easily show that , being the angular cutoff frequency for the vacuum waveguide . this relationship is the same as eq ( [ disprel ] ) .the authors wish to thank john sikora for many useful discussions and for suggesting us to do the simulations with partial internal reflections .thanks to jim crittenden for helping us in generating the complete the wiggler magnetic field .we also wish to thank david rubin , mark palmer and peter stoltz for their support and guidance .this work was supported by the us national science foundation ( phy-0734867 , phy-1002467 , and phy-1068662 ) and the us department of energy ( de - fc02 - 08er41538 and de - sc0006505 ; de - fc02 - 07er41499 as part of the compass scidac-2 project , and de - sc0008920 as part of the compass scidac-3 project ) .24 f. caspers , w. hofle , j. m. jimenez , j. f. malo , j. tuckmantel , and t. kroyer , in proceedings of the 31st icfa beam dynamics workshop : electron cloud effects ( ecloud04 ) , napa , california 2004 ( cern report no .cern-2005 - 001 , 2004 ) .t. kroyer , f. caspers , e. mahner , proceedings of 2005 particle accelerator conference , knoxville , tennessee 2212 - 2214 s. de santis , j. m. byrd , f. 
The authors wish to thank John Sikora for many useful discussions and for suggesting the simulations with partial internal reflections. Thanks to Jim Crittenden for helping us generate the complete wiggler magnetic field. We also wish to thank David Rubin, Mark Palmer, and Peter Stoltz for their support and guidance. This work was supported by the US National Science Foundation (PHY-0734867, PHY-1002467, and PHY-1068662) and the US Department of Energy (DE-FC02-08ER41538 and DE-SC0006505; DE-FC02-07ER41499 as part of the COMPASS SciDAC-2 project, and DE-SC0008920 as part of the COMPASS SciDAC-3 project).

[1] F. Caspers, W. Hofle, J. M. Jimenez, J. F. Malo, J. Tuckmantel, and T. Kroyer, in Proceedings of the 31st ICFA Beam Dynamics Workshop: Electron Cloud Effects (ECLOUD04), Napa, California, 2004 (CERN Report No. CERN-2005-001, 2004).
[2] T. Kroyer, F. Caspers, and E. Mahner, in Proceedings of the 2005 Particle Accelerator Conference, Knoxville, Tennessee, pp. 2212-2214.
[3] S. De Santis, J. M. Byrd, F. Caspers, A. Krasnykh, T. Kroyer, M. T. F. Pivi, and K. G. Sonnad, Phys. Rev. Lett. 100, 094801 (2008).
[4] K. Sonnad, M. Furman, S. Veitzer, P. Stoltz, and J. Cary, in Proceedings of PAC07, Albuquerque, New Mexico, USA, paper THPAS008.
[5] K. G. Sonnad et al., in Proceedings of the 2009 Particle Accelerator Conference, Vancouver, Canada, paper TH5RFP044.
[6] J. P. et al., in Proceedings of the 2011 International Particle Accelerator Conference, San Sebastian, Spain, paper TUPC170.
[7] K. G. et al., 49th Annual Meeting of the Division of Plasma Physics, http://meetings.aps.org/link/baps.2007.dpp.tp8.134.
[8] S. Veitzer, DOE Scientific and Technical Information, http://www.osti.gov/bridge, identifier number 964651.
[9] M. T. F. Pivi et al., in Proceedings of the 2008 European Particle Accelerator Conference, Genoa, Italy, paper MOPP065.
[10] C. Nieter and J. R. Cary, J. Comp. Phys. 196, 448-472 (2004).
[11] H. S. Uhm, K. T. Nguyen, R. F. Schneider, and J. R. Smith, Journal of Applied Physics 64(3), 1108-1115 (1988).
[12] J. Berenger, Journal of Computational Physics 114, 185 (1994).
[13] K. Yee, IEEE Transactions on Antennas and Propagation AP-14, 302 (1966).
[14] S. De Santis, Phys. Rev. ST Accel. Beams 13, 071002 (2010).
[15] J. Sikora and S. De Santis, http://arxiv.org/abs/1311.5633.
[16] S. De Santis et al., in Proceedings of the 2011 Particle Accelerator Conference, New York, NY, USA, paper MOP228.
[17] R. J. Goldston and P. H. Rutherford, Introduction to Plasma Physics (Institute of Physics Publishing, 1995).
[18] C. Celata, Phys. Rev. ST Accel. Beams 14, 041003 (2011).
[19] D. Sagan, J. A. Crittenden, D. Rubin, and E. Forest, in Proceedings of the 2003 Particle Accelerator Conference, Portland, OR, USA, p. 1023.
[20] S. Casalbuoni, S. Schleede, D. Saez de Jauregui, M. Hagelstein, and P. F. Tavares, Phys. Rev. ST Accel. Beams 13, 073201 (2010).
[21] S. Federmann, F. Caspers, and E. Mahner, Phys. Rev. ST Accel. Beams 14, 012802 (2011).
[22] N. Eddy, J. Crisp, I. Kourbanis, K. Seiya, B. Zwaska, and S. De Santis, in Proceedings of the 2009 Particle Accelerator Conference, Vancouver, BC, Canada, paper WE4GRC02.
[23] J. C. Thangaraj, N. Eddy, B. Zwaska, J. Crisp, I. Kourbanis, and K. Seiya, in Proceedings of the Electron Cloud Workshop 2010, Ithaca, New York, USA, paper DIA00.
[24] J. D. Jackson, Classical Electrodynamics, 3rd ed. (Wiley, New York, NY, 1999).
The use of transverse electric (TE) waves has proved to be a powerful, noninvasive method for estimating the densities of electron clouds formed in particle accelerators. Results from the plasma simulation program VSim have served as a useful guide for experimental studies related to this method, which have been performed at various accelerator facilities. This paper provides results of the simulation and modeling work done in conjunction with experimental efforts carried out at the Cornell Electron Storage Ring Test Accelerator (CESRTA). The paper begins with a discussion of the phase shift induced by electron clouds in the transmission of RF waves, followed by the effect of reflections along the beam pipe, the simulation of the resonant standing-wave frequency shifts, and finally the effects of external magnetic fields, namely dipoles and wigglers. A derivation of the dispersion relation for wave propagation in field-free regions with a cold, uniform cloud density, valid for arbitrary waveguide cross sections, is also provided.
As is well known, the deflection of light by a gravitating body was one of the first predictions of Einstein's general theory of relativity to be observationally confirmed. Later on, Einstein himself predicted what is today called a microlens: the momentary increase in apparent brightness of a background star as it passes close to a foreground massive body. Both the deflection of light and the change in apparent brightness of a radiation source by an external gravitational field are collectively known as a gravitational lens. Nowadays, gravitational lensing is a very active area of research, and it has found applications ranging from the search for extrasolar planets and compact dark matter to the estimation of the values of the cosmological parameters. In most of these applications it is only necessary to assume that the gravitational field is weak and that the deflection angle due to a spherically symmetric body of mass $M$ can be approximated by $\alpha \simeq 4GM/(c^2 b)$, where $b$ is the impact parameter. On the other hand, it is well known that for a Schwarzschild black hole the deflection angle diverges as the impact parameter approaches a critical value, allowing photons to orbit the black hole many times before reaching the observer. This gives rise to an infinite set of images at both sides of the black hole. Notice that in this regime the gravitational field is no longer weak, and the above approximation fails. Recently, a paper by Virbhadra and Ellis renewed interest in such images, which they called relativistic images. Later on, Bozza, and Eiroa, Romero and Torres, developed an approximation method for the case of strong spherically symmetric gravitational fields. In fact, by expanding the deflection angle near the point of divergence, these authors were able to find analytic expressions for the positions and magnifications of the resulting relativistic images. Interestingly, such images are characterized only by the number of windings around the black hole. Although several authors have begun to study gravitational lenses with a rotating black hole as the gravitational deflector (see the works cited in the references), most of their approaches still use the weak-field approximation and focus only on null geodesic motion in the black hole's equatorial plane. In passing, we mention for the interested reader a nice discussion of a Kerr black hole as a gravitational lens by Bray. Unfortunately, that work uses approximations valid only for small deviations from the straight-line path, and is therefore not suited for studying relativistic images.
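For orientation, the weak-field formula quoted above is trivial to evaluate numerically. The sketch below does so in SI units (with illustrative input values, not taken from this paper); it is exactly this approximation that breaks down as $b$ approaches the photon-sphere value $3\sqrt{3}\,GM/c^2$.

```python
import numpy as np

G, C, M_SUN = 6.674e-11, 2.998e8, 1.989e30

def weak_deflection(mass_solar, b_meters):
    """Weak-field deflection angle alpha = 4 G M / (c^2 b), in radians.
    Valid only for b much larger than G M / c^2; near
    b = 3*sqrt(3) G M / c^2 the exact deflection diverges."""
    return 4.0 * G * mass_solar * M_SUN / (C**2 * b_meters)

# Classic check: light grazing the Sun (b = one solar radius)
alpha = weak_deflection(1.0, 6.96e8)
print(np.degrees(alpha) * 3600.0)     # ~1.75 arcsec
```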
If we want to consider the phenomenology of the relativistic images in the strong gravitational field of a Kerr black hole, no such approximations can be made and we need to work with the full equations of motion for null rays. There have been numerous articles about the motion of null rays in the gravitational field of a Kerr black hole. Although some of them address the gravitational lens problem, most have concentrated on the observational effects on accretion disks and on sources orbiting in the equatorial plane of the black hole, and have not discussed the phenomenology of the relativistic images. Recently, Bozza studied "quasi-equatorial" orbits of photons around a Kerr black hole and provided analytical expressions for the positions and magnifications of the relativistic images. However, these approximations fail when the observer and the source are located far from the equatorial plane. Moreover, as we show in Sec. V, Bozza's procedure begins to fail as the black hole's angular momentum increases toward its maximum value, even if the observer and the source are close to the rotating black hole's equatorial plane. In this paper we discuss the phenomenology of the relativistic images using the exact null equations of motion, under the assumption that the observer and the source are far away from the black hole. Although this procedure is far more complicated than previous work on this subject, it allows us to calculate, estimate, and discuss for the first time the observational properties of the relativistic images for arbitrary source and observer inclinations in a Kerr gravitational lens. The purpose of this paper is therefore twofold: first, to extend the study of relativistic images to the case where the gravitational deflector is a rotating Kerr black hole; second, since the trajectory of a photon will not always be confined to a plane, to treat photon trajectories off the black hole's equatorial plane. As a consequence of this undertaking, we show below that all relativistic images deflected by a rotating Kerr black hole are characterized by only two integer numbers: the number of turning points in the polar coordinate $\theta$, and the number of windings around the black hole's rotation axis. To facilitate reading, this paper is divided as follows. In Sec. II, to make this study self-contained, the null geodesic equations of motion in the Kerr space-time are briefly reviewed. In Sec. III we explain the gravitational lens geometry and present a general classification of all images, which allows us to define formally what we mean by "relativistic images". In Sec. IV, analytical expressions for the image magnifications are derived. In Sec. V, we calculate the positions and magnifications of the relativistic images for several special cases and compare our results with those in the literature.
Finally, in Sec. VI a discussion of our results is undertaken. An appendix is also included to show how the null equations of motion can be solved in terms of elliptic integrals.

In this section we briefly review the equations of motion for light rays in the Kerr space-time. We use the usual Boyer-Lindquist coordinates which, at infinity, are equivalent to the standard spherical coordinates. We also discuss the relevant range of coordinates and constants of motion for the case of an observer and a source located far away from the black hole. Details of the relation between the null geodesic equations, the gravitational lens geometry, and observable quantities are given in Sec. III. By a convenient choice of the affine parameter $\sigma$, null geodesics in the Kerr space-time can be described by the first-order differential system
$$\Sigma\,\frac{dr}{d\sigma} = \pm\sqrt{R(r)}, \qquad \Sigma\,\frac{d\theta}{d\sigma} = \pm\sqrt{\Theta(\theta)},$$
together with first-order equations (uphi) and (ut) for $d\varphi/d\sigma$ and $dt/d\sigma$, where
$$R(r) = r^4 + \left(a^2 - \lambda^2 - \eta\right)r^2 + 2\left[\eta + (\lambda - a)^2\right]r - a^2\eta,$$
$$\Theta(\theta) = \eta + a^2\cos^2\theta - \lambda^2\cot^2\theta,$$
$$P = (r^2 + a^2) - \lambda a, \qquad \Sigma = r^2 + a^2\cos^2\theta, \qquad \Delta = r^2 - 2r + a^2.$$
Here $\lambda$ and $\eta$ are the constants of motion, $a$ is the black hole's angular momentum per unit mass, and units are chosen such that $G = c = M = 1$. As is well known, for a Kerr black hole the parameter $a$ is restricted to $0 \le a \le 1$. From Eqs. (ur)-(ut) it follows that the relevant integrals of motion are
$$\int^r \frac{dr}{\pm\sqrt{R(r)}} = \int^\theta \frac{d\theta}{\pm\sqrt{\Theta(\theta)}},$$
together with a similar expression for the elapsed time, which contains the additional term $\int^\theta a^2\cos^2\theta\,d\theta/\!\pm\!\sqrt{\Theta(\theta)}$. The signs of the square roots are those of $dr/d\sigma$ and $d\theta/d\sigma$, respectively; thus the positive sign is chosen when the lower integration limit is smaller than the upper limit, and the negative sign otherwise. For an observer and a source located far away from the black hole, the relevant radial integral can be written as
$$\int_{r_{min}}^{r_o} \frac{dr}{\sqrt{R(r)}} + \int_{r_{min}}^{r_s} \frac{dr}{\sqrt{R(r)}} \simeq 2\int_{r_{min}}^{\infty} \frac{dr}{\sqrt{R(r)}},$$
where $r_{min}$ is the only turning point in the photon's trajectory, defined as the largest positive root of $R(r) = 0$. For the angular integrals, however, there can be more than one turning point, and the most general trajectory oscillates between the turning points in $\theta$. These are defined by $\Theta(\theta) = 0$ and are given by
$$\cos^2\theta_{min} = \frac{1}{2a^2}\left\{\left(a^2 - \lambda^2 - \eta\right) + \left[\left(a^2 - \lambda^2 - \eta\right)^2 + 4a^2\eta\right]^{1/2}\right\}.$$
It is easy to show from Eq. (utheta) that, for any pair of parameters $(\lambda, \eta)$, the motion in $\theta$ is bounded by $\theta_{min}$ and $\pi - \theta_{min}$. Thus the space of parameters is restricted, because the polar angles of the observer and the source must lie within this range. This constraint will be implemented in Sec. III after discussing the lens geometry. Finally, the deflection angle has to be chosen to satisfy a given source-observer geometry (see Sec. III for details). Now we want to know what region of the parameter space $(\lambda, \eta)$ corresponds to photons that, after reaching $r_{min}$, can escape to infinity. Writing Eq. (utheta) as $\Theta = \eta + \cos^2\theta\left(a^2 - \lambda^2/\sin^2\theta\right)$, we see that $\eta$ could be negative. However, if a photon crosses the equator ($\theta = \pi/2$), then $\Theta(\pi/2) = \eta \ge 0$. Since in this research we consider a source behind the black hole, and we are mostly interested in images formed by photons that go around the black hole before reaching the observer, we will only consider the case of positive $\eta$. For a photon to be able to return to infinity, we need $R(r) = 0$ at some radius outside the horizon.
Setting $R(\bar r) = R'(\bar r) = 0$ we get
$$\lambda(\bar r) = -\frac{\bar r^3 - 3\bar r^2 + a^2\bar r + a^2}{a\,(\bar r - 1)}, \qquad \eta(\bar r) = -\frac{\bar r^3\left[\bar r^3 - 6\bar r^2 + 9\bar r - 4a^2\right]}{a^2(1 - \bar r)^2},$$
where $\bar r$ runs over the radii of the unstable spherical photon orbits, bounded below by the Kerr black hole's horizon $r_h = 1 + \sqrt{1 - a^2}$. This is a parametric curve in the $(\lambda, \eta)$ space, and it is shown in Fig. (parspace). Photons with constants of motion inside the shaded region do not have a turning point outside the horizon, so they will fall into the Kerr black hole.

[Fig. (parspace): photons with constants of motion $(\lambda, \eta)$ inside the shaded region fall into the black hole; the boundary is the parametric curve defined by Eqs. (lambdamin) and (etamin).]

This is analogous to the motion of null geodesics around a Schwarzschild black hole, where any photon with impact parameter $b \le 3\sqrt{3}$ cannot escape to infinity; that value of the parameter corresponds to $\bar r = 3$, which defines the "photon sphere". In the parameter space $(\lambda, \eta)$, this corresponds to a closed region bounded by the line $\eta = 0$ and the curve $(\lambda(\bar r), \eta(\bar r))$. This forbidden region in the parameter space shall be called the photon region. The radial integrals of Eqs. (firstint) and (secondint) diverge at the boundary of the photon region (except on the line $\eta = 0$) and take complex values inside it. Therefore, knowledge of the mapping of this region will allow us to ensure that the equations of motion remain well behaved.

Since the Kerr space-time is asymptotically flat, an observer far away from the black hole ($r_o \gg 1$) can set up a reference Euclidean coordinate system $(x, y, z)$ with the black hole at the origin (see Fig. (lensgeometry)). The Boyer-Lindquist coordinates coincide with this reference frame only for large $r$. The coordinate system is chosen so that, as seen from infinity, the black hole rotates around the $z$ axis; for $a > 0$ the rotation is assumed to be counterclockwise as seen from the positive $z$ axis. Without loss of generality, we choose the axes so that the Boyer-Lindquist coordinates of the observer are $(r_o, \theta_o, 0)$; similarly, for the source we have $(r_s, \theta_s, \varphi_s)$.

[Fig. (lensgeometry): the reference frame $(x, y, z)$ with the black hole at the origin. The Boyer-Lindquist coordinates coincide with this system only at infinity. The frame is chosen so that, as seen from infinity, the black hole rotates around the $z$ axis. The line joining the origin with the observer is normal to the observer's sky plane. The tangent vector to an incoming light ray defines a straight line, which intersects that plane at the point $(\alpha_i, \beta_i)$.]

In the observer's reference frame, an incoming light ray is described by a parametric curve $x(r)$, $y(r)$, $z(r)$, where $r^2 = x^2 + y^2 + z^2$; for large $r$, $r$ is just the usual radial coordinate of the Boyer-Lindquist system. At the location of the observer, the tangent vector to this curve describes a straight line which intersects the plane shown in Fig. (lensgeometry) at $(\alpha_i, \beta_i)$. A line joining the origin with the observer is normal to this plane; we call it the observer's sky. The point $(\alpha_i, \beta_i)$ in this plane corresponds to a definite point in the $(x, y, z)$ system. Changing to spherical coordinates and using the equations of the straight line, it is easy to show, by using Eqs. (ur)-(uphi) in (alphai) and (betai) and further assuming $r_o \gg 1$, that the constants of motion are related to the position of the images in the observer's sky by
$$\lambda = -\alpha_i\,\sin\theta_o, \qquad \eta = \beta_i^2 + \left(\alpha_i^2 - a^2\right)\cos^2\theta_o.$$
These equations can, in turn, be written in terms of the small angles $(\vartheta_\alpha, \vartheta_\beta)$ in the observer's sky through $\alpha_i = r_o\,\vartheta_\alpha$ and $\beta_i = r_o\,\vartheta_\beta$. We would like to mention that Eqs. (alphai)-(eta) are not equivalent to Eqs. (7), (8), (11) and (12) of the paper by Bray.
However, our equations exactly coincide with Eqs. (28a) and (28b) of a second reference. We believe that in Bray's paper there is a calculation mistake in the intersection of the tangent to the light ray with the sky plane; the author also ignores the contribution of the black hole's spin when expanding the constants of motion. As will be seen later, it is also useful for our purposes to write the source's angular coordinates $(\theta_s, \varphi_s)$ in terms of its position in the observer's sky. From the geometry of Fig. (lensgeometry) we obtain Eqs. (xs) and (ys), relating $(\theta_s, \varphi_s)$ to the angles $(x_s, y_s)$ of the source in the observer's sky. Inverting Eqs. (xs) and (ys) for the polar and azimuthal angles, it is convenient to consider good source alignments (i.e. small $x_s$ and $y_s$); the motivation for this approximation will become clear when we consider the magnifications of the relativistic images (defined below). If the observer is not exactly on the rotation axis, we can approximate $\theta_s = \pi - \theta_o + \delta$ and $\varphi_s = \pi + \gamma$, with small $\delta$ and $\gamma$ (an observer exactly on the axis will be considered in Sec. V). Expanding Eqs. (xs) and (ys) to second order in these perturbations, one finds the expansions used in the calculations below.

Now we want to know how the restrictions in the parameter space $(\lambda, \eta)$ and in the lens geometry are reflected in the possible values of the image coordinates $(\alpha_i, \beta_i)$. We begin with the photon region discussed in Sec. II. This constraint corresponds to a closed region in the observer's sky, and it is useful to write its boundary as a parametric curve. Inserting Eqs. (lambdamin), (etamin), (lambda) and (eta) into the definition of the boundary (using the small-angle relations $\alpha_i = r_o\vartheta_\alpha$, $\beta_i = r_o\vartheta_\beta$), we can solve for the image coordinates and obtain the desired curve. When doing so, one encounters a sixth-order polynomial in the radial variable; the largest positive root is valid on one branch of the boundary and the second-largest positive root on the other.

Another constraint comes from the polar motion of the light rays. In Sec. II we pointed out that the motion in $\theta$ is restricted between the turning points defined by Eq. (um); therefore, points in the parameter space where the inequality $\theta_{min} \le \theta_o \le \pi - \theta_{min}$ is not satisfied must be discarded. One can prove that if $\lambda$ and $\eta$ are given by Eqs. (lambda) and (eta), then this inequality is always satisfied outside the photon region. Writing the turning-point condition, Eq. (turneq), in terms of the image coordinates, one finds that the "radius" $\alpha_i^2 + \beta_i^2$ must be greater than or equal to its value on the boundary of the photon region and on the line $\eta = 0$; the minimum of this radius then implies the desired inequality, with equality satisfied only for $\beta_i = 0$. If $\beta_i \ne 0$, we can ensure the analogous inequality for the source by repeating the same construction in the frame of the source, with sky coordinates $(\alpha_s, \beta_s)$, using Eqs. (lambda), (eta), (lambdasource) and (etasource). Since for physically relevant situations we must have $\Theta(\theta_s) \ge 0$ (with equality holding only at a turning point), the excluded points are those between the two curves obtained as the roots of the condition $\Theta(\theta_s) = 0$ for $\beta_i$. The forbidden regions in the observer's sky are shown in Fig. (freg); for an equatorial observer and source, only the photon region is present.

[Fig. (freg): forbidden regions in the observer's sky, shown in shaded gray, for a representative choice of $a$, $\theta_o$ and $\theta_s$; for an equatorial geometry only the photon region is present.]
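The two maps just described, from sky coordinates to constants of motion and from constants of motion to capture or escape, are easy to script. The sketch below is illustrative only: the sign convention for $\lambda$ varies between papers, the root search is a crude grid scan rather than the sixth-order-polynomial analysis of the text, and the input values are made up.

```python
import numpy as np

def conserved_constants(alpha, beta, theta_o, a):
    """Carter-type constants (lambda, eta) of the null ray reaching sky
    position (alpha, beta), in units G = c = M = 1; Bardeen-style
    relations, up to sign conventions."""
    lam = -alpha * np.sin(theta_o)
    eta = beta**2 + (alpha**2 - a**2) * np.cos(theta_o)**2
    return lam, eta

def falls_into_hole(lam, eta, a, n_grid=200_000):
    """True if R(r) > 0 everywhere outside the horizon, i.e. the photon
    has no radial turning point and is captured."""
    r_h = 1.0 + np.sqrt(1.0 - a**2)              # outer horizon
    r = np.linspace(r_h + 1e-6, 100.0, n_grid)
    delta = r**2 - 2.0 * r + a**2
    R = ((r**2 + a**2) - a * lam)**2 - delta * (eta + (lam - a)**2)
    return not np.any(R <= 0.0)

lam, eta = conserved_constants(alpha=5.0, beta=2.0,
                               theta_o=np.pi / 2, a=0.5)
print(falls_into_hole(lam, eta, a=0.5))
```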
Next, we focus on relating the null equations of motion considered in Sec. II to the geometry of the gravitational lens system. In the familiar case of a Schwarzschild black hole lens, the motion of light rays is restricted to a plane; therefore, without loss of generality we can work in the equatorial plane, $\theta = \pi/2$. Then, substituting Eq. (firstint) into (secondint) and setting $\eta = 0$, we obtain the familiar Schwarzschild deflection angle
$$\Delta\varphi = 2\int_{r_{min}}^{\infty}\frac{dr}{r^2\left[\dfrac{1}{\lambda^2} - \dfrac{1}{r^2}\left(1 - \dfrac{2}{r}\right)\right]^{1/2}},$$
where $\eta = 0$ because we are working in the equatorial plane (see Eq. (etacarter)). This deflection angle can be written in terms of the image position using Eq. (lambda); then, by use of the familiar "lens equation", we can solve for the positions of the "virtual" images. Another approach, however, is to prescribe the accumulated azimuth in terms of the geometry of the observer-source pair and solve Eq. (schang) for $\lambda$. In this case the total azimuthal sweep must equal the source-observer azimuthal offset plus $2\pi n$, where $n$ is the number of windings around the $z$ axis. The solutions for $n = 0$ are the familiar weak-field images, and for $n \ge 1$ we have the relativistic images studied in the references. This last method is more useful when we consider trajectories outside the equatorial plane: in that case, Eqs. (firstint) and (secondint), together with (dphi), become our "lens equations". Nevertheless, for trajectories outside the equatorial plane an additional parameter appears: the number of turning points in the polar coordinate, $m$. For the Schwarzschild black hole it is easy to prove that $m$ is determined by $n$, because photons travel only in a plane. What about the case of a Kerr black hole? In this situation, since the motion of light rays is not necessarily restricted to a plane, we must consider $n$ and $m$ as independent parameters. Additionally, in trying to write the accumulated azimuth as in Eq. (dphi), one faces a possible complication: unlike in the Schwarzschild space-time, the Kerr geometry admits turning points in $\varphi$, which could complicate the analysis. Using Eq. (uphi), it is easy to show that the possible turning points occur where $d\varphi/d\sigma = 0$; the two solutions of this condition are surfaces of revolution around the rotation axis of the black hole, and it is easy to prove that for a given $\lambda$ only one of the surfaces lies outside the horizon. Since we are considering photons that come from infinity, reach a single turning point in $r$, and return to infinity, they can cross such a surface in at most two points; in consequence, there can be at most two turning points in $\varphi$. However, by using Eqs. (uphi) and (ut) for an observer and a source at large $r$, it is easy to prove that asymptotically $d\varphi/dt \to \lambda/(r^2\sin^2\theta)$ (conservation of angular momentum). Therefore, the asymptotic sign of $d\varphi/dt$ must be unchanged by the gravitational interaction. This rules out a single turning point in $\varphi$, which would obviously change that sign. We conclude that the number of turning points in $\varphi$ must be zero or two. In that case, the most general expression for the accumulated azimuth is still given by Eq. (dphi), but with the following modification: the first case is to be used when the right-hand side of Eq. (secondint) is negative, and the second case otherwise.
By using Eqs. (lambda), (eta), (thetas) and (phis), the "lens equations" (firstint) and (secondint) can be expressed as two conditions, Eqs. (lens1) and (lens2), in which the radial and angular integrals of Eqs. (firstint) and (secondint) appear together with the two integers $n$ (the number of windings around the $z$ axis) and $m$ (the number of turning points in the polar coordinate). Although the integers $n$ and $m$ should be considered independent, as we will see in Sec. IV the magnification does not depend directly on $n$; therefore, in this paper we consider $m$ to be the fundamental parameter. This interpretation is confirmed by our numerical results, where we find that the magnification always decreases as we increase $m$, while for a given $m$ it can even increase for images with larger $n$ (see Sec. V). In the familiar Schwarzschild gravitational lens we always have two images, formed by light rays that suffer small deviations in their trajectories; these are called weak-field images. Additionally, there is an infinite set of faint images at both sides of the black hole. These are relativistic images, the result of photons that orbit the black hole several times before reaching the observer. In the Kerr space-time, where photon trajectories are no longer confined to a plane, the concept of "orbiting the black hole several times" can be subtle. For this reason we classify images as follows: images whose polar coordinate has at most one turning point are called direct images (DI hereafter), and images with $m \ge 2$ are called relativistic images of order $m - 1$ (RI hereafter). As we show in the appendix, Eqs. (lens1) and (lens2) can be written in terms of elliptic integrals and are highly nonlinear in all arguments except $n$ and $m$; their solution therefore requires numerical and graphical methods. To solve the lens equations we use the standard routine FindRoot built into Mathematica (a schematic open-source implementation of this step is sketched below). To give an initial approximation, and to ensure that a solution exists at all, we use a variety of graphical methods. Since, as we will see in Sec. V, RIs appear very near the boundary of the photon region, we found it useful to parametrize the image position by its small radial offset from the photon-region boundary (discussed earlier in this section) and a position angle, expressing all functions in terms of these two variables so as to avoid entering the photon region; we find that RIs form at very small offsets. To find the images we plot the two surfaces defined by the left-hand sides of the lens equations for a given source position and image numbers $(n, m)$. The intersection of each surface with the zero plane forms various curves in that plane; if the curves formed by the two surfaces intersect each other, there is a solution. Visual inspection of these curves allows us to give an initial approximation for the numerical routine. As an example, part of both surfaces is plotted in Fig. (approxmethod).

[Fig. (approxmethod): the two lens-equation surfaces for a source at a fixed position, a distant observer, and fixed $(n, m)$; the distance scale is the one used in the numerical examples of Sec. V. The intersections of the surfaces with the zero plane (gray) form curves whose crossings mark solutions of the "lens equations". The plotted angular range has been restricted to clarify the intersections.]

We usually do this for the two lowest values of $n$ where a solution can be found. The possible values of $m$ are bounded from above by the requirement that the polar turning points bracket the inclinations of the observer and the source. In consequence, for a given $n$ we usually find images for just a few values of $m$.
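For readers who prefer an open-source equivalent of the FindRoot step, the following sketch shows the structure of the search. Here `lens_F1` and `lens_F2` are hypothetical callables standing for the left-hand sides of the two lens equations; evaluating them in practice requires the elliptic-integral machinery of the appendix.

```python
import numpy as np
from scipy.optimize import fsolve

def solve_image(lens_F1, lens_F2, x0, y0):
    """Locate one image position (theta_x, theta_y) in the observer's
    sky by solving lens_F1 = lens_F2 = 0 from the initial guess
    (x0, y0), which would come from the graphical inspection step."""
    def system(v):
        return [lens_F1(v[0], v[1]), lens_F2(v[0], v[1])]
    sol, info, ok, msg = fsolve(system, [x0, y0], full_output=True)
    return sol if ok == 1 else None   # None: no convergence from this guess
```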
This procedure is very tedious and time consuming, and for these reasons we are unable to give a complete phenomenological description of the behavior of the images for an arbitrary observer-source geometry. Instead, we present some numerical examples of the kind of behavior that can be expected with a Kerr black hole as gravitational deflector and null geodesic motion off the equatorial plane. As pointed out before, several authors have already obtained analytic approximations to the radial integrals in the "strong field limit" for Schwarzschild, Reissner-Nordstrom and Kerr black holes. Their approximations are valid for relativistic photons, with the impact parameter close to its critical value, and for small deviations from the equatorial plane in the case of the Kerr black hole. However, since in this article we are interested in orbits that can deviate significantly from the equatorial plane, we can no longer use such approximation schemes. Moreover, their whole approach fails for the angular integrals because, in general, there is no relation between the winding number and the turning points in $\theta$. In Sec. V, to make contact with the relevant literature, we consider trajectories close to the equatorial plane and compare with Bozza's procedure.

The magnification of an image is defined as the ratio of the observed flux to the flux of the unlensed source. By Liouville's theorem, the surface brightness is unchanged by the gravitational light deflection; therefore the magnification equals the ratio of the solid angle subtended by the image to the solid angle of the unlensed source,
$$\mu = \left|\det J\right|^{-1},$$
where $J$ is the Jacobian of the transformation from image coordinates to source coordinates in the sky. The four partial derivatives entering $J$ are found by differentiating the lens equations (lens1) and (lens2) with respect to the image coordinates. After some algebraic manipulation we obtain the corresponding expressions; the resulting derivatives are very cumbersome, and we will not expand or show them explicitly here. Instead, we use the equivalent numerical derivatives in all forthcoming calculations. Note that the magnification does not depend on $n$ directly, since only derivatives of the accumulated azimuth enter and the constant $2\pi n$ offset drops out. Because of the complexity of these expressions, we are unable to give a complete description of the caustic structure of the Kerr space-time; we shall therefore limit ourselves to computing the magnification of the images that we found. For a description of the caustic structure of the Kerr space-time, see the references.

In this section we present numerical calculations of the positions and magnifications of the RIs for different source-observer geometries. Our purpose is to provide physical insight into the phenomenology that can be expected from a rotating black hole acting as a gravitational deflector. Our general procedures and numerical solutions can also provide a set of "test-bed" calculations for more sophisticated gravitational lens models to be developed in the near future. To be able to compare our results with recently published articles, we consider a gravitational lens composed of a rotating black hole at the Galactic center, whose mass is a few million solar masses and whose distance is roughly 8 kpc. The section is subdivided as follows: we first consider two simple cases, a Schwarzschild black hole ($a = 0$) and an observer located at the pole ($\theta_o = 0$); next, as a consistency check, we consider an observer at the equator ($\theta_o = \pi/2$) and compare our calculations with those of Bozza.
Finally, we work out a more general case of an observer at an intermediate inclination. The RIs are classified as follows: we use a plus (+) sign for images on the same side of the black hole as the source, and a minus (-) sign otherwise. We do all calculations for the two lowest values of $m$ where a solution exists; we find that, in general, images with larger $m$ are more demagnified. We also use polar-type coordinates for the angular positions in the observer's sky. In all geometric configurations considered in this section we found that for $n = 0$ we recover the usual weak-field images, without any noticeable effect from the black hole's spin; moreover, we could not find any image with higher $m$ at $n = 0$ for this kind of geometry. In the appendix we show that for $a = 0$ the angular integrals can be solved in closed form. As expected, we find that the RIs always lie on the line joining the source position and the origin of the observer's sky, regardless of the inclination of the observer. The separation of the two outermost images from the black hole is the same regardless of the source position, and for the two lowest-order images it agrees closely with the values found in the references. The magnifications calculated with Eq. (mag) are also close to those in the literature. However, we encountered numerical-noise problems when using the exact expression for the Jacobian; for that reason, to check that we obtain the right magnifications, we used the expression of Eq. (jelliptic) specialized to $a = 0$. The magnifications obtained this way for the outermost image, as a function of the source separation, agree with the values quoted in the reference.

Although it is very unlikely that an observer will be located exactly at the pole, this is the simplest case that can be solved quasi-analytically. Using Eq. (lambda) for $\theta_o = 0$ we find that $\lambda = 0$. The minimum value of $\eta$, corresponding to the photon region, is found from Eqs. (lambdamin) and (etamin) by setting $\lambda = 0$. We obtain $\eta_{min}$ as a function of an auxiliary angle $\psi$, where
$$\tan\psi = \frac{3\sqrt{3}\,a\,\sqrt{108 - 135a^2 + 36a^4 - 4a^6}}{54 - 81a^2 + 18a^4 - 2a^6}\,.$$
Eq. (etaminpolar) is a decreasing function of $a$ satisfying $\eta_{min}(0) = 27$. Since for a polar observer we can relate the position of the images to $\eta$ by Eq. (eta), $\eta = \alpha_i^2 + \beta_i^2 - a^2$, Eq. (etaminpolar) sets a lower bound on the angular separation of the images from the black hole.

The null equations of motion simplify considerably for $\theta_o = 0$. First we note from Eq. (6) that there are no turning points in $\varphi$, since $\lambda = 0$. The angular integration limits can then be shown to reduce to multiples of the half-period of the polar oscillation, up to the parity $s$ of the image ($s = +1$ for images on the same side as the source, $s = -1$ for images on the opposite side). Here $n$ is the number of loops around the black hole.
To relate the image position to the unperturbed position of the source in the observer's sky, we can no longer use the approximate expressions (thetas) and (phis); instead, the exact geometric relations must be used, under the assumption $r_o, r_s \gg 1$. Now, using Eqs. (helliptic), (lelliptic) and (int2) given in the appendix, the null equations of motion (firstint) and (secondint) become Eqs. (firstintpolar) and (secondintpolar): the radial integral is balanced against a combination of incomplete elliptic integrals $F$ and $(2n+1)$ complete elliptic integrals $K\!\left(a^2/(r_o^2\xi_i^2)\right)$, where $\xi_i$ is the radial angular coordinate of the image in the observer's sky and the moduli involve the roots of $R(r) = 0$. Eq. (firstintpolar) is valid only for $a \ne 0$ (for $a = 0$ the right-hand side of Eq. (secondintpolar) has to be replaced by Eq. (lelliptica0) of the appendix). The procedure to calculate the positions of the RIs is the following: for a given source separation, we use Eq. (firstintpolar) to calculate the angular separation $\xi_i$ of the RI, and then insert this value into Eq. (secondintpolar) to obtain the azimuthal offset $\Delta\varphi_i$ of the image from the source. In calculating $\xi_i$ one has to use Eq. (etaminpolar) to determine its minimum value and avoid numerical problems. Note that for $a = 0$, Eq. (firstintpolar) gives $\Delta\varphi_i = 0$, as expected. In Fig. (deltavarphi) we plot $\Delta\varphi_i$ for the three outermost images as a function of $a$, for a fixed source position.

[Fig. (deltavarphi): azimuthal offsets of the three outermost images as a function of the normalized black hole angular momentum, for an observer at the pole and a fixed source position; $n$ is the number of loops around the black hole. The curves are drawn for one parity, but photons of the opposite parity give almost the same curves (indistinguishable within the plot resolution).]

The sign of $\Delta\varphi_i$ is that of the black hole's rotation, which is what one intuitively expects. For a given $n$, both images with opposite parity have almost the same $\Delta\varphi_i$; however, the greater the number of loops, the greater the azimuthal twist for a given $a$. The physical picture that emerges is very simple: the angular momentum of the black hole just adds a "twist" in the direction of rotation to the usual Schwarzschild trajectory. To give an intuition of the full movement of the images, including their separation, in Fig. (polarmovement) we plot the positions of the two lowest-order images as a function of the spin parameter.

[Fig. (polarmovement): positions of the two lowest-order images as a function of the normalized angular momentum of the black hole. The arrows on the curves indicate the direction of motion as $a$ increases from 0 to 1, for a fixed source position.]

What about the magnifications? In this simple case, where we have circular symmetry as seen by the observer, the magnification takes the standard axially symmetric form
$$\mu^{-1} = \left|\frac{x_s}{\xi_i}\,\frac{dx_s}{d\xi_i}\right|,$$
where $x_s(\xi_i)$ is the source separation implied by the left-hand side of Eq. (firstintpolar). This expression for the magnification can be verified by calculating the Jacobian of the transformation. We find that, as in the Schwarzschild case, both images with the same winding number have approximately the same magnification (to at least three significant figures). In Fig. (magpolarplot) we plot the magnifications of the three lowest-order images as a function of $a$, for a fixed source position.

[Fig. (magpolarplot): magnifications of the three lowest-order images for an observer at the pole and a fixed source position; the curves for the opposite parity are almost the same (indistinguishable in the plot).]

The net effect of the angular momentum in this case is to enhance the brightness of the images. For the Schwarzschild case ($a = 0$), the magnifications obtained with Eq. (magpolar) agree perfectly with the results of the reference.
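The axially symmetric magnification formula above lends itself to a simple numerical-derivative evaluation, in the same spirit as the numerical Jacobians used in Sec. IV. The sketch below is illustrative: `source_of_image` is an assumed callable returning the source separation implied by the lens equation for a given image angle (it is not part of the paper).

```python
def magnification(source_of_image, xi_i, h=1e-8):
    """Magnification of an image at radial sky angle xi_i for an
    axially symmetric lens, mu = | (x_s/xi) dx_s/dxi |^-1, with the
    mapping x_s(xi) differentiated by central differences."""
    x_s = source_of_image(xi_i)
    dx_s = (source_of_image(xi_i + h) - source_of_image(xi_i - h)) / (2 * h)
    return 1.0 / abs((x_s / xi_i) * dx_s)
```

The step size `h` must be kept well above the accuracy to which the lens equation is solved, otherwise the difference quotient is dominated by root-finding noise.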
In this subsection we consider an observer located exactly at the equator, $\theta_o = \pi/2$. Our purpose is not to give a complete account of the phenomenology of the RIs: from an astronomical perspective it is very unlikely that an observer will be exactly at the equator, and setting $\theta_o = \pi/2$ does not significantly simplify our analysis. Rather, this special case serves as a consistency check, allowing us to compare some of our numerical results with those of Bozza. In that article the author considered "quasi-equatorial" orbits in the Kerr space-time. One of the main approximations employed was that the horizontal position of the RIs (the azimuthal sky coordinate in our notation) was calculated independently, using the familiar lens equation in the equatorial plane; this, of course, assumes that the equatorial motion is unaffected by the polar motion. We are therefore forced to consider very small source declinations. To this end (and for simplicity) we fix the source at a small angular position, of the order of an arcsecond, essentially in the equatorial plane. We then calculate the position and magnification of the lowest-order RI on the same side as the source, as a function of the spin parameter. We fixed the winding number to be consistent with the reference, where the RIs are classified only by the winding number, and adopted one further convention for later convenience. The results, using the approximate equations (66), (77) and (90) of the reference and the exact equations of this paper, are shown in Table I (the quantities of the reference translate directly into our notation).

[Table I: position and magnification of the lowest-order RI versus the spin parameter, computed with Bozza's approximate equations and with the exact equations of this paper.]

With regard to the magnifications, we observe that, as in the previous cases, the effect of the black hole's angular momentum is to demagnify the images (although there is a slight increase in the magnification over part of the range of $a$). Also note that the images in the first quadrant have magnifications very similar to their counterparts located in the third quadrant; these images should therefore be considered "dual" to each other, since they form the pair of brightest images of a given order. An interesting consequence of the black hole's rotation is that for large values of $a$ the two brightest images have different magnifications. This is to be compared with the case of a Schwarzschild black hole, where the two brightest RIs have exactly the same magnification. Also, for large $a$ the RIs are very "static", as in the Schwarzschild case: their positions do not depend on the location of the source. What is even more surprising is that, for large $a$ and within our numerical precision, the ratio of the magnifications of the two brightest RIs appears insensitive to the position of the source. This raises the possibility of extracting information about the orientation and spin of the black hole by comparing the brightness of these two images. However, to move this proposal forward we would need to study the behavior of the ratio of their magnifications for different source-observer geometries. We do not attempt to carry out such an analysis this time; we hope that a new approximation scheme developed in the future will allow us to address that question.
In this article we have explored the phenomenology of strong-field gravitational lensing by a Kerr black hole. In particular, we have developed a general procedure to calculate the positions and magnifications of all images for an observer and a source far away from the black hole and at arbitrary inclinations. We have applied this procedure to the case of a black hole at the Galactic center, with the mass and coordinate distance adopted in Sec. V. We have reproduced the positions and magnifications of the lowest-order relativistic images found in the references for a Schwarzschild black hole and for "quasi-equatorial" trajectories around a Kerr black hole, and we have presented new numerical results for an observer at an intermediate inclination. Although we have not been able to give a full account of the phenomenology for all possible combinations of the source-observer geometry and the spin parameter, our limited results are useful for gaining physical insight into the effects of the black hole's angular momentum in the strong-field regime of gravitational lensing. Moreover, our findings can serve as "test-bed" calculations for any improved model of this gravitational lens. There is no doubt that observation of the strong-field regime of gravitational lensing will be an extremely challenging task in the near future, because, as we have seen, the relativistic images are always highly demagnified. However, if we are able to observe them in the foreseeable future, they will provide one of the best tests of Einstein's general theory of relativity in strong gravitational fields. Moreover, as we have seen, they could provide new tools for astrophysics by allowing the measurement of the orientation and/or magnitude of the angular momentum of the black hole. To fully confirm that this is the case, more research is needed toward developing an analytical solution of the strong-field gravitational lens problem in the Kerr space-time.

S. E. Vazquez is very grateful to E. P. Esteban for all his support and advice during his undergraduate years. He would also like to thank the NSF for a graduate research fellowship and the University of California at Santa Barbara for a Broida excellence fellowship. E. P. Esteban thanks the support given by UPR-Humacao and Rice University during his sabbatical leave.

In this appendix we show how the null equations of motion can be solved in terms of elliptic integrals. The radial integral of Eq. (firstint) reduces to an incomplete elliptic integral of the first kind,
$$2\int_{r_a}^{\infty}\frac{dr}{\sqrt{R(r)}} = g\,F(\psi, k),$$
where $F$ is the normal elliptic integral of the first kind and $r_a \ge r_b \ge r_c \ge r_d$ are the roots of $R(r) = 0$, with the prefactor $g$, the amplitude $\psi$ and the modulus $k$ determined by these roots. For the radial integral of Eq. (secondint) we have
$$\int_{r_a}^{\infty}\left[\,\cdots\right]\frac{dr}{\sqrt{R(r)}} = \frac{g}{r_a - 1}\left\{\beta^2\left[2 + \frac{\beta^2(2 - \lambda)}{r_a - 1}\right]F(\psi, k) + 2(1 - \beta^2)\left[1 + \frac{\beta^2(2 - \lambda)}{r_a - 1}\right]\Pi(\psi, \alpha^2, k) + \frac{(1 - \beta^2)^2(2 - \lambda)}{r_a - 1}\,V\right\},$$
where $\Pi$ is the elliptic integral of the third kind and $\alpha$, $\beta$ and $V$ are auxiliary quantities built from the roots of $R(r)$ (not to be confused with the image coordinates $\alpha_i$, $\beta_i$).

For the angular integrals, we begin by defining $u = \cos\theta$, so that $u_o = \cos\theta_o$, and so on. Since all angular integrals considered in this paper are symmetric about the equator, we can restrict the interval to one hemisphere and discard the sign in Eq. (difold) (remember that all integrals are positive definite). However, we need to compensate for the case of a photon that crosses the equator. To this end we define the operator $\theta_1 * \theta_2$ for any two angles, whose sign is positive if both angles are in the same hemisphere and negative otherwise. It is then easy to show that the angular integral between $\theta_s$ and $\theta_o$ can be decomposed into integrals of the form $\int_0^{\underline{u}}$, where $\underline{u} = \min(u_s, u_o)$ and $\overline{u} = \max(u_s, u_o)$. Now, for a trajectory that encounters $m \ge 1$ turning points we have
$$\int = \int_{u_s}^{u_m} + \left[1 - \mathrm{sign}(\theta_s * \theta_{ms})\right]\int_0^{u_s} + \int_{u_o}^{u_m} + \left[1 - \mathrm{sign}(\theta_o * \theta_{mo})\right]\int_0^{u_o} + 2(m - 1)\int_0^{u_m}, \tag{thetaintegrals}$$
where
$$\theta_{ms} \equiv \begin{cases}\theta_{mo}, & m\ \text{odd},\\[2pt] \pi - \theta_{mo}, & m\ \text{even},\end{cases}$$
with $\theta_{mo}$ the polar turning point on the observer's side. In deriving Eq. (thetaintegrals) we have used the fact that, as discussed in Sec. III, $\theta_{min} \le \theta_{o,s} \le \pi - \theta_{min}$.
For $m = 0$ we can write
$$\int = \int_{\underline{u}}^{u_m} - \int_{\overline{u}}^{u_m} + \left[1 - \mathrm{sign}(\theta_s * \theta_o)\right]\int_0^{\underline{u}}, \tag{m=0}$$
where $\underline{u} = \min(u_s, u_o)$ and $\overline{u} = \max(u_s, u_o)$. To write the angular integrals of Eq. (firstint) as elliptic integrals, we change variables to the amplitude of the Legendre normal form, so that, for $\theta_s$ and $\theta_o$ in the same hemisphere, the integrals reduce to incomplete elliptic integrals $F(\phi_j, \kappa)$ with amplitudes of the form $\phi_j = \arcsin\{[\,\cdot\,]^{1/2}\}$ (Eqs. (psij) and (phij)) and a common modulus $\kappa$ determined by the turning points. Using Eqs. (thetaintegrals), (m=0), (int1) and (int2), the left-hand side of Eq. (firstint) becomes, up to an overall factor,
$$\left[1 - \mathrm{sign}(\theta_s * \theta_{ms})\right]F(\phi_s, \kappa) + \left[1 - \mathrm{sign}(\theta_o * \theta_{mo})\right]F(\phi_o, \kappa) + 2(m - 1)\,K(\kappa) + \cdots$$
for $m \ge 1$, where $K$ is the complete elliptic integral of the first kind, $K(\kappa) = F(\pi/2, \kappa)$. For $m = 0$ we have instead, up to the same factor,
$$\left[1 - \mathrm{sign}(\theta_s * \theta_o)\right]F(\underline{\phi}, \kappa) + \cdots,$$
where $\underline{\phi}$ and $\overline{\phi}$ are given by Eq. (psij) with the substitutions $u \to \underline{u}$ and $u \to \overline{u}$ respectively.

Now we turn our attention to the angular integrals of Eq. (secondint). Making the usual change of variable, the first integral we need reduces to the same elliptic integrals as above, while the second evaluates, for $m \ge 1$, to
$$\frac{h\,\lambda}{1 - u_3}\left\{\left[1 - \mathrm{sign}(\theta_s * \theta_{ms})\right]\left[F(\phi_s, \kappa) - u_3\,\Pi(\phi_s, \rho^2, \kappa)\right] + \left[1 - \mathrm{sign}(\theta_o * \theta_{mo})\right]\left[F(\phi_o, \kappa) - u_3\,\Pi(\phi_o, \rho^2, \kappa)\right] + 2(m - 1)\left[K(\kappa) - u_3\,\Pi(\rho^2, \kappa)\right]\right\} + \cdots$$
and, for $m = 0$, to
$$\frac{h\,\lambda}{1 - u_3}\left[1 - \mathrm{sign}(\theta_s * \theta_o)\right]\left[F(\underline{\phi}, \kappa) - u_3\,\Pi(\underline{\phi}, \rho^2, \kappa)\right] + \cdots,$$
where $\Pi$ is the elliptic integral of the third kind and $h$, $u_3$ and $\rho$ are auxiliary constants built from the turning points.

For the case of a Schwarzschild black hole ($a = 0$), the angular integrals can be solved in closed form, since the polynomial inside the square root becomes one of second order. Using Eqs. (thetaintegrals) and (m=0), the right-hand side of Eq. (firstint) then takes an elementary closed form, one expression valid for $m \ge 1$ and another for $m = 0$. Using the expressions given above, the "lens equations" (firstint) and (secondint) become a pair of transcendental equations in which $n$ is the number of windings around the $z$ axis, and in which $\lambda$, $\eta$ and the amplitudes can be written in terms of the observer's sky coordinates by using Eqs. (lambda), (eta), (xs) and (ys).
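As a brute-force cross-check on such closed forms, the integrals can also be evaluated by direct quadrature. The sketch below does this for the radial integral; it is illustrative only (the grid-based bracketing of the turning point is crude), and the substitution $r = r_{min} + s^2$ removes the inverse-square-root singularity at the turning point.

```python
import numpy as np
from scipy import integrate, optimize

def R(r, lam, eta, a):
    """Radial potential R(r) in units G = c = M = 1."""
    delta = r**2 - 2.0 * r + a**2
    return ((r**2 + a**2) - a * lam)**2 - delta * (eta + (lam - a)**2)

def radial_integral(lam, eta, a, r_far=1.0e6):
    """2 * int_{r_min}^{r_far} dr / sqrt(R(r)) by direct quadrature,
    for comparison with the elliptic-integral expression g*F(psi, k)."""
    r_h = 1.0 + np.sqrt(max(1.0 - a**2, 0.0))
    grid = np.linspace(r_h + 1e-6, 100.0, 200_000)
    sgn = np.sign(R(grid, lam, eta, a))
    idx = np.where(np.diff(sgn) != 0)[0]
    if idx.size == 0:
        raise ValueError("no radial turning point: photon is captured")
    r_min = optimize.brentq(R, grid[idx[-1]], grid[idx[-1] + 1],
                            args=(lam, eta, a))
    # substitute r = r_min + s^2 so the integrand stays finite at s = 0
    f = lambda s: 2.0 * s / np.sqrt(max(R(r_min + s**2, lam, eta, a), 1e-30))
    val, _ = integrate.quad(f, 1e-9, np.sqrt(r_far - r_min), limit=400)
    return 2.0 * val
```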
We consider a Kerr black hole acting as a gravitational deflector within the geometrical optics and point source approximations. The Kerr black hole gravitational lens geometry, consisting of an observer and a source located far away and placed at arbitrary inclinations with respect to the black hole's equatorial plane, is studied in the strong-field regime. For this geometry, the null geodesics of interest can go around the black hole several times before reaching the observer. Such photon trajectories are written in terms of the angular positions in the observer's sky and therefore become "lens equations". As a consequence, we find for any image a simple classification scheme based on two integer numbers: the number of turning points in the polar coordinate $\theta$, and the number of windings around the black hole's rotation axis. As an application, and to make contact with the literature, we consider a supermassive Kerr black hole at the Galactic center as a gravitational deflector. In this case, we show that our proposed computational scheme works successfully by computing the positions and magnifications of the relativistic images for different source-observer geometries. In fact, it is shown that our general procedure and results for the positions and magnifications of the images off the black hole's equatorial plane reduce to, and agree with, well-known cases found in the literature.
It took more than eighty years from the discovery of quantum mechanics until it became possible to experimentally determine and visualize its most fundamental object, the wave function. The forward route from quantum state to probability distribution of measurement results has been basic textbook material for decades. That the corresponding mathematical inverse problem had a solution, provided (speaking metaphorically) that the quantum state has been probed from a sufficiently rich set of directions, had also been known for many years. However, it was only with the homodyne measurement schemes proposed and implemented in the early 1990s that it became feasible to actually carry out the corresponding measurements on one particular quantum system: in that case, the state of one mode of electromagnetic radiation (a pulse of laser light at a given frequency). Experimentalists have used the technique to establish that they have succeeded in creating nonclassical forms of laser light such as squeezed light and Schrodinger cats. The experimental technique we are referring to here is called quantum homodyne tomography, the word "homodyne" referring to a comparison between the light being measured and a reference light beam at the same frequency. We will explain the word "tomography" in a moment.

The quantum state can be represented mathematically in many different but equivalent ways, all of them linear transformations of one another. One favorite is the Wigner function: a real function of two variables, integrating to one over the whole plane, but not necessarily nonnegative. It can be thought of as a "generalized joint probability density" of the electric and magnetic fields, $q$ and $p$. However, one cannot measure both fields at the same time, and in quantum mechanics it makes no sense to talk about the values of both fields simultaneously. It does make sense, however, to talk about the value of any linear combination of the two fields, say $X_\phi = q\cos\phi + p\sin\phi$. One way to think about the statistical problem is then as follows: the unknown parameter is a "joint density" $W$ of the two variables $q$ and $p$; the data consists of $n$ independent samples of the pair $(X, \Phi)$, where $\Phi$ is chosen independently and uniformly in the interval $[0, \pi]$; and the distribution of the data depends on the unknown parameter $\rho$, which is an infinite-dimensional matrix $\rho = [\rho_{j,k}]_{j,k = 0, \ldots, \infty}$. As the Wigner function is in one-to-one correspondence with the density matrix, our state-reconstruction problem can also be stated as estimating the Wigner function $W_\rho$. This is an ill-posed inverse problem, as seen from the formula for the inverse of the Radon transform,
$$W_\rho(q, p) = \frac{1}{2\pi^2}\int_0^\pi\!\!\int_{\mathbb{R}} p_\rho(x, \phi)\,K\!\left(q\cos\phi + p\sin\phi - x\right)dx\,d\phi,$$
where $K$ makes sense only as a generalized function. To correct this, one usually makes a cutoff in the range of the integral defining $K$ and obtains a well-behaved kernel function $K_c$. The tomographic estimator of $W_\rho$ is then the average sampled kernel
$$\widehat{W}_c(q, p) = \frac{1}{2\pi n}\sum_{\ell = 1}^{n} K_c\!\left(q\cos\Phi_\ell + p\sin\Phi_\ell - X_\ell\right);$$
for consistency one needs to let the "bandwidth" $1/c$ depend on the sample size, with $c = c(n) \to \infty$ at an appropriate rate as $n \to \infty$. In this paper we will not follow this approach, which will be treated separately in future work. Instead, we use a plug-in type estimator based on the property
$$W_\rho(q, p) = \sum_{j,k} \rho_{j,k}\,W_{j,k}(q, p),$$
where the $W_{j,k}$'s are known functions and $\rho$ is replaced by the estimators described below. We shall prove consistency of the proposed estimators of the Wigner function with respect to the $L^2$ and supremum norms on the corresponding space.
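Although the kernel approach is not pursued further here, it is easy to prototype. The snippet below implements the truncated-kernel estimator displayed above; it is a sketch only, with the closed form of $K_c$ obtained by integrating $|\xi|e^{i\xi t}/2$ over $|\xi| \le c$, and with the overall normalization tied to the conventions of the display (these vary across the literature).

```python
import numpy as np

def K_c(t, c):
    """Truncated inverse-Radon kernel (1/2) * int_{|xi|<=c} |xi| e^{i xi t} dxi,
    evaluated in closed form; c plays the role of the inverse bandwidth."""
    t = np.where(np.abs(t) < 1e-9, 1e-9, t)   # guard the t -> 0 limit (c^2/2)
    return c * np.sin(c * t) / t + (np.cos(c * t) - 1.0) / t**2

def wigner_estimate(x, phi, q, p, c):
    """Average-sampled-kernel estimate of W(q, p) from homodyne data
    (x_l, phi_l), with phi uniform on [0, pi]."""
    t = q * np.cos(phi) + p * np.sin(phi) - x
    return np.mean(K_c(t, c)) / (2.0 * np.pi)

# Sanity check on simulated vacuum data (quadratures ~ N(0, 1/2) in the
# convention used here): W(0, 0) should approach 1/pi ~ 0.318
rng = np.random.default_rng(1)
phi = rng.uniform(0.0, np.pi, 200_000)
x = rng.normal(0.0, np.sqrt(0.5), 200_000)
print(wigner_estimate(x, phi, 0.0, 0.0, c=4.0))
```

The residual gap from $1/\pi$ in this check is the deterministic truncation bias, which shrinks as $c$ grows, at the price of a larger variance.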
This subsection serves as a short introduction to the basic notions of quantum mechanics that will be needed in this paper. For simplicity we deal first with finite-dimensional quantum systems, leaving the infinite-dimensional case for the next subsection. For further details on quantum statistical inference we refer to the review and the classic textbooks cited in the references. In classical mechanics, the state of a macroscopic system such as a billiard ball, a pendulum or a stellar system is described by a point on a manifold or "phase space", each of the point's coordinates corresponding to an attribute which we can measure, such as position and momentum. Therefore the functions on the phase space are called observables. When there exists uncertainty about the exact point in the phase space, or when we deal with a statistical ensemble, the state is modelled by a probability distribution on the phase space, and the observables become random variables. Quantum mechanics also deals with observables such as the position and momentum of a particle, the spin of an electron, or the number of photons in a cavity, but it breaks with classical mechanics in that these are no longer represented by functions on a phase space but by Hermitian matrices, that is, complex matrices which are invariant under transposition followed by complex conjugation. For example, the components in different directions of the spin of an electron are certain $2\times 2$ complex Hermitian matrices. Any $d$-dimensional complex Hermitian matrix $\mathbf{A}$ can be diagonalized by changing the standard basis of $\mathbb{C}^d$ to another orthonormal basis $\{\psi_a\}$ such that $\mathbf{A}\psi_a = \lambda_a\psi_a$ for $a = 1, \ldots, d$, with $\lambda_a \in \mathbb{R}$. The vectors $\psi_a$ and numbers $\lambda_a$ are called eigenvectors and eigenvalues of $\mathbf{A}$, respectively. With respect to the new basis we can write
$$\mathbf{A} = \sum_{a=1}^{d} \lambda_a\,\mathbf{P}_a,$$
where $\mathbf{P}_a$ is the orthogonal projection onto the eigenvector $\psi_a$. The physical interpretation of the eigenvalues is that, when measuring the observable $\mathbf{A}$, we obtain (randomly) one of the values $\lambda_a$, according to a probability distribution depending on the state of the system before measurement and on the observable. This probability measure is degenerate if and only if the system before measurement was prepared in a special state called an eigenstate of $\mathbf{A}$; we represent such a state mathematically by the projection $\mathbf{P}_a$ onto the one-dimensional space generated by the vector $\psi_a$. Given a probability distribution $(p_1, \ldots, p_d)$ over the finite set $\{1, \ldots, d\}$, we describe a statistical ensemble in which a proportion $p_a$ of the systems is prepared in the state $\mathbf{P}_a$ by the convex combination
$$\rho = \sum_{a=1}^{d} p_a\,\mathbf{P}_a.$$
The expected value of the random result $\lambda$ when measuring the observable $\mathbf{A}$ in this particular state is equal to $\sum_a p_a\lambda_a$, which can be written shortly as
$$\mathbb{E}[\lambda] = \mathrm{Tr}(\rho\,\mathbf{A}).$$
Similarly, the probability distribution can be recovered as $p_a = \mathrm{Tr}(\rho\,\mathbf{P}_a)$, thanks to the orthogonality property $\mathbf{P}_a\mathbf{P}_b = \delta_{a,b}\,\mathbf{P}_a$. Now let $\mathbf{B}$ be a different observable, and suppose that $\mathbf{B}$ does not commute with $\mathbf{A}$, that is, $\mathbf{A}\mathbf{B} \ne \mathbf{B}\mathbf{A}$. Then the two observables cannot be diagonalized in the same basis: their eigenvectors are different. Consequently, states which are mixtures of eigenstates of $\mathbf{A}$ typically will not be mixtures of eigenstates of $\mathbf{B}$, and vice versa. This leads to an expanded formulation of the notion of state in quantum mechanics, independent of any basis associated with a particular observable, together with the recipe for calculating expectations and distributions of measurement results. Any preparation procedure results in a statistical ensemble, or state, described mathematically by a matrix $\rho$ with the following properties: (1) $\rho \ge 0$ (positive semidefinite matrix); (2) $\mathrm{Tr}(\rho) = 1$ (normalization).
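These two defining properties, and the trace rule for expectations, translate directly into a few lines of linear algebra. The helper names below are our own, purely for illustration.

```python
import numpy as np

def is_state(rho, tol=1e-10):
    """Check the defining properties of a density matrix:
    Hermitian, positive semidefinite, trace one."""
    herm = np.allclose(rho, rho.conj().T, atol=tol)
    pos = np.all(np.linalg.eigvalsh(rho) >= -tol)
    return herm and pos and abs(np.trace(rho).real - 1.0) < tol

def expectation(rho, A):
    """E[lambda] = Tr(rho A) for an observable (Hermitian matrix) A."""
    return np.trace(rho @ A).real
```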
In physics, $\rho$ is called a density matrix, and for a quantum mechanical system it is the analogue of a probability density. Notice that the special state defined above is a particular case of a density matrix, since it is a mixture of eigenstates of the observable $\mathbf{A}$. The density matrices of dimension $d$ form a convex set whose extremals are the pure or vector states, represented by orthogonal projections onto one-dimensional spaces spanned by arbitrary vectors $\psi \in \mathbb{C}^d$. Any state can be represented as a mixture of pure states, which are not necessarily eigenstates of a particular observable. When measuring an observable $\mathbf{A}$ on a quantum system prepared in the state $\rho$, we obtain a random result whose probability distribution is given by Eq. (eq.quantumprobability), whose expectation is as in Eq. (eq.quantumexpectation), and whose characteristic function is
$$\mathbb{E}\!\left[e^{it\lambda}\right] = \mathrm{Tr}\!\left(\rho\,e^{it\mathbf{A}}\right).$$
In order to avoid confusion, we stress the important difference between $\mathbf{A}$, which is a matrix, and $\lambda$, which is a real-valued random variable. More concretely, if we write $\rho$ in the basis of eigenvectors of $\mathbf{A}$, then we obtain the map from states to probability distributions over results
$$\rho \longmapsto p_\rho, \qquad p_\rho(a) = \rho_{a,a}.$$
Notice that $p_\rho$ is indeed a probability distribution, as a consequence of the defining properties of states, and that it contains no information about the off-diagonal elements of $\rho$, meaning that measuring only the observable $\mathbf{A}$ is not enough to identify the unknown state. Roughly speaking, one has to measure, on many identical systems, each one of a number of mutually non-commuting observables in order to have a one-to-one map between states and probability distributions of results. The probing of identically prepared quantum systems from different "angles" in order to reconstruct their state is broadly named quantum state tomography in the physics literature. Let us suppose that we have at our disposal $n$ systems identically prepared in an unknown state $\rho$, and that for each of the systems we can measure one of a fixed collection of observables $\mathbf{A}^{(1)}, \ldots, \mathbf{A}^{(K)}$. We write the observables in diagonal form,
$$\mathbf{A}^{(s)} = \sum_a \lambda_a^{(s)}\,\mathbf{P}_a^{(s)},$$
with eigenvalues $\lambda_a^{(s)}$ and eigenprojections $\mathbf{P}_a^{(s)}$. We perform a randomized experiment: for each system we choose the observable to be measured by randomly selecting its index according to a probability distribution over $\{1, \ldots, K\}$. The results of the measurement on the $\ell$-th system are the pair $(S_\ell, X_\ell)$, where the $S_\ell$ are i.i.d. with the chosen design distribution and $X_\ell$ is the result of measuring the observable $\mathbf{A}^{(S_\ell)}$, whose conditional distribution is given by
$$\mathbb{P}\!\left(X_\ell = \lambda_a^{(s)} \,\middle|\, S_\ell = s\right) = \mathrm{Tr}\!\left(\rho\,\mathbf{P}_a^{(s)}\right).$$
The statistical problem is now to estimate the parameter $\rho$ from the data $(S_1, X_1), \ldots, (S_n, X_n)$.
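The randomized design just described is easy to simulate, which is useful for testing estimators on synthetic data. Everything below (the function name, the use of a dense eigendecomposition) is an illustrative sketch rather than a procedure from the paper.

```python
import numpy as np

def simulate_measurements(rho, observables, probs, n,
                          rng=np.random.default_rng(0)):
    """Simulate n randomized measurements on copies of the state rho:
    draw an observable index from `probs`, then an outcome from the
    Born rule p(a) = <v_a| rho |v_a> in that observable's eigenbasis.
    Returns a list of (index, eigenvalue) pairs."""
    data = []
    for _ in range(n):
        s = rng.choice(len(observables), p=probs)
        evals, evecs = np.linalg.eigh(observables[s])
        born = np.real(np.einsum('ji,jk,ki->i', evecs.conj(), rho, evecs))
        born = np.clip(born, 0.0, None)
        born /= born.sum()                    # guard rounding errors
        a = rng.choice(len(evals), p=born)
        data.append((s, evals[a]))
    return data
```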
In the next subsection we describe quantum homodyne tomography as an analogue of this problem for infinite-dimensional systems. Although correct and sufficient for describing certain quantum properties, such as the spin of a particle, the model presented above needs to be enlarged in order to cope with the "continuous variables" quantum systems which are central to our statistical problem. Technically, this amounts to replacing $\mathbb{C}^d$ by an infinite-dimensional complex Hilbert space $\mathcal{H}$, the Hermitian matrices becoming selfadjoint operators acting on $\mathcal{H}$. The spectral theorem tells us that selfadjoint operators can be "diagonalized" in the spirit of Eq. (eq.diagonalization), but the spectrum (the set of "eigenvalues") can have a more complicated structure; for example, it can be continuous, as we will see below. The density matrices are positive selfadjoint operators $\rho$ with $\mathrm{Tr}(\rho) = 1$, and they can be regarded as infinite-dimensional matrices with elements $\rho_{j,k} = \langle\psi_j, \rho\,\psi_k\rangle$ for a given orthonormal basis $\{\psi_j\}$ of $\mathcal{H}$. The central example of a system with continuous variables in this paper is the quantum particle. Its basic observables, position and momentum, are two unbounded selfadjoint operators $\mathbf{Q}$ and $\mathbf{P}$, respectively, acting on $L^2(\mathbb{R})$, the space of square-integrable complex-valued functions on $\mathbb{R}$:
$$(\mathbf{Q}\psi)(x) = x\,\psi(x), \qquad (\mathbf{P}\psi)(x) = -i\,\frac{d\psi}{dx}(x),$$
for suitable functions $\psi$. The operators satisfy Heisenberg's commutation relation
$$[\mathbf{Q}, \mathbf{P}] = i\,\mathbf{1},$$
which implies that they cannot be measured simultaneously. The problem of (separately) measuring such observables remained elusive until about ten years ago, when pioneering experiments in quantum optics led to a powerful measurement technique called quantum homodyne detection. This technique is the basis of a continuous analogue of the measurement scheme presented at the end of the previous subsection, where one of several observables was measured on each copy of a $d$-dimensional quantum system. The quantum system to be measured is a beam of light with a fixed frequency, whose observables are the electric and magnetic field amplitudes; these satisfy commutation relations identical to those characterizing the quantum particle, with which they will be identified from now on. Their linear combinations
$$\mathbf{X}_\phi = \mathbf{Q}\cos\phi + \mathbf{P}\sin\phi$$
are called quadratures, and homodyne detection is about measuring the quadratures for all phases $\phi \in [0, \pi]$. The joint probability distribution of the pair $(X, \Phi)$, consisting of the measurement result and the phase, then has density $p_\rho(x, \phi)/\pi$ with respect to Lebesgue measure on $\mathbb{R} \times [0, \pi]$, where $p_\rho(x, \phi)$ denotes the density of the quadrature $\mathbf{X}_\phi$ measured in the state $\rho$.
in quantum homodyne tomographythe role of the unknown distribution is played by the wigner function which is in general not positive , but has a probability density as marginal along any direction .the following diagram summarizes the relations between the various objects in our problem : + ( 10,30) ( 30,30) ( 65,30) ( 110,30) .( 47,10) ( 11,30)(29,30 ) [ , ] ( 29,30)(11,30 ) [ , ] ( 31,30)(60,30) [ , ] ( 31,29)(45,11) [ , ] ( 60,29)(47,11) [ , ] ( 70,30)(95,30)experiment [ , ] finally in table [ tbl.states ] we give some examples of density matrices and their corresponding wigner function representations for different states .the matrix elements are calculated with respect to the orthonormal base corresponding to the wave functions of photons states where are the hermite polynomials normalized such that .a few graphical representations can be seen in figure [ fig.examples ] ..density matrix and wigner function of some quantum states [ cols="^,^,^",options="header " , ] the vacuum is the pure state of zero photons , notice that in this case the distributions of and are gaussian .the thermal state is a mixed state describing equilibrium at temperature , having gaussian wigner function with variance increasing with the temperature .the coherent state is pure and characterizes the laser pulse .the photon number is poisson distributed with an average of photons .the squeezed states have gaussian wigner functions whose variances for the two directions are different but have a fixed product . the parameters and satisfy the condition , is a normalization constant , , and . presented the density matrix analogue of formula ( [ eq.inverse.radon.transform ] ) of the wigner function as inverse radon transform of the probability density where is the generalized function given in equation ( [ eq.kernel ] ) whose argument is a selfadjoint operator .the method has been further analyzed in , , see also .we recall that in the case of the wigner function we needed to regularize the kernel by introducing a cut - off in the integral ( [ eq.kernel ] ) . for density matricesthe philosophy will be rather to project on a finite dimensional subspace of whose dimension will play the role of the cut - off .in fact all the matrix elements of the density matrix with respect to the orthonormal basis defined in ( [ eq.psi_n ] ) , can be expressed as kernel integrals with bounded real valued functions which in the quantum tomography literature are called _pattern functions_. the singularity of the kernel is reflected in the asymptotic behavior of as .a first formula for was found in and uses laguerre polynomials .this was followed by a more transparent one due to , for , where and represent the square integrable and respectively the unbounded solutions of the schrdinger equation , \psi=\omega~\psi , \qquad \omega\in\r.\ ] ] figure [ fig.ptnfnc ] shows pattern functions for different values of and .we notice that the oscillatory part is concentrated in an interval centered at zero whose length increase with and , the number of oscillations increases with and and the functions become more irregular as we move away from the diagonal . it can be shown that tails of the pattern function decay like .more properties of the pattern function can be found in and .equation ( [ eq.quantum_tomographic_rho_n,n+d ] ) suggests the _ unbiased estimator _ of , based on i.i.d .observations of , whose matrix elements are : where , see , . 
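the estimator just displayed can be coded directly once the pattern functions are available. in the sketch below the pattern functions themselves are an injected assumption (a callable computed elsewhere, e.g. from the recurrences for the solutions of ( [ eq.schrodinger ] ) mentioned in the text); only the averaging step is shown, and the sign of the phase factor is again one common convention.

```python
import numpy as np

def pfp_estimator(x, phi, N, pattern):
    """Pattern-function projection estimator of the upper-left N x N
    block of rho from homodyne data (x_l, phi_l), l = 1..n.

    `pattern` is assumed given: a callable pattern(j, k, xs) -> f_{jk}(xs),
    precomputed elsewhere; it is not implemented here."""
    rho_hat = np.zeros((N, N), dtype=complex)
    for j in range(N):
        for k in range(j, N):
            # empirical mean of f_{jk}(X) * exp(1j*(j-k)*Phi)
            val = np.mean(pattern(j, k, x) * np.exp(1j * (j - k) * phi))
            rho_hat[j, k] = val
            rho_hat[k, j] = np.conj(val)   # uses the symmetry f_{jk} = f_{kj}
    return rho_hat
```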
by the strong law of large numbers the individual matrix elements of this estimatorconverge to the matrix elements of the true parameter .however the infinite matrix need not be positive , normalized , or even selfadjoint , thus it can not be interpreted as a quantum state .these problems are similar to those encountered when trying to estimate an unknown probability density by using unbiased estimators for all its fourier coefficients .the remedy is to estimate only a finite number of coefficients at any moment , obtaining a projection estimator onto the subspace generated by linear combinations of a finite subset of the basis vectors .in our case we will project onto the space of matrices of dimension with respect to the basis , and for . in order to test the performance of our estimators we introduce the and distances on the space of density matrices .let and be two density matrices with the diagonal form of their difference , and notice that some of the eigenvalues are positive and some negative such that their sum is zero due to the normalization of the density matrices .we define the absolute value and the norms ^{1/2}= \left[\sum_{j , k= 0}^\infty|\rho_{k , j}-\tau_{k , j}|^2\right]^{1/2}.\end{aligned}\ ] ] let us consider now the mean integrated square error ( mise ) and split it into the bias and variance parts : by choosing as the bias converges to zero. for the variance we have the upper bound the proof of the following lemma on the norms of the pattern functions can be found in .there exist constants such that by applying the lemma to equation ( [ eq.variance.upper.bound ] ) we conclude that the estimator is consistent with respect to the distance if we choose as such that .based on the property we can prove a similar result concerning -consistency , see .[ th.norm2 ] let be the dimension of the pattern function projection estimator .if , then if then rates of consistency can be obtained by assuming that the state belongs to a given class for which upper bounds of the bias can be calculated and is chosen such as to balance bias and variance . this problem will be attacked in future work within the minimax framework . in section [ sec.experimental ]we present a data - dependent way of selecting the dimension of the projection estimator based on the minimization of the empirical -risk using a cross - validation technique .we will consider now a maximum likelihood approach to the estimation of the state .let us recall the terms of the problem : we are given a sequence of i.i.d .random variables with values in ] with such that for all , and such that for each there is a satisfying then is called _-entropy with bracketing _ of .we note that this definition relies on the concept of positivity of matrices and the existence of the -distance between states .but the same notions exist for the space of integrable functions thus by replacing density matrices with probability densities and selfadjoint operators with functions we obtain the definition of the -entropy with bracketing for some space of probability densities , see .moreover by using equation ( [ eq.norm_1.inequality ] ) and the fact that the linear extension of the map from density matrices to probability densities sends a positive matrix to a positive function , we get that for any -bracketing ] form a -bracketing for , i.e. they satisfy and for any there exists a such that . 
thus the following proposition gives an upper bound of the `` quantum '' bracketing entropy and in consequence for .its proof can be found in and relies on choosing a maximal number of nonintersecting balls centered in having radius and then providing a pair of brackets for each ball .let be the class of density matrices of dimension .then for some constant independent of and .by combining the previous inequalities with equation ( [ contractivity.t ] ) we get the following bound for the bracketing entropy of the class of square - root densities with respect to the -distance we will concentrate now on the hellinger consistency of the sieve maximum likelihood estimator .we will appeal to a theorem from , which is similar to other results in the literature on non - parametric -estimation ( see for example ) .there are two competing factors which contribute to the convergence of .the first is related to the approximation properties of the sieves with respect to the whole parameter space .such a distance from to the sieve can take different expressions , for example in terms of the -distance between the corresponding probability measures where the -distance between two probability distributions is given by notice that depends on through the growth rate of the sieve .the second factor influencing the convergence of is the size of the sieves which is expressed by the bracketing entropy .the non - parametric sieve maximum likelihood estimation theory shows that consistency holds if there exists a sequence such that the following _ entropy integral inequalities _ are satisfied for all where is some universal constant , . from ( [ eq.bracketing.entropy.hellinger ] )we get ,\ ] ] which implies the following constraint for and , [ th.hellinger.consistency ] suppose that the state satisfies .let be the sieve mle with and satisfying ( [ eq.entropy.integral.inequality ] ) , then _ proof ._ details can be found in based on theorem 10.13 of . from the physical point of view , we are interested in the convergence of the state estimator with respect to the and -norms on the space of density matrices. clearly the rates of convergence for such estimators are slower than those of their corresponding probability densities .as shown in the beginning of this subsection the map sending probability densities to density matrices is continuous , thus an estimator taking values in the space of density matrices is consistent in the or -norms if and only if converges to almost surely with respect to the hellinger distance .[ cor.consistency.mle.densitymatrix ] the hellinger consistency of is equivalent to the -consistency of . in particular ,if and the assumptions of theorem [ th.hellinger.consistency ] hold , then we have , a.s .. the wigner function plays an important role in quantum optics as an alternative way of representing quantum states and calculating an observable s expectation : for any observable there exists a function from to such that besides , physicists are interested in estimating the wigner function for the purpose of identifying features which can be easier visualized than read off from the density matrix , for example a `` non - classic '' state may be recognized by its patches of negative wigner function , while `` squeezing '' is manifest through the oval shape of the support of the wigner function , see table [ tbl.states ] and figure [ fig.examples ] . 
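for the gaussian rows of table [ tbl.states ] (vacuum, thermal, squeezed) the wigner functions can be written down and evaluated directly. a minimal sketch, in the convention where the vacuum quadrature variances equal 1/2; other normalizations exist and the text's may differ by a scaling of the quadratures.

```python
import numpy as np

def gaussian_wigner(q, p, var_q=0.5, var_p=0.5):
    """Wigner function of a centred gaussian state.  In this convention
    the vacuum has var_q = var_p = 1/2, thermal states have equal
    variances > 1/2, and pure squeezed states satisfy
    var_q * var_p = 1/4 with var_q != var_p."""
    return (np.exp(-q**2 / (2.0 * var_q) - p**2 / (2.0 * var_p))
            / (2.0 * np.pi * np.sqrt(var_q * var_p)))

qq, pp = np.meshgrid(np.linspace(-4, 4, 101), np.linspace(-4, 4, 101))
w_vacuum  = gaussian_wigner(qq, pp)              # integrates to 1
w_squeeze = gaussian_wigner(qq, pp, 0.125, 2.0)  # squeezed along q, product 1/4
```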
as described in subsection [ sec.qhomodyne ] the wigner functionshould be seen formally as a joint density of the observables and which may take non - negative values , reflecting the fact that the two observables can not be measured simultaneously .however the wigner function shares some common properties with probability densities , in particular their marginals and are probability densities on the line .in fact this is true for the marginals in any direction which are nothing else then the densities . on the other hand thereexist probability densities which are not wigner functions and vice - versa , for example the latter can not be too `` peaked '' : as a corollary of this uniform boundedness we get for any density matrices and .indeed we can write where and represent the positive and negative part of . then another important property is the fact that the linear span of the wigner functions is dense in , the space of real valued , square integrable functions on the plane , and there is an isometry ( up to a constant ) between the space of wigner functions and that of density matrices with respect to the -distances in section [ sec.problem ] we have described the standard estimation method employed in computerized tomography which used a regularized kernel with bandwidth converging to zero as at an appropriate rate .this type of estimators for the wigner function will be analyzed separately in future work in the minimax framework along the lines of .the estimators which we propose in this subsection are of a different type , they are based on estimators for plugged into the following linearity equation where are known functions corresponding to the matrix with the entry equal to and all the rest equal to zero , see .the isometry ( [ eq.isometry ] ) implies that the family forms an orthogonal basis of .following the same idea as in the previous section we consider the projection estimator [ th.wigner.norm2 ] let be such that and then _ proof : _ apply isometry property and theorem [ th.norm2 ] .similarly we can extend the sml estimator of the density matrix to the wigner function .define the subspace with as in equation ( [ def.q(n ) ] ) , and define the corresponding sml estimator as where was defined in ( [ def.smle ] ) .[ th.hellinger.consistency.wigner ] suppose that satisfies .let be the sml estimator with and satisfying ( [ eq.entropy.integral.inequality ] ) and .then we have almost surely . under the same conditions almost surely ._ proof : _ apply the inequalities ( [ eq.norm1.norminfty.inequality ] , [ eq.isometry ] ) and corollary [ cor.consistency.mle.densitymatrix ] .the homodyne tomography measurement as presented up to now does not take into account various losses ( mode mismatching , failure of detectors ) in the detection process which modify the distribution of results in a real measurement compared with the idealized case .fortunately , an analysis of such losses ( see ) shows that they can be quantified by a single _ efficiency _ coefficient and the change in the observations amounts replacing by the noisy observations with a sequence of i.i.d .standard gaussians which are independent of all .the problem is again to estimate the parameter from for .the efficiency - corrected probability density is then the convolution ~dx.\ ] ] the physics of the detection process detailed in offers an alternative route from the state to the probability density of the observations . 
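the noisy-observation model just described is straightforward to simulate; a sketch, with the caveat that the scaling below is one common convention and other references normalize the noise differently. the alternative route via the bernoulli transformation is described next.

```python
import numpy as np

def noisy_homodyne(x, eta, rng):
    """Efficiency-corrected observations: y = sqrt(eta)*x
    + sqrt((1 - eta)/2)*xi with xi ~ N(0,1) i.i.d.  The scaling is an
    assumption (one common convention); some references instead absorb
    the sqrt(eta) factor into x."""
    x = np.asarray(x, dtype=float)
    return np.sqrt(eta) * x + np.sqrt((1.0 - eta) / 2.0) * rng.standard_normal(x.shape)
```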
in a first step oneperforms a _ bernoulli transformation _ on the state which is a quantum equivalent of the convolution with noise for probability densities , and obtains a new density matrix . to understand the bernoulli transformationlet us consider first the diagonal elements and which are both probability distributions over and represent the statistics of the number of photons in the two states .let be the binomial distribution .then which has a simple interpretation in terms of an `` absorption '' process by which each photon of the state goes independently through a filter and is allowed to pass with probability or absorbed with probability .the formula of the bernoulli transformation for the whole matrix is ^{1/2}\rho_{j+p , k+p}.\ ] ] the second step is to perform the usual quantum tomography measurement with ideal detectors on the `` noisy '' state obtaining the results with density .it is noteworty that the transformations form a semigroup , that is they can be composed as and the inverse of is simply obtained by replacing with in equation ( [ eq.bernoulli ] ) .notice however that if the power series appearing in the inverse transformation diverges , thus we need to take special care in this range of parameters .a third way to compute the inverse map from to is by using pattern functions depending on which incorporate the deconvolution map from to : such functions are analyzed in where it is argued that the method has a fundamental limitation for in which case the pattern functions are unbounded , while for numerical calculations show that their range grows exponentially fast with both indices .the two estimation methods considered in section [ sec.densitymatrixestimation ] can be applied to the state estimation with noisy observations .the projection estimator has the same form as in subsection [ sec.pfe ] with a similar analysis of the mean -risk taking into account the norms of the new pattern functions .the sieve maximum likelihood estimator follows the definition in subsection [ sec.mle ] and a consistency result can be formulated on the lines of corollary [ cor.consistency.mle.densitymatrix ] .we expect however that the rates of convergence will be dramatically slower and we will leave this analysis for a separate work .in this section we study the performance of the pattern function projection estimator and the sieve maximum likelihood estimator using simulated data . in table[ tbl.states ] we showed some examples of density matrices and wigner functions of quantum states . in figure [fig.examples ] , we display their corresponding graphical representation . for some of them the corresponding probability distribution can be expressed explicitly and it is possible to simulate data . in particular we shall simulate data from qht measurements on squeezed states with efficiency .+ + + in order to implement the two estimators we need to compute the basis functions and the functions , which are solutions of schrdinger equation ( [ eq.schrodinger ] ) . for this, we use an appropriate set of recurrent equations , see , ch .pattern functions can then be calculated as for all , otherwise and then . 
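as a concrete companion to ( [ eq.bernoulli ] ), the following sketch implements the bernoulli transformation on a truncated density matrix; the truncation of the sum to the stored block is our own simplification and introduces a small error for states with mass near the truncation level.

```python
import numpy as np
from math import comb, sqrt

def bernoulli_transform(rho, eta):
    """Truncated Bernoulli (loss) transformation:
    (B_eta rho)_{jk} = sum_p sqrt(C(j+p,p) C(k+p,p)) eta^{(j+k)/2}
                       (1-eta)^p rho_{j+p,k+p},
    matching the absorption picture in which each photon survives
    independently with probability eta."""
    N = rho.shape[0]
    out = np.zeros_like(rho)
    for j in range(N):
        for k in range(N):
            acc = 0.0
            for p in range(N - max(j, k)):   # truncated at the stored block
                acc += (sqrt(comb(j + p, p) * comb(k + p, p))
                        * (1 - eta)**p * rho[j + p, k + p])
            out[j, k] = eta**((j + k) / 2) * acc
    return out

# sanity check: a one-photon state through an 80% efficient line
rho = np.zeros((6, 6))
rho[1, 1] = 1.0
print(np.diag(bernoulli_transform(rho, 0.8)))   # [0.2, 0.8, 0, 0, 0, 0]
```

the diagonal of the output reproduces the binomial absorption statistics described above, and composing two such maps with efficiencies eta_1 and eta_2 gives the map with efficiency eta_1 * eta_2, in line with the semigroup property.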
on the practical side ,finding the maximum of the likelihood function over a set of density matrices is a more complicated problem due to the positivity and trace one constraints which must be taken into account .a solution was proposed in , where the restriction on positivity of a density matrix is satisfied by writing the cholevski decomposition where of _ upper triangular _ matrices of the same dimension as with complex coefficients above the diagonal and reals on the diagonal .the normalization condition translates into which defines a ball in the space of upper triangular matrices with the -distance .we will denote by the set of such matrices having dimension .the sieve maximum likelihood estimator is the solution of the following optimization problem with the numerical optimization was performed using a classical descendent method with constraints .notice that we have an optimization problem on real variables . given the problem of high dimensionality and computational cost we propose an alternative method to the procedure mentioned above .it exploits the mixing properties of our model .any density matrix of dimension can be written as convex combination of _ pure _ states , i.e. , where , and is a one dimensional projection whose cholevski decomposition is of the form where is the row vector of dimension on which projects , and is the column vector of the complex conjugate of .it should be noted that even though decomposition of our state in pure states is not unique this is not a problem given we are actually not interested in this representation but in the resulting convex combination , the state itself .now we can state the problem as to find the maximizer of the loglikelihood where represents the m coordinate of .we now maximize over all where we propose an em algorithm as an alternative method to the one presented in .see , for an exposition on the formulation of the em algorithm for problems of ml estimation with mixtures of distributions .the iteration procedure is given then in the following steps .+ 1 ) compute the expectation of the conditional likelihood : 2 ) maximize over all and obtain with components where as initial condition one could take very simple ad hoc states .for example , take as the vector and and to be the null vector and for .this corresponds to the _ one photon _ state .another possible combination is to take and .this corresponds to the state represented by a diagonal matrix , called a chaotic state .another strategy is to consider a preliminary estimator , based on just few observations , diagonalize it and take equal to its eigenstates and the corresponding eigenvalues . in this way onehopes to start the iteration from a state closer to the optimum one . in terms of speedour simulations suggests to use the em version as dimension grows . establishing any objective comparison between direct optimization and em algorithmhas proven to be difficult given the dependency on initial conditions , and high dimensionality of the problem . in figure[ fig.estimation.squeezed ] we show the result of estimating the squeezed state defined in table [ tbl.states ] using samples of size 1600 , for both pattern function and maximum likelihood estimators . 
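before discussing those figures, here is a minimal sketch of the em iteration described above, simplified in one important respect: it updates only the mixing weights over a fixed dictionary of pure states, whereas the algorithm of the text also re-estimates the vectors in the m-step. each weight update below is a proper em step for the mixture weights and is monotone in the likelihood.

```python
import numpy as np

def em_mixture_weights(x, phi, pure_vecs, psi_fn, n_iter=200):
    """EM over the mixing weights of a fixed dictionary of pure states.

    pure_vecs : (M, N) array; row m holds the Fock-basis coefficients of
                the m-th pure state (assumed unit norm).  The fixed
                dictionary is our simplification of the paper's scheme.
    psi_fn    : callable psi_fn(N, x) -> (N, n) matrix psi_j(x_i), e.g.
                the hermite_functions helper sketched earlier."""
    M, N = pure_vecs.shape
    psi = psi_fn(N, x)                                     # (N, n)
    phase = np.exp(-1j * np.outer(np.arange(N), phi))      # (N, n)
    amp = pure_vecs @ (psi * phase)                        # (M, n)
    comp = np.abs(amp)**2    # p_{c_m}(x_i | phi_i); constants cancel below
    w = np.full(M, 1.0 / M)
    for _ in range(n_iter):
        joint = w[:, None] * comp + 1e-300
        resp = joint / joint.sum(axis=0, keepdims=True)    # E-step
        w = resp.mean(axis=1)                              # M-step
    rho_hat = (pure_vecs.T * w) @ pure_vecs.conj()         # sum_m w_m c_m c_m^*
    return w, rho_hat
```

by construction rho_hat is positive and has unit trace, so the positivity and normalization constraints are satisfied automatically, which is the appeal of the mixture parametrization.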
at a first glanceone can see that the pattern function estimator result is rougher when compared to the maximum likelihood estimator .this is due to the fact discussed in subsection [ sec.pfe ] that the variance of increases as a function of and as we move away from the diagonal .the relation between quality of estimation and dimension of the truncated estimator is seen more clearly in figure [ fig.mlevspf.l2error ] where the -errors of estimating the coherent state is shown for both estimators at different sample sizes .the -dot represents the point of minimum and thus , optimum dimension for each curve .the curves presented there are the mean -error estimated using 15 simulations for each sample size .from there we can see that the optimum pfp estimator for the sample of size is the one corresponding to while the optimum sml would be obtained using the sieve of size .let us first analyze the performance of pfp estimator .notice that for the mean -error increases quadratically with due to contribution from the variance term .as increases the variance decreases like for a fixed dimension and consequently , the optimal dimension increases .one can see that the minimum is attained rather sharply .this suggests that , in order to get a good result a refined method of guessing the optimum dimension becomes necessary , eg .bic , aic or cross - validation .figure [ fig.cross-validation ] shows a cross - validation estimator for the -error of the pfp estimator for three simulations ( continuous lines ) , each one based on 1600 observations with a squeezed state , and for comparison the expected -error ( dashed line ) .this represents only a first attempt to implement model selection procedures for this problem which should be investigated more thoroughly from the theoretical and practical point of view .we pass now to the sml estimator . 
in figure [ fig.mlevspf.l2error ] we see that it has a smaller -error than the pfp estimator at its optimum sieve dimension . it is also remarkable that the -error behaves differently in this case for , increasing much more slowly than that of the pfp estimator to the right of its corresponding optimal dimension . this suggests that sml estimators could have a lower risk when the optimum dimension is overestimated . [ figure : -error for the pattern function estimator and the sieve maximum likelihood estimator at different sample sizes ; the last panel shows the optimum error for different sample sizes on a logarithmic scale on both axes . ] the bottom right pane of figure [ fig.mlevspf.l2error ] shows the optimum value of the -risk as a function of the sample size . both axes are represented on a logarithmic scale . the observed linear pattern indicates that the -risk decreases as . the slopes of both curves correspond to , showing an almost parametric rate , which is not surprising given the smoothness of the example that we consider . the value of for the pfp estimator is a bit smaller than for the sml estimator , confirming the worse performance of the former . notice also that the constant is bigger for the pfp than for the sml estimator . we expect the contrast between the two estimators to be accentuated when . finally , in figure [ fig.estimation.wigner ] we show the result of estimating the wigner function of the squeezed state using both estimators . as explained in section [ sec.wigner ] , the corresponding estimators are obtained by plugging the density matrix estimators into equation ( [ eq.linearity.wigner.rho ] ) . the density matrix estimators for the same state are shown in figure [ fig.estimation.squeezed ] . in this paper we have proposed a pattern function projection estimator and a sieve maximum likelihood estimator for the density matrix of a quantum state and for its wigner function , and we have proved that they are consistent with respect to the relevant norms on their corresponding spaces . there are many open statistical questions related to quantum tomography , and we enumerate a few of them here . * _ cross - validation . _ for both types of estimators , a data - dependent method is needed for selecting the optimal sieve dimension . we mention criteria such as unbiased cross - validation , hard thresholding , or other types of minimum contrast estimators . * _ efficiency . _ a realistic detector has detection efficiency , which introduces additional noise in the homodyne data . from the statistical point of view we then deal with a gaussian deconvolution problem on top of the usual quantum tomography estimation . * _ rates of convergence . _ going beyond consistency requires the selection of classes of states which are natural from both the physical and the statistical point of view . one should study optimal and achieved rates of convergence for given classes . for , the rates are expected to be significantly lower than in the ideal case , so it becomes even more crucial to use optimal estimators . in applications , sometimes only the estimation of a functional of , such as the average number of photons or the entropy , may be needed ; this will require a separate analysis , cf . .
* _ kernel estimators for the wigner function . _ when estimating the wigner function it seems more natural to use a kernel estimator such as in , and to combine this analysis with the deconvolution problem in the case of noisy observations , . * _ other quantum estimation problems . _ the methods used here for quantum tomography can be applied to other quantum estimation problems , such as the calibration of measurement devices or the estimation of transformations of states under the action of quantum mechanical devices . smithey , d. t. , beck , m. , raymer , m. g. , and faridani , a. ( 1993 ) . measurement of the wigner distribution and the density matrix of a light mode using optical homodyne tomography : application to squeezed states and the vacuum . _ phys . rev . lett . _ _ 70 _ , 1244 - 1247 .
the quantum state of a light beam can be represented as an infinite dimensional density matrix or , equivalently , as a function on the plane called the wigner function , which plays the role of a joint density but may take negative values . we describe quantum tomography as an inverse statistical problem in which the state is the unknown parameter and the data are given by results of measurements performed on identically prepared quantum systems . we present consistency results for pattern function projection estimators as well as for sieve maximum likelihood estimators , both for the density matrix of the quantum state and for its wigner function . finally , we illustrate the performance of the estimators on simulated data . an em algorithm is proposed for the practical implementation . there remain many open problems , e.g. rates of convergence , adaptation , and the study of other estimators , and a main purpose of the paper is to bring these to the attention of the statistical community .
team decision theory has its roots in control theory and economics .marschak was perhaps the first to introduce the basic elements of teams and to provide the first steps toward the development of a _team theory_. radner provided foundational results for static teams , establishing connections between person - by - person optimality , stationarity , and team - optimality . the work of witsenhausen , , , , on dynamic teams and characterization of information structures has been crucial in the progress of our understanding of dynamic teams .further discussion on the design of information structures in the context of team theory and economics applications are given in and , among a rich collection of other contributions not listed here .establishing the existence and structure of optimal policies is a challenging problem .existence of optimal policies for static and a class of sequential dynamic teams has been studied recently in .more specific setups and non - existence results have been studied in , . for a class of teams which are _ convex _, one can reduce the search space to a smaller parametric class of policies ( see , and for a comprehensive review , see ) . in this paper ,our aim is to study the approximation of static and dynamic team problems using finite models which are obtained through the uniform discretization , on a finite grid , of the observation and action spaces of agents . in particular , we are interested in the asymptotic optimality of quantized policies . in the literaturerelatively few results are available on approximating static or dynamic team problems .we can only refer the reader to , and a few references therein . with the exception of , these works in general study a specific setup ( the witsenhausen counterexample ) and are mostly experimental ; that is , they do not rigorously prove the convergence of approximate solutions . in ,a class of static team problems are considered and the existence of smooth optimal strategies is studied . under fairly strong assumptions ,the existence of an optimal strategy with lipschitz continuous partial derivatives up to some order is proved . by using this result , an error bound on the accuracy of near optimal solutionsis established , where near optimal strategies are expressed as linear combinations of basis functions with adjustable parameters . in ,the same authors investigated the approximation problem for witsenhausen s counterexample , which does not satisfy the conditions in ; the authors derived an analogous error bound on the accuracy of the near optimal solutions . for the result in both the error bound and the near optimal solutions depend on the knowledge of the optimal strategy for witsenhausen s counterexample .moreover , the method devised in implicitly corresponds to the discretization of only the action spaces of the agents .therefore , it involves only the approximation with regard to the action space , and does not correspond to a tractable approximation for the set of policies / strategies .particular attention has been paid in the literature to witsenhausen s counterexample .this problem has puzzled the control community for more than 40 years with its philosophical impact demonstrating the challenges that arise due to a non - classical information structure , and its formidable difficulty in obtaining an optimal or suboptimal solution .in fact , optimal policies and their value are still unknown , even though the existence of an optimal policy has been established using various methods . 
some relevant results on obtaining approximate solutions can be found in .certain lower bounds , that are not tight , building on information theoretic approaches are available in , see also . in this paper , we show that finite models obtained through the uniform quantization of the observation and action spaces lead to a sequence of policies whose cost values converge to the optimal cost .thus , with high enough computation power , one could guarantee that for any , an -optimal policy can be constructed .we note that the operation of quantization has typically been the method to show that a non - linear policy can perform better than an optimal linear policy , both for witsenhausen s counterexample and another interesting model known as the gaussian relay channel problem .our findings show that for a large class of problems , quantized policies not only may perform better than linear policies , but that they are actually almost optimal .we finally note that although finding optimal solutions for finite models for witsenhausen s counterexample as the ones constructed in this paper was shown to be np - complete in , the task is still computationally less demanding than the method used in . loosely speaking , to obtain a near optimal solution using the method in , one has to compute the optimal partitions of the observation spaces and the optimal representation points in the action spaces .in contrast , the partitions of the observation spaces and the available representation points in the action spaces used by our method are known _a priori_. we also note that if one can establish smoothness properties of optimal policies such as differentiability or lipschitz continuity ( e.g. , as in ) , the methods developed in our paper can be used to provide rates of convergence for the sequence of finite solutions as the finite models are successively refined . * contributions of the paper . *( i ) we establish that finite models asymptotically represent the true models in the sense that the solutions obtained by solving such finite models lead to cost values that converge to the optimal cost of the original model .thus , our approach can be viewed to be constructive ; even though the computational complexity is typically at least exponential in the cardinality of the finite model .( ii ) the approximation approach here provides , to our knowledge , the first rigorously established result showing that one can construct an -optimal strategy for any through an explicit solution of a simpler problem for a large class of static and dynamic team problems , in particular for the witsenhausen s celebrated counterexample .the rest of the paper is organized as follows . in section [ sub1sec1 ]we review the definition of witsenhausen s _ intrinsic model _ for sequential team problems . in section [ boundedcase ]we consider finite _ observation _ approximations of static team problems with compact observation spaces and bounded cost , and prove the asymptotic optimality of strategies obtained from finite models . in section [ unboundedcase ]an analogous approximation result is obtained for static team problems with non - compact observation spaces and unbounded cost functions . in section [ sec3 ]we consider finite observation approximations of dynamic team problems via the static reduction method . 
in sections [ sec4 ] and [ gaussrelay ]we apply the results derived in section [ sec3 ] to study finite observation space approximations of witsenhausen s celebrated counterexample and the gaussian relay channel .discretization of the action spaces is considered in section [ discact ] .section [ sec5 ] concludes the paper .in this section , we introduce the model as laid out by witsenhausen , called _ the intrinsic model _ ; see for a more comprehensive overview and further characterizations and classifications of information structures . in this model, any action applied at any given time is regarded as applied by an individual agent , who acts only once .one advantage of this model , in addition to its generality , is that the definitions regarding information structures can be compactly described .suppose that in the decentralized system , there is a pre - defined order in which the agents act .such systems are called _ sequential systems _ ( for non - sequential teams , we refer the reader to , and , in addition to ) . in the following , all spaces are assumed to be borel spaces ( i.e. , borel subsets of complete and separable metric spaces ) endowed with borel -algebras . in the context of a sequential system , the _ intrinsic model _ has the following components : * a collection of _ measurable spaces _ , specifying the system s distinguishable events , and the action and measurement spaces . here is the number of actions taken , and each of these actions is supposed to be taken by an individual agent ( hence , an agent with perfect recall can also be regarded as a separate decision maker every time it acts ) .the pair is a measurable space on which an underlying probability can be defined .the pair denotes the measurable space from which the action of agent is selected .the pair denotes the measurement ( or observation ) space of agent . *a _ measurement constraint _ which establishes the connection between the observation variables and the system s distinguishable events .the -valued observation variables are given by , where , is a stochastic kernel on given , and denotes the action of agent . * a _ design constraint _ , which restricts the set of admissible -tuple control strategies , also called _ policies _ , to the set of all measurable functions , so that , where is a measurable function .let denote the set of all admissible policies for agent and let . * a _ probability measure _ defined on which describes the measures on the random events in the model .we note that the intrinsic model of witsenhausen gives a set - theoretic characterization of information fields ; however , for borel spaces , the model above is equivalent to the intrinsic model for sequential team problems . under this intrinsic model , a sequential team problem is _ dynamic _ if the information available to at least one agent is affected by the action of at least one other agent . a decentralized problem is, if the information available at every decision maker is only affected by state of the nature ; that is , no other decision maker can affect the information at any given decision maker . information structures ( iss ) can also be classified as classical , quasi - classical , and nonclassical .an is is _ classical _ if contains all of the information available to agent for .an is is _ quasi - classical _ or _ partially nested _ , if whenever ( for some ) affects , then agent has access to .an is which is not partially nested is _nonclassical_. 
for any , we let the cost of the team problem be defined by ,\ ] ] for some measurable cost function , where and .[ def : tb1 ] for a given stochastic team problem , a policy ( strategy ) is an _ optimal team decision rule _if the cost level achieved by this strategy is the _ optimal team cost_. [ def : tb2 ] for a given stochastic team problem , a policy constitutes a _nash equilibrium _( synonymously , a _ person - by - person optimal _solution ) if , for all and all , the following inequalities hold : where we have adopted the notation unless otherwise specified , the term ` measurable ' will refer to borel measurability in the rest of the paper . inwhat follows , the terms _ policy _ , _ measurement _ , and _ agent _ are used synonymously with _, _ observation _ , and _ decision maker _ , respectively . in this section , for the ease of reference we state some well - known results in measure theory and functional analysis that will be frequently used in the paper .the first theorem is lusin s theorem which roughly states that any measurable function is _ almost _ continuous .( lusin s theorem ( * ? ? ?* theorem 7.5.2))[lusin ] let and be two borel spaces and let be a probability measure on .let be a measurable function from into .then , for any there is a closed set such that and the restriction of to is continuous .the second theorem is the dugundji extension theorem which is a generalization of the tietze extension theorem .( dugundji extension theorem ( * ? ? ?* theorem 7.4))[dugundji ]let be a borel space and let be a closed subset of .let be a convex subset of some locally convex vector space. then any continuous has a continuous extension on .the next theorem originally states that the closed convex hull of a compact subset in a locally convex vector space is compact if the vector space is completely metrizable .since the closure of a convex set is convex and a closed subset of a compact set is compact , we can state the theorem in the following form .* theorem 5.35)[clconv ] in a completely metrizable locally convex vector space , the closed convex hull of a compact set is convex and compact .the same statement also holds when is replaced with any of its closed and convex subsets .in this section , we consider the finite observation approximation of static team problems . inwhat follows , the static team problem is formulated in a state - space form which can be reduced to the intrinsic model introduced in section [ sub1sec1 ] .let be a probability space representing the state space , where is a borel space and is its borel -algebra .we consider an -agent static team problem in which agent , , observes a random variable and takes an action , where takes values in a borel space and takes values in a borel space . given any state realization , the random variable has a distribution ; that is , is a stochastic kernel on given .the team cost function is a non - negative function of the state , observations , and actions ; that is , , where and .for agent , the set of strategies is given by recall that .then , the cost of the team is given by where . here , with an abuse of notation , denotes the joint distribution of the state and observations .therefore , we have with these definitions , we first consider the case where the observation spaces are compact and the cost function is bounded . in the second part, teams with non - compact observation spaces and unbounded cost function will be studied . 
in this section ,we consider the finite observation approximation of static team problems with compact observation spaces and bounded cost function .we impose the following assumptions on the components of the model .[ as1 ] * * the cost function is bounded .in addition , it is continuous in and for any fixed . * for each , is a convex subset of a locally convex vector space .* for each , is compact .we first prove that the minimum cost achievable by continuous strategies is equal to the optimal cost . to this end , for each , we define and .[ prop1 ] we have let .we prove that there exists a sequence such that as , which implies the proposition .let denote the distribution of . for each , by lusin s theorem , there is a closed set such that and the restriction of to is continuous .let us denote , and so is continuous . by the dugundji extension theorem, there exists a continuous extension of .therefore , and we have for the following \rp(dx , d{\bf y } ) \biggr| \nonumber \\ & \leq \int_{\sx \times ( \sy \setminus f_k ) } \bigl| c(x,{\bf y},\underline{\gamma } ) - c(x,{\bf y},\underline{\gamma}_{k } ) \bigr| \text { } \rp(dx , d{\bf y } ) \nonumber \\ & \leq 2 \|c\| \text { } \rp\bigl(\sx \times ( \sy \setminus f_k)\bigr ) , \nonumber\end{aligned}\ ] ] where is the maximum absolute value that takes . since , we have .this completes the proof .let denote the metric on .since is compact , one can find a finite set such that is an -net in ; that is , for any we have define function mapping to by where ties are broken so that is measurable . in the literature , is called the nearest neighborhood quantizer . if ] for some , the finite set can be chosen such that becomes an uniform quantizer .we let denote the extension of to given by where is some auxiliary element .define ; that is , is defined to be the set of all strategies of the form , where .define also .the following theorem states that an optimal ( or almost optimal ) policy can be approximated with arbitrarily small approximation error for the induced costs by policies in for sufficiently large and .[ newthm1 ] for any , there exist and such that by proposition [ newprop1 ] , there exists such that , where and each is convex and compact . for each and , we define and .define also .let denote the collection of all subsets of except the empty set .for any , we define recall that is the strategy which maps any to .let and observe that note that since the range of the strategy is contained in , we have for all by assumption [ newas1]-(d ) .hence , there exists a sufficiently large such that let .then , we have note that in the last expression , the integrands in the third and fourth terms are less than .since is -integrable by assumption [ newas1]-(d ) , on as ( recall that is continuous ) , and is continuous in , the third and fourth terms in the last expression converge to zero as by dominated convergence theorem .hence , there exists a sufficiently large such that the last expression becomes less than .therefore , , completing the proof .the above result implies that to compute a near optimal policy for the team problem it is sufficient to choose a strategy based on the quantized observations for sufficiently large and .furthermore , this nearly optimal strategy can have a compact range of the form , where is convex and compact for each .however , to obtain a result analogous to the theorem [ thm2 ] , we need to impose a further assumption . to this end, we first introduce a finite observation model . for each ,let ( i.e. 
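the nearest-neighbor quantizer defined above, of which the uniform quantizer used in the finite models below is the special case of an equispaced codebook, can be sketched as follows (scalar observations; the names are our own):

```python
import numpy as np

def nearest_neighbor_quantizer(codebook):
    """q_n : y -> nearest point of the finite set {y_1, ..., y_n}; ties
    go to the smallest index, so the induced partition is measurable."""
    codebook = np.asarray(codebook, dtype=float)
    def q(y):
        y = np.asarray(y, dtype=float)
        return codebook[np.argmin(np.abs(codebook - y[..., None]), axis=-1)]
    return q

# the uniform quantizer on [-L, L]: an equispaced codebook, with points
# outside [-L, L] mapped to the nearest endpoint
q = nearest_neighbor_quantizer(np.linspace(-3.0, 3.0, 21))
print(q(np.array([-5.0, 0.1, 2.84])))   # -> [-3.0, 0.0, 2.7]
```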
, the output levels of ) and define the stochastic kernels on given as follows : where .let and .define as where , , , and .note that the probability measure can also be treated as a measure on . in this case, it is not difficult to prove that converges to weakly as . for any compact subset of , we also define .[ nnewas1 ] for any compact subset of of the form , we assume that the function is uniformly integrable with respect to the measures ; that is , [ convergence - unbounded ] let be a sequence of strategies such that , where and each is convex and compact . for each and , define , where .then , we have let us introduce the following finite measures on : since converges to weakly , by ( * ? ? ?* theorem 3.5 ) and assumption [ nnewas1 ] we have weakly as .hence , the sequence is tight .therefore , there exists a compact subset of such that and for all .then , we have the first term in the last expression goes to zero as by dominated convergence theorem and the fact that is bounded and continuous in .the second term is less than .since is arbitrary , this completes the proof .the following theorem is the main result of this section which is a consequence of theorem [ newthm1 ] .it states that to compute a near optimal strategy for the original team problem , it is sufficient to compute an optimal ( or an almost optimal policy if an optimal one does not exist ) policy for the team problem described above .[ newthm2 ] suppose assumptions [ newas1 ] and [ nnewas1 ] hold .then , for any , there exists a pair and a compact subset of such that an optimal ( or almost optimal ) policy in the set for the cost is -optimal for the original team problem when is extended to via .fix any . by lemma [ newprop2 ] andtheorem [ newthm1 ] , there exists compact subset of of the form such that for each , let be such that . define as the restriction of to the set .then , we have for the reverse inequality , for each , let be such that .define .then , we have this completes the proof .the results for the static case apply also to the dynamic case , through a static reduction .first we review the equivalence between sequential dynamic teams and their static reduction ( this is called _ the equivalent model _ ) . 
consider a dynamic team setting according to the intrinsic model where there are decision epochs , and agent observes , and the decisions are generated as .the resulting cost under a given team policy is .\ ] ] this dynamic team can be converted to a static team provided that the following absolute continuity condition holds .[ as2 ] for every , there exists a function , where , and a probability measure on such that for all we have therefore , for a fixed choice of , the joint distribution of is given by where .the cost function can then be written as where and .the observations now can be regarded as independent , and by incorporating the terms into , we can obtain an equivalent _ static team _ problem .hence , the essential step is to appropriately adjust the probability space and the cost function .note that in the static reduction method , some nice properties ( such as continuity and boundedness ) of the cost function of the original dynamic team problem can be lost , if the functions in assumption [ as2 ] are not well - behaved .however , the observation channels between and the are quite well - behaved for most of the practical models ( i.e , additive gaussian channel ) which admits static reduction .therefore , much of the nice properties of the cost function are preserved for most of the practical models .the next theorem is the main result of this section .it states that for a class of dynamic team problems , finite models can approximate an optimal policy with arbitrary precision . in what follows, denotes the distribution of state and observations in the finite model approximation of the static reduction .[ thm4 ] suppose assumptions [ newas1]-(a),(b),(c),(e ) and [ as2 ] hold .in addition , for each , is continuous in and , and for all compact of the form . then , the static reduction of the dynamic team model satisfies assumptions [ newas1 ] and [ nnewas1 ] .therefore , theorems [ newthm1 ] and [ newthm2 ] hold for the dynamic team problem .in particular , theorems [ newthm1 ] and [ newthm2 ] hold for the dynamic team problems satisfying assumptions [ newas1 ] , [ nnewas1 ] and [ as2 ] , if is bounded and continuous in and for each .an important dynamic information structure is the _ partially nested _ information structure .an is is partially nested if whenever affects for some , agent i has access to ; that is , there exists a measurable function such that for all and all realizations of .for such team problems , one talks about _ precedence relationships _ among agents : agent is _ precedent _ to agent ( or agent _ communicates _ to agent ) , if the former agent s actions affect the information of the latter , in which case ( to be partially nested ) agent has to have the information based on which the action - generating policy of agent was constructed .dynamic teams with such an information structure always admit a static reduction through an informational equivalence . for such partially nested ( or quasi - classical ) information structures, a static reduction was studied by ho and chu in the context of lqg systems and for a class of non - linear systems satisfying restrictive invertibility properties . for such dynamic teams ,the cost function does not change as a result of the static reduction , unlike in the static reduction in section [ staticred ] .therefore , if the partially nested dynamic team satisfies assumptions [ newas1 ] and [ nnewas1 ] , then its static reduction also satisfies it .hence , theorems [ newthm1 ] and [ newthm2 ] hold for such problems . 
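for the common special case of an additive standard gaussian observation channel, the density f of assumption [ as2 ] can be written in closed form; a sketch (the choice of the reference measure is a convention):

```python
import numpy as np

def gaussian_density_ratio(y, x):
    """f(y, x) = dN(x, 1)/dN(0, 1)(y) = exp(x*y - x**2/2): the
    Radon-Nikodym factor appearing in the static reduction when the
    observation is y = x + v with v ~ N(0, 1)."""
    return np.exp(x * y - x**2 / 2.0)

# sanity check: integrating f against N(0,1) gives 1 for every x
rng = np.random.default_rng(0)
y = rng.standard_normal(200000)
print(gaussian_density_ratio(y, 1.3).mean())   # approx 1.0
```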
before proceeding to the next section , we prove an auxiliary result which will be used in the next two sections .[ uniform ] let and be non - negative real functions defined on metric spaces and , respectively .suppose for some sequence of probability measures and .then , we have let \coloneqq \int f d\mu_n ] . it is easy to prove that \eqqcolon a < \infty ] . note that . hence , \int_{\{f > \sqrt{r}\ } } f \text { } d\mu_n + e_n[f ] \int_{\{g >\sqrt{r}\ } } g \text { } d\nu_n \nonumber \\ &\phantom{xxxx}\leq b \int_{\{f > \sqrt{r}\ } } f \text { } d\mu_n + a \int_{\{g > \sqrt{r}\ } } g \text { } d\nu_n . \nonumber\end{aligned}\ ] ] since the last term converges to zero as by assumption , this completes the proof .in witsenhausen s celebrated counterexample ( see fig . [ fig1 ] ) , thare are two decision makers : agent observes a zero mean and unit variance gaussian random variable and decides its strategy .agent observes , where is a standard ( zero mean and unit variance ) gaussian noise independent of , and decides its strategy .= [ draw , fill = white!20 , minimum size=3em ] = [ pin edge = to-,thin , black ] = [ draw , circle ] \(a ) ; ( b ) [ left of = a , node distance=2 cm , coordinate ] a ; ( d ) [ right of = a ] ; ( c ) [ right of = d ] ; ( end ) [ right of = c , node distance=2 cm ] ; ( e ) [ below of = d , node distance=1 cm ] ; ( e ) edge node ( d ) ; ( b ) edge node ( a ) ; ( a ) edge node ( d ) ; ( d ) edge node ( c ) ; ( c ) edge node ( end ) ; the cost function of the team is given by where . here, the state of the nature can be regarded as a degenerate random variable .we let .then we have where denotes the lebesgue measure .let so that .the static reduction proceeds as follows : for any policy , we have where denotes the standard gaussian distribution .hence , by defining and , we can write as hence , in the static reduction of witsenhausen s counterexample , agents observe independent zero mean and unit variance gaussian random variables . in this sectionwe study the approximation problem for witsenhausen s counterexample by using the static reduction formulation .we show that the conditions in theorem [ thm4 ] hold for witsenhausen s counterexample , and therefore , theorems [ newthm1 ] and [ newthm2 ] can be applied .the cost function of the static reduction is given by where is given in ( [ eq20 ] ) .note that the strategy spaces of the original problem and its static reduction are identical , and same strategies induce the same team costs . recall that denotes the standard gaussian distribution .[ newlemma1 ] for any with , we have < \infty ] and , we have <\infty ] .then , we also have <\infty ] and ^{1/2 } \leq e\bigl [ ( u^1)^2 \big]^{1/2 } + e\bigl [ ( u^1-u^2)^2 \big]^{1/2} ] .let denote the uniform quantizer on having output levels ; that is , where .let us extend to by mapping to .for each , let ( i.e. , output levels of the extended ) and define the probability measure on as moreover , let and define with the help of lemma [ newlemma1 ] , we now prove the following result .[ newprop3 ] witsenhausen s counterexample satisfies conditions in theorem [ thm4 ] .assumption [ newas1]-(a),(b),(c ) and assumption [ as2 ] clearly hold . to prove assumption [ newas1]-(e ) , we introduce the following notation . for any strategy , we let denote the corresponding expectation operation .pick with . 
since \biggr ]= e_{\gamma^1,\gamma^2 } \biggl [ \bigl ( u^1 - u^2 \bigr)^2 \biggr ] \nonumber\end{aligned}\ ] ] by the law of total expectation , there exists such that < \infty ] for some , since any compact set in is contained in ^ 2 ] is also -uniformly integrable .since is arbitrary , this completes the proof .proposition [ newprop3 ] and theorem [ thm4 ] imply that theorems [ newthm1 ] and [ newthm2 ] is applicable to witsenhausen s counterexample .therefore , an optimal strategy for witsenhausen s counterexample can be approximated by strategies obtained from finite models .the theorem below is the main result of this section .it states that to compute a near optimal strategy for witsenhausen s counterexample , it is sufficient to compute an optimal strategy for the problem with finite observations obtained through uniform quantization of the observation spaces .[ thm3 ] for any , there exists and such that an optimal policy in the set for the cost is -optimal for witsenhausen s counterexample when is extended to via , , where ] and . analogous to witsenhausen s counterexample , the agents observe independent zero mean and unit variance gaussian random variables . in this subsection , the approximation problem for the gaussian relay channel is considered using the static reduction formulation .analogous to section [ sec4 ] , we prove that the conditions of theorem [ thm4 ] hold for gaussian relay channel , and so theorems [ newthm1 ] and [ newthm2 ] can be applied .the cost function of the static reduction is given by \nonumber\end{aligned}\ ] ] where is given in ( [ eq20 ] ) .recall the uniform quantizer on ] .for any , by the law of total expectation we can write \biggr ] \nonumber \\ \intertext{and } e_{\underline{\gamma } } \biggl [ \bigl ( u^n \bigr)^2 \biggr ] & = e_{\underline{\gamma } } \biggl [ e_{\underline{\gamma } } \bigl [ \bigl ( u^n \bigr)^2 \bigl| u^i \bigr ] \biggr ] .\nonumber\end{aligned}\ ] ] therefore , for each , there exists such that < \infty ] .then we have \nonumber \\ & \phantom{xxx}\leq e_{\underline{\gamma}^{-i},\gamma_{u^{i,*}}^i}\biggl [ 2 x^2 + 2 \bigl ( u^n \bigr)^2 + \sum_{j=1}^{n-1 } l_j \bigl(u^j\bigr)^2\biggr ] \nonumber \\ & \phantom{xxx}=e_{\underline{\gamma}}\biggl [ \sum_{j=1}^{i-1 } l_j \bigl(u^j\bigr)^2 + 2 x^2 \biggr ] \nonumber \\ & \phantom{xxxxxxxx}+ e_{\underline{\gamma}}\biggl [ 2 \bigl ( u^n \bigr)^2 + \sum_{j = i}^{n-1 } l_j \bigl(u^j\bigr)^2 \biggr| u^i = u^{i , * } \biggr ] < \infty . \nonumber\end{aligned}\ ] ] therefore , assumption [ newas1]-(e ) holds .analogous to the proof of proposition [ newprop3 ] , for -uniform integrability , it is sufficient to consider compact sets of the form ^n ] is also -uniformly integrable .since is arbitrary , this completes the proof . 
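as a numerical companion to the theorem above, the following sketch evaluates the team cost of a given pair of strategies by monte carlo and shows how a strategy is composed with the uniform quantizer of the finite model. the normalization of the cost below (the placement of k^2 and the value k = 0.2) is a common choice in the literature, not recovered from this text, and the example strategies are purely illustrative.

```python
import numpy as np

def witsenhausen_cost(gamma1, gamma2, k=0.2, n=400000, seed=0):
    """Monte Carlo estimate of E[k^2 (u1 - y1)^2 + (u1 - u2)^2] with
    y1 ~ N(0,1), y2 = u1 + w, w ~ N(0,1): one common normalization of
    the counterexample (the exact placement of k^2 varies by reference)."""
    rng = np.random.default_rng(seed)
    y1 = rng.standard_normal(n)
    u1 = gamma1(y1)
    u2 = gamma2(u1 + rng.standard_normal(n))
    return float(np.mean(k**2 * (u1 - y1)**2 + (u1 - u2)**2))

def quantized(strategy, L, m):
    """Compose a strategy with the uniform quantizer on [-L, L] with m
    levels (observations outside [-L, L] mapped to the nearest endpoint),
    as in the finite models of this section."""
    levels = np.linspace(-L, L, m)
    def g(y):
        idx = np.clip(np.rint((y + L) / (2 * L) * (m - 1)).astype(int), 0, m - 1)
        return strategy(levels[idx])
    return g

# a simple sign strategy and its exact second-stage best response,
# then the same pair driven by quantized observations
a = 1.0
g1 = lambda y: a * np.sign(y)
g2 = lambda y: a * np.tanh(a * y)   # E[u1 | y2] for the sign strategy
print(witsenhausen_cost(g1, g2))
print(witsenhausen_cost(quantized(g1, 3.0, 32), quantized(g2, 3.0, 32)))
```

the second line of output illustrates the content of the theorem: as L and m grow, the cost of the quantized pair approaches that of the underlying pair.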
the preceding proposition and theorem [ thm4 ]imply , via theorems [ newthm1 ] and [ newthm2 ] , that an optimal strategy for gaussian relay channel can be approximated by strategies obtained from finite models .the following theorem is the main result of this section .[ thm5 ] for any , there exists and such that an optimal policy in the set for the cost is -optimal for gaussian relay channel when is extended to via , , where ] ) .note that is a quantizer applied to subsets of action spaces , ( not to be confused with in section [ sec4 ] ) .the preceding theorem implies that for each and , there exists a and such that where and is the set of output levels of .therefore , to compute a near optimal strategy for witsenhausen s counterexample , it is sufficient to compute an optimal strategy for the finite model that is obtained through uniform quantization of observation and action spaces ( i.e. , ) on finite grids when the number of grid points is sufficiently large .in particular , through constructing the uniform quantization so that both the _ granular region _ and the _ granularity _ of the quantizers are successively refined ( that is the partitions generated by the quantizers are successively nested ) , we have the following proposition which lends itself to a numerical algorithm . there exists a sequence of finite models obtained through a successive refinement of the measurement and action set partitions generated by uniform quantizers whose optimal costs will converge to the cost of the witsenhausen s counterexample .approximation of both static and dynamic team problems by finite models was considered . under mild technical conditions, we showed that the finite model obtained by quantizing uniformly the observation and action spaces on finite grids provides a near optimal strategy if the number of grid points is sufficiently large . using this result ,an analogous approximation results were also established for the well - known counterexample of witsenhausen and gaussian relay channel .our approximation approach to the witsenhausen s counterexample thus provides , to our knowledge , the first rigorously established result that for any , one can construct an optimal strategy through an explicit solution of a conceptually simpler problem .the authors are grateful to professor tamer baar for his technical comments and encouragement .a. gupta , s. yksel , t. basar , and c. langbort , `` on the existence of optimal policies for a class of static and sequential dynamic teams , '' _siam j. control optim ._ , vol .53 , no . 3 , pp .16811712 , 2015 .y. wu and s. verd , `` witsenhausen s counterexample : a view from optimal transport theory , '' in _ proceedings of the ieee conference on decision and control , florida , usa _ , december 2011 , pp. 57325737 . j. l. s. j.c .krainak and s. marcus , `` static team problems part i : sufficient conditions and the exponential cost criterion , '' _ ieee transactions automatic contr . _ , vol . 27 , pp . 839848 , april 1982 .j. lee , e. lau , and y. ho , `` the witsenhausen counterexample : a hierarchical search approach for nonconvex optimization problems , '' _ ieee trans .46 , no . 3 , pp . 382397 , mar .2001 .p. grover , s. y. park , and a. sahai , `` approximately optimal solutions to the finite - dimensional witsenhausen counterexample , '' _ ieee transactions on automatic control _58 , no . 9 , pp . 21892204 , 2013 .m. andersland and d. 
teneketzis , `` information structures , causality , and non - sequential stochastic control , i : design - independent properties , '' _ siam j. control optim . _ , vol . 30 , no . 6 , pp . 1447 - 1475 , 1992 .
in this paper , we consider finite model approximations of a large class of static and dynamic team problems , where these models are constructed through uniform quantization of the observation and action spaces of the agents . the strategies obtained from these finite models are shown to approximate the optimal cost with arbitrary precision under mild technical assumptions ; in particular , quantized team policies are asymptotically optimal . this result is then applied to witsenhausen 's celebrated counterexample and to the gaussian relay channel problem . for witsenhausen 's counterexample , our approximation approach provides , to our knowledge , the first rigorously established result that one can construct an ε - optimal strategy for any ε > 0 through the solution of a simpler problem .
ubiquitous , secure , and high data rate communication is a basic requirement for the next generation wireless communication systems .the rapid growth of wireless data traffic in the past decades has heightened the energy consumption in both transmitters and receivers . as a result ,multiuser multiple - input multiple - output ( mimo ) has been proposed in the literature for facilitating energy efficient wireless communication . although the energy dissipation of the transmitters may be significantly reduced by multiuser mimo technology , mobile communication devices and sensor devices are still often powered by batteries with limited energy storage capacity .frequently replacing the device batteries can be costly and inconvenient in difficult - to - access environments , or even infeasible for medical sensors embedded inside the human body .hence , the limited lifetime of communication networks constitutes a major bottleneck in providing quality of service ( qos ) to the end - users .recently , energy harvesting based mobile communication system design has drawn significant interest from both academia and industry since it enables self - sustainability of energy constrained wireless devices .traditionally , wind , solar , and biomass , etc .are the major sources for energy harvesting .although these renewable energy sources are perpetual , their availability usually depends on location and climate which may not be suitable for mobile devices . on the other hand , wireless energy transfer technology , which allows receivers to scavenge energy from the ambient radio frequency ( rf ) signals ,has attracted much attention lately although the concept can be traced back to nikola tesla s work in the early 20th century .there have been some preliminary applications of wireless energy transfer such as wireless body area networks ( wban ) for biomedical implants , passive radio - frequency identification ( rfid ) systems , and wireless sensor networks .indeed , the combination of rf energy harvesting and communication provides the possibility of simultaneous wireless information and power transfer ( swipt ) which imposes many new and interesting challenges for wireless communication engineers . in , the trade - off between channel capacity and harvested energywas studied for near - field communication over a frequency selective channel . in ,the authors investigated the performance limits of a three - node wireless mimo broadcast channel for swipt .in particular , the tradeoffs between maximal information rate versus energy transfer were characterized by the boundary of a rate - energy ( r - e ) region . in , a power splitting receiver and a separated receiver were proposed to realize concurrent information decoding and energy harvesting for narrow - band single - antenna communication systems . in , different resource allocation algorithms were proposed for improving the utilization of limited system resources .optimal beamforming and power allocation design was studied for multiuser narrow - band systems with multiple transmit antennas in , while the resource allocation algorithm design for wide - band swipt systems was studied in . 
in and , the energy efficiency of multi - carrier modulation with swiptwas investigated for single user and multiuser systems , respectively .in particular , it was shown in that the energy efficiency of a communication system can be improved by integrating an energy harvester into a conventional information receiver which further motivates the deployment of swipt in practice .besides , a power allocation algorithm was designed for the maximization of spectral efficiency of swipt systems employing power splitting receivers in . the results in reveal that the amount of harvested energy at the receivers can be increased by increasing the transmit power of the information signals .however , a high signal power may also lead to substantial information leakage due to the broadcast nature of the wireless communication channel and facilitate eavesdropping .the notion of physical ( phy ) layer security in swipt systems has recently been pursued in . by exploiting multiple antennas ,transmit beamforming and artificial noise generation can be utilized for providing communication security while guaranteeing qos in wireless energy transfer to energy harvesting receivers .however , the resource allocation algorithms in were designed for a single information receiver and single - antenna eavesdroppers .the results in may not be applicable to the case of multiple - antenna eavesdroppers . besides, the works in did not take into account fairness issues in transferring energy to energy harvesting receivers .nevertheless , fairness is an essential qos figure of merit for wireless energy transfer . in this paper , we focus on the resource allocation algorithm design for providing fairness to energy receivers in the wireless energy transfer process while guaranteing communication secrecy to information receivers . the resource allocation algorithm design is formulated as a non - convex optimization problem . in particular , we promote the dual use of artificial noise for secrecy communication and efficient wireless energy transfer provisioning .the considered non - convex problem is solved optimally by semidefinite programming ( sdp ) relaxation and our simulation results unveil the potential performance gain achieved by the proposed optimization framework .we use boldface capital and lower case letters to denote matrices and vectors , respectively . , , , and represent the hermitian transpose , trace , rank , and determinant of matrix , respectively ; denotes the maximum eigenvalue of matrix ; and indicate that is a positive definite and a positive semidefinite matrix , respectively ; is the identity matrix ; denotes the set of all matrices with complex entries ; denotes the set of all hermitian matrices .the circularly symmetric complex gaussian ( cscg ) distribution is denoted by with mean vector and covariance matrix ; indicates distributed as " ; denotes statistical expectation ; represents the absolute value of a complex scalar .^+ ] .then , we project the vector onto the null space of and use the resulting vector as the direction of beamforming vector .we note that in baseline scheme 2 is a rank - one matrix by construction. it can be observed in figure [ fig : hp_sinr ] that for the proposed optimal scheme , the energy harvesting receivers are able to harvest more energy compared to the two baseline schemes , due to the joint optimization of and . 
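before continuing with the simulation results , the zero - forcing construction used in baseline scheme 2 above can be sketched concretely ; this is a generic null - space projection consistent with that description ( the variable names h and G are placeholders , not the paper 's notation ) :

```python
import numpy as np

def zf_beamformer(h, G):
    """Project the desired channel h (N,) onto the null space of the
    matrix G (N, K) whose columns span the channels to be zero-forced,
    and return the unit-norm beamforming direction."""
    # orthogonal projector onto the null space of G^H
    P = np.eye(G.shape[0]) - G @ np.linalg.pinv(G)
    w = P @ h
    return w / np.linalg.norm(w)

# small example: 4 transmit antennas, 2 channels to be nulled
rng = np.random.default_rng(0)
h = rng.standard_normal(4) + 1j * rng.standard_normal(4)
G = rng.standard_normal((4, 2)) + 1j * rng.standard_normal((4, 2))
w = zf_beamformer(h, G)
print(np.abs(G.conj().T @ w))    # ~0: no leakage into the nulled channels
W = np.outer(w, w.conj())        # the resulting beamforming matrix is rank one
```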
besides, the performance gain of the optimal scheme over the two baseline schemes is further enlarged for an increasing number of transmit antennas .this can be explained by the fact that the optimal scheme can fully utilize the degrees of freedom offered by the system for resource allocation .in contrast , although multiuser and artificial noise interference are eliminated at the information receivers for baseline scheme 1 and baseline scheme 2 , respectively , the degrees of freedom for resource allocation in the baseline schemes are reduced which results in a lower harvested energy at the energy harvesting receivers .figure [ fig : cap_sinr ] illustrates the average secrecy capacity per information receiver versus the minimum required sinr of the information receivers for different numbers of transmit antennas and different resource allocation schemes .it can be observed that the average system secrecy capacity , i.e. , , is a non - decreasing function with respect to .in fact , the channel capacity of energy harvesting receiver for decoding information receiver is constrained to be less than bit / s / hz , cf .table i. besides , we note that baseline scheme 1 is unable to meet the minimum required of secrecy capacity as specified by constraints c1 and c2 . in other words , there are time instants for baseline 1 such that . thereby , although baseline scheme 1 satisfies constraint , unlike the optimal scheme , it does not necessarily satisfy constraint c2 . on the other hand , both the proposed algorithm and baseline scheme 2 are able to meet the minimum required secrecy capacity due to the rank - one solution for beamforming matrix .however , the exceedingly large secrecy capacity achieved by baseline scheme 2 comes at the expense of a smaller harvested power compared to the proposed optimal scheme , cf .figure [ fig : hp_sinr ] .in this paper , we studied the resource allocation algorithm design for swipt .the algorithm design was formulated as a non - convex optimization problem to ensure the max - min fairness in energy transfer to the energy harvesting receivers .the proposed problem formulation enabled the dual use of artificial noise for efficient energy transfer and secure communication .sdp relaxation was adopted to obtain the optimal solution of the considered non - convex optimization problem .simulation results unveiled the potential gain in harvested energy of our proposed optimal resource allocation scheme compared to baseline schemes .we start the proof by rewriting constraint c2 as then , we propose a lower bound on the left hand side of ( [ eqn : det_ineq2 ] ) by introducing the following lemma .d. w. k. ng , e. s. lo , and r. schober , `` energy - efficient resource allocation in multiuser ofdm systems with wireless information and power transfer , '' in _ proc .ieee wireless commun . and networking conf . _ , 2013 .d. w. k. ng , e. s. lo , and r. schober , `` multi - objective resource allocation for secure communication in cognitive radio networks with wireless information and power transfer , '' _ submitted for possible journal publication _ , mar .[ online ] .available : http://arxiv.org/abs/1403.0054 .q. li and w. k. ma , `` spatially selective artificial - noise aided transmit optimization for miso multi - eves secrecy rate maximization , '' _ ieee trans . signal process ._ , vol .27042717 , may 2013 .
this paper considers max - min fairness for wireless energy transfer in a downlink multiuser communication system . our resource allocation design maximizes the minimum harvested energy among multiple multiple - antenna energy harvesting receivers ( potential eavesdroppers ) while providing quality of service ( qos ) for secure communication to multiple single - antenna information receivers . in particular , the algorithm design is formulated as a non - convex optimization problem which takes into account a minimum required signal - to - interference - plus - noise ratio ( sinr ) constraint at the information receivers and a constraint on the maximum tolerable channel capacity achieved by the energy harvesting receivers for a given transmit power budget . the proposed problem formulation exploits the dual use of artificial noise generation for facilitating efficient wireless energy transfer and secure communication . a semidefinite programming ( sdp ) relaxation approach is exploited to obtain a global optimal solution of the considered problem . simulation results demonstrate the significant performance gain in harvested energy that is achieved by the proposed optimal scheme compared to two simple baseline schemes .
one of the most challenging problems in mathematical fluid dynamics is to understand whether a solution of the 3d incompressible euler equations can develop a finite time singularity from smooth initial data with finite energy .a main difficulty is due to the presence of the vortex stretching term , which has a formal quadratic nonlinearity in vorticity .this problem has attracted a lot of attention in the mathematics community and many people have contributed to its understanding , see the recent book by majda and bertozzi for a review of this subject .an important development in recent years is the work by constantin , fefferman , and majda who showed that the local geometric regularity of vortex lines can lead to depletion of nonlinear vortex stretching .inspired by the work of , deng , hou , and yu obtained more localized non - blowup criteria by exploiting the geometric regularity of a vortex line segment whose arclength may shrink to zero at the potential singularity time . to obtain these results ,deng - hou - yu used a lagrangian approach and explored the connection between the local geometric regularity of vortex lines and the growth of vorticity .guided by this local geometric non - blowup analysis , hou and li performed large scale computations with resolution up to to re - examine some of the most well - known blow - up scenarios , including the two slightly perturbed anti - parallel vortex tubes that was originally investigated by kerr .the computations of hou and li provide strong numerical evidence that the geometric regularity of vortex lines , even in an extremely localized region near the support of maximum vorticity , can lead to depletion of vortex stretching .we refer to a recent survey paper for more discussions on this topic . in this paper, we derive new growth rate estimates of maximum vorticity for the 3d incompressible euler equations .we use a framework similar to that adopted by deng - hou - yu .the main innovation of this work is to introduce a method of analysis to study the dynamic evolution of the integral of the absolute value of vorticity along a local vortex line segment . specifically , we derive a dynamic estimate for the quantity : where is a parameterization of a vortex line segment , , and is the arclength of .the assumption on is less restrictive than that in . as in , we assume that the vorticity along is comparable to the maximum vorticity , i.e. .let , and . here be the unit vorticity vector of , and the unit normal vector . under the assumption that and , we derive a relatively sharp growth estimate for , which can be used to obtain an upper bound on the growth rate of the maximum vorticity : where and depend on .if we further assume that has a positive lower bound , the above estimate implies no blow - up up to , if .this generalizes the result of cordoba and fefferman .the above estimate extends the result of deng - hou - yu in .in fact , it is easy to check that under the assumption that and with , the right hand side of ( [ vorticity_growth ] ) remains bounded up to the time , implying no blow - up up to .our result can be also applied to the critical case when , which was considered in . in this case, we have if we further assume that there exists such that where depends on , and the scaling constants in , and , then our growth estimate implies that application of the beale - kato - majda non - blow - up criterion would exclude blow - up at since implies . 
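for reference , the beale - kato - majda criterion invoked above can be stated as follows ( a standard formulation ; norm conventions vary slightly across references ) :

```latex
% beale-kato-majda (1984): a smooth solution of the 3d incompressible
% euler equations on [0,T) can be continued past time T if and only if
\int_0^{T} \| \omega(\cdot,t) \|_{L^\infty} \, dt \; < \; \infty .
% since \int_0^T C_1 \exp( C_2 e^{C_3 t} ) \, dt < \infty for every finite T,
% double-exponential growth of the maximum vorticity is consistent with
% global regularity.
```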
of particular interestis the case when the vorticity has a local clebsch representation . in this case, the vorticity can be represented by the two clebsch variables and near the support of maximum vorticity as follows : where and are carried by the flow , that is where is the velocity field .in addition to the geometric regularity assumption on , if we further assume that one of the clebsch variables has a bounded gradient and , then we prove that the maximum vorticity can not grow faster than double exponential in time , i.e. . as an application of this result , we re -examine the computations of the 3d incompressible euler equations with the two slightly perturbed anti - parallel vortex tubes initial data by hou and li . by examining the vorticity field carefully near the support of maximum vorticity ( see fig .1 ) , the vorticity field seems to have a local clebsch representation .one of the clebsch variables may be chosen along the vortex tube direction , which appears to be regular . moreover, the vortex lines within the support of maximum vorticity seem to be quite smooth and has length of order one , implying that has a positive lower bound .thus the result that we described above may apply .one of the important findings of the hou - li computations is that the maximum vorticity does not grow faster than double exponential in time .our new estimate on the vorticity growth may offer a theoretical explanation to the mechanism that leads to this dynamic depletion of vortex stretching . .computation from hou and li for the 3d incompressible euler equations with two slightly perturbed anti - parallel vortex tubes initial data.,width=302 ] we also apply our method of analysis to the surface quasi - geostrophic model ( sqg ) . as pointed out in , a formal analogy between the sqg model and the 3d euler equations can be established by considering as the corresponding vorticity in the 3d euler equations . here is a scalar quantity that is transported by the flow : let be a level set segment of along which is comparable to and denote by the unit tangent vector of . under the assumption that and , we obtain a much better growth estimate for : in particular , if , the above estimate implies that .this seems to be consistent with the numerical results obtained in , see also .the rest of the paper is organized as follows . in section 2 ,we derive our estimate on the integral of vorticity over a vortex line segment for the 3d euler equations , and apply this estimate to obtain an upper bound for the dynamic growth rate of maximum vorticity . in section 3 ,we generalize our analysis to the sqg model . in the appendix , we prove a technical result for the 3d euler equations which states that the maximum velocity is bounded by when the vorticity field has a local clebsch representation and one of the clebsch variables has a bounded gradient .in this section , we derive a new dynamic growth estimate of the maximum vorticity for the 3d incompressible euler equations .we adopt a framework similar to that used in .let .we consider , at time , a vortex line segmant along which the maximum of ( denoted by in the following ) is comparable to .we use to parameterize with being the arclength variable . in our paper, we do not assume that is a subset of , the flow image of at time , for .this assumption was required in the analysis of .further , we denote by the arclength of . 
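the central object in the estimates below is the arclength average of the vorticity magnitude over the segment ; purely as a numerical illustration ( the discretized line and the vorticity profile are hypothetical , not part of the analysis ) , this average can be approximated from sampled data as follows :

```python
import numpy as np

def segment_average_vorticity(points, omega):
    """Approximate (1/L) * integral of |omega| ds over a polygonal
    vortex-line segment: `points` is an (n, 3) array of sampled
    positions along the line, `omega` the (n,) array of vorticity
    magnitudes at those points (trapezoidal rule in arclength)."""
    ds = np.linalg.norm(np.diff(points, axis=0), axis=1)   # chord lengths
    w = np.abs(omega)
    integral = 0.5 * np.sum((w[1:] + w[:-1]) * ds)
    return integral / ds.sum()

# hypothetical helical segment with |omega| peaked at one end,
# mimicking a segment chosen near the location of maximum vorticity
t = np.linspace(0.0, 2.0 * np.pi, 200)
pts = np.stack([np.cos(t), np.sin(t), 0.1 * t], axis=1)
om = 1.0 + 0.5 * np.exp(-5.0 * (t - t[-1]) ** 2)
print(segment_average_vorticity(pts, om))
```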
the unit tangential and normal vectors are defined as follows: , the unsigned curvature is defined as , and .finally , we denote , and .[ lemma euler ] let be a family of vortex line segments which come from the same vortex line .define as the mean of over , then , we have |\omega|({\mathbf{x}}(l , t),t)\nonumber\\ & & + \frac{1}{l}\left[|\omega({\mathbf{x}}(l , t),t)|-|\omega({\mathbf{x}}(0,t),t)|\right]\left(\frac{d { \mathbf{x}}}{dt}\cdot { \mbox{\boldmath}}+{\mathbf{u}}\cdot { \mbox{\boldmath}}\right)({\mathbf{x}}(0,t),t)\nonumber\\ & & + \frac{l_t}{l}\left(|\omega({\mathbf{x}}(l , t),t)|-q\right ) . \quad\quad\end{aligned}\ ] ] differentiating with respect to yields let be the arclength parameter of this vortex line at time . then we can write , for this specific vortex line , .let be the corresponding coordinates of the end points of , i.e. first , we can change the integral variable from to in ( [ dqw ] ) , in , deng - hou - yu proved the following equality , substituting the above relation to ( [ dqw-1 ] ) yields where we have used with .note that the arclength can be expressed as follows : differentiating the both sides with respect to , we get substituting the above relation to ( [ dqw-1 ] ) , we obtain observe that substituting the above equality to ( [ dqw-1-update ] ) , we get now , we have using and integrating by parts , we obtain ) gives |\omega|({\mathbf{x}}(l , t),t)\nonumber\\ & & + \frac{1}{l}\left[|\omega({\mathbf{x}}(l , t),t)|-|\omega({\mathbf{x}}(0,t),t)|\right]\left(\frac{d { \mathbf{x}}}{dt}\cdot { \mbox{\boldmath}}+{\mathbf{u}}\cdot { \mbox{\boldmath}}\right)({\mathbf{x}}(0,t),t)\nonumber\\ & & + \frac{l_t}{l}\left(|\omega({\mathbf{x}}(l , t),t)|-q\right ) . \quad\quad\end{aligned}\ ] ] this completes the proof of lemma [ lemma euler ] .now we are ready to state the main result of this paper .[ theorem 3d euler ] assume there is a family of vortex line segments which come from the same vortex line and , such that for some for all and .further we assume that there exist constants , such that the following condition is satisfied : then , the maximum vorticity satisfies the following growth estimate : where , and . without the loss of generality, we may assume that is monotonically decreasing , i.e. and is sufficiently small .in lemma 1 of , deng - hou - yu proved the following equality : it follows from the assumption that where . by lemma [ lemma euler ] , we have |\omega|({\mathbf{x}}(l , t),t)\nonumber\\ & & + \frac{1}{l}\left[|\omega({\mathbf{x}}(l , t),t)|-|\omega({\mathbf{x}}(0,t),t)|\right]\left(\frac{d { \mathbf{x}}}{dt}\cdot { \mbox{\boldmath}}+{\mathbf{u}}\cdot { \mbox{\boldmath}}\right)({\mathbf{x}}(0,t),t)\nonumber\\ & & + \frac{l_t}{l}\left(|\omega({\mathbf{x}}(l , t),t)|-q\right)\nonumber\\ & \equiv & i_1+i_2+i_3+i_4.\end{aligned}\ ] ] recall that we choose the endpoint of such that which implies that . 
we also have by our assumption .thus , we conclude that to estimate , we use the assumption , which implies that it remains to estimate and on the right hand side of ( [ dqw - update ] ) .first of all , can be estimated as follows : as for , we proceed as follows : : |\omega|({\mathbf{x}}(l , t),t ) \le2c_1vq / l .\end{aligned}\ ] ] now , combining ( [ dqw-2 ] ) , ( [ dqw-1 - 3 ] ) , ( [ dqw-1 - 1 ] ) and ( [ dqw-1 - 2 ] ) , we obtain the following estimate for , where , .it follows from the above inequality that therefore , we have proved that this completes the proof of theorem [ theorem 3d euler ] .if we further assume has a positive lower bound , then the above growth estimate for the maximum vorticity implies no blowup up to , if is integrable from 0 to .this extends the result of cordoba and fefferman . in the critical casewhen , if we further assume that there exists a positive constant such that then the solution remains regular up to time .[ euler - corollary ] using theorem [ theorem 3d euler ] and the assumption ( [ assumption - critical ] ) , we have since .then , the beale - kato - majda non - blowup criterion implies that there is no blowup up to time .we remark that corollary [ euler - corollary ] generalizes the result of deng - hou - yu in with less restrictive requirement on the scaling constants . more specifically, if there is \xi\xi ] , such that for and for .let be a small positive parameter to be determined later .then we have by a direct calculation , we get to estimate , we split it into two terms as follows : we first estimate .integration by parts gives \mathbf{y}\right|\nonumber\\ & \le & \frac{1}{4\pi}\left| \int_{\mathbb{r}^3}\left[{\mathbf{u}}({\mathbf{x}}+\mathbf{y},t)\times \nabla\left(1-\chi\left(\frac{|\mathbf{y}|}{\rho}\right)\right)\right ] \frac{\mathbf{y}}{|\mathbf{y}|^3}d\mathbf{y}\right|\nonumber\\ & & + \frac{1}{4\pi}\left| \int_{\mathbb{r}^3}\left(1-\chi\left(\frac{|\mathbf{y}|}{\rho}\right)\right ) \left[\left({\mathbf{u}}({\mathbf{x}}+\mathbf{y},t)\times\nabla\right)\times \frac{\mathbf{y}}{|\mathbf{y}|^3}\right]d\mathbf{y}\right|\nonumber\\ & \equiv & a+b.\end{aligned}\ ] ] to estimate , using the assumptions and , we can to split into three terms for any : d\mathbf{y}\right|\nonumber\\ & \le & \frac{1}{4\pi}\left| \int_{\mathbb{r}^3}\left(1-\chi\left(\frac{|\mathbf{y}|}{\delta}\right)\right ) \left[\left(\left(\phi\nabla \psi\right)({\mathbf{x}}+\mathbf{y},t)\times \nabla\chi\left(\frac{|\mathbf{y}|}{\rho}\right)\right)\times \frac{\mathbf{y}}{|\mathbf{y}|^3}\right ] d\mathbf{y}\right|\nonumber\\ & & + \frac{1}{4\pi}\left| \int_{\mathbb{r}^3}\chi\left(\frac{|\mathbf{y}|}{\rho}\right ) \left[\left(\left(\phi\nabla \psi\right)({\mathbf{x}}+\mathbf{y},t)\times \nabla\left(1-\chi\left(\frac{|\mathbf{y}|}{\delta}\right)\right)\right)\times \frac{\mathbf{y}}{|\mathbf{y}|^3}\right ] d\mathbf{y}\right|\nonumber\\ & & + \frac{1}{4\pi}\left| \int_{\mathbb{r}^3}\chi\left(\frac{|\mathbf{y}|}{\rho}\right ) \left(1-\chi\left(\frac{|\mathbf{y}|}{\delta}\right)\right ) \left[\left(\left(\phi\nabla \psi\right)({\mathbf{x}}+\mathbf{y},t)\times \nabla\right)\times \frac{\mathbf{y}}{|\mathbf{y}|^3}\right ] d\mathbf{y}\right|\nonumber\\ & \equiv & c+d+e,\hspace{1 cm } \forall { \mathbf{x}}\in l^t.\end{aligned}\ ] ] by a direct calculation , we get by taking and using the assumption and the fact that , are bounded , we prove that for some constant as long as .
by performing estimates on the integral of the absolute value of vorticity along a local vortex line segment , we establish a relatively sharp dynamic growth estimate of the maximum vorticity under some assumptions on the local geometric regularity of the vorticity vector . our analysis applies to both the 3d incompressible euler equations and the surface quasi - geostrophic model ( sqg ) . as an application , we apply our vorticity growth estimate to the 3d euler equations with the two anti - parallel vortex tubes initial data considered by hou - li . under some additional assumption on the vorticity field , which seems to be consistent with the computational results of , we show that the maximum vorticity can not grow faster than double exponential in time . our analysis extends the earlier results by cordoba - fefferman and deng - hou - yu .
* keywords * 3d euler equations ; sqg equation ; finite time blow - up ; growth rate of maximum vorticity ; geometric properties .
* mathematics subject classification * primary 76b03 ; secondary 35l60 , 35m10
particle detectors based on condensed noble gases have found wide application in high energy physics , astro - particle physics , and nuclear physics .noble liquids are attractive candidates for particle detectors due to their ease of purification , good charge transport properties , high scintillation efficiency , and in the case of xenon , high density and short radiation length .examples of recent rare - event searches based on cryogenic noble gases include clean / deap , xmass , zeplin , xenon , lux , warp , ardm , exo , icarus , and meg .the experimental methods of liquid noble gas detectors have been developed in small prototype instruments whose liquid volumes range from a few cubic centimeters to tens of liters . in these prototypes ,the system is typically cooled either by liquid nitrogen or by directly coupling the storage vessel to the cold head of a refrigerator .these techniques have several attractive features , including simplicity , robustness , and the availability of great cooling power . however , both of these cooling strategies typically require a significant amount of space above or below the storage vessel be devoted to the cryogenics .in contrast , room temperature detector technologies , such as gas proportional counters or plastic scintillator , are not burdened by a cryogenic system , which is particularly advantageous for prototyping work where the freedom to make maximal use of the space around the detector provides valuable flexibility . in this articlewe describe a system for condensing and storing a noble gas ( xenon ) where the cooling system is spatially separate from the liquid storage vessel , leaving only the cryostat in the space surrounding the vessel .this configuration facilitates many common laboratory operations , particularly those which require access to the upper or lower flanges of the storage vessel .examples include the installation of detector structures and the insertion and removal of material samples and radioactive sources .this setup also provides direct optical access to the interior of the vessel for viewing or for laser injection , and it simplifies the construction of a lead shield . our primary motivation for pursuing the remote cooling method discussed here is to allow the space above the liquid xenon vessel to be used for ion tagging and retrieval experiments in the context of the exo double beta decay search .exo has proposed to eliminate radioactive backgrounds by identifying the barium ion produced in the double beta decay of .the identification method may require that a device be inserted into the active volume of the double beta decay detector to retrieve the final state nucleus .the condenser and liquid xenon vessel described in this article will allow both a barium ion calibration source and an insertion and retrieval device to be coupled to the xenon vessel from above . in the last decadepulse tube cryocoolers have attracted attention as convenient and reliable means to liquefy noble gases .for example , technology has been developed for the meg and xenon experiments using a modified cryocooler integrated into a liquid xenon storage vessel .several other recent articles have reported on the development of a small - scale helium condenser based on a pulse tube cryocooler where the condenser is located directly in the neck of a liquid helium storage dewar . 
in our system, we also employ a pulse tube cryocooler , and our condenser is conceptually similar to the helium condenser described in references - , although in our case the remote storage vessel represents an additional complication .note that we report here the results of cooling and liquefaction tests carried out with xenon .similar results could likely be obtained with other heavy noble gases having lower saturation temperature , such as argon or krypton , provided that the heat leaks in the system are minimized .the system consists of two units : a helical copper condenser and a stainless steel liquid xenon ( lxe ) storage vessel .a system drawing and plumbing schematic are shown in figures [ fig : system ] and [ fig : schematic ] , respectively .the condenser is located above the vessel , and condensed liquid flows downward through a 1/4 " stainless steel ( ss ) tube to the top of the vessel .the condenser is cooled by the cold head of a cryocooler , while the vessel is cooled only by the xenon .the entire system is enclosed in a vacuum - insulated cryostat composed of two ss cylinders connected by a bellows .the two cylinders and bellows form one vacuum volume for pump - out purposes .the radiative heat leak is reduced by wrapping the system components in super - insulation consisting of 10 - 15 alternating layers of aluminized mylar and fabric .the condenser is a helical coil of 1/4 `` diameter oxygen free high conductivity copper ( ofhc ) tube , which was chosen for its purity and thermal conductivity .the tubing is partially annealed ; non - annealed tube was found to kink when coiled .the coil is brazed to a 2 '' diameter cylindrical shank of ofhc copper , which is mechanically attached to the coldhead of a pulse tube cryocooler .the total length of the cooled portion of the tube is 30 inches ( five turns ) .as shown in figure [ fig : condenser ] , the cold head has two stages , with base temperatures of 22 k and 8 k for the first and second stages , respectively .these base temperatures are far below what is required to liquefy xenon ( triple point of 161 k ) , but should allow the system to condense the lighter noble gases . 
to ensure temperature uniformity in the condenser , and to achieve good thermal control , each end of the condenser is coupled to one of the two stages of the cold head .we refer to the upper half of the condenser ( coupled to the first stage of the cold head ) as the pre - cooler " , and the lower half ( second stage ) as the post - cooler " .the function of the pre - cooler is to cool the room temperature xenon gas to the saturation temperature , while the function of the post - cooler is to remove the latent heat of vaporization , thereby effecting the phase change .the pre - cooler and the post - cooler are independently temperature controlled by trim heaters .the heaters are driven by a common 77 w regulated dc power supply , and each heater circuit is controlled by a pid temperature feedback unit .the controllers adjust the current in each heater via the gate voltage on two power fets .the pre - cooler trim heaters are mounted on flats in the cylindrical shank of the condenser .these flats make a narrow neck " between the coil and the upper cold stage , thus reducing the cooling power and allowing for more precise temperature control .a stainless steel shim between the condenser shank and the cold head further reduces the cooling power of the pre - cooler , which was found to be excessive for our purposes .the post - cooler trim heaters are attached to a plate on the bottom of the shank , which is itself attached to the second stage of the cold head via a flexible copper braid .the braid was chosen to provide a flexible thermal bridge so that the condenser is mechanically constrained at its upper end only .the control temperatures for the two pid feedback loops are measured by thermocouples located on the neck and on the plate for the pre - cooler and post - cooler , respectively . in typical operation, the pre - cooler set point temperature is chosen to be 179.5 k , and the post - cooler temperature is chosen to be a few degrees cooler . in practice , however , the post - cooler temperature tends to stabilize around 193 k during condensation , and thus the post - cooler heater does not power on .. ] this reflects the fact that the thermal coupling between the post - cooler and the second stage of the cold head has too much thermal resistance .it is likely that this resistance limits the power and efficiency of the condenser , so we intend to modify this arrangement in the near future .currently , the condensation that does occur is sufficient for our purposes .the trim heaters on the pre - cooler allow the condenser to adjust for the effects of changes in the gas flow rate . at high flow rates ,significant heat is delivered to the condenser by incoming room temperature xenon gas , so the pre - cooler trim heater reduces its power output to maintain the set point temperature . in some cases ,the incoming heat was found to be sufficient to warm the condenser to one or two degrees above the pid set point . at zero flow or low flow rates , the heat delivered to the condenser bythe gas is small , and the pre - cooler trim heater provides compensation to keep the condenser temperature above the freezing point of xenon . consequently , one benefit of the pre - cooler pid feedback loop is that it prevents over - cooling in the event that a large gas flow suddenly stops .we present some measurements of the cooling power of the second stage of the pt805 near lxe temperature .( cryomech , the cryocooler manufacturer , does not report this data at such high temperatures . 
)we attached a known thermal resistance to the second stage of the cold head , applied heat from the opposite end , and measured the temperature at each end .once thermal equilibrium is reached , the cooling power is equal to the power delivered by the heater , and can also be inferred from the temperature gradient across the known thermal resistance .these two methods give results in agreement with each other .we find that the cooling power is 25 w at 65 k , 26.2 w at 75 k , and 32.2 w at 152 k. the xenon vessel , seen in figure [ fig : vessel ] , is a 6 `` x 6 '' od cylinder constructed from stainless steel , which was chosen for its purity and thermal mass .this volume is sufficient to hold 10 kg , or about 3 liters , of lxe , as well as a particle detector .the top and bottom of the vessel are 8 `` conflat flanges .the vessel is suspended from a vertical 3 '' od stainless steel tube , 6 `` in length , which penetrates its top flange . at its upper end ,the 3 '' tube is welded to a 4 5/8 `` conflat flange and a large stainless steel plate which provides mechanical support .the plate also acts as part of the cryostat .the 4 5/8 '' flange and tube allows direct access to the interior of the xenon vessel from the laboratory . with a glass viewport attached to this flange, the lxe can be seen inside the vessel .this access port can also be used to introduce detectors , materials , or radioactive sources .a smaller , fused - silica viewport is welded into the bottom of the vessel ; it is paired with an identical viewport in the cryostat to admit laser light for use in lxe purity tests .the vessel has three plumbing connections for xenon flow : a 1/4 `` ss liquid supply line that enters the vessel at the top , a 3/8 '' liquid return line that drains the vessel from the bottom , and a 1/4 " gas return line that exits the vessel at the top ; see figures [ fig : schematic ] and [ fig : vessel ] .lxe from the condenser runs down the liquid supply line and collects in the bottom of the vessel and in the liquid return line .a heater on the liquid return line is used to boil the liquid for re - circulation or for recovery .the gas return line at the top of the vessel allows gas to circulate freely through the vessel , especially when the vessel is filled with liquid . as discussed in section [ sec : gasflow ] below , free recirculation of gas is important for system operation .all three plumbing lines have vcr fittings so they can be disconnected from the rest of the system , thus allowing the condenser to be removed for servicing . an external , custom gas recirculation pump is used to force xenon flow through the condenser and the xenon vessel . it can achieve controllable xenon flow rates of up to 10 slpm .the pump is a bellows - type , made entirely of stainless steel , except for a teflon sleeve .the pump is driven by a 1/3-hp , three - phase motor ; the motor itself is controlled by an inverter , which allows flow control via adjustment of the repetition rate of the pump .a capacitive level sensor is integrated into the liquid return line .it has a co - axial cylindrical geometry formed by suspending an 11.5 `` x 1/4 '' od stainless steel tube inside the 3/8 " od liquid return line .the inner conductor is vented to allow the liquid to flow unimpeded .it is wrapped at the top and bottom with a small amount of kapton tape to prevent electrical shorting .the sensor has a capacitance of 104 3 pf in vacuum ( 224 3 pf including cabling ) . 
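the conversion from measured capacitance to liquid height follows from the coaxial geometry : the filled fraction of the sensor contributes with the dielectric constant of lxe , the remainder with vacuum . a minimal sketch of this conversion ( the dielectric constant and the linear model are assumptions for illustration ; the actual readout uses the calibrated quadratic fit described below ) :

```python
# quantities quoted in the text; the dielectric constant is assumed
C_SENSOR_VAC = 104.0            # pF, sensor capacitance in vacuum
C_CABLE = 224.0 - 104.0         # pF, stray/cabling contribution
L_ACTIVE = 29.2                 # cm, the 11.5-inch electrode length
EPS_R_LXE = 1.9                 # assumed relative permittivity of LXe

def level_from_capacitance(C_meas_pF):
    """Invert the parallel 'filled + empty' capacitor model
    C = C_cable + C_vac * (1 + (eps_r - 1) * h / L) for the liquid
    height h in cm; fringe fields and the kapton spacers are neglected."""
    C_sensor = C_meas_pF - C_CABLE
    frac = (C_sensor / C_SENSOR_VAC - 1.0) / (EPS_R_LXE - 1.0)
    return max(0.0, min(1.0, frac)) * L_ACTIVE

print(level_from_capacitance(224.0))   # empty sensor -> 0 cm
print(level_from_capacitance(260.0))   # partially filled -> ~11 cm
```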
because the level meter is in the liquid return line , rather than the vessel itself, it will only accurately read the liquid level in the vessel when the pressures are equal .this can be ensured by keeping the gas return valve in figure [ fig : schematic ] open .changes in capacitance are measured with a custom circuit .the level sensor is in series with a resistor , forming a low - pass rc filter .an ac voltage of amplitude 0.15 v and frequency 8 khz is input to the filter , and the voltage across the capacitor is amplified and rectified for computer readout by a data acquisition board . to maximize sensitivity and dynamic range ,the resistor in the filter was chosen such that 8 khz is near the knee frequency .the response of the circuit output was calibrated with a set of known capacitances . a quadratic fit of this datais used to interpolate for future measurements .the capacitance can then be converted to a liquid level measurement based on the known dielectric constant of lxe .changes in capacitance as small as 1 pf can be measured , corresponding to a height sensitivity of about 1 mm .we have constructed an alarm system to notify lab personnel in the event of a serious system failure , such as a power outage .three alarm conditions are considered : loss of electrical power to the cryocooler , loss of electrical power to the trim heater power supply and/or pid controllers , and an overpressure alarm .each alarm is represented by a switch which closes if the alarm condition is present .the switch then activates a commerical pager unit , which dials a list of phone numbers until the alarm is acknowledged .the pager unit derives power from a ups to ensure that it remains active in the event that electrical power is lost throughout the lab .test alarms can be generated with a push button to confirm that the system is active .during normal operation , the recirculation pump is used to force gas into the condenser , where some fraction of it condenses and travels with the remaining gas flow down into the vessel .the liquid either cools the vessel through evaporation or collects in the vessel , depending on the vessel temperature .the gas usually exits the system through the gas return line , although it can return through the liquid return line as well if no liquid is present .cryo - pumping can also be used as a source of gas flow , but this is less convenient due to the consumption of liquid nitrogen .we find that forced gas flow significantly increases the cooling power delivered from the condenser to the vessel .one possible explanation for this effect is that gas flow is necessary to transport the xenon dew " from the cold surfaces of the condenser to the remote storage vessel .regardless of the origin of the effect , however , the importance of gas flow for cooling purposes magnifies the role of the gas return line in the system . during the initial cool down, it makes no difference if the gas return valve is open or closed , because the liquid return line provides an alternate return path .but once liquid has collected , the valve must remain open , or else the liquid will block the flow of gas , preventing cooling power from being delivered to the vessel . starting with the system at room temperature and under vacuum ,i.e. 
, with no xenon gas present , the vessel temperature drops by only a few degrees when the cryocooler is activated , demonstrating that conductive cooling through the ss plumbing is negligible .once xenon gas is introduced , the xenon vessel cools through convective heat exchange with the condenser , but the vessel temperature levels out at roughly 200 k in the absence of forced gas flow .we find that a minimum gas flow rate of about 1.2 slpm is typically necessary to cool the xenon vessel from room temperature to the saturation temperature . at higher flow rates , from 2 to 4 slpm, the vessel cools more quickly , as shown in figure [ fig : temp_v_flow ] .the effects of increasing the flow rate even further are unclear . in one instance , increasing the flow rate to 5 slpm decreased the vessel cooling rate , since the large amount of incoming warm gas heated the condenser above lxe temperature . however, in another instance , flows of 6.8 slpm led to greater cooling rates in the vessel , so there may be other effects , such as pressure , that play a role .cooling rates are discussed further in section [ sec : coolingrates ] . forced gas flow may only be critical during the initial phase of vessel cooling .during one cool - down , gas flow and condensation were established , but the recirculation pump was unexpectedly halted , interrupting flow for hour .condensation continued during this time . upon restarting the pump , the xenon gas pressure rose and would not stabilize until the pump was shut off again .however , condensation continued and the liquid level in the vessel rose without the forced flow .it is therefore not clear if the 1.2 slpm flow rate is an absolute limit for vessel cooling ; it may only be required to initialize condensation .once the vessel is cold and filled with liquid , it is easy to maintain at constant temperature and pressure for indefinite periods of time by forcing gas flow through the condenser and liquid vessel , using the gas return line as the exhaust . in this arrangementthe gas circulates in a closed loop .this is generally our default configuration during liquid maintenance , and it allows for continuous purification of the xenon gas with a gas phase purifier .however , we have also studied the possibility of maintaining the liquid at constant temperature and pressure with the recirculation pump turned off .this situation could be important , for example , if the pump were to lose power unexpectedly .as detailed below , we find that it is possible , but more difficult , to achieve stability with the recirculation pump turned off .note that the xenon will thermodynamically recirculate at a small rate ( a few sccm ) if the external gas plumbing allows gas to flow from the system output to the condenser input .this thermal recirculation " is driven by the system heat leak , and it can be augmented by warming the liquid return line with its heater .we achieve temperature and pressure stability most easily in the absence of forced gas flow by closing a valve in the external plumbing which prevents thermal recirculation . 
under these conditions, we expect that no gas will enter the condenser input from the external system .our flowmeter confirms this expectation .nevertheless , we see through the large viewport that a steady stream of liquid drops falls into the vessel from above .this indicates that gas is counter - flowing up the liquid supply line from the vessel , liquefying , and falling back down , creating a closed heat exchange loop .this behavior is rather sensitive to the condenser temperature .for example , raising the pre - cooler temperature from 179.5 k to 180.5 k is enough to disturb the establishment of this heat exchange loop .if we turn off the external recirculation pump without preventing thermal recirculation , then we usually find that condensing slows or stops , and that the system temperature and pressure slowly rise .since the previous tests show that the system is able to condense the cold counter - flowing gas from the lxe vessel , this behavior could indicate that the extra heat load from the room temperature gas is too large .it is possible that this situation could be remedied by improving the cooling power of the condenser , particularly that of the post - cooler .also note that if we encourage thermal recirculation with the heater , this makes the situation worse , increasing the rate of temperature and pressure rise .on the other hand , if we establish the internal heat exchange loop by preventing external thermal recirculation , and we allow the system to run in this mode for several hours , we find that opening the external valve to allow thermal recirculation does not disturb the system stability .that is , the system behaves as if the valve were still closed .the operation of the system is divided into four stages : preparation , vessel cool - down and filling , liquid maintenance , and recovery . to remove impurities in the system , the condenser and vesselare evacuated to torr using a turbomolecular pump backed by a dry scroll pump .the cryostat is evacuated to torr using a similar configuration , and it is pumped continuously while the cryocooler is running . a xenon gas supply bottle with a regulator is opened , and the regulator is set to introduce gas at a pressure between 1000 and 1400 torr . the recirculation pump is turned on and set to a flow rate of at least 1.2 slpm .the gas circulates though the condenser , vessel , and external plumbing , and optionally through a zirconium getter for purification .the pressure regulator on the xenon gas supply bottle is left open to allow additional gas to enter the system as needed .the pre - cooler set point temperature is set to 179.5 k. once this is done , the system is prepared for cool - down . 
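the trim - heater regulation described above is a standard temperature feedback ; the following is a minimal discrete sketch of such a loop ( the gains and the toy plant model are invented for illustration — the actual controllers are the commercial pid units described earlier ) :

```python
def pid_heater_step(T_meas, T_set, state, kp=2.0, ki=0.05, kd=0.0,
                    dt=1.0, P_max=38.0):
    """One update of a discrete PID loop returning heater power in W.
    `state` carries (integral, last_error); P_max reflects that two
    heater circuits share a 77 W supply.  All gains are illustrative."""
    err = T_set - T_meas                 # positive when too cold -> heat
    integral, last_err = state
    integral += err * dt
    deriv = (err - last_err) / dt
    P = kp * err + ki * integral + kd * deriv
    P = max(0.0, min(P_max, P))          # the heater can only add power
    if P in (0.0, P_max):                # simple anti-windup when saturated
        integral -= err * dt
    return P, (integral, err)

# toy closed loop around the 179.5 K pre-cooler set point
T, state = 176.0, (0.0, 0.0)
for _ in range(20):
    P, state = pid_heater_step(T, 179.5, state)
    T += 0.02 * P - 0.1 * (T - 175.0)    # crude plant: heater vs. cold head
print(round(T, 2))                       # T rises from 176 K toward the set point
```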
during the vessel cool - down and filling phase, the gas continues to circulate through the condenser , vessel , and external plumbing .as the gas cools and becomes more dense , and later as the gas liquefies , additional gas is delivered from the supply bottle to the system as needed to maintain a constant pressure .the total amount of xenon in the liquid system can be monitored by measuring the remaining pressure in the supply bottle .figure [ fig : cooldown ] shows a temperature history of a typical cooldown as recorded by thermocouples at the output of the condenser and at the liquid return line near the bottom of the vessel .the cryocooler is activated to begin cooling the system , and the condenser temperature immediately drops .when the temperature reaches 260 k , the trim heaters turn on for the first time , momentarily warming the condenser .( the heaters are programmed to stay off when the condenser is above this temperature .this avoids overheating the cold head ) .after hour , the temperature of the output of the condenser falls sharply , indicating lxe has formed and is pouring down into the vessel . after this sharp temperature decrease ,drops of lxe can be seen via the viewport falling onto the bottom of the vessel and boiling away , and the temperature of the lxe return line decreases at about six times the previous rate .note that this indicates that most of the cooling power is delivered to the vessel in the form of liquid xenon , rather than gaseous xenon .the vessel cools this way for some time , typically about 8 hours , but it can be more or less depending on flow and pressure conditions . when the vessel temperature is low enough , liquid begins to collect in the bottom of the vessel .when enough liquid has collected , it overflows a small lip at the drain of the vessel that leads to the liquid return line .this `` splash '' brings the cold liquid into direct contact with the return line , causing the temperature there to quickly drop , as can be seen in figure [ fig : cooldown ] around 16:00 on 12/29 .this drop is correlated with a quick rise in the measured liquid level as the level meter is filled for the first time .after the `` splash '' of liquid fills the return line , lxe continues to collect in the vessel . at this stage, the gas return bypass valve must be open to ensure that a high rate of gas flow can continue .( we typically keep this valve open through the entire process , from initial preparation through xenon recovery . )condensing 1 kg of xenon takes 2 - 3 hours , depending on pressure and flow conditions .liquid can be maintained indefinitely in the vessel at constant temperature and pressure by keeping the pre - cooler set point temperature at 179.5 k , and by maintaining a nominal gas flow rate of 1 - 2 slpm with the recirculation pump . as described in section [ sec : gasflowliquidmaintenance ] , the recirculation pump can also be turned off under certain conditions .the xenon is recovered by cryopumping .the room temperature gas supply bottle , which is made of aluminum , is placed in a liquid nitrogen bath , freezing any xenon inside .the supply bottle pressure regulator is fully opened to allow xenon gas to flow backwards through it .in addition , a bypass valve which is connected in parallel with the pressure regulator can be opened to reduce the pumping impedance further , but this is usually not necessary . 
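the bottle - pressure bookkeeping used during filling requires the non - ideal density of xenon gas . a minimal sketch using the van der waals equation of state ( the constants are standard literature values for xenon ; the bottle volume and conditions are made - up inputs ) :

```python
from scipy.optimize import brentq

R = 0.0831446                 # L*bar/(mol*K)
A_XE, B_XE = 4.250, 0.05105   # van der Waals constants for xenon (L^2*bar/mol^2, L/mol)
M_XE = 131.293                # g/mol

def xenon_mass_kg(P_bar, V_L, T_K=293.15):
    """Moles from the van der Waals EOS (P + a n^2/V^2)(V - n b) = n R T,
    solved numerically, then converted to mass.  Near room temperature
    xenon is just above its critical point (~289.7 K), so the EOS has a
    single physical root here."""
    def f(n):
        return (P_bar + A_XE * n**2 / V_L**2) * (V_L - n * B_XE) - n * R * T_K
    n_ideal = P_bar * V_L / (R * T_K)
    n = brentq(f, 1e-9, 0.99 * V_L / B_XE)
    return n * M_XE / 1000.0, n / n_ideal

# hypothetical 50 L bottle at 10 bar: non-ideality is already a few percent
mass, z_inv = xenon_mass_kg(10.0, 50.0)
print(round(mass, 2), "kg;", round(z_inv, 3), "x the ideal-gas estimate")
```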
finally the cryocooler is turned off and the lxe is allowed to warm , raising the vapor pressure .a hand valve between the liquid system and the cold supply bottle is used to manually regulate the gas flow rate .care should be taken to avoid forming xenon ice by pumping too quickly .recovery can be made quicker by heating the liquid return line directly with its integrated heater , or by filling the cryostat insulating vacuum with a dry gas such as nitrogen or helium to provide a thermal connection to the room temperature lab .figure [ fig : level ] shows a sample plot of the height of lxe in the vessel during a cool - down .the gas pressure was 1000 torr .the recirculation pump forced a constant flow rate of 2.8 slpm from 11:00 to 18:45 pm , after which it was turned off , and the xenon supply bottle was valved off . the initial rapid rise from -2.7 cm to 0.6cm corresponds to the `` splash '' when liquid first overflows from the vessel into the return line .after that , a roughly linear increase in the height of the xenon can be seen as the storage vessel fills with liquid .a surprising feature is that from 6:45 pm until 9:00 am the next day , the liquid level is seen to be rising slowly .this could indicate that the level sensor circuit has a slowly drifting systematic error .the cooling rate of the stainless steel vessel was measured on two separate trials . during the first run, the flow rate was maintained at about 1.4 slpm , with a xenon pressure of 1500 torr .the vessel cooled at 5.6 k / hr . during another run ,the vessel cooled at 6.5 k / hr , with a much higher gas flow rate of 6.8 slpm and pressure of 1500 torr . using a specific heat for stainless steel of 500j / kg / k , and estimating the vessel weight at 20 kg , we find that these cooling rates correspond to 15.5 w and 18.9 w of cooling power transferred to the vessel .two trials are presented in which the condensation rate is measured at various pressure and flow parameters .condensation rate is measured during vessel filling in two ways . 1 . )the mass of xenon remaining in the gas supply bottle is calculated from the known volume of the bottle , the measured pressure , and the density of xenon gas at that pressure .( the density of xenon gas has a non - linear relationship to the pressure at typical bottle pressures , and this dependence must be taken into account . )the rate of decrease of source mass was then calculated using central differencing and equated to the rate of condensation in the liquid system .the height of xenon in the vessel is measured , and converted into a volume and mass using the known cylindrical geometry of the system and the density of lxe .the rate of change in liquid height then equates to the condensation rate .these two methods give results in good agreement with each other , although the liquid height method is complicated by the volume displacement of the irregularly shaped particle detector in the lxe vessel . in the following , we quote results based on the bottle pressure method . in the first trial ,the xenon gas pressure was set to 1550 torr , and the gas flow rate was varied .the condensation rates were 0.54kg / hr for a flow of 2.65 slpm and 0.61kg / hr for a flow 3.89 slpm . in a second trial ,the gas pressure was set to 1000 torr .the recirculation flow rate was set to 2.8 slpm for the bulk of the trial .condensation rates between 0.36 and 0.40 kg / hr were observed . 
the largest rate of condensation , 0.61kg / hr , was observed at a high pressure and large recirculation rate , 3.89 slpm and 1550 torr .this implies that 44% of the circulating gas is condensed in a single pass . a condensation fraction of 58%was also achieved , but only at the cost of a lower condensation rate of 0.54 kg / hr .tests at a lower pressure of 1000 torr indicate both lower condensation rate , 0.36kg / hr , and a lower efficiency of 36% .thus , the condensation fraction depends moderately on pressure and flow . using the latent heat of vaporization of xenon , 12.64 kj / mol , and the specific heat of xenon gas , 20.8 j / mol / k , we can calculate the cooling power implied by our condensing rates . for the largest condensation rate , 0.61 kg / hr, we find a cooling requirement of 3.5 w and a latent heat removal of 16.4 w , for a total of 19.9 w. this is similar to the 18 w of cooling power we estimated is delivered to the stainless steel vessel during cooling .we find that for reliable operation of our condenser it is essential to control the temperature at both the top and bottom .initial tests in which the condenser was cooled only at the top showed that a large temperature gradient would appear along its length . in this arrangement , the top of the condenser must be over - cooled to allow liquefaction to occur in the lower portion of the condenser .this leaves the condenser prone to ice formation , particularly if the gas flow rate suddenly decreases , removing a heat source . with a dual control mechanism ,both ends of the condenser are cooled , ensuring that temperature gradients are small .in addition , the two thermodynamic functions of the condenser ( cooling warm gas and liquefaction ) are separated spatially in the condenser , allowing for semi - independent temperature regulation with a pre - cooler and a post - cooler .this gives the condenser the flexibility to adjust to changing conditions , such as a change in the gas flow rate . in practice ,our post - cooler temperature is usually above its set point temperature during condensation , and therefore its trim heater plays little role .nevertheless , the additional cooling provided by the second stage of the cold head through the post - cooler improves temperature uniformity in the condenser , leading to more robust operation . to achieve greater control in the future, we intend to increase the cooling power of the post - cooler by reducing its thermal resistance .xenon ice formation is a dangerous problem for external condensers such as the one described here .ice can block the flow of liquid and gas to the xenon vessel .since the xenon flow is the only cooling mechanism for the vessel , its interruption can lead to a dangerous rise in system temperature and pressure .our design is particularly sensitive to this problem , because the helical coil of the condenser has a cross - sectional outer diameter of only 1/4 " .therefore even a small amount of ice can lead to flow blockage .a condenser coil made from larger diameter tubing may improve the situation , or perhaps a condenser with an altogether different geometry may be better . for our purposes , however , the current design has proved to be adequate with appropriate safeguards . 
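the power budget quoted above is simple arithmetic on the condensation rate ; the following sketch reproduces it , taking the temperature drop from room temperature to the saturation point as roughly 130 k ( an assumption consistent with the quoted 3.5 w ) :

```python
M_XE = 131.293      # g/mol
L_VAP = 12.64e3     # J/mol, latent heat of vaporization (from the text)
CP_GAS = 20.8       # J/(mol*K), specific heat of xenon gas (from the text)

def condenser_power_W(rate_kg_per_hr, dT_K=130.0):
    """Split the condenser load into sensible (pre-cooler) and
    latent (post-cooler) contributions for a given condensation rate."""
    n_dot = rate_kg_per_hr * 1000.0 / M_XE / 3600.0   # mol/s
    sensible = n_dot * CP_GAS * dT_K
    latent = n_dot * L_VAP
    return sensible, latent

s, l = condenser_power_W(0.61)
print(round(s, 1), round(l, 1), round(s + l, 1))   # ~3.5 W + ~16.3 W ~ 19.8 W

# consistency check of the quoted ~44% single-pass condensation fraction
flow_slpm = 3.89
n_dot_flow = flow_slpm / 22.414 / 60.0             # mol/s at standard conditions
mass_flow_kg_hr = n_dot_flow * M_XE / 1000.0 * 3600.0
print(round(0.61 / mass_flow_kg_hr, 2))            # ~0.44
```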
gas flow is essential for transferring the cooling power from the condenser to the xenon vessel , and a xenon gas return line from the top of the vessel greatly improves the effectiveness .there is a minimum flow rate necessary to cool the vessel to lxe temperature , and the condensation rate increases , albeit only slightly , with increasing flow .cryopumping can serve as the method for forcing flow , but this is an awkward process that consumes large amounts of liquid nitrogen , and flow must be interrupted frequently to warm and cool the supply and recovery bottles. therefore we find it is very useful to have a recirculation pump .using the pump , gas flow can be easily maintained for days at a time , and with an inverter controller , the flow rate can be dialed to a desired value for easy testing .the main drawback to the pump is a loss of purity : our custom pump contains a teflon sleeve which is a source of outgassing and teflon debris in the system .these problems can be solved with purifiers and filters , however .we have described a system for condensing and storing xenon where the source of cooling power has been removed from the vicinity of the liquid storage vessel , facilitating the introduction of instruments and materials to the vessel .condensation rates as high as 0.61 kg / hr were achieved , after an initial cool - down period of 8 - 10 hours .this corresponds to a condensation fraction of 44% and a cooling power of about 20 w. changes in condenser design may be able to improve the condensation fraction and cooling power .our design includes a two - stage cooling system for improved temperature uniformity and control .we find that a nominal gas flow rate is important for delivering cooling power to the vessel , and that a dedicated gas return line is useful for maintaining this flow when the vessel is filled with liquid .we thank john carriker for his many contributions to the system described in this article . this work was supported by award number 0652690 from the national science foundation .p. benetti , _ et .( warp collaboration ) , `` first results from a dark matter search with liquid argon at 87 k in the gran sasso underground laboratory '' , astropart . phys . * 28*:495 - 507 ( 2008 ) ,arxiv : astro - ph/0701286v2 .c. hall , `` searching for double beta decay with the enriched xenon observatory '' , proceedings of the 9th conference on the intersections of particle and nuclear physics ( cipanp 2006 ) , aip conf .proc . 870:532 - 535 , 2006 .
we describe the design and operation of a system for xenon liquefaction in which the condenser is separated from the liquid storage vessel . the condenser is cooled by a pulse tube cryocooler , while the vessel is cooled only by the liquid xenon itself . this arrangement facilitates liquid particle detector research by allowing easy access to the upper and lower flanges of the vessel . we find that an external xenon gas pump is useful for increasing the rate at which cooling power is delivered to the vessel , and we present measurements of the power and efficiency of the apparatus . xenon , condenser , recirculation pump
the impressive active and passive mechanical performance of a cell, the smallest biological unit able to move, survive and replicate independently, has been of much interest. most importantly, the motility of malignant cells is an important prognosticator in cancer. actin filaments, with a persistence length of about 15 μm, are responsible for the movement of the cell, whereas the stiffer microtubuli have a persistence length of several millimetres. the persistence length of intermediate filaments is much smaller (keratin 8/18, unpublished data); their function appears to be to provide the mechanical stiffness and stability of the cell. thus they are hypothesized to be the "stress-buffering system" of the cell. compared to actin and microtubuli, little is known about intermediate filaments. we have recently demonstrated that the intermediate filament network in pancreatic cancer cells, mostly consisting of keratins 8 and 18, is a crucial determinant for cancer cell motility. one way to study this network in isolation is the extraction of the cells: here the original architecture of the keratin network is conserved while other parts of the cell, like membrane, organelles, actin, microtubuli etc., are removed. the advantage is obvious: the mechanical properties of the original keratin cytoskeleton can be investigated without any environmental influence, while the local network topology is easily measured by sem afterwards. hence it is quite useful to analyse the mechanical properties of parts of the cytoskeleton as isolated functional modules. additionally it is possible to create well defined networks in vitro without any undesirable components. determining the viscoelastic and mechanical properties of these in vitro assembled networks should yield results comparable to those of extracted cell networks. microrheology is a suitable tool for measurements of the mechanical properties of both the extracted cytoskeleton and of in vitro assembled keratin networks. the possibility to measure over an extended frequency range only by observing thermal fluctuations is an advantage over traditional measurements, where mechanical stress is applied and the frequency range is limited. the model is based on the generalized stokes-einstein equation and provides a good way to determine the linear viscoelasticity of complex fluids. to derive the moduli we assume that the filament network can be treated as a viscoelastic, incompressible, isotropic fluid. it is therefore assumed to be a continuum, and the bead's inertia is neglected, too. around the bead's surface there are no-slip boundary conditions, which are a good approximation in the frequency range covered by our experiment. a brief sketch of the derivation of the shear moduli is given below. the motion of the bead obeys a generalized langevin equation, $m\,\dot{v}(t) = f_R(t) - \int_0^t \zeta(t-t')\, v(t')\, dt'$, with $m$ being the mass of the bead and $\dot{v}$ its acceleration. the memory function $\zeta(t)$ describes the response of the incompressible complex fluid. the function $f_R(t)$ contains the stochastic forces of the viscous fluid. causality and the usage of the equipartition theorem of thermal energy relates the local memory function to the velocity of the bead.
the complex viscosity can be related to the memory function by using the stokes relation, $\tilde{\eta}(s) = \tilde{\zeta}(s)/(6\pi a)$, with $a$ the radius of the bead. with $\tilde{G}(s) = s\,\tilde{\eta}(s)$, the complex shear modulus can be calculated in terms of a unilateral fourier transform, $\tilde{G}(s) = k_B T / \left(\pi a\, s\, \langle\Delta\tilde{r}^2(s)\rangle\right)$. this equation represents a generalization of the stokes-einstein equation in the fourier domain. the evaluation of the fourier term can be done by an estimation of the transforms, expanding $\langle\Delta r^2(t)\rangle$, the mean square displacement, locally around the frequency of interest using a power law expansion. this evaluation leads to a relation which is suitable for analytic computation. hence, we find $G(\omega) = k_B T / \left(\pi a\, \langle\Delta r^2(1/\omega)\rangle\, \Gamma[1+\alpha(\omega)]\right)$, with the power law exponent $\alpha(\omega) = \partial \ln\langle\Delta r^2(t)\rangle / \partial \ln t \big|_{t=1/\omega}$, which depends on the logarithmic slope of the mean square displacement (msd). since $G(\omega)$ in general is a complex function, it can be split into the real and the imaginary part. they are related by the kramers-kronig relation. $G'(\omega)$, the so-called storage modulus, is the real part and describes the dissipation-free, spring-like behaviour. $G''(\omega)$, the loss modulus, is the imaginary part and gives information about the liquid-like dissipative behaviour of the material. for the shear moduli we obtain $G'(\omega) = G(\omega)\cos[\pi\alpha(\omega)/2]$ and $G''(\omega) = G(\omega)\sin[\pi\alpha(\omega)/2]$. in simple viscous fluids the observed msd becomes proportional to the lag time and the complex modulus purely imaginary, with $G''(\omega) = \omega\eta$ proportional to the macroscopic shear viscosity. for a simple elastic solid the msd is lag-time independent. a viscoelastic material shows an intermediate form, where the storage modulus dominates at low frequencies and the loss modulus at high ones. the human keratin 8/18 network was analysed in vitro to compare its mechanical properties with those of the extracted keratin cytoskeleton of a cell. figure 2 shows storage and loss moduli for a fixed frequency. one can clearly see the dependence of the moduli on the distance between the bead and the rim of the nucleus. the farther away the bead is from the nucleus rim, the weaker is the surrounding network. compared to the extracted cytoskeleton, the storage modulus of the in vitro assembled network is about one order of magnitude lower, whereas the loss modulus of the in vitro assembled network is higher than that of the extracted keratin cytoskeleton. possible reasons are mainly the differences in the construction of the two networks and the differences in the bead interaction. on the one hand, the extracted cytoskeleton is a dense, crosslinked network with chemical bonds at the branching points, as displayed in figure 1 on the right side. on the other hand, the in vitro polymerised network is entangled, with at most a few intersections (figure 1, left side). the force connecting filaments at an entanglement originates from friction forces and not from chemical bonding and therefore leads to a weaker connection point. this difference can be clearly seen in figure 3. two point clouds from typical measurements are displayed as iso surface representations of the 3d position distribution. figure 3a shows the density cloud of a bead embedded in the extracted cytoskeleton. the bead is fluctuating almost homogeneously in all directions. by contrast, figure 3b shows the iso surface representation of the 3d position distribution of a bead in the in vitro assembled network. the bead is fluctuating more freely. it can circle a filament (small picture) or jump from one mesh to another (supporting data).
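a minimal numerical sketch of the msd-to-moduli conversion described above (python; the bead radius, temperature and the synthetic purely viscous test case are assumptions for illustration):

```python
import numpy as np
from scipy.special import gamma

KB = 1.380649e-23  # J/K

def moduli_from_msd(t, msd, a, temp=300.0):
    """mason-type estimate of g'(w) and g''(w) from a measured 3d msd.
    t: lag times (s), msd: mean square displacement (m^2), a: bead radius (m)."""
    alpha = np.gradient(np.log(msd), np.log(t))  # local logarithmic slope
    w = 1.0 / t                                  # evaluate g at w = 1/t
    g = KB * temp / (np.pi * a * msd * gamma(1.0 + alpha))
    return w, g * np.cos(np.pi * alpha / 2), g * np.sin(np.pi * alpha / 2)

# synthetic test: purely viscous fluid, msd = 6 d t, should give
# g' ~ 0 and g'' = w * eta, with d = kT / (6 pi eta a)
eta, a = 1e-3, 0.5e-6                            # water-like fluid, 1 um bead
d = KB * 300.0 / (6 * np.pi * eta * a)
t = np.logspace(-4, 0, 50)
w, g1, g2 = moduli_from_msd(t, 6 * d * t, a)
print(np.allclose(g2, w * eta, rtol=1e-2), np.max(np.abs(g1)))
```

for the viscous test case the printed check confirms $G'' = \omega\eta$ and a vanishing storage modulus, as stated above.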
from this structural difference the characteristics of the moduli can be explained, with a possible reptation-like behaviour in the entangled network. additionally there may be a preferred direction of assembly because of cell movement or a prestress from the membrane. all these factors indicate a different topology compared to the in vitro polymerised network. here the assembly is a stochastic, temperature-driven process. there is no influence from the surrounding environment or a directed development of the network. just two filament ends meeting coincidentally lead to a junction, and two crossing filaments can lead to an entanglement. the interaction of the bead with the network is strongly dependent on the polymerisation process. during the self-assembly of the in vitro polymerised network the bead is continuously fluctuating. this movement leads to the formation of a cavity around the bead where no polymerisation takes place. thus the bead is not as strongly embedded in the network as in the case of the cytoskeleton. there the bead is integrated into the network by strong connections of single filaments with the bead. another aspect is the keratin concentration of 1 mg/ml (see experimental section) in the network. this leads to an estimated mesh size of 400 nm in the in vitro polymerised network and is comparable to the mesh size of the extracted cytoskeleton. the assembly protocol followed published procedures. the protein concentration of the assembly for the microrheology measurements was 1 mg/ml. additionally, 1 μm spheres were added, in a concentration of 1% by weight in millipore water, to the protein solution before starting the assembly. for the measurement of the brownian motion, beads with diameters of 500 nm and of micrometre size were used in an extracted keratin network. living carcinoma cells incorporate the beads and embed them into their cytoskeleton. after the extraction of the cell, the keratin network, the nucleus and the incorporated beads remain. for further details on cell extraction we refer to beil et al. 2003. a microscope setup with an additional high-speed camera was used to track the embedded beads with a frequency of 5000 hz in a time window of 3.2 s and at a resolution of 256x256 pixels. the usage of the camera allows the tracking of several beads simultaneously in three dimensions (manuscript in preparation) and therefore has the advantage that all beads are measured under the same conditions. we thank the dfg sfb518 (tp) and project ma1297/10-1 (al) for financial support, andreas häußler, carlo di giambattista and sarah pomiersky for discussion and measurements, and all other members of the institute of experimental physics, ulm university, paul walther and the ze elektronenmikroskopie, ulm university, and the members of the division of molecular genetics from the dkfz heidelberg. special thanks to prof. adler and the department of internal medicine i, university hospital ulm, and to prof. mizaikoff and the institute of analytical and bioanalytical chemistry, ulm university, for sharing laboratories during construction. * references * 1. a. r. bausch, k. kroy, nature physics 2006, 2, 231-238; 2. h. herrmann, u. aebi, annu. rev. biochem. 2004, 73, 749-789; 3. h. herrmann, t. wedig, r. m. porter, e. b. lane, u. aebi, j. struct. biol. 2002, 137, 82-96; 4. e. fuchs, k. weber, annu. rev. biochem. 1994, 63, 345-382; 5. t. yanagida, m. nakase, k. nishiyama, f. oosawa, nature 1984, 301, 58; 6. f. gittes, b. mickey, j. nettleton, j. howard, j.
cell biol. 1993, 120, 923; 7. t. g. mason, rheol. acta 2000, 39, 371-378; 8. f. gittes, b. schnurr, p. d. olmsted, f. c. mackintosh, c. f. schmidt, phys. rev. lett. 1997, 79, 3286-3289; 9. t. g. mason, d. a. weitz, phys. rev. lett. 1995, 74, 1250-1253; 10. r. b. bird, r. c. armstrong, o. hassager, dynamics of polymeric liquids 1977, wiley, new york; 11. j. p. hansen, i. r. mcdonald, theory of simple liquids 1986, academic press, london; 12. n. mücke, l. kreplak, r. kirmse, t. wedig, h. herrmann, u. aebi, j. langowski, j. mol. biol. 2004, 335, 1241-1250; 13. m. beil, a. micoulet, g. v. wichert, s. paschke, p. walther, m. bishr omary, p. p. van veldhoven, u. gern, e. wolff-hieber, j. eggermann, j. waltenberger, g. adler, j. spatz, t. seufferlein, nat. cell biol. 2003, 5, 803-811; 14. i. horcas, r. fernandez, j. m. gomez-rodriguez, j. colchero, j. gomez-herrero, a. m. baro, rev. sci. instrum. 2007, 78, 013705; 15. m. beil, h. braxmeier, f. fleischer, v. schmidt, p. walther, journal of microscopy 2005, 220, 84-95; 16. p. g. de gennes, j. chem. phys. 1971, 55, 572-579; 17. h. herrmann, m. häner, m. brettel, n. ku, u. aebi, j. mol. biol. 1999, 286, 1403-1420; 18. r. windoffer, s. wöll, p. strnad, r. e. leube, mol. biol. cell 2004, 15, 2436-2448; 19. j. c. crocker, b. d. hoffman, meth. cell biol. 2007, 83, 141-178; 20. for an overview see b. bhushan, o. marti, handbook of nanotechnology, 2007, springer, new york, and references therein.
figure 1. a) afm picture of the in vitro assembled keratin network, 7.5 μm x 7.5 μm topography. b) afm picture of the extracted cytoskeleton, 2 μm x 2 μm topography.
figure 2. storage and loss moduli as a function of the distance to the rim of the nucleus for a fixed frequency. g' (empty squares) and g'' (empty circles) of the extracted cytoskeleton, and g' (full squares) and g'' (full circles) of the in vitro assembled network. in this sketch the shear moduli of the in vitro assembled network are constant, because of the lack of a nucleus.
figure 3. a) iso surface representation of the 3d position distribution of a bead in the in vitro assembled network. b) iso surface representation of the 3d position distribution of a bead in the extracted cytoskeleton. an iso surface representation of the 3d position distribution shows all points with an equal probability to find the particle.
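as a sketch of the first analysis step described in the experimental section above, the time-averaged msd of a tracked bead can be computed directly from the recorded trajectory (python; the 5000 hz frame rate is from the text, the array layout is an assumption):

```python
import numpy as np

def time_averaged_msd(traj, dt=1.0 / 5000.0, max_lag=2000):
    """time-averaged 3d msd from one bead trajectory.
    traj: (n, 3) array of positions in metres; dt from the 5000 hz camera."""
    max_lag = min(max_lag, len(traj) - 1)
    lags = np.arange(1, max_lag + 1)
    msd = np.array([np.mean(np.sum((traj[l:] - traj[:-l]) ** 2, axis=1))
                    for l in lags])
    return lags * dt, msd
```

the returned lag times and msd values feed directly into the moduli estimate sketched earlier.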
in our work we compare the mechanical properties of the extracted keratin cytoskeleton of pancreatic carcinoma cells with the mechanical properties of in vitro assembled keratin 8/18 networks . for this purpose we use microrheology measurements with embedded tracer beads . this method is a suitable tool , because the size of the beads compared to the meshsize of the network allows us to treat the network as a continuum . observing the beads motion with a ccd - high - speed - camera then leads to the dynamic shear modulus . our measurements show lower storage moduli with increasing distance between the rim of the nucleus and the bead , but no clear tendency for the loss modulus . the same measurement method applied to in vitro assembled keratin 8/18 networks shows different characteristics of storage and loss moduli . the storage modulus is one order of magnitude lower than that of the extracted cytoskeleton and the loss modulus is higher . we draw conclusions on the network topology of both keratin network types based on the mechanical behaviour . * keywords * : afm , cytoskeleton , dynamic shear modulus , keratin 8/18 , particle tracking microrheology
roughly speaking, randomness is the fact that, even using all the information that we have about a physical system, in some situations it is impossible, or unfeasible, for us to predict exactly what will be the future state of that system. randomness is a facet of nature that is ubiquitous and very influential in our and other societies. as a consequence, it is also an essential aspect of our science and technology. the related research theme, motivated initially mainly by gambling and leading eventually to probability theory, is nowadays a crucial part of many different fields of study such as computational simulations, information theory, cryptography, statistical estimation, system identification, and many others. one rapidly growing area of research for which randomness is a key concept is the maturing field of quantum information science (qis). our main aim in this interdisciplinary field is understanding how quantum systems can be harnessed in order to use all of nature's potentialities for information storage, processing, transmission, and protection. quantum mechanics is one of the present fundamental theories of nature. the essential mathematical object in this theory is the density operator (or density matrix) $\rho$. it embodies all our knowledge about the preparation of the system, i.e., about its state. from the mathematical point of view, a density matrix is simply a positive semi-definite matrix (notation: $\rho \ge 0$) with trace equal to one ($\mathrm{tr}(\rho) = 1$). such a matrix can be written as $\rho = \sum_j r_j |r_j\rangle\langle r_j|$, which is known as the spectral decomposition of $\rho$. in the last equation $\Pi_j = |r_j\rangle\langle r_j|$ is the projector ($\Pi_j^2 = \Pi_j$ and $\sum_j \Pi_j = \mathbb{I}$, where $\mathbb{I}$ is the identity matrix) on the vector subspace corresponding to the eigenvalue $r_j$ of $\rho$. from the positivity of $\rho$ follows that, besides it being hermitian and hence having real eigenvalues, its eigenvalues are also non-negative, $r_j \ge 0$. since the trace function is basis independent, the eigenvalues of $\rho$ must sum up to one, $\sum_j r_j = 1$. thus we see that the set $\{r_j\}$ possesses all the features that define a probability distribution (see e.g. ref.). the generation of pseudo-random quantum states is an essential tool for inquiries in qis (see e.g. refs.) and involves two parts. the first one is the generation of pseudo-random sets of projectors, which can be cast in terms of the creation of pseudo-random unitary matrices. there are several methods for accomplishing this last task, whose details shall not be discussed here. here we will address the second part, which is the generation of pseudo-random discrete probability distributions, dubbed here pseudo-random probability vectors (prpv). in this article we go into the details of three methods for generating prpv numerically. we present the problem details in sec. [sec_problem]. sec. [sec_iid] is devoted to presenting a simple method, the iid method, and to showing that it is not useful for the task regarded in this article. in sec. [sec_norm] the standard normalization method is discussed. the bias of the prpv appearing in its naive, direct implementation is highlighted. a simple solution to this problem via random shuffling of the prpv components is then presented.
in sec. [sec_trigo] we consider the trigonometric method. after discussing some issues regarding its biasing and numerical implementation, we study and compare the probability distribution generated and the computer time required by the last two methods when the dimension of the prpv is varied. the conclusions and prospects are presented in sec. [conclusions]. by definition, a discrete probability distribution is a set of non-negative real numbers $\{p_j\}_{j=1}^{d}$ that sum up to one: $p_j \ge 0$ ([eq:prob1]) and $\sum_{j=1}^{d} p_j = 1$ ([eq:prob2]). in this article we will utilize the numbers $p_j$ as the components of a probability vector $p = (p_1, \ldots, p_d)$. despite the nonexistence of consensus regarding the meaning of probabilities, here we can consider $p_j$ simply as the relative frequency with which a particular value of a physical observable, modeled by a random variable, is obtained in measurements of that observable under appropriate conditions. we would like to generate numerically a pseudo-random probability vector whose components form a probability distribution, i.e., respect eqs. ([eq:prob1]) and ([eq:prob2]). in addition we would like the prpv to be unbiased, i.e., the components of $p$ must have similar probability distributions. a necessary condition for fulfilling this last requisite is that the average value of $p_j$ (notation: $\langle p_j \rangle$) becomes closer to $1/d$ as the number of prpv generated becomes large. at the outset we will need a pseudo-random number generator (prng). in this article we use the mersenne twister prng, which yields pseudo-random numbers (prn) with uniform distribution in the interval $[0,1)$. to construct a prpv we may simply take $d$ of these prn $x_j$, independent and identically distributed (so the name of the method), and set $p_j = x_j / \sum_{k=1}^{d} x_k$; we will then obtain a well defined discrete probability distribution, i.e., $p_j \ge 0$ and $\sum_j p_j = 1$. besides, as $x_j$ is unbiased, the mean value of $p_j$ approaches $1/d$ as the number of samples grows. (figure [fig_stat_prob_dist_iid]: results for one million random samples generated using the iid method; the inset shows the probability distribution for the components of the probability vector.) nevertheless, we should note that the sum $\sum_k x_k$ shall typically be greater than one. this in turn leads to the impossibility of the occurrence of prpvs with one of their components equal (or even close) to one. as can be seen in fig. [fig_stat_prob_dist_iid], this problem becomes more and more important as $d$ increases. therefore, this kind of drawback totally precludes the applicability of the iid method for the task regarded in this article. let us begin our discussion of the normalization method by considering a probability vector with dimension $d = 2$, i.e., $p = (p_1, p_2)$. if the prng is used to obtain $p_1 \in [0,1]$, normalization fixes $p_2 = 1 - p_1$. if $d = 3$ then $p = (p_1, p_2, p_3)$, and the prng is used again (two times) to obtain $p_1 \in [0,1]$ and $p_2 \in [0, 1-p_1]$; normalization then gives $p_3 = 1 - p_1 - p_2$. note that the interval for $p_2$ was changed because of the _normalization_ of the probability distribution, which is also used to write $p_3$. as $p_1$ is equiprobable in $[0,1]$, for a large number of samples of the prpv its mean value will be $1/2$. this shall restrict the values of the other components of $p$, shifting the "center" of their probability distributions to smaller values (e.g. $\langle p_2 \rangle \approx 1/4$ for $d = 3$), biasing thus the prpv. of course, if one increases the dimension of the prpv, the same effect continues to be observed, as is illustrated in the table below for the prpv generated for each value of $d$. (table: average values of the prpv components obtained with the direct normalization method for several dimensions $d$.) this ordering bias can be removed by randomly shuffling the components of each generated prpv, at the cost of $d-1$ additional prn. as was the case with the normalization method, for the trigonometric method we need to generate $2(d-1)$ prn per prpv: $d-1$ for the angles and $d-1$ for the random permutation.
nevertheless, because of the additional multiplications in eq. ([eq:trig_param]), the computation time for the last method is in general a little greater than that for the former, as is shown in fig. [fig_time_d]. (figure [fig_time_d]: computation time as a function of the dimension of the prpv; the calculations were realized using a 1.3 ghz intel core i5 processor.) one may wonder if the normalization and trigonometric methods, which are at first sight distinct, lead to the same probability distributions for the prpv's components, and also if they produce a uniform distribution for the generated points in the probability hyperplane. we provide some evidence for positive answers to both questions in figs. [fig_comparison] and [fig_sample_prob_space], respectively. (figure [fig_comparison]: we see that the two methods yield, for all practical purposes, the same probability distributions for the components of the prpv. figure [fig_sample_prob_space]: with the exception of the slightly overpopulated corners, we get a fairly uniform distribution of points in the probability space.) in this article we discussed thoroughly three methods for generating pseudo-random discrete probability distributions. we showed that the iid method is not a suitable choice for the problem studied here and identified some difficulties for the numerical implementation of the trigonometric method. the fact that a direct application of both the normalization and trigonometric methods shall generate biased probability vectors was emphasized. the shuffling of the pseudo-random probability vector components was then shown to solve this problem, at the cost of the generation of additional pseudo-random numbers for each prpv. it is worthwhile recalling that pure quantum states in $\mathbb{C}^{d}$ can be written in terms of the computational basis $\{|j\rangle\}$ as follows: $|\psi\rangle = \sum_{j=1}^{d} c_j |j\rangle$, where $c_j \in \mathbb{C}$ and $\sum_{j=1}^{d} |c_j|^2 = 1$. the normalization of $|\psi\rangle$ implies that the set $\{|c_j|^2\}$ is a probability distribution. thus the results reported in this article are seen to have a rather direct application for the generation of _unbiased pseudo-random state vectors_. we observe however that the content presented in this article can be useful not only for the generation of pseudo-random quantum states in quantum information science, but also for stochastic numerical simulations in other areas of science. an interesting problem for future investigations regards the possibility of decreasing the number of prn, and thus the computer time, required for generating an unbiased prpv. this work was supported by the brazilian funding agencies: conselho nacional de desenvolvimento científico e tecnológico (cnpq) and instituto nacional de ciência e tecnologia de informação quântica (inct-iq). we thank the group of quantum information and emergent phenomena and the group of condensed matter theory at universidade federal de santa maria for stimulating discussions. we also thank the referee for his/her constructive comments. j. maziero, r. auccaise, l. c. céleri, d. o. soares-pinto, e. r. deazevedo, t. j. bonagamba, r. s. sarthour, i. s. oliveira, r. m. serra, quantum discord in nuclear magnetic resonance systems at room temperature, braz. j. phys. 43, 86 (2013). m. wahl, m. leifgen, m. berlin, t. röhlicke, h.-j. rahn, o. benson, an ultrafast quantum random number generator with provably bounded output bias based on photon arrival time measurements, appl. phys. lett. 98, 171105 (2011).
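for concreteness, here is a python sketch of the two generators with the shuffling fix discussed in this article. the chained sin²/cos² form below is our guess at a trigonometric parametrization of the probability simplex; the exact form of eq. ([eq:trig_param]) is not reproduced here.

```python
import numpy as np
rng = np.random.default_rng()

def prpv_normalization(d):
    """normalization method plus random shuffling to remove the bias."""
    p, left = np.empty(d), 1.0
    for j in range(d - 1):
        p[j] = rng.uniform(0.0, left)  # p_j drawn in [0, 1 - sum of previous]
        left -= p[j]
    p[-1] = left                       # normalization fixes the last component
    rng.shuffle(p)                     # shuffling removes the ordering bias
    return p

def prpv_trigonometric(d):
    """a trigonometric parametrization (assumed form), then shuffling."""
    theta = rng.uniform(0.0, np.pi / 2, size=d - 1)
    p, prod = np.empty(d), 1.0
    for j in range(d - 1):
        p[j] = prod * np.cos(theta[j]) ** 2
        prod *= np.sin(theta[j]) ** 2  # telescoping product keeps the sum at 1
    p[-1] = prod
    rng.shuffle(p)
    return p

# sanity check: non-negative components summing to one, and sample means
# of all components tending to 1/d once the vectors are shuffled
sample = np.array([prpv_normalization(4) for _ in range(20000)])
print(sample.min() >= 0, np.allclose(sample.sum(axis=1), 1.0))
print(sample.mean(axis=0))  # all four entries close to 0.25
```

note that the shuffle symmetrizes the components, so each mean tends to $1/d$ regardless of the ordering bias of the raw draws.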
the generation of pseudo - random discrete probability distributions is of paramount importance for a wide range of stochastic simulations spanning from monte carlo methods to the random sampling of quantum states for investigations in quantum information science . in spite of its significance , a thorough exposition of such a procedure is lacking in the literature . in this article we present relevant details concerning the numerical implementation and applicability of what we call the iid , normalization , and trigonometric methods for generating an unbiased probability vector . an immediate application of these results regarding the generation of pseudo - random pure quantum states is also described .
do events have already been simulated by a variety of models , ranging from simple conceptual ones to earth system models of intermediate complexity ( emics ) .conceptual models are most suitable to perform large numbers of long - term investigations because they require very little computational cost .however , they are often based on ad - hoc assumptions and only consider processes in a highly simplified way .in addition to that , the number of adjustable parameters is typically large compared to the degrees of freedom in those models .this implies that seemingly good results can often be obtained merely by excessive tuning .nevertheless , conceptual models can provide important help for the interpretation of complex climatic processes .the gap between conceptual models and the most comprehensive general circulation models ( gcms ) , which are not yet applicable for millennial - scale simulations because of their large computational cost , is bridged by emics .emics include most of the processes described in comprehensive models ( in a more reduced form ) , and interactions between different components of the earth system ( atmosphere , hydrosphere , cryosphere , biosphere , etc . )are simulated .the number of degrees of freedom typically exceeds the number of adjustable parameters by orders of magnitude . sincemany emics are fast enough for studies on the multi - millennial time scale , they are the most adequate tools for the simulation of do events .the simple conceptual model which we use here is an extended version of the model described by ( in the supplementary material of that publication ) . herewe use the model to demonstrate and analyse two apparently counterintuitive resonance phenomena ( _ stochastic resonance _ and _ ghost resonance _ ) that can exist in a large class of highly nonlinear systems .due to the complexity of many of those systems it is often impossible to precisely identify the reasons for the occurrence of these resonance phenomena .our conceptual model , in contrast , has a very clear forcing - response relation as well as a very low computational cost and thus provides a powerful tool to explore these phenomena and to test their robustness .furthermore , we describe and discuss the applicability of the model for improved statistical analyses ( i.e. monte - carlo simulations ) on the regularity of do events . in the following the key assumptions of the conceptual model are first discussed . in the supplementary materialwe then compare the model performance under a number of systematic forcing scenarios with the performance of a more comprehensive model ( the emic climber-2 ) , compare supplementary information file and supplementary figs . 1 - 6 . in the framework of the conceptual model we finally demonstrate and interpret two hypotheses that were previously suggested in order to explain the recurrence time of do events , and we discuss how these hypotheses could be testedour conceptual model is based on three key assumptions : 1 .do events represent repeated transitions between two different climate states , corresponding to warm and cold conditions in the north atlantic region .these transitions are rapid compared to the characteristic life - time of the two climate states ( i.e. in first order approximation they occur instantaneously ) and take place each time a certain threshold is crossed .3 . 
with every transition between the two states the threshold overshoots and afterwards approaches equilibrium following a millennial - scale relaxation process .this implies that the conditions for a switch between both states ameliorate with increasing duration of the cold and warm intervals .our three assumptions are supported by paleoclimatic evidence and/or by simulations with a climate model : 1 . since long , do events have been regarded as repeated oscillations between two different climate states . it has been suggested that these states are linked with different modes of operation of the thc .this seminal interpretation has since then influenced numerous studies and is now generally accepted .indirect data indicate that the glacial thc indeed switched between different modes of operation which , according to their occurrence during cold and warm intervals in the north atlantic , were labelled stadial and interstadial modes . a third mode named heinrich mode ( because of its presence during the so - called heinrich events ) is not relevant here .high - resolution paleoclimatic data show that transitions from cold conditions in the north atlantic region to warm ones often happened very quickly , i.e. on the decadal - scale or even faster .the opposite transitions were slower , i.e. on the century - scale , but nevertheless still faster that the characteristic life - time of the cold and warm intervals ( which is on the centennial to multi - millennial time scale , compare fig .the abruptness of the shifts from cold conditions to warm ones has commonly been interpreted as evidence for the existence of a critical threshold in the climate system that needs to be crossed in order to trigger a shift between stadial and interstadial conditions .such a threshold could be provided by the thc ( more precisely , by the process of deep - water formation ) : when warm and salty surface water from lower latitudes cools on its way to the north atlantic , its density increases .if the density increase is large enough ( i.e. if the surface gets denser than the deeper ocean water ) , surface water starts to sink .otherwise , surface water can freeze instead of sinking .the onset of deep - water formation can thus hinder sea - ice formation and facilitate sea - ice melting ( due to the vertical heat transfer between the surface and the deeper ocean ) .a switch between two fundamentally different modes of deep - water formation can thus dramatically change sea ice cover and can cause large - scale climate shifts .such nonlinear , threshold - like transitions between different modes of deep - water formation are at present considered as the most likely explanation for do events .3 . the time - evolution of greenland temperature during the warm phase of do events has a characteristic saw - tooth shape ( fig .[ fig : figure1 ] ) .highest temperatures typically occur during the first decades of the events .these temperature maxima are followed by a gradual cooling trend over centuries / millennia , before the system returns to cold conditions at the end of the events .this asymmetry supports the idea that the system overshoots in some way during the abrupt warming at the beginning of the events and that the subsequent cooling trend represents a millennial - scale relaxation towards a new equilibrium .we note that the time - evolution of greenland temperature provides no clear evidence for an overshooting during the opposite transitions ( i.e. 
from the warm state back to the cold one ) .this , however , is not necessarily in contradiction to our assumption : this lack of an overshooting in the temperature fields does not necessarily mean that the ocean - atmosphere system did not overshoot , since greenland temperature evolution in the stadial state might have been dominated by factors other than the thc ( respectively its stability ) , e.g. by greenland ice accumulation , which would mask the signature of the thc in the ice core data .+ we will show later ( in sect . 3.3 . )that the assumption of an overshooting in the stability of the system is in fact strengthened by the analysis of model results obtained with the coupled model climber-2 . in that model the overshooting results from the dynamics of the transitions between the two climate states : in the stadial state deep convection occurs south of the greenland - scotland ridge ( i.e. at about 50 ) . in the interstadial state, however , deep convection takes place north of the ridge ( i.e. at about 65 ) .the onset of deep convection north of the greenland - scotland ridge , which releases accumulated energy to the atmosphere ( i.e. heat that is stored mainly in the deep ocean ) , in first place starts do events in the model .this heat release leads to a reduction of sea ice , which in turn further enhances sea surface densities between 50 and 65 ( e.g. by increased local evaporation and reduced sea ice transport into that area ) .as a result deep convection also starts between 50 and 65 , and much more heat can be released to the atmosphere . without a further response of the thcthe system would return quickly ( within years or decades , i.e. with the convective time scale ) to its original state . in climber-2 , however , the changes in deep convection trigger a northward extension and also an intensification of the ocean circulation ( i.e. an overshooting of the atlantic meridional overturning circulation ; compare ) , which maintains the interstadial climate state since it is accompanied by an increase in the salinity and heat flux to the new deep convection area ( at about 65 ) . in response to the overshooting of the overturning circulation , the system relaxes slowly ( within about 1000 years , i.e. with the advective time scale ) towards the stadial state .we note that the advective time scale corresponds to the millennial relaxation time in our conceptual model .the model climber-2 also supports the validity of our overshooting assumption during the opposite transition ( from the warm state back to the cold one ) , as we will show in sect .we would like to stress that our interpretation of the processes during do events is , of course , not necessarily true since we can only speculate that the underlying mechanism of the events is correctly captured by climber-2 .we implement the above assumptions in the following way ( compare fig .[ fig : figure2 ] ) : first we define a discrete index s(t ) that indicates the state of the system at time t ( in years ) . since we postulate the existence of two states , s can only take two values ( s=1 : warm state , s=0 : cold state ) .we further define a threshold function t(t ) that describes the stability of the system at time t ( i.e. 
the stability of the current model state). dynamics of our conceptual model. shown is the time evolution of the model, in response to a forcing that is large enough to trigger switches between both model states. top: forcing f (black) and threshold function t (red). bottom: model state s (grey; s=0 corresponds to the cold state, s=1 to the warm one) and state variable s (green). at time $t_1$ the forcing falls below the threshold function and a shift from the cold state into the warm one is triggered. with this transition, the threshold function switches to a non-equilibrium value (representing an overshooting of the system) and afterwards approaches equilibrium following a millennial-scale relaxation. at time $t_0$ the forcing exceeds the threshold function, and a transition from the warm state back into the cold one is triggered. with this transition, the threshold function switches to another non-equilibrium value and approaches equilibrium following another millennial-scale relaxation, until the forcing again falls below the threshold function and the next switch into the warm state is triggered. note that the state variable s is chosen to be identical to the threshold function t. for convenience, discontinuities in t and s are eliminated by linear interpolation. t, s and f are normalised in the figure. second, we define rules for the time evolution of the threshold function t. when the system shifts its state, we assume a discontinuity in the threshold function: with the switch from the warm state to the cold one (at time $t_0$ in fig. [fig:figure2]) t takes the value $A_0$. likewise, with the switch from the cold state into the warm one (at time $t_1$ in fig. [fig:figure2]) t takes the value $A_1$. as long as the system does not change its state, the evolution of t is assumed to be given by a relaxation process, $dT/dt = -\left(T(t) - B_s\right)/\tau_s$ (s labels the current model state, $\tau_s$ denotes the relaxation time in that state, and $B_s$ is a state-dependent constant that labels the equilibrium value of t in each model state). these assumptions result in the following expression for the threshold function t: $T(t) = B_s + (A_s - B_s)\, e^{-(t - t_s)/\tau_s}$. note that in the above expression the index s again denotes the current state of the model (i.e. s=0 stands for the cold state and s=1 for the warm one), $t_0$ labels the time of the last switch from the warm state into the cold one, and $t_1$ indicates the time of the last switch from the cold state into the warm one, so that $t_s$ is the time of the last switch into the current state. third, we assume that transitions from one state to the other are triggered each time a given forcing function f(t) crosses the threshold function t. more precisely, we assume that when the system is in the cold state ($s[t] = 0$) and the forcing falls below the threshold value ($f[t+1] < T[t+1]$), the system switches into the warm state ($s[t+1] = 1$); that shift represents the start of a do event. likewise, a switch from the warm state back into the cold one ($s[t+1] = 0$) is triggered when the system is in the warm state ($s[t] = 1$) and the forcing is larger than the threshold value ($f[t+1] > T[t+1]$). that shift represents the termination of a do event. if none of these conditions is fulfilled, the system remains in its present state (i.e. $s[t+1] = s[t]$). to simplify the comparison of the model output with paleoclimatic records we further define a state variable s, which represents anomalies in greenland temperature during do events. for simplicity we assume that the state variable is equal to the threshold function, $s(t) = T(t)$ (i.e.
we assume that greenland temperature evolution during do events is closely related to the current state of the thc, respectively to its stability). we stress that this assumption is of course highly simplified, because greenland temperature is certainly not only influenced by the thc but also by other processes such as changes in ice accumulation during a do oscillation. however, this assumption is not crucial for the dynamics of our model, since the timing of the switches between both model states is solely determined by the relation between the forcing function f and the threshold function t. this means that even if we included a more realistic relation between t and s, the timing of the simulated climate shifts would be unchanged and the model dynamics would thus essentially be invariant. note that $t_0$ and $t_1$ are not adjustable; they rather represent internal time markers. thus, six adjustable parameters exist in our model as described here, namely $A_0$, $A_1$, $B_0$, $B_1$, $\tau_0$ and $\tau_1$. our choice for these parameters is shown in table [tab:table1]. with these parameter values the system is bistable (i.e. no transition is ever triggered in the absence of any forcing, since $B_0 < 0$ and $B_1 > 0$) and almost symmetric. that means that the average duration of the simulated warm and cold intervals is almost equal. when compared with greenland paleotemperature records this situation most likely corresponds to the time interval between about 27000 and 45000 years before present, during which the duration of the cold and warm intervals in do oscillations was also comparable (fig. 1). the model can, however, also represent an unstable system (for $B_0 > 0$ and $B_1 < 0$) or a mono-stable system, in which the stable state is either the warm one (for $B_0 > 0$ and $B_1 > 0$) or the cold one (for $B_0 < 0$ and $B_1 < 0$); when compared to the ice core data this situation is closer to the time interval between 15000 and 27000 years before present, since during that time the system was preferably in its cold state and the forcing apparently crossed the threshold only infrequently and during short periods of time. table [tab:table1], chosen parameter values: $A_0 = -27$, $A_1 = 27$, $B_0 = -9.7$, $B_1 = 11.2$, $\tau_0 = 1200$ years, $\tau_1 = 800$ years. in order to test our conceptual model we compare its performance under a number of systematic forcing scenarios with the performance of the far more comprehensive model climber-2 (a short description of that model is given in the appendix; a detailed description exists in the literature). analogous to earlier work, we investigate the response of both models to a forcing that consists of two century-scale sinusoidal cycles. in the conceptual model, the forcing is implemented as the forcing function f. in the emic, the forcing is added to the surface freshwater flux in the latitudinal belt 50-70°n, following earlier studies. this anomaly changes the vertical density gradient in the ocean and can thus trigger do events. switches from the cold state into the warm one are excited by sufficiently large (order of magnitude: a few centimetres per year in the surface freshwater flux into the relevant area of the north atlantic) negative freshwater anomalies (i.e. by positive surface density anomalies that are strong enough to trigger buoyancy [deep] convection), and the opposite switches are triggered by sufficiently large positive freshwater anomalies (i.e. by negative surface density anomalies that are strong enough to stop buoyancy [deep] convection). this justifies our choice for the logical relations that govern the dynamics of the transitions in the conceptual model (i.e.
$f < T$ as the condition for the switch from the cold state to the warm one, $f > T$ for the opposite switch). a detailed comparison between both model outputs is presented in the supplementary material. we here only summarise the main results: we find a general agreement between both models, which is robust when the forcing parameters are varied over some range (supplementary figs. 1-6). the conceptual model reproduces the existence of three different regimes (_cold_, _warm_, _oscillatory_) in the output of the emic and also their approximate position in the forcing parameter space. by construction, only the nonlinear component in the response of the emic to the forcing is reproduced by the conceptual model (this component represents the saw-tooth shape of do events). a second, more linear component is not included in the conceptual model (this component represents small-amplitude temperature anomalies which are superimposed on the saw-tooth shaped events in the emic). in particular, the conceptual model very well reproduces the timing of the onset of do events in the emic. the fact that our conceptual model, despite its simplicity, agrees in so many aspects with the much more detailed model climber-2 suggests that it indeed captures the key features in the dynamics of do events in that model. we would like to stress that the output of the emic indeed supports our assumption of an overshooting in the stability of the system during the transitions between both climate states: when driven by a periodic forcing (with a period of 1470 years), the emic can show periodic oscillations during which it remains in either of its states for more than one forcing period (i.e. for considerably more than 1470 years, compare the supplementary figures). this implies that (at least in the emic) the conditions for a return to the opposite state indeed ameliorate with increased duration of the cold or warm intervals. if the thresholds in the model were constant (or gradually increasing with increasing duration of the simulated cold/warm intervals), in contrast, the duration of the cold and warm intervals during the simulated oscillation could never be longer than 1470 years: if a periodic forcing does not cross a constant (or gradually increasing) threshold within its first period, it never crosses the threshold, due to the periodicity of the forcing. strongly nonlinear systems can show complex and apparently counterintuitive resonance phenomena that cannot occur in simple linear systems. in this section we use our conceptual model to demonstrate and to discuss two of these phenomena, i.e. stochastic resonance (sr) and ghost resonance (gr). since the explanation of the 1470-year cycle (and in fact even its significance) is still an open question, we further discuss how future tests could distinguish between the proposed mechanisms. stochastic resonance. the input consists of: 1. a sub-threshold sinusoidal signal with a period of 1470 years and an amplitude of 4.5 msv (about 40 percent of the threshold value b above which do events occur in the model), 2. a random gaussian-distributed signal with white noise power signature (standard deviation = 8 msv) and a cutoff frequency of 1/(50 years). the cutoff is used since no damping exists in the model and it thus shows an unrealistically large sensitivity to high-frequency (i.e. decadal-scale or faster) forcing.
a : total input ( black ) ,periodic input component ( grey ) , model output ( green ) .dashed lines are spaced by 1470 years .b : relative frequency to obtain a spacing of 1470 years % ( triangles ) respectively % ( squares ) between successive events , as a function of the noise level . , width=321 ] in linear systems that are driven by a periodic input , the existence of noise generally reduces the regularity of the output ( e.g. the coherence between the input and the output ) .this is not necessarily the case in nonlinear systems : excitable or bistable nonlinear systems with a threshold and with noise , which are driven by a sinusoidal sub - threshold input , can show maximum coherence between the input and the output for an intermediate noise level , for which the leading output frequency of the system is close to the input frequency .this phenomenon is called stochastic resonance ( sr ) .sr has been suggested to explain the characteristic timing of do events , i.e. the apparent tendency of the events to occur with a spacing of about 1470 years or integer multiples thereof .it has further been demonstrated that do events in the model climber-2 can be subject to sr .here we apply our conceptual model to reproduce these results and to reanalyse the underlying mechanism .we use an input that is composed of : ( i ) a sinusoidal signal with a period of 1470 years , ( ii ) additional white noise .figures [ fig : figure3 ] and [ fig : figure4 ] show that for a suitable noise level the model can indeed show do events with a preferred spacing of about 1470 years or integer multiples thereof .the reason for this pattern in the output is easily understandable in the context of the model dynamics : do events in the model are triggered by pronounced minima of the total input ( total input = periodic signal plus noise ) .these minima generally cluster around the minima of the sinusoidal signal , and the start of the simulated events thus has a tendency to coincide with minima of the sinusoidal signal ( fig .[ fig : figure3]a ) .some minima of the sinusoidal signal , however , are not able to trigger an event , because the magnitude of the noise around these minima is too small so that the threshold function is not reached by the total input .consequently , a cycle is sometimes missed , and the spacing of successive events can change from about 1470 years to multiples thereof . unlike the model climber-2 ( which has a complex relationship between the input and the output and also a large computational cost ) our conceptual model can be used for a detailed investigation of the sr , e.g. because the dynamics of the model is simple and precisely known and because probability measures ( such as waiting time distributions ) can be explicitly computed .in fact , the resonant pattern in the conceptual model ( fig .[ fig : figure3 ] ) is due to two time - scale matching conditions : the noise level is such that the average waiting time between successive noise - induced transitions is comparable to _ half _ of the period of the periodic forcing , and also comparable to the relaxation times and of the threshold function ( compare fig . [fig : figure4]b ) .this situation is different from the usual sr , in which thresholds ( or potentials ) are constant in time ( apart from the influence of the periodic input ) . in the usual sr , only one time - scale matching condition exists , namely the one that the average waiting time between successive noise - induced transitions ( i.e. 
the inverse of the so - called kramers rate ) is comparable to _ half _ of the period of the periodic forcing . in order to investigate the implications of the second condition we simulate histograms for four different scenarios in the conceptual model ( fig .[ fig : figure4 ] ) : 1 .noise - only input , constant threshold ( fig .[ fig : figure4]a ) ; 2 .noise - only input , overshooting threshold ( fig .[ fig : figure4]b ) ; 3 .noise plus periodic input , constant threshold ( fig .[ fig : figure4]c ) ; 4 . noise plus periodic input ,overshooting threshold ( fig .[ fig : figure4]d ) .we note that 3. corresponds to the usual sr , while 4 . describes our _ overshooting stochastic resonance_.as can be seen from the histograms , the existence of the millennial - scale relaxation process leads to a synchronisation in the sense that the waiting times between successive events are confined within a much smaller time interval ( about 1000 - 4500 years with the overshooting , compared to about 0 - 10000 years without overshooting ) .this confinement is plausible since the transition probability between both model states strongly depends on the magnitude of the threshold , which declines with increasing duration of the cold or warm intervals : when the standard deviation of the noise level is chosen such that the average waiting time between successive noise - induced transitions is comparable to the relaxation times and , as in fig .[ fig : figure4 ] , the overshooting relaxation strongly reduces the transition probability for waiting times much smaller than the relaxation time ( since the corresponding values of the threshold function are large ) and increases the transition probability for waiting times of the order of the relaxation time or larger ( since the corresponding values of the threshold function are considerably smaller ) . the probability to find an only century - scale spacing between successive events is thus small , because the corresponding transition probabilities are small . on the other hand , the probability to find a multi - millennial spacing is also small , because the states are already depopulated before ( i.e. the probability to obtain lifetimes considerably larger than the relaxation time is almost zero ) .this explains why the possible values for the spacing between successive do events are restricted to a much smaller range than in the usual sr ( i.e. in the case with constant thresholds ) .this synchronisation effect is indeed not unique to the conceptual model : the output of the coupled model climber-2 shows a similar pattern ( with possible waiting times between successive do events of e.g. about 1500 - 5000 years or about 1000 - 3000 years , depending on the noise level ; compare fig .4a - d in the publication of ) .this similarity is of course not surprising , since the conceptual model is apparently able to mimic the events in the emic and since an overshooting in the stability of both states clearly also exists in climber-2 ( compare sect .we note that in the gisp2 ice core data , do events in the time interval 27000 - 45000 years before present ( which , as discussed in sect .3.2 , is the best analogue to the `` background climate state '' in our conceptional model , since the duration of the cold and warm intervals in the data is comparable in that interval ) have spacings of about 1000 - 3000 years ( compare fig . 1 ) and were reported to cluster around values of either about 1470 years or about 2940 years . 
because the sr mechanism could explain such a pattern ( compare fig . [fig : figure4]d ) it has originally been proposed .however , this mechanism requires a sinusoidal input with a period of about 1470 years , which has so far not been detected . in linear systems which are driven by a periodic input ,the frequencies of the output are always identical to the input frequencies .this is not necessarily the case in nonlinear systems .for example , nonlinear excitable ( or bistable ) systems that are driven by an input with frequencies corresponding to harmonics of a fundamental frequency ( which itself is not present in the input ) can show a resonance at the fundamental frequency , i.e. at a frequency with zero input power .this phenomenon , which was first described in order to explain the pitch of complex sounds and later observed experimentally e.g. in laser systems , is called ghost resonance ( gr ) .gr and sr can indeed occur in the same class of systems , e.g. in bistable or excitable systems with thresholds. however , unlike sr , gr requires a periodic driver with more than one frequency .although many geophysical systems might be subject to gr ( since the relevant processes often have thresholds ) , the occurrence of this mechanism has so far not expressly been demonstrated in geoscience . ghost resonance .top : forcing ( black ) and model response ( green ) . middle : amplitude spectrum of the forcing .bottom : amplitude spectrum of the model response .we use two sinusoidal forcing cycles , with frequencies of 7/(1470 years ) and 17/(1470 years ) , respectively , and with equal amplitudes .these two cycles coincide every 1470 years and create peaks of particularly pronounced magnitude , spaced by exactly that period .thus , despite the fact that there is no spectral power at the corresponding frequency ( see middle panel ) , the forcing repeatedly crosses the threshold at those intervals .consequently , the response of the conceptual model ( i.e. the time evolution of the state variable s ) shows strictly repetitive do events with a period of 1470 years ( as indicated by the dashed lines , which are spaced by 1470 years ) . despite the lack of a 1470-year spectral component in the forcing, the output shows a very prominent peak at the corresponding frequency ., width=321 ] here we discuss a hypothesis that was recently proposed to explain the 1/(1470 years ) leading frequency of do events . the underlying mechanism of the hypothesis is in fact the first reported manifestation of gr in a geophysical model system . according to that hypothesis the 1/(1470 years )frequency of do events could represent the nonlinear climate response to forcing cycles with frequencies close to harmonics of 1/(1470 years ) .our conceptual model illustrates the plausibility of this mechanism : we use a bi - sinusoidal input with frequencies of 7/(1470 years ) and 17/(1470 years ) , i.e. with frequencies corresponding to the 7th and the 17th harmonic of a 1/(1470 years ) fundamental frequency , and with equal amplitudes . a spectral component corresponding to the fundamental frequencyis not explicitly present in the input .since the two sinusoidal cycles correspond to harmonics of the missing fundamental , the input signal repeats with a period of 1470 years . 
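to make the mechanism concrete, here is a compact python sketch of the conceptual model defined above, driven by the two harmonics. the forcing amplitudes and the sign convention (negative anomalies trigger the warm state) are illustrative assumptions; adding a gaussian noise term to f yields the stochastic variants (sr, gsr) discussed in this section.

```python
import numpy as np

# parameters of the conceptual model (table [tab:table1], as
# reconstructed above; state 0 = cold, state 1 = warm)
A = {0: -27.0, 1: 27.0}      # threshold right after a switch (msv)
B = {0: -9.7, 1: 11.2}       # equilibrium threshold values (msv)
TAU = {0: 1200.0, 1: 800.0}  # relaxation times (years)

def run(f, dt=1.0):
    """integrate the threshold model: switch cold->warm when f < t,
    warm->cold when f > t; t relaxes from a_s towards b_s after a switch."""
    s, t_switch = 0, 0.0
    state, thresh = np.zeros(len(f), dtype=int), np.empty(len(f))
    for i in range(len(f)):
        time = i * dt
        t_now = B[s] + (A[s] - B[s]) * np.exp(-(time - t_switch) / TAU[s])
        if (s == 0 and f[i] < t_now) or (s == 1 and f[i] > t_now):
            s, t_switch = 1 - s, time   # overshooting: jump to a_s
            t_now = A[s]
        state[i], thresh[i] = s, t_now
    return state, thresh

# ghost-resonance forcing: 7th and 17th harmonics of 1/(1470 yr); the
# fundamental itself carries no input power (amplitudes assumed)
yrs = np.arange(0.0, 30000.0)
f = -9.0 * (np.sin(2 * np.pi * 7 * yrs / 1470)
            + np.sin(2 * np.pi * 17 * yrs / 1470))
state, _ = run(f)
onsets = yrs[1:][(state[1:] == 1) & (state[:-1] == 0)]
# with these assumed amplitudes the in-phase peaks dominate, so the
# printed spacings should cluster near 1470 years or integer multiples
print(np.diff(onsets))
```

the essential design choice is that only the largest peaks of the combined signal, produced by constructive interference every 1470 years, cross the relaxing threshold, which is exactly the ghost-resonance argument made above.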
for an appropriate range of input amplitudes , the output of the conceptual model shows periodic do events with a period of 1470 years ( fig . [ fig : figure5 ] ) . unlike the input , the model output exhibits a pronounced frequency of 1/(1470 years ) , corresponding to the leading frequency of do events and to the fundamental frequency that is absent in the input . this apparent paradox is explained by the fact that the two driving cycles enter in phase every 1470 years , thus creating pronounced peaks spaced by that period . because the magnitude of these peaks results from constructive interference of the two driving cycles , it is indeed a robust feature that a threshold process can be much more sensitive to these peaks than to the two original driving cycles . the main strength of the gr mechanism is that unlike the sr mechanism it can relate the leading frequency of do events to a main driver of natural ( non - anthropogenic ) climate variability , since proxies of solar activity suggest the existence of solar cycles with periods close to 1470/7 ( = 210 ) years ( de vries or suess cycle ) and 1470/17 ( 86.5 ) years ( gleissberg cycle ) . so far , however , no empirical evidence for this mechanism has been found , nor has it been shown yet that changes in solar activity over the solar cycles are sufficiently strong to actually trigger do events . in order to investigate the stability of this mechanism we further add a stochastic component ( i.e. white noise ) to the forcing . in this case the events are of course not strictly periodic anymore . similar to the sr case , an optimal ( i.e. intermediate ) noise level exists for which the waiting time distribution of the simulated events exhibits a maximum at a value of 1470 years , corresponding to the period of the fundamental frequency of the two input cycles ( fig . [ fig : figure6 ] ) . in contrast to the sr case , in which a fairly simple waiting time distribution with a few broad maxima of century - scale width is obtained ( compare fig . [ fig : figure4]d ) , we now find a much more complex pattern with a large number of very sharp lines of only decadal - scale width . since the waiting time distributions of both mechanisms are considerably different , it could at least in principle be possible to distinguish between both mechanisms by analysing the distribution of the observed do events . in practice , however , this approach is complicated by the fact that only about ten events appear to be sufficiently well dated for this kind of analysis , and even their spacing has an uncertainty of about 50 years , which is already of the same order as the width of the peaks in fig . [ fig : figure6]b . we note that the mechanism that is described in fig . [ fig : figure6 ] is known as ghost stochastic resonance ( gsr ) , and its occurrence and robustness have already been reported before in other systems with thresholds and multiple states of operation . at least in our system , however , this mechanism is even more complex than the other two types of resonance ( sr , gr ) . it is beyond the scope of this paper to describe the gsr mechanism in more detail . ghost stochastic resonance . the input consists of : 1 . two sinusoidal forcing cycles with frequencies of 7/(1470 years ) and 17/(1470 years ) , respectively , and with an amplitude of 8 msv , 2 . a random gaussian - distributed signal with white noise power signature and a cutoff frequency of 1/(50 years ) , as in fig .
[ fig : figure3 ] . in a , the relative frequency to obtain a spacing of 1470 years ( triangles ) respectively ( squares ) between successive events is shown as a function of the noise level . b shows the distribution of the spacing t between successive events ( standard deviation of the noise : 5.5 msv ) . the most direct way to test which of the proposed mechanisms , if any , provides the correct explanation for the timing of do events would be to reconstruct decadal - scale density anomalies in the north atlantic in connection with the events . this is , however , not possible , since even the most highly resolved oceanic records do not allow one to reconstruct the variability of the glacial ocean on that time scale . thus , only indirect tests can be performed . the identification of the postulated 1/(1470 years ) forcing frequency , which has so far not been detected , would certainly give further support for the sr mechanism . in order to support the gr mechanism , in turn , it remains crucial to demonstrate that century - scale solar irradiance variations are indeed of sufficiently large amplitude to trigger repeated transitions ( with a preferred time scale of about 1470 years ) between the two glacial climate states . this could be tested with climate models . an elegant and simple test is to make use of the observation that do events in the earth system model climber-2 represent the nonlinear response to the forcing , and that an additional ( and much smaller ) linear response is superimposed on the events . in the absence of any threshold crossing ( e.g. in the holocene , during which do events did not occur ) the response to the forcing , in contrast , does not show a strong nonlinear component . this suggests that holocene climate archives from the north atlantic region might be able to reveal what triggered glacial do events . this approach has two major advantages : first , more reliable ( e.g. better resolved and dated ) records are available to solve this issue . second , linear analysis methods can be used for that purpose , e.g. linear correlations . in the context of the gr mechanism , for example , the existence of a pronounced correlation between holocene climate indices from the north atlantic and solar activity proxies ( reconstructed e.g. from variations in precisely dated tree rings ) would be expected . up to now at least one study exists that supports this prediction of a linear relationship between century - scale solar forcing and north atlantic climate variability throughout the holocene : proxies of drift ice anomalies in the north atlantic show a persistent correlation and a statistically significant coherency with `` rapid ( 100- to 200-year ) , conspicuously large - amplitude variations '' in solar activity proxies , in accordance with the proposed gr mechanism . the most challenging test , however , is the direct analysis of the glacial climate records . we are convinced that one of the main difficulties in this approach is the high degree of nonlinearity of the events , which according to our interpretation has so far not been adequately addressed in many previous studies . for example , several attempts have already been made in order to investigate the 1470-year cycle by means of linear spectral analysis methods , and significance levels have commonly been calculated by assuming a red noise background .
to us this assumption seems to be oversimplified , since the system responds at a preferred time scale even when driven by white noise ( compare fig . [ fig : figure4]b ) . we thus suspect that the significance levels obtained by this method are unrealistically high . we further think that the reported lack of a clear phase relation between solar activity proxies and do events can not rule out the idea that solar forcing synchronised do events , since in the case of an additional stochastic forcing component ( i.e. in the gsr case ) the events are triggered by the _ combined _ effect of solar forcing and noise . thus , the observed lack could also imply that only some of the events were in the first place triggered by the sun , whereas others were caused mainly by random variability ( e.g. by noise ) . a new and promising approach , which is based on a monte - carlo method , has recently been proposed in order to test the significance of the glacial 1470-year climate cycle : the authors define a certain measure in order to distinguish between different hypotheses for the timing of do events , and they explicitly calculate the value of this measure for the series of events observed in the ice core . they then compare the calculated value with the values obtained by several hypothetic processes , e.g. by a random process ( for which assumptions concerning the probability distribution of the recurrence times of the events have to be made ) . significance levels are obtained from the ( numerically estimated ) probability distributions of the measure as generated by the considered process . although we do not share their conclusions ( because we think that more adequate measures can be chosen , which give considerably different results ) we think that this approach is elegant because significance levels are not calculated based on linear theories . the method is thus also applicable to highly nonlinear time series ( a minimal sketch of such a test is given below ) . a major hurdle in this method is that for each considered process the probability distribution of the waiting times , which is unknown for almost all processes , somehow has to be specified . for example , ditlevsen et al . use a simple mathematical ( i.e. an exponential ) distribution in order to mimic random do events . in order to improve their novel approach , some method is thus needed to calculate waiting time distributions in response to any possible input . comprehensive models are not applicable , due to their large computational cost . our conceptual model , in contrast , is well designed for that purpose because it combines the ability to mimic the complex nonlinearity of do events as described by an accepted earth system model with the extremely low computational cost of a very simple ( zeroth order ) model . we thus think that our work is an important step in order to develop improved statistical analysis methods which are able to cope with the extreme nonlinearity of do events . [ sec : end ] we here discussed do events in the framework of a very simple conceptual model that is based on three key assumptions , namely ( i ) the existence of two different climate states , ( ii ) a threshold process and ( iii ) an overshooting in the stability of the system at the start and the end of the events , which is followed by a millennial - scale relaxation . these assumptions are in accordance with paleoclimatic records and/or with simulations performed with climber-2 , a more complex earth system model .
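as a concrete illustration of the monte - carlo test described above , the following minimal sketch compares an observed event series against surrogates drawn from a random process with exponential waiting times ; the event times and the rayleigh - type phase - clustering measure are placeholder choices of ours , not those of the cited study :

import numpy as np

rng = np.random.default_rng(1)
period = 1470.0

# placeholder event times ( years before present ) ; a real analysis would use the dated do events
events = np.array([11600., 14600., 23400., 27800., 28800., 32300., 33700., 35500., 38200., 40100.])

def rayleigh(times, period):
    # mean resultant length of the event times taken modulo the test period ;
    # values near 1 indicate clustering on that period
    phase = 2 * np.pi * (times % period) / period
    return np.hypot(np.cos(phase).mean(), np.sin(phase).mean())

observed = rayleigh(events, period)

# null hypothesis : a memoryless random process with exponential waiting times
n, mean_wait = len(events), np.diff(np.sort(events)).mean()
stats = np.array([rayleigh(events.min() + np.cumsum(rng.exponential(mean_wait, n)), period)
                  for _ in range(10_000)])
p_value = np.mean(stats >= observed)   # fraction of surrogates at least as regular as the data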
in a couple of systematic tests we showed ( in the supplementary material ) that despite its simplicity , our model reproduces the do events simulated with climber-2 very well , whose dynamics is based on an ( albeit reduced ) description of the underlying hydro-/thermodynamic processes . the correspondence between both models thus strengthens our interpretation that the conceptual model can successfully mimic key features of do events , and that these can be regarded as a new type of non - equilibrium oscillation ( i.e. as an _ overshooting relaxation oscillation _ ) between two states of a nonlinear system with a threshold . although we discussed our model dynamics in the context of the ( thermohaline ) ocean circulation , our model does not explicitly assume that do events are linked with changes in the ocean circulation : threshold behaviour and multiple states exist in many compartments of the climate system ( not only in the ocean , but e.g. also in the atmosphere and in the cryosphere ) . our model thus can not rule out a leading role of non - oceanic processes in do oscillations . the millennial time scale of the events ( which is represented in our model by the assumption of a millennial - scale relaxation ) , however , corresponds to the characteristic time scale of the thermohaline circulation and thus points to a key role of the ocean in do oscillations . the main strength of our model is its simplicity : due to the obvious relationship between forcing and response , the model can demonstrate why even a simple bistable ( or excitable ) system with a threshold can respond in a complex way to a simple forcing , which consists of only one or two sinusoidal inputs and noise . we applied our model to discuss two highly nonlinear and apparently counterintuitive resonance mechanisms , namely stochastic resonance and ghost resonance . in doing so we reported a new form of stochastic resonance ( i.e. an _ overshooting stochastic resonance _ ) , in which the overshooting of the system leads to a further synchronisation effect compared to the usual stochastic resonance . our study provides the first explicitly reported manifestation of ghost resonance in a geophysical ( model ) system . since threshold behaviour and multiple equilibria are not unique to do events but exist in many geophysical systems , we would indeed expect that ghost resonances could be inherent in many geosystems and not just in our model . in addition to its applicability to demonstrate and interpret nonlinear resonance mechanisms , and to test their stability , we further illustrated the ability of our conceptual model to simulate probability measures ( e.g.
waiting time distributions , which are required in order to test the significance and the cause of the proposed glacial 1470-year climate cycle by means of monte - carlo simulations ) .because it combines the ability to reproduce essential aspects of do events with the extremely low computational cost of a conceptual model ( which is up to 10 times lower than in the earth system model climber-2 ) , we think that our model represents an important advance in order to develop adequate nonlinear methods for improved statistical analyses on do events .the earth system model climber-2 , which we used for our analysis , has dynamic components of the atmosphere , of the oceans ( including sea ice ) and the vegetation .dynamic ice sheets were not included in our study .climber-2 is a global model with coarse resolution : for the atmosphere and the continents the spatial resolution is 10 in latitude , and 7 sectors are considered in longitude . the ocean is zonally averaged with a latitudinal resolution of 2.5 for the three large ocean basins .a detailed description of the model is given in the publication of .do events in the model represent abrupt switches between two different climate states ( _ stadial _ [ i.e. cold ] and _ interstadial _ [ i.e. warm ] ) , corresponding to two different modes of the thc : in the interstadial mode , north atlantic deep water ( nadw ) forms at about 65 and much of the north atlantic is ice - free . in the stadial mode ,nadw forms at about 50 and a considerably larger area of the north atlantic is ice - covered .we note that for the climatic background conditions of the last glacial maximum ( lgm ) only the stadial mode is stable in the model whereas the interstadial mode is excitable but unstable . moreover , the stability of both modes depends on the actual climate state ( e.g. on the configuration of the laurentide ice sheet and on the freshwater input into the north atlantic ) , and the stability properties of the system change when the background conditions are modified ( more precisely , the system can be bistable or mono - stable ) .transitions between both modes can be triggered by anomalies in the density field of the north atlantic , for example by variations in the surface freshwater flux ( since the density of ocean water increases with increasing salinity ) . in our studywe thus implement the forcing as a perturbation in the freshwater flux ( in the latitudinal belt 50 - 70 ) : we start the model with the climatic background conditions of the last glacial maximum ( lgm ) .following earlier simulations we then add a small constant offset of 17 ( ) to the freshwater flux .for this climate state ( which we label perturbed lgm ) the thc is in fact bistable and do events can be triggered more easily than for lgm conditions .this perturbed lgm state gives us the background conditions for the model simulations as presented in this paper .the authors thank r. calov , a. mangini , s. rahmstorf , k. roth and a. witt for discussion , and p. ditlevsen ( in particular for observing the difference between the usual stochastic resonance and our overshooting stochastic resonance ) and two anonymous reviewers for helpful comments .h. braun was funded by deutsche forschungsgemeinschaft , dfg project number ma 821/33 .alley , r. b. and clark , p. u. , the deglaciation of the northern hemisphere : a global perspective , ann .earth planet ., 27 , 149182 , 1999 .alley , r. b. , clark , p. u. , keigwin , l. d. , and webb , r. s. 
, making sense of millennial - scale climate change , in : mechanisms of global climate change at millennial time scales , edited by : clark , p. u. , webb , r. s. , and keigwin , l. d. , pp .385394 , agu , washington , dc , 1999 .alley , r. b. , anandakrishnan , s. , and jung , p. , stochastic resonance in the north atlantic , paleoceanography , 16 , 190198 , 2001a .alley , r. b. , anandakrishnan , s. , jung , p. , and clough , a. , stochastic resonance in the north atlantic : further insights , in : the oceans and rapid climate change : past , present and future , edited by seidov , d. , maslin , m. , haupt , b. j. , pp . 5768 , agu , washington , dc , 2001b .alley , r. b. , marotzke , j. , nordhaus , w. d. , overpeck , j. t. , peteet , d. m. , pielke jr ., r. a. , pierrehumbert , r. t. , rhines , p. b. , stocker , t. f. , talley , l. d. , and wallace , j. m. , abrupt climate change , science , 299 , 20052010 , 2003 .benzi , r. , parisi , g. , sutera , a. , and vulpiani , a. , stochastic resonance in climatic change .tellus , 34 , 1016 , 1982 .bond , g. , kromer , b. , beer , j. , muscheler , r. , evans , m. n. , showers , w. , hoffmann , s. , lotti - bond , r. , hajdas , i. , and bonani , g. , persistent solar influence on north atlantic climate during the holocene .science , 294 , 21302136 , 2001 .braun , h. , christl , m. , rahmstorf , s. , ganopolski , a. , mangini , a. , kubatzki , c. , roth , k. , and kromer , b. , possible solar origin of the 1,470-year glacial climate cycle demonstrated in a coupled model , nature , 2005 , 438 , 208211 .broecker , w. s. , peteet , d. m. , and rind , d. , does the ocean - atmosphere system have more than one stable mode of operation ?nature , 315 , 2126 , 1985 .broecker , w. s. , bond , g. , klas , m. , bonani , g. , and wolfli , w. , a salt oscillator in the glacial atlantic ? 1 .the concept , paleoceanography , 5 , 469477 , 1990 .buldu , j. m. , chialvo , d. r. , mirasso , c. r. , torrent , m. c. , and garcia - ojalvo , j. , ghost resonance in a semiconductor laser with optical feedback , europhysics letters , 64 , 178184 , 2003 .centurelli , r. , musacchioa , s. , pasmanterc , r. a. , and vulpiani , a. , resemblances and differences in mechanisms of noise - induced resonance , physica a , 360 , 261273 , 2006 .chialvo , d. r. , calvo , o. , gonzalez , d. l. , piro , o. , and savino , g. v. , subharmonic stochastic synchronization and resonance in neuronal systems , physical review e , 65 , 050902 , 2002 .chialvo , d. r. , how we hear what is not there : a neuronal mechanism for the missing fundamental illusion , chaos , 13 , 1226 - 1230 , 2003 .claussen , m. , mysak , l. a. , weaver , a. j. , crucifix , m. , fichefet , t. , loutre , m .- f . , weber , s. l. , alcamo , j. , alexeev , v. a. , berger , a. , calov , r. , ganopolski , a. , goose , h. , lohmann , g. , lunkeit , f. , mokhov , i. i. , petoukhov , v. , stone , p. , wang , z. , earth system models of intermediate complexity : closing the gap in the spectrum of climate system models .dyn . , 18 , 579586 , 2002 .clemens , s. c. , millennial - band climate spectrum resolved and linked to centennial - scale solar cycles .rev . , 24 , 521531 , 2005 .dansgaard , w. , clausen , h. b. , gundestrup , n. , hammer , c. u. , johnsen , s. f. , kristinsdottir , p. m. , and reeh , n. , a new greenland deep ice core , science , 218 , 12731277 , 1982 .ditlevsen , p. d. , kristensen , m. s. , andersen , k. k. , the recurrence time of dansgaard - oeschger events and possible causes , j. clim . 
, 18 , 25942603 , 2005 .ditlevsen , p. d. , andersen , k. k. , svensson , a. , the do - climate events are probably noise induced : statistical investigation of the claimed 1470 years cycle , clim .past , 3 , 129134 , 2007 .gammaitoni , l. , hnggi , p. , jung , p. , and marchesoni , f. , stochastic resonance ., 70 , 223288 , 1998 .ganopolski , a. and rahmstorf , s. , simulation of rapid glacial climate changes in a coupled climate model , nature 409 , 153158 , 2001 .ganopolski , a. and rahmstorf , s. , abrupt glacial climate changes due to stochastic resonance , phys ., 88 , 038501 , 2002 .grootes , p. m. , stuiver , m. , white , j. w. c. , johnsen , s. , and jouzel , j. , comparison of oxygen isotope records from the gisp2 and grip greenland ice cores , nature , 366 , 552554 , 1993 .grootes , p. m. and stuiver , m. , oxygen 18/16 variability in greenland snow and ice with 10 to 10-year time resolution .j. geophys .102 , 2645526470 , 1997 .keeling , c. d. and whorf , t. p. , the 1,800-year oceanic tidal cycle : a possible cause of rapid climate change .usa , 97 , 38143819 , 2000 .leuenberger , m. , schwander , j. , and johnsen , s. , 16 rapid temperature variations in central greenland 70,000 years ago , science , 286 , 934937 , 1999 .muscheler , r. and beer , j. , solar forced dansgaard / oeschger events ?, 33 , l20706 , 2006 .oeschger , h. , beer , j. , siegenthaler , u. , stauffer , b. , dansgaard , w. , langway , jr ., c. c. , late glacial climate history from ice cores , in : climate processes and climate sensitivity , edited by hansen , j. e. and takahashi , t. , pp . 299306 , agu , washington , dc , 1984 .peristykh , a. n. and damon , p. e. , persistence of the gleissberg 88-yr solar cycle over the last 12,000 years : evidence from cosmogenic isotopes .j. geophys .res . , 108 ; 1003 , 2003 .petoukhov , v. , ganopolski , a. , brovkin , v. , claussen , m. , eliseev , a. , kubatzki , c. , and rahmstorf , s. , climber-2 : a climate system model of intermediate complexity .part i : model description and performance for present climate , clim .16 , 117 , 2000 .rahmstorf , s. , ocean circulation and climate during the past 120,000 years , nature , 419 , 207214 , 2002 .rahmstorf , s. , timing of abrupt climate change : a precise clock , geophys ., 30 , 15101514 , 2003 .rahmstorf , s. and alley , r. b. , stochastic resonance in glacial climate , eos , 83 , 129135 , 2002 .rial , j. a. , abrupt climate change : chaos and order at orbital and millennial scales , glob .change , 41 , 95109 , 2004 .sakai , k. and peltier , w. r. , dansgaard - oeschger oscillations in a coupled atmosphere - ocean climate model , j. clim ., 10 , 949970 , 1997 .sarnthein , m. , winn , k. , jung , s. j. a. , duplessy , j. c. , labeyrie , l. , erlenkeuser , h. , and ganssen , g. , changes in east atlantic deepwater circulation over the last 30,000 years : eight time slice reconstructions , paleoceanography , 9 , 209267 , 1994 .schulz , m. , on the 1470-year pacing of dansgaard - oeschger warm events .paleoceanography , 17 , 10141022 , 2002 .schulz , m. , paul , a. , and timmermann , a. , relaxation oscillators in concert : a framework for climate change at millennial timescales during the late pleistocene .lett . , 29 , 21932197 , 2002 .severinghaus , j. p. and brook , e. , abrupt climate change at the end of the last glacial period inferred from trapped air in polar ice , science , 286 , 930934 , 1999 .stuiver , m. and braziunas , t. f. 
, sun , ocean , climate and atmospheric co2 : an evaluation of causal and spectral relationships , holocene , 3 , 289305 , 1993 .taylor , k. c. , mayewski , p. a. , alley , r. b. , brook , e. j. , gow , a. j. , grootes , p. m. , meese , d. a. , saltzman , e. s. , severinghaus , j. p. , twickler , e. s. , white , j. w. c. , whitlow , s. , and zielinski , g. a. , the holocene - younger dryas transition recorded at summit , greenland , science , 278 , 825827 , 1997 .timmermann , a. , gildor , h. , schulz , m. , and tziperman , e. , coherent resonant millennial - scale climate oscillations triggered by massive meltwater pulses . j. clim . , 16 , 25692585 , 2003 .van kreveld , s. a. , sarnthein , m. , erlenkeuser , h. , grootes , p. , jung , s. , nadeau , m. j. , pflaumann , u. , and voelker , a. , potential links between surging ice sheets , circulation changes and the dansgaard - oeschger cycles in the irminger sea , 60 - 18 kyr , paleoceanography , 15 , 425442 , 2000 .wagner , g. , beer , j. , masarik , j. , muscheler , r. , kubik , p. w. , mende , w. , laj , c. , raisbeck , g. m. , and yiou , f. , presence of the solar de vries cycle ( years ) during the last ice age , geophys ., 28 , 303306 , 2001 .yiou , p. , fuhrer , k. , meeker , l. d. , jouzel , j. , johnsen , s. , and mayewski , p. a. , paleoclimatic variability inferred from the spectral analysis of greenland and antarctic ice - core data . j. geophys ., 102 , 2644126454 , 1997 .
here we use a very simple conceptual model in an attempt to reduce essential parts of the complex nonlinearity of abrupt glacial climate changes ( the so - called dansgaard - oeschger events ) to a few simple principles , namely ( i ) the existence of two different climate states , ( ii ) a threshold process and ( iii ) an overshooting in the stability of the system at the start and the end of the events , which is followed by a millennial - scale relaxation . by comparison with a so - called earth system model of intermediate complexity ( climber-2 ) , in which the events represent oscillations between two climate states corresponding to two fundamentally different modes of deep - water formation in the north atlantic , we demonstrate that the conceptual model captures fundamental aspects of the nonlinearity of the events in that model . we use the conceptual model in order to reproduce and reanalyse nonlinear resonance mechanisms that were already suggested in order to explain the characteristic time scale of dansgaard - oeschger events . in doing so we identify a new form of stochastic resonance ( i.e. an _ overshooting stochastic resonance _ ) and provide the first explicitly reported manifestation of _ ghost resonance _ in a geosystem , i.e. of a mechanism which could be relevant for other systems with thresholds and with multiple states of operation . our work enables us to explicitly simulate realistic probability measures of dansgaard - oeschger events ( e.g. waiting time distributions , which are a prerequisite for statistical analyses on the regularity of the events by means of monte - carlo simulations ) . we thus think that our study is an important advance in order to develop more adequate methods to test the statistical significance and the origin of the proposed glacial 1470-year climate cycle . [ sec : intro ] do events as seen in the gisp2 ice - core from greenland . the figure shows greenland temperature changes over the interval between 10000 and about 40000 years before present . do events ( 0 - 10 ) manifest themselves as saw - tooth shaped warm intervals . dashed lines are spaced by 1470 years . time series of north atlantic atmospheric / sea surface temperatures during the last ice age reveal the existence of repeated large - scale warming events , the so - called dansgaard - oeschger ( do ) events . in climate records from the north atlantic region the events have a characteristic saw - tooth shape ( fig . [ fig : figure1 ] ) : they typically start with a warming by up to 10 - 15 k over only a few years / decades . temperatures remain high for centuries / millennia until they drop back to pre - event values over a century or so . a prominent feature of do events is their millennial time scale : during marine isotope stages ( mis ) 2 and 3 , successive events in the gisp2 ice core were reported to be often spaced by about 1470 years or multiples thereof , compare fig . [ fig : figure1 ] . a leading spectral peak corresponding to the 1470-year period was found , and this spectral component was reported to be significant at least over a certain time interval . we note , however , that the statistical significance of this pattern is still under debate , in particular because of the lack of adequate nonlinear analysis methods . it was proposed that do events represent rapid transitions between two fundamentally different modes of the thermohaline ocean circulation ( thc ) , most likely corresponding to different modes of deep - water formation .
the origin of these transitions is also under debate : in principle they could have been caused by factors from outside of the earth system , but they could also represent internal oscillations of the climate system . several nonlinear resonance mechanisms have been suggested in order to explain the characteristic timing of do events , including coherence resonance and stochastic resonance .
gas - particle flows are commonly encountered in the chemical and energy conversion industries in the form of fluidized beds , risers , and other pneumatic conveying units . the flow behaviors of gas - particle systems are commonly analyzed using continuum models that treat the particle and fluid phases as interpenetrating continua . finite - volume simulations of these continuum , or ` two - fluid ' models , reveal that the scale of the gas - particle flow structures that are observed is a strong function of the grid size used in the simulation , with grid size independent results being obtained when the ratio between the grid size and particle size is of the order of ten . cell sizes of ten particle diameters are not computationally affordable when performing two - fluid model simulations of large fluidized bed reactors that are often tens of meters tall and many meters in cross - section . as a result , coarse - grid simulations of large fluidized beds are common practice . however , these coarse - grid simulations effectively neglect the presence of fine - scale gas - particle flow structures that are manifested by the two - fluid model . to enable accurate coarse - grid simulation of the two - fluid model equations one must account for the effect of the fine - scale gas - particle flow structures in the coarse - grid simulation . this approach is embodied by the recent development of a _ filtered _ two - fluid model approach for non - reacting gas - particle flows where effective closures for the _ meso - scale _ fluid - particle drag force , particle phase stress , and particle phase viscosity were extracted from fine - grid simulations of the two - fluid model equations . while _ filtered _ two - fluid models for non - reacting monodisperse gas - particle flows have been developed , verified against fine - grid simulations , and validated against experimental observations , the extension of these models to reacting and polydisperse gas - particle flows remains an open problem . fine - grid two - fluid model simulations of reacting gas - particle flow have been shown to yield quantitative agreement with experimental observations of conversion in solid - catalyzed ozone decomposition processes when grid - independent solutions are obtained . however , due to the computational expense of performing grid - independent simulations , coarse numerical resolution is often used to simulate reacting gas - particle flows in the continuum model framework .
by not accounting for the fine - scale structure these coarse - grid simulations have been shown to over - predict the conversion of ozone in a bubbling fluidized bed . in addition , a few studies have also demonstrated that small - scale gas - particle flow structures that form in devices like fluidized beds are responsible for the wide variability in mass transfer models that is present within the fluidization literature . the primary objective of this work is to demonstrate the need for _ filtered _ models for reacting gas - particle flows by performing fine - grid simulations of a first - order , isothermal , solid - catalyzed reaction in a periodic domain and filtering the results . it will be shown that the presence of particle clustering in fine - grid simulations leads to an effective reaction rate that is substantially smaller than what would be predicted via coarse - grid simulation . we define the ratio of the reaction rate in the fine - grid simulation to that in the coarse - grid simulation as the _ cluster - scale _ effectiveness factor . it will be shown that this _ cluster - scale _ effectiveness factor is a strong function of the dimensionless filter size and other model parameters . here the dimensionless filter size is formed as delta_f g / u_t^2 , where delta_f is the filter size , g is the magnitude of the gravitational acceleration vector , and u_t is the terminal settling velocity of an isolated particle . finally , it is shown that grid resolutions finer than those used to deduce _ filtered _ models for non - reacting gas - particle flows are necessary in order to obtain grid - independent _ filtered _ closures for reacting gas - particle flows . this observation is supported by a recent work in the literature . due to this grid size dependence we propose a _ filtered _ reaction rate model based upon extrapolated effectiveness factor data obtained from fine - grid simulations . the two - fluid model consists of balance equations for gas- and particle - phase mass and momentum with an additional transport equation governing the evolution of the particle - phase granular temperature . the evolution equations that constitute this eulerian framework are presented in table 1 , in addition to the constitutive relations that close the balance equations . the constitutive relations given in table 1 were chosen to be consistent with earlier studies in filtered model development within our research group .
here we note that while we have restricted our attention to a specific set of constitutive relations , it has been shown that the clustering and bubbling phenomena manifested by the two - fluid model framework are robust to changes in constitutive relations . however , we do expect that the constitutive relations comprising the microscopic two - fluid models will have quantitative effects on the filtered models that arise from fine - grid simulations . therefore , while we have employed a certain set of constitutive relations in this study , filtered models can be developed from fine - grid , two - fluid model simulations using a different set of constitutive relations as well . the effective diffusion coefficient and effective gas - phase viscosity appearing in eqs . ( [ eq : cont_species ] ) and ( [ eq : gas_stress ] ) will differ from the corresponding molecular properties as they also account for the enhanced scalar and momentum transport that occurs due to additional _ pseudo - turbulent _ transport occurring as a result of microscopic interactions between individual particles and the fluid . however , two - fluid model simulations have revealed that the solutions manifested by the two - fluid model are insensitive to the value of the effective gas - phase viscosity . therefore , we set it equal to the molecular viscosity . in addition , the effective diffusion coefficient is kept on the order of the bulk molecular diffusivity , which is of a similar magnitude for many different gas species . here we investigate the role of clustering on a model , isothermal , solid - catalyzed , first - order chemical reaction whose rate is proportional to the product of the particle volume fraction and the mass fraction of the reacting species in the gas phase . for first - order reaction kinetics , the effective reaction rate constant based on the bulk concentration is related to the intrinsic reaction rate constant through the _ intra - particle _ effectiveness in the absence of mass transport limitations , the thiele modulus , the volume to surface area ratio , and the biot number for mass transport ; the latter is built from the convective mass transport coefficient , the particle radius , and the _ intra - particle _ diffusivity ( a standard reconstruction of these relations is sketched below ) . it is important to note that the effective reaction rate constants used in two - fluid model simulations incorporate the combined effect of mass transport resistance and _ intra - particle _ effects as well . in this study all gas - solid flow computations are performed for fcc catalyst particles fluidized by air ; the physical properties for solid and fluid phases are given in table 2 . the granular energy balance of table 1 takes the form

\frac{3}{2}\left[\frac{\partial}{\partial t}\left(\rho_s \phi_s T\right)+\nabla\cdot\left(\rho_s \phi_s T \boldsymbol{v}\right)\right] = -\nabla\cdot \boldsymbol{q} - \boldsymbol{\sigma}_s : \nabla \boldsymbol{v} - j_\text{vis} - j_\text{coll} + \gamma_\text{slip} \label{eq : gran_temp}

( table 2 : physical properties of the solid and fluid phases . )
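for reference , the kinetic relations invoked above presumably take the standard textbook forms for a first - order reaction in a spherical catalyst pellet ( with volume to surface area ratio v_p / s_p = r/3 ) ; the following is a plausible reconstruction rather than a verbatim restatement of the original equations :

\[
\eta_i=\frac{1}{\phi}\left(\frac{1}{\tanh 3\phi}-\frac{1}{3\phi}\right),\qquad
\phi=\frac{V_p}{S_p}\sqrt{\frac{k}{D_i}},\qquad
\mathrm{Bi}_m=\frac{k_c\,(V_p/S_p)}{D_i},\qquad
k_{\mathrm{eff}}=\frac{\eta_i\,k}{1+\eta_i\,\phi^{2}/\mathrm{Bi}_m}.
\]

in the limit of a vanishing thiele modulus and a large biot number this reduces to k_eff = k , i.e. the intrinsic kinetics are recovered when both transport resistances vanish .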
here we follow the filtering procedure given in earlier _ filtered _ model studies . the particle volume fraction and gas - species mass fractions obtained from fine - grid simulations can be represented as fields of the location and time variables y and t , respectively . the _ filtered _ particle volume fraction is given as the convolution of the microscopic volume fraction field with a weight function g , where x is the spatial location of the filter center . in this work an overbar will be used to indicate filtered variables . we require that the weight function integrate to unity , and in this study we use a top hat filter for g . the _ filtered _ gas - species mass fraction , particle phase velocity , and fluid phase velocity are defined analogously . upon filtering the microscopic species balance equation for component i we obtain the _ filtered _ species balance equation ( [ eq : filt_chi_eq ] ) , in which we invoke the definition of filtered variables given in eqs . ( [ eq : filt_chi])([eq : filt_vel_g ] ) . due to the fact that the reaction we are considering in this work is isothermal and produces no volume change , the _ filtered _ momentum balance equations remain unchanged from those derived in earlier work , and as such , they will not be presented here for the sake of brevity . in eq . ( [ eq : filt_chi_eq ] ) , there are several terms that appear as _ filtered _ products of microscopic variables that must be constituted to solve the _ filtered _ two - fluid model equations . to facilitate filtered model simulations we must decompose the products of microscopic variables into mean and fluctuating parts . here we define fluctuating variables as the differences between the microscopic fields and their filtered counterparts , with primed symbols denoting species mass fraction , gas velocity , and volume fraction fluctuations , respectively . inserting the decomposition of mean and fluctuating parts into eq . ( [ eq : filt_chi_eq ] ) , we obtain the filtered species balance equation ( [ eq : filt_chi_eq2 ] ) . the first term appearing on the right hand side of eq . ( [ eq : filt_chi_eq2 ] ) can be modeled as a dispersive term given by a gradient - diffusion constitutive equation with a _ meso - scale _ dispersion coefficient . dispersion coefficients defined in this way have been presented in the research literature .
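as an illustration of the filtering step , a short python sketch follows ( our own ; the field values , the filter width , and the gas - phase weighting of the filtered mass fraction are assumptions for demonstration purposes ) :

import numpy as np
from scipy.ndimage import uniform_filter

rng = np.random.default_rng(2)
# placeholder fine - grid snapshot on a doubly periodic grid ; in practice these
# arrays come from the two - fluid model simulation output
phi_s = np.clip(rng.normal(0.05, 0.03, (256, 256)), 0.0, 0.6)    # particle volume fraction
y_gas = rng.uniform(0.0, 1.0, (256, 256))                        # species mass fraction

def tophat(field, n):
    # top - hat ( box ) filter of width n cells with periodic boundaries
    return uniform_filter(field, size=n, mode="wrap")

n_filt = 17                                  # filter width in grid cells ( assumed value )
phi_bar = tophat(phi_s, n_filt)              # filtered particle volume fraction
eps_g = 1.0 - phi_s                          # gas volume fraction
y_bar = tophat(eps_g * y_gas, n_filt) / tophat(eps_g, n_filt)    # gas - phase weighted filtered mass fraction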
the final term on the right hand side of eq . ( [ eq : filt_chi_eq2 ] ) represents a _ filtered _ reaction rate that must also be constituted in terms of filtered variables alone . a straightforward method for constituting this reaction rate is to define a _ cluster - scale _ effectiveness factor , given ( for the first - order rate law considered here ) as the ratio of the filtered reaction rate to the rate evaluated with filtered variables , i.e. \overline{\phi_s y} / ( \bar{\phi}_s \, \bar{y} ) . however , in this paper we present _ filtered _ models and associated constitutive relations for the _ non - locally _ corrected _ cluster - scale _ effectiveness factor , defined by removing the dominant resolved - gradient contributions from the filtered product . the approach of removing _ non - local _ effects in the determination of filtered quantities was recently advanced in the literature , and we apply the same procedure here . the _ non - locally _ corrected effectiveness factor is constructed by removing the dominant gradient terms that contribute to the evaluation of the filtered product , which is done to allow the construction of _ filtered _ constitutive relations in terms of local _ filtered _ variables alone . we present a derivation of the _ non - locally _ corrected effectiveness factor in [ sec : nonl_deriv ] . it will be shown below that both the corrected and the uncorrected factors depend on volume fraction in a qualitatively similar way , but differ quantitatively . all data presented in future sections and the resulting _ filtered _ models will pertain to the _ non - locally _ corrected factor , thus necessitating one to track the gradients in filtered species mass fraction and volume fraction to enable the use of the resulting filtered models in coarse - grid simulations of reacting gas - particle flow . dimensional analysis of the parameters governing gas - particle flow suggests that the _ cluster - scale _ effectiveness factor is a function of five independent dimensionless quantities . one can also readily define additional dimensionless parameters , like the coefficient of restitution , using dimensional analysis . however , earlier works have shown that _ filtered _ quantities do not display any significant dependence on such parameters , and as such , they were not included in the dimensional analysis presented here . moreover , we found that two of the five groups had a negligible effect on the _ cluster - scale _ effectiveness factor , while the _ meso - scale _ thiele modulus , the schmidt number , and the dimensionless filter size substantially alter the _ cluster - scale _ effectiveness factor . therefore , all work presented below will only interrogate the effect of the _ meso - scale _ thiele modulus , the schmidt number , and the dimensionless filter size . here the term _ meso - scale _ thiele modulus is used to distinguish it from the thiele modulus based on _ intra - particle _ diffusivity given in eq . ( [ eq : micro_eta ] ) . when performing periodic domain simulations of reacting gas - particle flows , one is faced with the following dilemma : due to the imposition of periodic boundary conditions , the concentration of any reactant undergoing an irreversible reaction within the periodic domain will decay to zero .
in order to facilitate gas - particle flow simulations in periodic domains , an alternate simulation strategy must be developed . for the special case of a first - order reaction , it can readily be shown that one can track the evolution of the rescaled mass fraction kappa , defined as the ratio of the species mass fraction to its domain - averaged value , rather than the mass fraction itself , and relate the _ filtered _ value of kappa directly to the _ cluster - scale _ effectiveness factor . the benefit of tracking the evolution of kappa lies in the fact that it has a non - zero statistically steady value , even though the mass fraction itself will decay to zero . consider taking the species balance equation given by eq . ( [ eq : cont_species ] ) with the reaction rate expression given by eq . ( [ eq : rate_law ] ) and averaging it over the entire periodic domain , resulting in an evolution equation for the domain - averaged mass fraction in which v is the volume of the periodic domain . if we now define the variable kappa and plug it into the species balance equation given by eq . ( [ eq : cont_species ] ) we obtain the evolution equation ( [ eq : kappa_eq ] ) . due to the fact that the reaction rate expressions in this problem are first order , we are able to track the evolution of kappa without considering the time progression of the mass fraction or its domain average . the presence of non - zero source and sink terms on the right hand side of eq . ( [ eq : kappa_eq ] ) forces the statistically steady value of kappa to be non - zero regardless of the reaction rate constant used . therefore , in all work presented in the following sections the evolution of kappa is solved , and the _ cluster - scale _ effectiveness factor is redefined in terms of kappa as \overline{\phi_s \kappa} / ( \bar{\phi}_s \, \bar{\kappa} ) . all simulation results presented in this work were generated using the multiphase flow with interphase exchanges ( mfix ) software that relies on a variable time step , staggered grid , finite - volume method for the solution of the two - fluid model equations . the iterative solution to the two - fluid model equations is obtained using the semi - implicit method for pressure linked equations , or the simple algorithm . due to the strong coupling between solid and fluid phases that arises due to the fluid - particle drag force , the partial elimination algorithm of spalding was used to effectively decouple the solution of the solid and fluid phase balance equations . in addition , a second order superbee discretization is employed for the convective terms that are present in the conservation equations for continuity , momentum and granular energy transport to limit the effects of numerical diffusion . to illustrate the behavior of the _ cluster - scale _ effectiveness factor as a function of filtered particle volume fraction , simulation results are presented that were obtained using grid sizes that are sixteen times as large as the particle diameter . this grid resolution was chosen to follow the simulations employed in an earlier filtered - model study . _ filtered _ variables were obtained from these fine - grid simulations by moving filters of different sizes throughout the periodic domain . due to the statistical homogeneity of periodic domains we are able to collect thousands of samples and average them regardless of spatial position .
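the extraction of the _ cluster - scale _ effectiveness factor from a snapshot can then be sketched as follows ( again our own illustration ; the exact weighting used in the definitions above may differ , and the fields here stand in for fine - grid simulation output ) :

import numpy as np
from scipy.ndimage import uniform_filter

def tophat(field, n):
    return uniform_filter(field, size=n, mode="wrap")

def cluster_scale_eta(phi_s, kappa, n):
    # ratio of the filtered reaction rate to the rate evaluated with filtered
    # variables , for a first - order rate proportional to phi_s * kappa
    phi_bar, kappa_bar = tophat(phi_s, n), tophat(kappa, n)
    return tophat(phi_s * kappa, n) / np.maximum(phi_bar * kappa_bar, 1e-12)

def bin_by_phi(phi_bar, eta, nbins=50, phi_hi=0.6):
    # every point of the statistically homogeneous periodic domain is one sample ;
    # samples are binned by filtered volume fraction and averaged within each bin
    edges = np.linspace(0.0, phi_hi, nbins + 1)
    idx = np.digitize(phi_bar.ravel(), edges) - 1
    eta_flat = eta.ravel()
    means = [eta_flat[idx == i].mean() if np.any(idx == i) else np.nan for i in range(nbins)]
    return np.array(means), edges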
moreover , by _ filtering _ these fine - grid simulation results for various domain - averaged volume fractions , the filtered - volume - fraction dependence of the _ cluster - scale _ effectiveness factor can be ascertained . this _ filtering _ procedure follows that outlined in earlier work . the characteristic volume fraction dependence of the corrected and uncorrected effectiveness factors is presented in figure [ fig : eta_phi_dep ] . it is clear from figure [ fig : eta_phi_dep ] that both factors are strong functions of the filtered particle volume fraction , retaining an inverted bell shape that approaches unity in the limits of small and large particle volume fraction . however , there are noticeable quantitative differences between the two factors in figure [ fig : eta_phi_dep ] , with the maximum deviation being near the minimum in both curves . therefore , at this resolution , the non - local correction contributes appreciably to the value of the _ cluster - scale _ effectiveness factor near the minimum in the curves given in figure [ fig : eta_phi_dep ] . while the quantitative differences between the two factors are functions of filter size and other model parameters , this example provides a scale of the difference between the two effectiveness factors . in this work we have decided to model the _ non - locally _ corrected factor because it is directly related to the small - scale fluctuations in volume fraction and species mass fraction , while the uncorrected factor is also dependent on non - local variations in the filtered fields ( see appendix ) . physical intuition for the decrease in the _ cluster - scale _ effectiveness factor from unity can be obtained by observing the characteristic clustering patterns that are observed at different domain - averaged volume fractions , which are superimposed in figure [ fig : eta_phi_dep ] . at low volume fractions there are a few small isolated clusters throughout the periodic domain , and as such , the cluster - scale effectiveness factor will begin to deviate from unity . as the volume fraction increases the frequency of these clusters increases , thus making the effective contacting between gas and particle phases poor . near the minimum value in the curve , clusters begin to span the periodic domain , and the solid phase changes from the dispersed to the continuous phase . as the volume fraction continues to increase the gas - particle flow becomes more homogeneous , and due to this homogeneity the cluster - scale effectiveness factor increases toward unity in the limit of high particle volume fraction . ( figure caption : the _ cluster - scale _ effectiveness factor versus filtered particle volume fraction , shown with snapshots of the particle volume fraction field at different domain - averaged volume fractions superimposed ; the _ meso - scale _ thiele modulus , schmidt number , and dimensionless filter size are held fixed . ) in figure [ fig : eta_curves ] ( a ) the _ cluster - scale _ effectiveness factor is shown as a function of filtered volume fraction for four different values of the _ meso - scale _ thiele modulus at fixed values of the schmidt number and the dimensionless filter size . the depth of the inverted bell shaped curve is an increasing function of the thiele modulus . the values of the _ meso - scale _ thiele modulus given in figure [ fig : eta_curves ] ( a ) are substantially smaller than unity , but there is nonetheless a marked change in the effectiveness factor from unity . this may seem peculiar , but the small values of the _ meso - scale _ thiele modulus arise as a result of the fact that the length scale used to determine it is the particle diameter . a more fitting length scale is that associated with a cluster . however , the size of a cluster emerges as a result of fine - grid two - fluid model simulations , and can not be considered as an input parameter . it is for this reason that we choose the particle diameter as the relevant length scale for the _ meso - scale _ thiele modulus rather than the length scale of a cluster .
in figure [ fig : eta_curves ] ( b ) the dependence of the _ cluster - scale _ effectiveness factor is given for four different values of the dimensionless filter size at fixed _ meso - scale _ thiele modulus and schmidt number . the departure of the effectiveness factor from unity is also an increasing function of the filter size . however , figure [ fig : eta_curves ] ( c ) shows that the change in the _ cluster - scale _ effectiveness factor from unity is a decreasing function of the schmidt number when keeping the thiele modulus and the filter size constant . while one might expect that increasing the schmidt number should increase the departure from unity , we observe a decrease in this departure due to the fact that we are demanding the thiele modulus to remain constant in figure [ fig : eta_curves ] ( c ) . in these simulations the variation of the schmidt number is achieved by varying the effective diffusivity , and in order to maintain a constant thiele modulus the effective rate constant must then be changed as well . therefore , varying the schmidt number at constant thiele modulus requires a variation in the effective rate constant . it is this coupled variation that brings about the somewhat surprising dependence of the effectiveness factor on the schmidt number . when deducing a _ filtered _ model from a fine - grid simulation , it is necessary to ensure that _ filtered _ statistics are independent of grid size . to that end , the dependence of the effectiveness factor on filtered volume fraction is presented for four different grid resolutions in figure [ fig : eta_grid_dep ] ( a ) at fixed thiele modulus , schmidt number , and filter size . the dependence on grid size is substantial , and only begins to saturate at the finest grid resolutions considered . figure [ fig : eta_grid_dep ] ( b ) shows the variation of the dimensionless _ filtered _ drag coefficient as a function of filtered volume fraction for four different grid resolutions . here the dimensionless _ filtered _ drag coefficient is defined consistently with earlier work , where the subscript indicates that the drag coefficient is inferred from the drag force and pressure fluctuation terms directed parallel to gravity . at grid sizes of sixteen particle diameters the _ filtered _ drag coefficient seems to exhibit grid independence at low to moderate volume fractions , with some grid dependence emerging at higher volume fractions . figures [ fig : eta_grid_dep ] ( a ) and ( b ) thus illustrate the sensitive grid dependence of the _ cluster - scale _ effectiveness factor when compared to other _ filtered _ quantities like the fluid - particle drag coefficient . consequently , even finer grid resolutions are required when simulating reacting gas - particle flow when compared to non - reacting systems . this observation further supports the need for the development of _ filtered _ models for accurate coarse - grid simulation of reacting systems . due to the sensitivity of the _ cluster - scale _ effectiveness factor with respect to grid size , we seek to develop a model for the _ cluster - scale _ effectiveness factor extrapolated to the limit of infinite grid resolution . in figures [ fig : rescaled_plots ] ( a ) and ( c ) , plots of the rescaled departure of the effectiveness factor from unity versus rescaled volume fraction are given for the finest two grid resolutions presented in this study . here the departure is rescaled by its maximum value , and the volume fraction by the _ filtered _ volume fraction at which that maximum occurs . from inspection of figures [ fig : rescaled_plots ] ( a ) and ( c ) it is clear that the grid dependence of the volume fraction variation can be removed by this rescaling for all volume fractions below the location of the maximum . in addition , it is demonstrated in figures [ fig : rescaled_plots ] ( b ) and ( d ) that the grid dependence of the volume fraction variation can be removed for all volume fractions above the location of the maximum by plotting the rescaled departure versus a volume fraction coordinate rescaled using the volume fraction at which the departure reaches zero .
due to the grid - independent behavior observed in figures [ fig : rescaled_plots ] ( a ) - ( d ) , a filtered model utilizing a piecewise description of the variation in the effectiveness factor with particle volume fraction about the location of the maximum departure is developed , while extrapolating the values of the maximum departure , its location , and the zero - crossing volume fraction to infinite resolution . in figures [ fig : extrap_b ] ( a ) and ( b ) the grid size dependence of the maximum departure is presented for two different values of the _ meso - scale _ thiele modulus and a variety of different filter sizes . the values of the maximum departure are clearly saturating as the grid size approaches zero . in order to provide an accurate value to use in our _ filtered _ reaction rate model , the value of the maximum departure at infinite resolution is determined via a richardson extrapolation . the locations of the maximum and of the zero crossing were observed to vary linearly with grid resolution , and a linear extrapolation was performed to ascertain their infinitely resolved estimates . in the previous section , grid independence of the _ cluster - scale _ effectiveness factor was observed by plotting against scaled volume fraction coordinates that differ depending on whether the volume fraction is greater than or less than the location of the maximum departure . motivated by this observation , we seek to model the volume fraction variation of the effectiveness factor via a piecewise function about that location . in figures [ fig : vol_fr_scaling_delta_left ] ( a ) - ( c ) plots of the rescaled departure versus rescaled volume fraction ( below the maximum ) are presented for different values of the _ meso - scale _ thiele modulus , the schmidt number , and the filter size . the shape of the curve is clearly altered by changes in two of the governing parameters , with only mild changes in the volume fraction dependence as the third is varied . in figure [ fig : vol_fr_scaling_delta_left ] ( d ) the variation is shown to collapse by plotting together all data with the same value of a single combined parameter . figures [ fig : vol_fr_scaling_delta_right ] ( a ) - ( c ) show the variation of the rescaled departure with rescaled volume fraction ( above the maximum ) for different values of the thiele modulus , the schmidt number , and the filter size . the variation with one of the three governing parameters is clear , while the dependence on the remaining two is substantially weaker . in figure [ fig : vol_fr_scaling_delta_right ] ( d ) we illustrate that this dependence for different thiele moduli , schmidt numbers , and filter sizes can be collapsed onto a single curve provided the value of the combined parameter is kept constant . from figures [ fig : vol_fr_scaling_delta_left ] ( d ) and [ fig : vol_fr_scaling_delta_right ] ( d ) it is clear that the relevant scaling parameter governing the shape of the volume fraction dependence on both sides of the maximum is a single combination of the _ meso - scale _ thiele modulus , the schmidt number , and the dimensionless filter size . piecewise functional forms are used to describe this dependence , with least - squares fit parameters that depend only on the combined parameter ; see figures [ fig : fit_params ] ( a ) and ( b ) . both model parameters are given by smooth functions of the combined parameter presented in table [ tab : table3 ] . it should be noted that the observed fluctuation of the second fit parameter about the model curve in figure [ fig : fit_params ] ( b ) can be attributed to the uncertainty in determining the zero - crossing volume fraction from the simulation results . in figure [ fig : b_dep_delta ] ( a ) the maximum departure is presented as a function of the filter size for a variety of different thiele modulus and schmidt number values . the variation of the maximum departure with the thiele modulus , the schmidt number , and the filter size is evident by inspection of figure [ fig : b_dep_delta ] ( a ) . however , by replotting all data as a function of the combined parameter alone one can collapse all values onto a single master curve in figure [ fig : b_dep_delta ] ( b ) . a curve fit for the maximum departure in terms of the combined parameter is presented in table 3 . linear extrapolation of the location of the maximum to infinite resolution reveals that it varies over a narrow range , with no discernible trend in the thiele modulus , the schmidt number , or the filter size . this variation arises as a result of the fact that the _ cluster - scale _ effectiveness factor is a weak function of volume fraction in the region around the maximum departure , and thus determining its location is subject to error .
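a richardson extrapolation of the kind used here is straightforward to sketch ( the grid sizes and departure values below are illustrative placeholders , not the simulation data ) :

import numpy as np

def richardson(h, q, order=1):
    # extrapolate a grid - dependent quantity q(h) to h -> 0 from the two finest
    # resolutions , assuming q(h) = q0 + c * h**order
    (h1, q1), (h2, q2) = sorted(zip(h, q))[:2]
    return q1 + (q1 - q2) * h1**order / (h2**order - h1**order)

# illustrative placeholder data : maximum departure of the effectiveness factor
# from unity at grid sizes of 64 , 32 , 16 and 8 particle diameters
h = np.array([64.0, 32.0, 16.0, 8.0])
b_max = np.array([0.55, 0.72, 0.81, 0.85])
print(richardson(h, b_max))        # estimate in the limit of infinite resolution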
since no discernible trend in the location of the maximum was observed , we recommend the use of the ensemble average of the different values obtained . moreover , since the effectiveness factor only varies slightly in the vicinity of the maximum departure , small errors in its location will not influence the quantitative behavior of our _ filtered _ model substantially . in addition , the extrapolated value of the zero - crossing volume fraction was found to vary over a narrow range with no systematic dependence on the thiele modulus , the schmidt number , or the filter size . as a result of this fluctuation we choose a single representative value , because this is consistent with earlier work in our group suggesting that _ filtered _ model corrections for gas - particle hydrodynamics become negligible at sufficiently high volume fractions . the _ cluster - scale _ effectiveness model developed in this work predicts that effective reaction rates observed in coarse - grid simulations will be larger than those predicted in fine - grid simulations if the effects of fine - scale structure are not accounted for via the _ cluster - scale _ effectiveness factor . therefore , in order to accurately perform continuum model simulations of reacting gas - particle flows on coarse spatial grids , _ filtered _ models for effective reaction rates are necessary . without these corrections , coarse - grid continuum model simulations will consistently over - estimate the conversion observed in a solid - catalyzed reaction ( as , for example , in the ozone decomposition studies noted in the introduction ) . therefore , in order to perform accurate coarse - grid numerical simulations of a first - order , isothermal , solid - catalyzed gas - phase reaction we suggest the use of the _ filtered _ species balance equation ( [ eq : filtered_model ] ) , where the constitutive relation for the _ cluster - scale _ effectiveness factor is given in table [ tab : table3 ] , and a constitutive relation for the effective dispersion coefficient can be inferred from earlier work . in section [ sec : prelim_study ] it was shown that the non - local correction to the effectiveness factor can contribute appreciably to the observed value . we have observed that this contribution decreases as a function of increasing grid resolution and filter size . therefore , in the limit of large filter sizes we expect that the contribution of the non - local term in eq . ( [ eq : filtered_model ] ) will be weak , and can thus be neglected . however , for intermediate filter sizes we recommend the inclusion of this non - local correction . while in this work the evolution of species mass fraction was solved in the gas phase alone , one could envision a model for a solid - catalyzed , gas - phase reaction where the species mass fraction in both gas and solid phases is tracked separately and coupled through a mass transport term between particle and fluid phases . in a model framework of this type , the effective interphase mass transfer rates will be decreased due to the presence of clustering in the gas - particle flow , and any reduction in reactant conversion that arises can be attributed to decreased mass transfer efficiency . we expect that such an observed decrease in mass transfer coefficient will be on the order of the _ cluster - scale _ effectiveness factor developed in this work . as an example , consider a mass transfer operation taking place between fcc particles and air , where the effective mass transfer coefficient is calculated using a standard fixed - bed correlation .
using one can determine the characteristic rate constant for mass transfer , where and are the surface area and volume of an fcc particle , respectively . for an fcc particle , and using the _ cluster - scale _ effectiveness factor model developed in eq . ( [ eq : b_max ] ) for a filter size of , we predict that . using this minimum value of the _ cluster - scale _ effectiveness factor , we predict a nearly 20 - fold decrease in the effective mass transfer coefficient ! indeed , decreased effective mass transfer coefficients of this order have been found in energy minimization multi - scale ( emms ) model simulations of mass transport processes in gas - particle flows . the need for the development of _ filtered _ two - fluid models for reacting gas - particle flows is demonstrated by considering a model first - order , isothermal , solid - catalyzed gas - phase reaction . it is shown that constitutive relations for the _ filtered _ reaction rate and _ filtered _ species dispersion must be postulated to close the _ filtered _ species balance equation . because the model gas - phase reaction in this work is isothermal and produces no volume change , only _ filtered _ species balance equations must be developed , without altering the existing _ filtered _ gas and solid momentum balance equations given in the earlier works of and for non - reacting , monodisperse gas - particle flows . using fine - grid continuum model simulations of reacting gas - particle flows , we extract the _ cluster - scale _ effectiveness factor , defined as the ratio of the fine - grid reaction rate to the reaction rate observed in a coarse - grid simulation . the _ cluster - scale _ effectiveness factor is observed to exhibit an inverted bell - shaped dependence on volume fraction , approaching unity in both the low and high volume fraction limits . at intermediate volume fractions , a decrease in the _ cluster - scale _ effectiveness factor from unity is observed , with the magnitude of this reduction being a strong function of the _ meso - scale _ thiele modulus and dimensionless filter size , with only a weak dependence on the schmidt number . due to the sensitivity of the _ cluster - scale _ effectiveness factor to grid size , we determine its asymptotic form by relying on a richardson extrapolation of the minimum in the cluster - scale effectiveness factor . the extrapolated values of the cluster - scale effectiveness factor are shown to collapse when plotted as a function of , with the volume fraction dependence of the cluster - scale effectiveness factor collapsing for fixed . table [ tab : table3 ] presents a curve fit of our collapsed results for use in coarse - grid simulations of reacting gas - particle flows with first - order reaction kinetics . the reduction in effective reaction rates observed in this study can be used to rationalize the over - prediction in reactant conversion that was seen in the two - fluid model simulations of . finally , we note that the filtered model developed in this work is limited to first - order reaction kinetics , and was restricted to two - dimensional periodic domain simulations . future work should extend such analyses to other reaction kinetics and three - dimensional bounded domains . however , we expect that the characteristic volume fraction , thiele modulus , filter size , and schmidt number dependence of will be qualitatively similar to the results presented here .
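as a back - of - the - envelope check of the 20 - fold figure quoted above , the short sketch below computes the characteristic rate constant for mass transfer and scales it by an assumed minimum _ cluster - scale _ effectiveness factor ; the particle diameter , mass transfer coefficient and effectiveness factor values are illustrative assumptions , since the actual numbers are elided in the extracted text .

```python
import math

# all numerical values below are illustrative assumptions
d_p = 75e-6          # FCC particle diameter [m]
k_c = 0.01           # convective mass transfer coefficient [m/s]

a_p = math.pi * d_p ** 2         # surface area of a spherical particle [m^2]
v_p = math.pi * d_p ** 3 / 6.0   # volume of a spherical particle [m^3]
k_m = k_c * a_p / v_p            # characteristic rate constant = 6 k_c / d_p [1/s]

eta_cluster = 0.05               # assumed minimum cluster-scale effectiveness factor
k_m_eff = eta_cluster * k_m      # effective rate constant seen on a coarse grid

print(f"k_m = {k_m:.1f} 1/s, k_m_eff = {k_m_eff:.1f} 1/s, "
      f"reduction = {k_m / k_m_eff:.0f}x")   # ~20x when eta ~ 0.05
```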
fluid - particle drag coefficient particle radius + fit parameter for plot of versus + fit parameter for plot of versus + + biot number for mass transport + maximum value of particle diameter + effective diffusivity of reacting species in continuum model + molecular diffusivity of reacting species + _ intra - particle _ species diffusivity + _ meso - scale _ dispersion coefficient + coefficient of restitution of the particle phase + fluid - particle drag force experienced by the fluid + gravitational acceleration vector + radial distribution function at contact + weight function for filtering + collisional dissipation of granular energy + viscous dissipation of granular energy + intrinsic reaction rate constant + effective rate constant for mass transfer + effective reaction rate constant + _ meso - scale _ effective rate constant for mass transfer + convective mass transport coefficient + + gas - phase pressure + granular energy conduction vector + rate of production of species _i _ + reynolds number for the gas - phase + surface area of a particle + rate of deformation tensor + _ meso - scale _ schmidt number + granular temperature + gas velocity + particle velocity + terminal settling velocity of an isolated particle + volume of a periodic domain + volume of a particle + generic spatial position vector + spatial position of filter center + generic spatial position vector + _ greek letters : _+ fluid - particle friction coefficient + production of granular energy through interphase slip + dimensional grid size + dimensional filter size + dimensionless filter size + gas volume fraction + + intraparticle effectiveness factor + _ cluster - scale _ effectiveness factor + _ non - locally corrected __ cluster - scale _ effectiveness factor + ratio of to its domain - averaged value + conductivity of granular energy + bulk viscosity of particle phase + molecular gas - phase shear viscosity + effective gas - phase shear viscosity + shear viscosity of particle phase + gas density + particle density + gas - phase stress tensor + particle - phase stress tensor + particle volume fraction + _ meso - scale _ thiele modulus + thiele modulus + mass fraction of gas species _the authors would like to acknowledge the financial support from exxonmobil research engineering co. and the u.s .department of energy , office of fossil energy s carbon capture simulation initiative through the national energy technology laboratory .in section [ sec : filt_two_fluid ] the cluster scale effectiveness factor is defined in two ways given by eqs .( [ eq : effect_fac])([eq : effect_fac2 ] ) . here , the effectiveness factor given in eq .( [ eq : effect_fac2 ] ) is derived relying on the definition of the effectiveness factor given in eq .( [ eq : effect_fac ] ) .let the particle phase volume fraction and species mass fraction at any point be represented as follows where is a spatial variable associated with _ filtered _ and microscopic variables , respectively , neither of which are located at the filter center . 
assuming the _ filtered _ variables can be given by smooth functions of space , the _ filtered _ values of and can be approximated via the following taylor series . the taylor expansions given in eqs . ( [ eq : phi_tayl_filt ] ) and ( [ eq : tayl_filt ] ) can be plugged into eq . ( [ eq : decomp ] ) to yield expressions for and that depend only on the location of the filter center and the microscopic spatial variable . utilizing eqs . ( [ eq : decomp])([eq : tayl_filt ] ) , the _ filtered _ product of and can be expressed as where . removing the gradient term that appears on the right hand side of eq . ( [ eq : non_l_rem ] ) provides a method to define _ filtered _ constitutive relations for that depend on local _ filtered _ quantities alone , without considering gradients in local _ filtered _ variables . this approach of removing non - local effects in the process of _ filtered _ model development has recently been advanced when considering the development of _ filtered _ fluid - particle drag models for monodisperse gas - particle flows , and this formulation is extended here to reacting gas - particle flows . combining the definitions of and given in eqs . ( [ eq : effect_fac ] ) and ( [ eq : effect_fac2 ] ) with eq . ( [ eq : non_l_rem ] ) , one can arrive at the following two expressions . therefore , provides a direct measure of the product of local fluctuations in and , while the value of is influenced by gradients in local filtered quantities . in order to interrogate the effect of local fluctuations alone , we choose to model in this study . igci , y. , pannala , s. , benyahia , s. , sundaresan , s. , 2011 . validation studies on filtered model equations for gas - particle flows in risers . industrial & engineering chemistry research , in press . kashyap , m. , gidaspow , d. , oct . 2010 . computation and measurements of mass transfer and dispersion coefficients in fluidized beds . powder technology 203 ( 1 ) , 40 - 56 . http://linkinghub.elsevier.com/retrieve/pii/S0032591010001531 . zimmermann , s. , taghipour , f. , dec . 2005 . cfd modeling of the hydrodynamics and reaction kinetics of fcc fluidized - bed reactors . industrial & engineering chemistry research 44 ( 26 ) , 9818 - 9827 .
using the kinetic - theory - based two - fluid models as a starting point , we develop _ filtered _ two - fluid models for a gas - particle flow in the presence of an isothermal , first - order , solid - catalyzed reaction of a gaseous species . as a consequence of the _ filtering _ procedure , terms describing the _ filtered _ reaction rate and _ filtered _ reactant dispersion need to be constituted in order to close the _ filtered _ species balance equation . in this work , a constitutive relation for the _ filtered _ reaction rate is developed by performing fine - grid , two - fluid model simulations of an isothermal , solid - catalyzed , first - order reaction in a periodic domain . it is observed that the _ cluster - scale _ effectiveness factor , defined as the ratio of the reaction rate observed in a fine - grid simulation to that observed in a coarse - grid simulation , can be substantially smaller than unity , and it manifests an inverted bell - shaped dependence on _ filtered _ particle volume fraction in all simulation cases . moreover , the magnitude of the deviation in the _ cluster - scale _ effectiveness factor from unity is a strong function of the _ meso - scale _ thiele modulus and dimensionless filter size . thus coarse - grid simulations of a reacting gas - particle flow will overestimate the reaction rate if the _ cluster - scale _ effectiveness factor is not accounted for . keywords : reactive flows , fluidization , fluid mechanics , two - fluid model
the spin glass theory of infinite - ranged models has inspired a generation of physicists to study many theoretically challenging and practically important problems in physics and information processing . these problems share a common feature , in that the disordered interactions among their elements cause frustration and non - ergodic behaviour . the replica method has been useful in explaining their macroscopic behaviour . at the same time , based on the microscopic descriptions of the models , the cavity method has resulted in many computationally efficient schemes . these approaches have laid the foundation for the study of many problems in complex optimisation using statistical mechanics , such as graph partitioning , travelling salesman , -satisfiability , and graph colouring . not only is the graph colouring problem among the most basic np - complete problems , but it also has direct relevance to a variety of applications in scheduling , distributed storage , content distribution and distributed computing . in the original problem , one is given a graph and a number of colours , and the task is to find a colouring solution such that any two connected vertices are assigned different colours . this is equivalent to the potts glass with nearest neighbouring interactions in statistical physics . the problem has been studied by physicists using the cavity method . for a given number of colours , a phase transition takes place when the connectivity increases , changing from a colourable to an uncolourable phase . one of the statistical physics approaches was based on the replica symmetric ( rs ) ansatz . it gave an over - estimate of the threshold connectivity of this phase transition . the one - step replica symmetry - breaking ( 1rsb ) approach takes into account the possibility that the solution space can be fragmented . besides giving an estimate of the threshold connectivity within the mathematical bounds , it correctly predicts the existence of a clustering phase below the threshold , in which the solution space spontaneously divides into an exponential number of clusters . this is called the hard colourable phase , in which local search algorithms are rendered ineffective , and is a feature shared by other constraint satisfaction problems . the sequence of phase transitions in the graph colouring problem , and their algorithmic implications , were further refined recently . these advances in spin glass theory have stimulated the development of efficient algorithms . the cavity method gave rise to equations identical to those of the belief propagation ( bp ) algorithm for graphical models . inspired by the 1rsb solution , survey propagation ( sp ) algorithms were subsequently developed to cope with situations with a fragmented solution space , and they work well even in the hard phase of the graph colouring problem . in this paper , we study a variant of the graph colouring problem , namely , the colour diversity problem .
in this problem , the aim is to maximise the number of colours within one link distance of any node . this is equivalent to the potts glass with second nearest neighbouring interactions in statistical physics , and hence is more complex than the original graph colouring problem in terms of the increased number of frustrated links . indeed , this variant of the colouring problem has been shown to be np - complete . this optimisation problem is directly related to various application areas , and in particular to the problem of distributed data storage , where files are divided into a number of segments , which are then distributed over a graph representing the network . nodes requesting a particular file collect the required number of file segments from neighbouring nodes to retrieve the original information . distributed storage is used in many real - world applications such as oceanstore . compared with the original graph colouring problem , work on the colour diversity problem has mainly focused on algorithms . belief propagation ( bp ) and walksat algorithms for solving the problem have been presented in . both algorithms revealed a transition from incomplete to complete colouring , and the possibility of a region of hard colouring immediately below the transition point . approximate connectivity regimes for the solvable case have been found , given the number of colours . however , since the algorithms are based on simplifying approximations ( bp ) and heuristics ( walksat ) , both algorithms provide only upper bounds on the true critical values . the current study aims at providing a more principled approach to the problem , a theoretical estimate of the transition point , and more insight into the nature of the transition itself . the method employed is based on a tree approximation , which is equivalent to the rs ansatz of the replica method or the cavity method . it results in a set of recursive equations which can be solved analytically . the connectivity values for which the tree approximation is valid and the types of phases present at each value are also investigated at both zero and finite temperatures . in section [ sec : model ] we introduce the model , followed by section [ sec : macro ] , which briefly explains the derivation and how the macroscopic behaviour can be studied . in section [ sec : results ] we present the results obtained via population dynamics . discussions of the behaviour at finite temperatures are presented in section [ sec : finitetemp ] , followed by a concluding section . the appendices contain further mathematical details . consider a sparsely connected graph with connectivity and colour for node . the connectivities are drawn from a distribution with mean . in this paper we consider the case of _ linear connectivity _ , that is , the nodes have connectivities or , with probabilities and respectively . the colour can take the values . the colour diversity problem is trivial for the case , in which colour schemes with complete sets of colours available to all nodes can be found easily . hence we will focus on the more interesting case , in which a transition between complete and incomplete colouring exists , as shown in previous work . the set of colours available at the node and its local neighbourhood is where is the set of nearest neighbours of node .
to find a colour scheme that maximises the number of different colours in , averaged over all nodes , we consider minimising the energy ( cost function ) of the form . since the objective is equivalent to minimising the number of identical colours in the set , an appropriate form of the function is where for , and 0 otherwise . can be rewritten as a sum of squares of the colour counts . the quadratic nature of confirms that it is an appropriate cost function for diversifying the colours in the neighbourhood of each node . due to the convexity of its quadratic form , its minimum solution tends to equalise the numbers of all colours in the neighbourhood of a node . thus , besides maximising colour diversity , our choice of the cost function has an additional advantage for the distributed storage optimisation task , which has motivated the current study , where an even distribution of segments ( colours ) in a neighbourhood is also a secondary objective , offering greater resilience . the need for an even distribution of colours is especially important when the total number of colours is less than the connectivity of a node . consider the contribution from the function centred on a node in such a case . some colours can appear more than once . then the exact form of the function determines the selection of these extra colours . in general , two types of selection can be made . in the first type , one may still use all colours , but they may be less evenly distributed than in the ground state . in the second type , one may use fewer colours . the former maximises the number of available colours , but the latter does not . in this case , an inappropriate choice of the cost function will mix these two cases by assigning them the same energies , rendering it impossible to distinguish optimal and suboptimal colour choices . on the other hand , eq . ( [ eq : phi ] ) does not suffer from this shortcoming in the topology considered here . a geometric interpretation illustrates this point . let be the number of times colour appears in . then the minimisation of eq . ( [ eq : phi ] ) reduces to the minimisation of subject to the constraint that . note that the constraint defines a hyperplane in the -dimensional space of , and the problem is equivalent to finding the point with integer coordinates on the hyperplane such that its distance from the origin is minimised . the optimal solution is the point on the hyperplane closest to the normal , and no components should be zero when . in fact , the optimal solution is the most even integer partition , in which the colour counts differ from each other by at most one ( or its permutations ) . we have also considered a worst - case analysis of the change in the total cost due to colour changes in neighbouring nodes when the function , centred on a node , is minimised . it shows that for networks with linear connectivities and , the ground states consist of all satisfied nodes only , if they exist . we note that second nearest neighbour interactions are present in this cost function . this differs from the original graph colouring problem , where the cost function involves only nearest neighbour interactions . as we shall see , the messages in the resultant message - passing algorithm will be characterised by two components , instead of the single components in the case of the original graph colouring problem . analysis of the problem is done by writing the free energy of the system at a temperature , given by , where is the partition function , being the inverse temperature .
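to make the cost function concrete , here is a minimal sketch in python of one quadratic choice consistent with the description above , counting each pair of identical colours in the set once ; the exact functional form of in the paper is elided in the extracted text , so this pairwise - collision count is an assumption , and the toy graph is purely illustrative .

```python
from collections import Counter

def phi(colours):
    """Quadratic colour-diversity cost for the set L_i = {q_i} U {q_j : j in N_i}:
    counting each pair of identical colours once gives sum_q n_q*(n_q-1)/2,
    which is (up to constants) quadratic in the colour counts n_q."""
    counts = Counter(colours)
    return sum(n * (n - 1) // 2 for n in counts.values())

def energy(adjacency, colouring):
    """Total cost E = sum_i phi(L_i) over all nodes of the graph.
    adjacency: dict node -> list of neighbours; colouring: dict node -> colour."""
    total = 0
    for i, neighbours in adjacency.items():
        local = [colouring[i]] + [colouring[j] for j in neighbours]
        total += phi(local)
    return total

# toy example: a triangle with 3 colours achieves full diversity (phi = 0 per node)
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
print(energy(adj, {0: 0, 1: 1, 2: 2}))   # 0
print(energy(adj, {0: 0, 1: 0, 2: 1}))   # > 0: colour 0 repeated in every L_i
```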
in the zero temperature limit , the free energy approaches the minimum cost function .several methods exist for deriving the free energy based on the replica and tree - based approximations . here, the analysis adopts a tree - based approximation , which is valid for sparse graphs .when the connectivity of the graph is low , the probability of finding a loop of finite length on the graph is low , and the tree approximation well describes the local environment of a node . in the approximation , node is connected to branches in a tree structure , and the correlations among the branches of the tree are neglected . in each branch , nodes are arranged in generations .node is connected to an ancestor node of the previous generation , and another descendent nodes of the next generation .consider the free energy of the tree terminated at node with colour , given its ancestor node of colour . in the tree approximation ,one notes that this free energy can be written as , where is the number of nodes in the tree terminated at node , and is referred to as the _ vertex free energy _ .that is , the vertex free energy represents the contribution of the free energy extra to the average free energy due to the presence of the vertex . in the language of the cavity method , are equivalent to the _ cavity fields _ , since they describe the state of the system when node is absent .the recursion relation of the vertex free energy of a node can be obtained by considering the contributions due to its descendent trees and the energy centred at itself .using notations described in fig .[ fig : vertex ] , the vertex free energy obeys the recursion relation -f_{\rm av } \ . \nonumber\end{aligned}\ ] ] in the above expression , the subtraction of is due to the incorporation of node with the descendent trees to form the tree terminated at node . for brevity , we will use the alternative simplified notation -f_{\rm av } \ , \ ] ] where the vector refers to the colours of all descendants in fig .[ fig : vertex ] . to find the average free energy ,one considers the contribution to a node due to all its neighbours , that is , \right\rangle_{\rm node } \ , \label{eq : fav}\ ] ] where the average denotes sampling of nodes with connectivity being drawn with probability .however , since the probability of finding a descendant node connecting to it is proportional to the number of links the descendant has , descendants are drawn with the _ excess probability _ .equations ( [ eq : vertexfreerecursion_al ] ) and ( [ eq : fav ] ) can also be derived using the replica method as presented in appendix a. we remark that both the derivation and the results are very similar to those in the problem of resource allocation on sparse networks , where the dynamical variables are the real - valued currents on the links of the networks .the parallelism between resource allocation and colour diversity is apparent when one notes that the currents in resource allocation can be expressed as the differences between current potentials defined on the nodes of the networks .hence the vertex free energies in both problems can be considered as functions of two variables .another useful relation can be obtained by substituting eq .( [ eq : vertexfreerecursion_al ] ) into eq .( [ eq : fav ] ) , \right\rangle_{\rm link}=0 \ , \label{eq : link}\ ] ] where the average denotes sampling of link vertices with connectivity with the excess probability .this relation can be interpreted by considering the free energy of forming a link between vertices and . 
sinceno extra nodes are added in this process , the extra free energy should average to zero .the average of a function is given by \ca(\cl_i ) } { \mbox{tr}_{\{\cl_i\ } } \exp\left[-\beta\sum\limits_{j\in n_i } { f^{v}_{ij}(q_i , q_j ) } -\beta\phi\left(\cl_i\right)\right ] } \right\rangle_{\rm node}.\ ] ] hence the average energy is given by the edwards - anderson order parameter , whose nonzero value characterises the potts glass phase , is given by the performance measure of interest is the _ incomplete fraction _ , which is defined as the average fraction of nodes with an incomplete set of colours available at the node and its nearest neighbours , \right\rangle_{\rm node } , \end{aligned}\ ] ] where for , and 0 otherwise .this performance measure is similar to the one used in , which we refer to as the _ unsatisfied fraction _ , and is defined as the average fraction of colours unavailable at the node and its nearest neighbours ( for the case that is not greater than the number of nearest neighbours plus 1 ) , \right\rangle_{\rm node}.\ ] ] one might consider using eq .( [ eq : incomplete ] ) or ( [ eq : unsat ] ) to define the cost function to be minimised , instead of eq .( [ eq : phi ] ) .this is indeed possible and we expect that zero - energy ground states can be obtained when the condition of full colour diversity for each node is satisfiable . in the unsatisfiable case , no zero - energy ground states can be found , but one might still be interested in finding states that minimise the average number of colours unavailable to a node . in this case , might not be an appropriate choice , since it mixes up the energies of selecting more ( but unevenly distributed ) colours , and fewer colours .the second measure favours those states with higher colour diversity , but for the same number of available colours , it does not distinguish states with different homogeneity of colour distribution . by comparison , the cost function in eq .( [ eq : phi ] ) has the additional advantage of favouring homogeneous colour distributions in the neighbourhood of the nodes .solutions to the recursive equation ( [ eq : vertexfreerecursion ] ) are obtained by population dynamics .we start with samples of nodes , each with one of colours randomly assigned as the initial condition . at each time step of the population dynamics ,all the nodes are updated once in random order . at the instant we update node , we select nodes to be its descendants , where is drawn from the distribution .descendants with connectivities are randomly selected with excess probabilities .the vertex free energy is then updated for all pairs _ before _ another node is updated .we have also computed the solutions using _ layered _ dynamics . at each time step of the layered dynamics ,the new vertex free energies of all the nodes are calculated , but are temporarily reserved until the end of the time step . hence at the instant we renew node , we select nodes to be its descendants , whose vertex free energies were computed in the _ previous _ time step . descendants with connectivities are randomly selected with excess probabilities . 
after the new vertex free energies of all the nodes have been computed , they are updated synchronously and are ready for the computation in the next time step . we observe that a _ modulation instability _ is present in layered dynamics . this means that after sufficient layers of computation , the colour distribution no longer remains uniform . rather , each layer is dominated by a particular colour , and the dominant colour alternates from layer to layer . this modulation is expected to be suppressed in random graphs due to the presence of loops of incommensurate lengths . furthermore , the average free energy computed by the layered dynamics has variances increasing rapidly with the number of layers . hence the layered dynamics is not adopted in our studies . to avoid growing fluctuations of the vertex free energies in the population dynamics , their constant components are subtracted off immediately after each update , where is a constant bias independent of the colours and . the recursion relation of the vertex free energy then holds up to an additive constant . after every time step , we measure the average free energy . this is done by repeatedly creating a test node and randomly selecting nodes to connect with the test node . the average free energy is then given by the node average of the free energy , supplemented by the term $\langle c \rangle \langle g \rangle_{\rm link}$ . note that is averaged over links , since the descendants are drawn with excess probabilities . to calculate , we employ the consistency condition ( [ eq : link ] ) for the average free energy of a link , which requires that the link average of the free energy , plus $2 \langle g \rangle_{\rm link}$ , vanishes . the node and link samplings are identical for graphs with uniform connectivity . this allows us to eliminate in eqs . ( [ eq : fav1 ] ) and ( [ eq : link_normalization ] ) , and thus obtain .
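the population dynamics update described above can be sketched as follows ; this is a minimal illustration , not the authors' code : the number of colours , inverse temperature , population size and excess probabilities are all assumed values , the index convention of the vertex free energies is one self - consistent choice , and the quadratic collision count stands in for the elided form of .

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(0)

Q, BETA = 3, 5.0            # number of colours, inverse temperature (assumed)
N_POP = 100                 # population size (small, for illustration)
CONNECTIVITIES = [3, 4]     # linear connectivity: nodes have c = 3 or 4
P_EXCESS = [0.5, 0.5]       # excess probabilities (assumed values)

def phi(colours):
    """Quadratic collision count over a node, its ancestor and descendants."""
    counts = np.bincount(colours, minlength=Q)
    return np.sum(counts * (counts - 1)) / 2.0

# population of vertex free energies F[q_node, q_ancestor]
pop = [rng.normal(scale=1e-3, size=(Q, Q)) for _ in range(N_POP)]

def update(i):
    c = int(rng.choice(CONNECTIVITIES, p=P_EXCESS))   # connectivity of node i
    desc = rng.integers(0, N_POP, size=c - 1)         # descendants, drawn from pool
    new_f = np.zeros((Q, Q))
    for qi, qj in product(range(Q), repeat=2):        # (node colour, ancestor colour)
        # trace over descendant colours of exp[-beta (sum_k F_k + phi)]
        z = 0.0
        for qs in product(range(Q), repeat=c - 1):
            e = sum(pop[d][qk, qi] for d, qk in zip(desc, qs))
            e += phi(np.array([qi, qj, *qs]))
            z += np.exp(-BETA * e)
        new_f[qi, qj] = -np.log(z) / BETA
    pop[i] = new_f - new_f.min()      # subtract the constant bias after each update

for step in range(20):                # sweep all members in random order
    for i in rng.permutation(N_POP):
        update(i)
```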
to tackle the case of non - uniform connectivities , we need to generalise the consistency condition ( [ eq : link_normalization ] ) .this can be done by restricting our consideration to links with vertices of given connectivities and , and consider the free energy due to the link connecting the trees on both sides of such links = 3pc }\right\rangle _ { c_i = a , c_j = b } = 0 \ .\ ] ] the derivation is analogous to that of eq .( [ eq : link_normalization ] ) , resulting in } \right\rangle _ { c_i = a , c_j = b } + \left\langle g \right\rangle _ a + \left\langle g \right\rangle _ b = 0,\ ] ] which facilitates the elimination of the biases in eq .( [ eq : fav1 ] ) , resulting in an expression for the average free energy = 1 pc \right\rangle _ { \rm node } \\ & & + \frac{\langle c\rangle}{2}\sum_{a , b } \frac{ap(a)}{\langle c\rangle}\frac{bp(b)}{\langle c\rangle } \left\langle t\ln \mbox{tr}_{a , b } \exp \left [ -\beta f_{ij}^v ( a , b)-\beta f_{ji}^v ( b , a ) \right ] \right\rangle _ { c_i = a , c_j = b}.\nonumber\end{aligned}\ ] ] = 6pc to evaluate one first performs the node average in the first term of eq .( [ eq : fav ] ) , keeping a record of the number of times each node is sampled .then one performs the average in the second term , randomly drawing the vertices and of the links from nodes with exactly the same number of times they appear in the first term .hence in this procedure , the descendants in both terms are drawn from the excess distribution .furthermore , it ensures that the s appearing in the first term are exactly cancelled by those appearing in the second term , thus eliminating a source of possible fluctuations .we also note that there can be a variety of choices of s to be subtracted from the vertex free energies in eq .( [ eq : gdef ] ) .for example , one may choose to be and arrive at the same result eq .( [ eq : fav ] ) .in fact , this computationally simple choice is adopted in our computation .expressions for the energy and entropy follow immediately using the identity and the averaging of eq .( [ eq : avfree ] ) , = 1pc \left[\sum_{j\in n_i } e_{ij}^v ( q_i , q_j ) + \phi(\cl_i)\right ] } { { \rm tr}_{\{\cl_i\ } } \exp \left[-\beta \sum_{j\in n_i } f_{ij}^v ( q_i , q_j ) -\beta\phi(\cl_i)\right ] } \right\rangle_{\rm node } , \nonumber\\\end{aligned}\ ] ] where is the vertex energy with the recursion relation \left[\sum_{k=1}^{c_j-1 } e_{jk}^v ( b , q_k ) + \phi(a , b,{\mathbf q})\right ] } { \mbox{tr}_{\mathbf q } \exp \left[-\beta \sum_{k=1}^{c_j-1 } f_{jk}^v(b , q_k ) -\beta\phi(a , b,{\mathbf q})\right ] } \nonumber\\ & & -e_{\rm av } \ , \end{aligned}\ ] ] and = 6pc compared with the previous equation ( [ eq : locale ] ) for the average energy , eq .( [ eq : eav ] ) includes the vertex energies of the descendants .these vertex energies transmit the energy deviations from the average energy , from the descendants to the ancestors .hence eq .( [ eq : eav ] ) can be regarded as a global estimate of the average energy , and eq .( [ eq : locale ] ) is a local estimate .theoretically , one expects that both estimates should yield the same result .numerically , however , we found that this is only valid in the paramagnetic phase . in the potts glass phase, the discrepancy between the two estimates can be very significant .this shows that in the paramagnetic phase , memories about the initial conditions are lost easily .in contrast , in the potts glass phase , memories about the initial conditions can propagate for a long time through the vertex energies . 
to avoid propagating fluctuations in the computation of the average energy, we subtract from all components immediately after each update , and find using = 1 pc \left[\sum_{j\in n_i } e_{ij}^v ( q_i , q_j ) + \phi(\cl_i)\right ] } { \mbox{tr}_{\{\cl_i\ } } \exp \left[-\beta \sum_{j\in n_i } f_{ij}^v ( q_i , q_j ) -\beta\phi(\cl_i)\right ] } \right\rangle_{\rm node } \nonumber\\ & & -\frac{\langle c\rangle}{2}\sum_{a , b } \frac{ap(a)}{\langle c\rangle}\frac{bp(b)}{\langle c\rangle } \nonumber\\ & & \times\left\langle\frac { \mbox{tr}_{a , b } \exp \left[-\beta f_{ij}^v(a , b)-\beta f_{ji}^v(b , a)\right ] \left [ e_{ij}^v(a , b)+e_{ji}^v(b , a)\right ] } { \mbox{tr}_{a , b } \exp \left[-\beta f_{ij}^v(a , b)-\beta f_{ji}^v(b , a)\right ] } \right\rangle_{c_i = a , c_j = b}. \label{eq : globale}\end{aligned}\ ] ] the derivation at zero temperature should be carried out with extra care due to possible degeneracy in the solutions . in the zero temperature limit ,( [ eq : vertexfreerecursion_al ] ) reduces to = 6 pc -f_{\rm av } \ .\ ] ] the expression of the entropy at zero temperature can be computed directly from the _ vertex entropies_. differentiating eq .( [ eq : vertexfreerecursion_al ] ) with respect to , and taking the zero temperature limit , one obtains -s_{\rm av},\ ] ] where is the set of colours minimising the free energy at node .similarly , differentiating eq .( [ eq : fav ] ) with respect to and taking the zero temperature limit , one obtains = 1 pc \right\rangle _ { \rm node } \\ & -&\frac{\langle c\rangle}{2}\sum_{a , b } \frac{ap(a)}{\langle c\rangle}\frac{bp(b)}{\langle c\rangle } \left\langle { \ln \left [ { \sum\limits_{\{a^ * , b^ * \ } } { \exp\left ( { s_{ij}^v ( a^ * , b^ * ) + s_{ji}^v ( b^ * , a^ * ) } \right ) } } \right ] } \right\rangle _ { c_i = a , c_j = b } \nonumber \ , \end{aligned}\ ] ] where are the set of colours minimising the free energy at node , and are the set of the pair of colours minimising the free energy at link _the performance measures are now weighted by the entropies , and eq .( [ eq : avfree ] ) is replaced by the expression = 6 pc \ca(\cl_i^ * ) } { \mbox{tr}_{\{\cl_i^*\ } } \exp\left[\sum\limits_{j\in n_i } { s_{ij}(q_i^*,q_j^*)}\right ] } \right\rangle_{\rm node}.\ ] ] in the paramagnetic state , the vertex free energies are symmetric with respect to permutation of colours at each node .hence there are only two distinct values of the vertex free energy for each node , corresponding to the cases that the colours of the node and its ancestor are the same or different .hence , we can derive the recursion relation for the single variable ] .hence at an intermediate value of , the entropy changes sign . thus there is a range of negative entropy for below 4 where the rs ansatz is unstable .numerical solutions to the equations are obtained using population dynamics in the manner explained in subsection [ subsec : popdyn ] .results are obtained for and ensembles of graphs with linear connectivity , mixing nodes with connectivities 3 and 4 in varying proportions . 
after every time step , we measure the following quantities : the local estimate of the average energy , the incomplete fraction , and the edwards - anderson order parameter . this is done by creating a test node and randomly selecting nodes to connect with the test node . the node contributions to the average free energy , the global estimate of the average energy , and ( for zero temperature ) the entropy are also computed . these measurements are repeated for nodes for each sample . the set of descendant nodes of these test nodes is recorded . then , pairs of nodes are randomly drawn from this set to form links , and the link contributions to the average free energy , the global estimate of the average energy , and ( for zero temperature ) the entropy are computed . figure [ fig : qea ] shows the edwards - anderson order parameter as a function of . it can be seen that the value of is 0 in the paramagnetic phase , which spans the region . in this phase , all nodes have free choices of colours . the potts glass phase spans the region , where remains at a value around 0.7 , and its transition to the paramagnetic phase is of the first order . figure [ fig : fincom ] shows the incomplete fraction obtained from the steady state solution of the population dynamics at fixed values . it remains nonzero in the potts glass phase , and vanishes discontinuously above in the paramagnetic phase . to find the stable as well as the unstable solutions of the population dynamics , which correspond to multiple solutions at fixed , we may run the population dynamics at fixed nonzero . this can be done by monitoring conditionally averaged on the nodes with and at each step , and adjusting the value of to approach its targeted value , which is related to the targeted value of estimated at each time step by . the population dynamics at fixed yields both stable and unstable solutions of the potts glass state below , confirming that the transition to the paramagnetic phase is discontinuous , and that corresponds to the spinodal point . the edwards - anderson order parameter for both stable and unstable potts glass states is also shown in fig . [ fig : qea ] , bearing features similar to those in fig . [ fig : fincom ] . figure [ fig : fav ] shows the average free energy . the paramagnetic free energy of 3 provides a baseline for comparing the energy and free energy of the different phases . below the spinodal point , the paramagnetic state continues to exist . it is not accessible by the population dynamics , but one can find the paramagnetic free energy by first finding a paramagnetic state at , and then gradually reducing the connectivity to the desired value . the resultant paramagnetic free energy is identical to that found directly in subsection [ subsec : para ] . as shown in fig . [ fig : fav ] , the potts glass free energy becomes lower than the paramagnetic free energy near the spinodal point . a first order transition appears to take place at , where the free energies of the two states cross each other . the subscript zic refers to the zero initial condition used here , as distinguished from the random initial condition ( subscript ric ) to be discussed in the next subsection . however , since the potts glass energy equals the free energy at zero temperature , this implies that the average energy is below the lowest possible energy of in the range !
similar contradictory results have been observed in the rs ansatz of the original graph colouring problem and the 3-sat problem . this indicates that the rs ansatz in the present analysis is insufficient , and has to be improved by including further steps of replica symmetry - breaking . furthermore , the solution of the population dynamics is insensitive to this transition point in the large limit . instead , it yields the potts glass state above this transition point right up to the spinodal point . ( for smaller values of , say , , the discontinuous transition takes place below the spinodal point . ) thus , the transition at looks like a zeroth order one , with a discontinuous jump of the average free energy from the potts glass phase below to the paramagnetic phase above . as mentioned in subsection [ subsec : esfinite ] , the local and global estimates of the average energy are different and are given by eqs . ( [ eq : locale ] ) and ( [ eq : globale ] ) respectively . the global estimate yields results identical to the average free energy , showing that memories about initial conditions in both variables have been compensated . however , we observe that the global average energy is numerically unstable in the potts glass phase . for , it diverges from the average free energy after about 100 steps in the population dynamics . as shown in fig . [ fig : fav ] , the local estimate of the average energy is indistinguishable from the global estimate in the paramagnetic phase . however , the local estimate is significantly higher than the global estimate in the potts glass phase . unlike the global estimate , which contradicts the lowest possible energy , the local estimate remains above it . next , we consider the entropy . the entropy of the paramagnetic state obtained from the theoretical prediction of eq . ( [ eq : sav ] ) agrees well with the results of population dynamics . as shown in fig . [ fig : ent ] , the entropy of the paramagnetic state becomes negative for , while the entropy of the potts glass state is negative throughout . at the spinodal point , the entropy exhibits a small discontinuous jump . clearly , results for should be investigated using a replica symmetry - breaking ansatz to identify the exact transition point , which is beyond the scope of this paper . one puzzle of our results is that the edwards - anderson order parameter remains at a level around 0.7 in the entire potts glass phase . this implies that a considerable fraction of nodes have free choices of colours even in the potts glass phase . this is illustrated by the distribution of colour moments in fig . [ fig : ric](a ) , which consists of a continuous background with peaks at simple rational numbers ( 1/5 , 1/4 , 1/3 , 2/5 etc . ) . in fact , the existence of free spins at zero temperature has been considered an indication of broken replica symmetry . however , this is apparently inconsistent with extrapolations from finite temperatures , which will be discussed in the next section . as will be seen , approaches 1 in the limit of low but finite temperature , implying that all nodes lose the freedom of choosing more than one colour .
to resolve this inconsistency , we consider the effects of introducing a small randomness in the initial condition , that is , a small random bias is added to the initial values of the vertex free energies , which take integer values otherwise . such randomness was known to cause significant changes in the optimal solution in the graph bipartitioning problem , where the field distribution is initialised to a rectangular distribution . figure [ fig : ric](b ) shows that when a very small randomness is introduced in the initial condition , the final values of the edwards - anderson order parameter remain around 1 in both the paramagnetic and potts glass phases . this means that effectively all spins are frozen due to the randomness in the initial condition . the distribution of colour moments consists of two delta function peaks , located at and respectively . this is consistent with the extrapolation of finite temperature results . the difference between zero temperature and low but finite temperature distributions was also observed in the rs approximation of the original graph colouring problem . randomness in the initial condition causes a significant change in the transition point between the potts glass and paramagnetic states . figure [ fig : ric](c ) shows that the average free energy of the potts glass state crosses that of the paramagnetic state at and for the zero and random initial conditions , respectively . as far as we can tell from our numerical precision , is effectively the same as the spinodal point . as will be seen in the next section , the transition point is consistent with the phase transition line at finite temperatures . the effects of randomness in the initial condition on the performance are shown in fig . [ fig : ric](d ) . for the random initial condition , the incomplete fraction in the potts glass phase vanishes effectively continuously to 0 at . this is in contrast with the incomplete fraction for the zero initial condition , which is much higher , and vanishes discontinuously at the spinodal point . the entropy is effectively zero in both the potts glass phase and the paramagnetic phase in the case of random initial conditions . this is different from the case of zero initial conditions shown in fig . [ fig : ent ] , in which the entropy is negative in the entire potts glass phase and part of the paramagnetic phase . to illustrate the difference between the paramagnetic and potts glass phases , we consider the evolution of damage for different average connectivities . the damaged configuration , with colours , is initialised identically to , except that the colours of the descendants of one randomly chosen node have been inverted , that is , where are the descendants of node . we define the _ distance measure _ between \{ \} and \{ \} as the distance between the colour moments . we monitor the population dynamics of the colour configuration \{ \} and its _ damaged _ configuration \{ \} . they evolve with the same sequence of updates and choice of descendants .
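a minimal sketch of this damage - spreading measurement is given below , assuming the population - dynamics representation from the earlier sketch ; because the exact definitions of the colour moments and of the distance are elided in the extracted text , the boltzmann - weight marginal and the mean absolute difference used here are assumptions .

```python
import numpy as np

def colour_moments(pop, beta):
    """One plausible colour moment m_i(q): Boltzmann weights built from the
    vertex free energies F[q, q_ancestor], tracing out the ancestor colour."""
    moments = []
    for f in pop:
        w = np.exp(-beta * f).sum(axis=1)
        moments.append(w / w.sum())
    return np.array(moments)

def damage_distance(pop_a, pop_b, beta):
    """Distance between a configuration and its damaged copy, taken here as
    the population-averaged absolute difference of colour moments."""
    ma = colour_moments(pop_a, beta)
    mb = colour_moments(pop_b, beta)
    return np.abs(ma - mb).sum(axis=1).mean()

# usage: evolve pop and a damaged copy with identical update sequences and
# descendant choices, then track damage_distance(pop, pop_damaged, beta) over
# time; a nonzero plateau signals the potts glass phase, decay to zero the
# paramagnetic phase.
```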
as shown in fig . [ fig : damage ] , the distance is nonzero in the potts glass phase , but vanishes in the paramagnetic phase . this shows that multiple solutions of the saddle point equation exist in the potts glass phase , but the solution is unique in the paramagnetic phase . the spread of damage is consistent with the instability of the replica symmetric solution in the potts glass phase . further insight into the thermodynamic behaviour can be obtained by considering finite temperatures . let us first study the example of . figure [ fig : qea3](a ) shows that of the thermodynamic state vanishes at temperatures above 0.575 . to verify that this phase transition is discontinuous , we look for solutions of the population dynamics with variable for given values of , which yield the potts glass state . as shown in fig . [ fig : qea3](b ) , the potts glass phase with positive does not vanish continuously into the paramagnetic phase . rather , its stable and unstable branches merge at the temperature 0.575 , which is therefore identified to be the spinodal temperature . figure [ fig : fav3](a ) shows the free energies of the paramagnetic state and the results of the population dynamics . the free energy of the paramagnetic state reaches a maximum at . below this temperature , the entropy becomes negative . the population dynamics is in good agreement with the paramagnetic state down to the spinodal temperature , below which the population dynamics deviates from the paramagnetic state . figure [ fig : fav3](b ) shows the free energies in the neighbourhood of the spinodal temperature , including the stable and unstable branches of the potts glass state . the free energies of the potts glass and paramagnetic states become equal at . while this can be interpreted as the thermodynamic transition temperature , we observe that it is not relevant to the population dynamics , in which the jump of , as shown in figs . [ fig : qea3](a ) and ( b ) , takes place at the spinodal temperature instead . this behaviour is consistent with the irrelevance of the first order transition point at zero temperature , as described in subsection [ subsec : para - glass ] . the behaviour of the entropy is shown in fig . [ fig : ent3](a ) . the entropy of the paramagnetic state becomes negative below . the stable and unstable branches of the potts glass state are shown in fig . [ fig : ent3](b ) , and the population dynamics yields results jumping discontinuously from the stable branch of the potts glass state to the paramagnetic state at the spinodal temperature . regions of negative entropy are often found in spin glasses . they usually signal that the rs ansatz is unstable . however , in the original sherrington - kirkpatrick model , the region of negative entropy is restricted to the low temperature regime deep inside the spin glass phase . in contrast , the region of negative entropy at spans the entire potts glass phase and even covers part of the paramagnetic phase . this indicates that frustration effects in the present model are unusually strong . we propose that this increased frustration effect is a consequence of the second nearest neighbouring interactions present in the colour diversity problem , and does not exist in most models investigated so far . to verify this , we consider the model . the cases and 1 correspond to the graph colouring and colour diversity problems respectively ; we will consider the range .
in the paramagnetic phase , expressions for the entropy can be derived analogously to appendix b. as shown in fig . [ fig : ent_pm ] , the region of negative entropy of the paramagnetic state shrinks when the second nearest neighbouring interaction is reduced . thus , in the absence of second nearest neighbouring interactions , the region of the paramagnetic phase with negative entropy is preempted by the potts glass phase . for general values of , we will consider three transition lines in the space of and : the zero entropy line in the paramagnetic phase , the spinodal line of the glassy state , and the paramagnetic - glass transition line . the transition lines are plotted in fig . [ fig : transition ] . when extrapolated to , the zero entropy , spinodal and free - energy crossing lines pass through the points , and , respectively , in full agreement with the results obtained for the zero temperature case . in summary , the system has a paramagnetic phase at high temperature or high connectivity . inferring from the studies of the graph colouring problem , we expect that a phase transition to replica symmetry - breaking states takes place on the high temperature ( and high connectivity ) side of the zero entropy line , even when the system is still in the paramagnetic state . however , the location of this transition cannot be found in the present framework of replica symmetry . nevertheless , the replica symmetric solution has provided insights into the full solution , suggesting the following picture . one expects the existence of the spinodal line , where the potts glass state with a nonzero edwards - anderson order parameter exists on its low temperature ( and low connectivity ) side . the potts glass state exists as a metastable state in the vicinity of the spinodal line . then , on the low temperature ( and low connectivity ) side of the paramagnetic - glass transition line , the potts glass state becomes thermodynamically stable . we have studied the macroscopic behaviour of the colour diversity problem , a variant of the graph colouring problem of significant practical relevance , especially in the area of distributed storage and content distribution . to cope with the presence of second nearest neighbouring interactions , the analysis makes use of vertex free energies of two arguments , which enable us to study the behaviour in the rs analysis and lay the foundation for future analyses incorporating replica symmetry - breaking effects . the analysis is successfully applied to graphs with mixed connectivities . for and graphs with linear connectivity , the rs analysis identifies three transition lines according to : ( 1 ) when the entropy becomes negative ( ending at when ) , signalling the breakdown of the rs ansatz ; ( 2 ) when becomes a multiple - valued function of , the spinodal point ( ending at when ) ; and ( 3 ) the free - energy crossing point between the paramagnetic and potts glass states ( ending at when approaches 0 ) .
the regime of negative entropy is so extensive that it covers the entire potts glass phase as well as part of the paramagnetic phase , and can be attributed to the increased frustration due to the presence of second nearest neighbouring interactions .the picture that emerges is that the system is in a paramagnetic state at high temperature or high connectivity ; the rs ansatz breaks down prior to the temperature that identifies the zero entropy transition point .the potts glass state exists first as a metastable state but becomes dominant at a lower temperature ( connectivity ) .evidence from the population dynamics shows that the discontinuous transition takes place at the spinodal point rather than the crossing point .however , the rs analysis results in the average energy falling below the lowest possible energy for , and a region of negative entropy .since the entropy remains positive at the colourable - uncolourable transition , we conjecture that if replica symmetry - breaking is taken into account , the potts glass - paramagnetic transition should take place at the higher temperature ( and high connectivity ) side of the zero entropy line . for the optimisation of the colour diversity, one should consider , implying that the incomplete - complete transition should take place at beyond .this estimate of the transition point seems to be supported by simulation results using the walksat and bp algorithms . in summary, we have demonstrated the value of different analytical approaches and the use of population dynamics in elucidating the system behaviour of the colour diversity problem on a sparse graph .they provide insights on the estimates of the transition points , the existence of metastable states , and the nature of phase transitions .we thank lenka zdeborov , david sherrington , bill yeung , edmund chiang for meaningful discussions , and stephan mertens for drawing our attention to .this work is partially supported by research grants dag04/05.sc25 , dag05/06.sc36 , hkust603606 and hkust603607 of the research grant council of hong kong , by evergrow , ip no .1935 in the complex systems initiative of the fet directorate of the ist priority , eu fp6 and epsrc grant ep / e049516/1 .consider the minimisation of the energy ( cost function ) on a graph of connectivity : where is symmetric with respect to the permutation of the neighbours , , and if nodes and are connected on the graph , and 0 otherwise .since there are values of the function , one can write the partition function is .\ ] ] the replicated partition function , averaged over all graph configurations with connectivity , is given by ,\end{aligned}\ ] ] where is the total number of graph representations with connectivity .it is convenient to express the exponential argument as an unrestricted sum over the nodes , where are integers accounting for the over - counting in rewriting the summations in terms of equal indices .their precise values are not required in our final result .this allows us to factorise the expression into \cdots \left[\sum_{j_c}a_{ij_c}(q_{j_c}^\alpha)^{m_c}\right]\right .\nonumber\\ & & -b_2\left[\sum_{j_1}a_{ij_1}(q_{j_1}^\alpha)^{m_1+m_2}\right ] \left[\sum_{j_c}a_{ij_3}(q_{j_3}^\alpha)^{m_3}\right ] \cdots \left[\sum_{j_c}a_{ij_c}(q_{j_c}^\alpha)^{m_c}\right ] \nonumber\\ & & \left.+\cdots+(-)^{c-1}b_c \left[\sum_{j_1}a_{ij_1}(q_{j_1}^\alpha)^{m_1+\cdots+m_c}\right ] \right\}. 
\label{eq : factorsum}\end{aligned}\ ] ] following steps similar to those in , one gets = 1 pc \right)\right .\nonumber\\ & & \times\left[\sum_{r_m^\alpha , s_m^\alpha } \hat q_{{\mathbf r},{\mathbf s}}\prod_{m,\alpha } ( -i\hat h_m^\alpha)^{r_m^\alpha}(q^\alpha)^{ms_m^\alpha } + \frac{1}{2}\sum_{r_m^\alpha , s_m^\alpha}\prod_{m,\alpha } \frac{(-i\hat h_m^\alpha)^{s_m^\alpha}}{r_m^\alpha!s_m^\alpha ! } ( q^\alpha)^{mr_m^\alpha}\right]^c \nonumber\\ & & \times\exp\left\{-\frac{\beta}{c!}\sum_{{\mathbf m},\alpha } \phi_{\mathbf m}(q^\alpha)^{m_0}\left [ h_{m_1}^\alpha\cdots h_{m_c}^\alpha -b_2 h_{m_1+m_2}^\alpha h_{m_3}^\alpha\cdots h_{m_c}^\alpha + \cdots\right.\right . \nonumber\\ & & \left.+(-)^{c-1}b_c h_{m_1+\cdots+m_c}^\alpha \right]\biggr\}\biggr\ } , \label{eq : partn}\end{aligned}\ ] ] where and are given by the saddle point equations of eq .( [ eq : partn ] ) .consider the generating function in the replica symmetric ansatz , we consider functions of the form substituting the saddle point equation for into eq .( [ eq : gen ] ) , one finds where \prod_m(q^\alpha)^{ms_m^\alpha } \nonumber\\ & & \times\exp\biggl[-\frac{\beta}{c!}\sum_{{\mathbf m},\alpha } \phi_{\mathbf m}(q^\alpha)^{m_0}\biggl ( h_{m_1}^\alpha\cdots h_{m_c}^\alpha -b_2 h_{m_1+m_2}^\alpha h_{m_3}^\alpha\cdots h_{m_c}^\alpha+\cdots \nonumber\\ & & + ( -)^{c-1}b_c h_{m_1+\cdots+m_c}^\alpha\biggr ) \biggr|_{h_m^\alpha=(z_\alpha)^m + \sum_{k=1}^{c-1}(\mu_k^\alpha)^m } \biggr]\biggr\}\biggr\rangle,\end{aligned}\ ] ] and is a constant having the same expression as that of , except that runs from 1 to and are set to 0 .the expression in the exponential argument of can be further simplified . rewriting as unrestricted sums over the neighbours analogously to eq .( [ eq : factorsum ] ) , \cdots \left[\sum_{k=1}^c(\mu_k^\alpha)^{m_c}\right]\right . \nonumber\\ & & -b_2\left[\sum_{k=1}^c(\mu_k^\alpha)^{m_1+m_2}\right ] \left[\sum_{k=1}^c(\mu_k^\alpha)^{m_3}\right ] \cdots \left[\sum_{k=1}^c(\mu_k^\alpha)^{m_c}\right]+\cdots \nonumber\\ & & \left.+(-)^{c-1}b_c \left[\sum_{k=1}^c(\mu_k^\alpha)^{m_1+\cdots+m_c}\right ] \right\}.\end{aligned}\ ] ] identifying each term in the square bracket as , we recognise the exponential argument as . we can now identify a recursion relation for the function which does not involve replica indices , \exp[-\beta\phi(q , z,\mu_1,\cdots,\mu_{c-1})].\ ] ] the denominator is given , in the limit approaching 0 , \exp[-\beta\phi(q,\mu_1,\cdots,\mu_c)]\right\}\right\rangle.\ ] ] letting the vertex free energy be defined by , we arrive at the recursion relation ( [ eq : vertexfreerecursion_al ] ) and the average free energy ( [ eq : fav ] ) .the average free energy is given by where (z_{j1 } + z_{j2}+z_{j3 } ) \nonumber\\ & & \left.\left.+q_1z^6(z_{j1}z_{j2}+z_{j2}z_{j3}+z_{j1}z_{j3 } ) + z^{12}z_{j1}z_{j2}z_{j3}\right\}\right\rangle \ , \nonumber \\\left . { f_{\rm av } } \right|_{c=4 } & = & 5-\left\langle t\ln q\left\ { { q_1q_2q_3q_4 } + 6q_1q_2q_3z^2 + 3q_1q_2z^4 + 4q_1q_2z^6+q_1z^{12}\right.\right .\nonumber\\ & & + [ q_1q_2q_3z^2 + 3q_1q_2z^4 + q_1z^8](z_{j1 } + z_{j2 } + z_{j3 } + z_{j4 } ) \nonumber \\ & & + [ q_1q_2z^6+q_1z^8](z_{j1}z_{j2}+z_{j1}z_{j3}+z_{j1}z_{j4 } + z_{j2}z_{j3}+z_{j2}z_{j4}+z_{j3}z_{j4 } ) \nonumber \\ & & + \left.\left . 
q_1z^{12}(z_{j1 } z_{j2 } z_{j3 } + z_{j1 } z_{j2 } z_{j4 } + z_{j1 } z_{j3 } z_{j4 } + z_{j2 } z_{j3 } z_{j4 } ) + z^{20}z_{j1 } z_{j2 } z_{j3 } z_{j4 } \right\ } \right\rangle , \nonumber\\ \left . { f_{\rm link } } \right|_{c_i c_j } & = & \left . { \left\langle { -t\ln q\left [ { q-1+z_{ij } z_{ji } } \right ] } \right\rangle } \right|_{c_i c_j } \ .\label{eq : para4fav}\end{aligned}\ ] ] the average energy is given by the components of which take the form where (z_{j1 } + z_{j2 } + z_{j3 } ) \nonumber \\ & & + q_1z^6(z_{j1 } z_{j2 } + z_{j2 } z_{j3 } + z_{j1 } z_{j3 } ) + z^{12}z_{j1 } z_{j2 } z_{j3 } , \nonumber\\ e^{(3)}_n & = & 4q_1q_2q_3 + 18q_1q_2z^2 + 10q_1z^6 + [ 6q_1q_2z^2 + 8q_1z^4](z_{j1 } + z_{j2 } + z_{j3 } ) \nonumber \\ & & + 10q_1z^6(z_{j1 } z_{j2 } + z_{j2 } z_{j3 } + z_{j1 } z_{j3 } ) + 16z^{12}z_{j1 } z_{j2 } z_{j3 } \ , \nonumber \\ e^{(4)}_d & = & q_1q_2q_3q_4 + 6q_1q_2q_3z^2 + q_1q_2z^4 + 4q_1q_2z^6+q_1z^{12 } \nonumber \\ & & + [ q_1q_2q_3z^2 + 3q_1q_2z^4 + q_1z^8](z_{j1 } + z_{j2 } + z_{j3 } + z_{j4 } ) \nonumber \\ & & + [ q_1q_2z^6 + q_1z^8](z_{j1 } z_{j2 } + z_{j1 } z_{j3 } + z_{j1 } z_{j4 } + z_{j2 } z_{j3 } + z_{j2 } z_{j4 } + z_{j3 } z_{j4 } ) \nonumber \\ & & + q_1z^{12}(z_{j1 } z_{j2 } z_{j3 } + z_{j1 } z_{j2 } z_{j4 } + z_{j1 } z_{j3 } z_{j4 } + z_{j2 } z_{j3 } z_{j4 } ) + z^{20}z_{j1 } z_{j2 } z_{j3 } z_{j4 } , \nonumber \\ e^{(4)}_n & = & 5q_1q_2q_3q_4 + 42q_1q_2q_3z^2 + 27q_1q_2z^4 + 44q_1q_2z^6 + 17q_1z^{12 } \nonumber \\ & & + [ 7q_1q_2q_3z^2 + 27q_1q_2z^4 + 13q_1z^8](z_{j1 } + z_{j2 } + z_{j3 } + z_{j4 } ) \nonumber\\ & & + [ 11q_1q_2z^6 + 13q_1z^8](z_{j1 } z_{j2 } + z_{j1 } z_{j3 } + z_{j1 } z_{j4 } + z_{j2 } z_{j3 } + z_{j2 } z_{j4 } + z_{j3 } z_{j4 } ) \nonumber \\ & & + 17q_1z^{12}(z_{j1 } z_{j2 } z_{j3 } + z_{j1 } z_{j2 } z_{j4 } + z_{j1 } z_{j3 } z_{j4 } + z_{j2 } z_{j3 } z_{j4 } ) + 25z^{20}z_{j1 } z_{j2 } z_{j3 } z_{j4 } \ .\nonumber\end{aligned}\ ] ]
colouring sparse graphs under various restrictions is a theoretical problem of significant practical relevance . here we consider the problem of maximising the number of different colours available at the nodes and their neighbourhoods , given a predetermined number of colours . in the analytical framework of a tree approximation , carried out at both zero and finite temperatures , solutions obtained by population dynamics give rise to estimates of the threshold connectivity for the incomplete to complete transition , which are consistent with those of existing algorithms . the nature of the transition as well as the validity of the tree approximation are investigated .
differential equations are extensively used in modeling dynamical systems in science and engineering . when dynamical systems are under random influences , stochastic differential equations ( sdes ) may be more appropriate for modeling . the solutions of sdes are interpreted in terms of stochastic integrals . dynamical systems subject to gaussian white noise are often modeled by sdes with brownian motion , and the solutions are in terms of the ito integral . although the ito integral is self - consistent mathematically , it is not the only type of stochastic integral that can be constructed to interpret an sde . other stochastic integrals , such as the stratonovich integral , have also been used to interpret an sde as a stochastic integral equation . there is no right or wrong choice between the ito and stratonovich integrals in interpreting sdes mathematically , since the two integrals are equivalent and can be converted into each other , provided that the integrand satisfies certain smoothness conditions . however , these stochastic integrals have different definitions , and one may be more directly related to a practical situation than the other . while the ito integral is a reasonable choice in many applications , including finance and biology , the stratonovich integral is believed to be more appropriate in physical and engineering applications . the stratonovich integral has an extra term compared with the corresponding ito integral : the so - called correction term . some authors attribute this correction term to the conversion from physical white noise to ideal white noise . this explanation is not necessarily convincing . dynamical systems driven by non - gaussian white noise , especially poisson white noise , have attracted much attention recently . correction terms for converting ito sdes to stratonovich sdes with poisson white noise are presented in . although these correction terms have been widely accepted , there remains some confusion . in this paper , we consider nonlinear random vibration under excitations of either gaussian or poisson white noise , modeled by appropriate stochastic differential equations . the main objective of this paper is to explain the correction terms in both the gaussian and poisson white noise cases from a physical perspective . we will show that the correction terms are natural consequences of fundamental physical laws satisfied by the vibration system . note that conventional spectral analysis methods , which have found extensive applications in random vibration analysis , are not applicable here due to the nonlinearity of the system . to this end , we consider a vibration system as a mass - spring - damping oscillator with random excitation where represents the mass , is the stiffness coefficient of the spring , is the displacement depending on time , and is the velocity . and represent the generalized force terms , which may originate from external or parametric excitations .
is a noise term defined as the formal derivative of some stochastic process where and are constants , is a gaussian process , and is some compound poisson process , which is expressed as in eq . ( [ section1_tmpp1 ] ) , is a poisson process with intensity parameter , is a unit step function ( a heaviside function ) at , and is a random variable representing the -th impulse . it follows from ( [ impulsivemodel ] ) that where is the gaussian white noise , and is the poisson white noise expressed as note that ( [ impulsivemodel_1 ] ) expresses a general noise model including the gaussian white noise ( ) , the poisson white noise ( ) , and the combined gaussian and poisson white noise ( and ) . the second - order equation ( [ no ] ) can be rewritten as a system of sdes . since is non - differentiable almost everywhere , ( [ solutionofito_1 ] ) can not be interpreted in the framework of classical calculus , so the solution of ( [ solutionofito_1 ] ) is interpreted with a stochastic integral . defining , and using the variation of parameters formula , the solution to eq . ( [ no ] ) can also be rewritten as , where is the initial condition . it can be shown that ( [ section1_tmp1 ] ) and ( [ solutionofito ] ) are equivalent . it is straightforward to verify that where . substituting ( [ matrixa ] ) into ( [ solutionofito ] ) , we get note that the stochastic integrals in eqs . ( [ section1_tmp1 ] ) and ( [ solutionstobe ] ) are yet to be defined . as stated earlier , the stochastic integral used to interpret an sde may not be unique . the question is which stochastic integral leads to a solution that is consistent with the physics of the system . one possible answer is to compare solutions of the sdes with the corresponding experimental results . however , this approach may be impractical in many cases due to the high cost of performing the experiments , as one needs highly accurate data from a sufficiently large number of samples in order to resolve the subtle differences in the theory . in this paper , to construct an sde model that is physically relevant to the real system , we propose to apply a stochastic integral such that the fundamental physical law ( e.g. , energy conservation ) is satisfied . this paper is organized as follows . in sec . [ sec.2 ] , starting from the energy conservation law , we define the stochastic integral that is suitable for the sde model of the nonlinear random oscillators . the relationship between the proposed models and the existing models is discussed in sec . [ sec.rel ] . numerical methods with an illustrative example are presented in sec . [ sec.nm ] . since there are multiple forms of stochastic integral that can be constructed from the sde , we define the stochastic integral such that the fundamental physical laws are satisfied . for the nonlinear oscillators described by the sde ( [ no ] ) , we expect the energy - work conservation law to be satisfied : \left[\frac{1}{2}m\dot x^2(t ) + \frac{1}{2}k x^2(t ) \right ] - \left[\frac{1}{2}m\dot x^2(0 ) + \frac{1}{2}k x^2(0 ) \right ] = \int_0^t \left [ g(x(s ) , \dot x(s ) ) + f(x(s ) , \dot x(s))\dot l(s)\right ] \,{\rm d}x(s ) , where represents the total mechanical energy of the system at time , and the integrand on the right - hand side is the forcing term of ( [ no ] ) .
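as a minimal illustration of the noise model introduced above , the following python sketch samples one path of the driving process ; the coefficients , the jump intensity , and the standard normal impulse sizes are our own illustrative choices , not values from the paper :

    import numpy as np

    def sample_driving_path(T=10.0, dt=1e-3, alpha=1.0, beta=1.0, lam=0.5, seed=0):
        # Sample l(t) = alpha * b(t) + beta * c(t) on a uniform grid, where b is
        # a brownian motion and c is a compound poisson process with intensity
        # lam and i.i.d. standard normal impulses (illustrative choices).
        rng = np.random.default_rng(seed)
        n = int(T / dt)
        t = np.linspace(0.0, T, n + 1)
        # brownian part: cumulative sum of N(0, dt) increments
        b = np.concatenate(([0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), n))))
        # compound poisson part: exponential interarrival times, step at each jump
        c = np.zeros(n + 1)
        s = rng.exponential(1.0 / lam)            # first jump time
        while s < T:
            c[t >= s] += rng.normal()             # impulse y_k acts from t_k on
            s += rng.exponential(1.0 / lam)       # next interarrival
        return t, alpha * b + beta * c

setting alpha or beta to zero recovers the purely poissonian or purely gaussian special cases of the general model .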
equation ( [ energyprincipal_0 ] ) expresses that the change in the total mechanical energy is equal to the work done by the external forces . writing ( [ energyprincipal_0 ] ) in the form of a stochastic integral , we have \left[\frac{1}{2}m\dot x^2(t ) + \frac{1}{2}k x^2(t ) \right ] - \left[\frac{1}{2}m\dot x^2(0 ) + \frac{1}{2}k x^2(0 ) \right]\nonumber\\ & \quad = \int_0^t g(x(s ) , \dot x(s))\dot x(s ) \,{\rm d}s + \int_0^t f(x(s ) , \dot x(s ) ) \dot l ( s ) \dot x(s ) \,{\rm d}s\nonumber\\ & \quad = \int_0^t g(x(s ) , \dot x(s))\dot x(s ) \,{\rm d}s + \int_0^t f(x(s ) , \dot x(s ) ) \dot x(s ) \,{\rm d}l(s).\end{aligned}\ ] ] as stated before , the stochastic integral with respect to should be defined such that the solution of ( [ no ] ) satisfies the energy conservation law ( [ energyprincipal_1 ] ) . it follows from ( [ impulsivemodel ] ) that a stochastic integral with respect to can be decomposed into two terms : a stochastic integral with respect to and a stochastic integral with respect to . we define the two terms in the next two subsections . assume and ; then it follows from ( [ impulsivemodel ] ) that the stochastic process reduces to a brownian motion , and ( [ solutionstobe ] ) and ( [ energyprincipal_1 ] ) become and \left[\frac{1}{2}m\dot x^2(t ) + \frac{1}{2}k x^2(t ) \right ] - \left[\frac{1}{2}m\dot x^2(0 ) + \frac{1}{2}k x^2(0 ) \right]\nonumber\\ & = \int_0^t g(x(s ) , \dot x(s ) ) \dot x(s ) \,{\rm d}s + \int_0^t f(x(s ) , \dot x(s ) ) \dot x(s ) \,{\rm d}b(s),\end{aligned}\ ] ] respectively . there are two types of stochastic integral extensively used for sdes driven by brownian motions : the ito integral and the stratonovich integral . throughout this paper , we use to denote ito calculus , and for stratonovich calculus . in the sense of ito , ( [ solutions_ttmmpp ] ) and ( [ energy_tmp1 ] ) can be written as and \left[\frac{1}{2}m\dot x^2(t ) + \frac{1}{2}k x^2(t ) \right ] - \left[\frac{1}{2}m\dot x^2(0 ) + \frac{1}{2}k x^2(0 ) \right]\nonumber\\ & = \int_0^t g(x(s ) , \dot x(s ) ) \dot x(s ) \,{\rm d}s + \int_0^t f(x(s ) , \dot x(s ) ) \dot x(s)\star \,{\rm d}b(s),\end{aligned}\ ] ] respectively .
in the sense of stratonovich , ( [ solutions_ttmmpp ] ) and ( [ energy_tmp1 ] ) can be written as and \left[\frac{1}{2}m\dot x^2(t ) + \frac{1}{2}k x^2(t ) \right ] - \left[\frac{1}{2}m\dot x^2(0 ) + \frac{1}{2}k x^2(0 ) \right]\nonumber\\ & = \int_0^t g(x(s ) , \dot x(s ) ) \dot x(s ) \,{\rm d}s + \int_0^t f(x(s ) , \dot x(s ) ) \dot x(s)\circ \,{\rm d}b(s),\end{aligned}\ ] ] respectively . provided that the function is sufficiently smooth , the solutions in stratonovich integrals , ( [ solutions_stratonovich ] ) and ( [ energy_stratonovich ] ) , can be converted into the following forms with ito integrals \,{\rm d}s \\ & + \bigints_0^t \dfrac{\sin \omega(t - s)}{m\omega } f(x(s ) , \dot x(s))\star { \rm d}b(s),\\ \dot x(t ) & = -\omega \sin(\omega t ) x_0 + \cos(\omega t ) \dot x_0 + \bigints_0^t \dfrac{\cos \omega ( t - s)}{m } \left[g(x(s),\dot x(s ) ) + \dfrac{1}{2 m } f(x(s ) , \dot x(s ) ) f_{\dot x } ( x(s ) , \dot x(s))\right ] \,{\rm d}s\\ & + \bigints_0^t \dfrac{\cos \omega ( t - s)}{m } f(x(s ) , \dot x(s ) ) \star { \rm d}b(s ) , \end{cases}\end{aligned}\ ] ] and \left[\frac{1}{2}m\dot x^2(t ) + \frac{1}{2}k x^2(t ) \right ] - \left[\frac{1}{2}m\dot x^2(0 ) + \frac{1}{2}k x^2(0 ) \right]\nonumber\\ & = \int_0^t \left[\left(g(x(s),\dot x(s ) ) + \frac{1}{2m}f(x(s ) , \dot x(s))f_{\dot x } ( x(s ) , \dot x(s ) ) \right)\dot x(s)+\frac{1}{2m}f^2(x(s ) , \dot x(s ) ) \right ] \,{\rm d}s\nonumber\\ & \quad + \int_0^t f(x(s ) , \dot x(s ) ) \dot x(s)\star { \rm d}b(s).\end{aligned}\ ] ] as shown in the appendix , the solution ( [ solutions_stratonovich_2 ] ) satisfies the energy - work relation , suggesting that when the randomness is modeled in the sense of stratonovich , the energy - work conservation law is satisfied . on the other hand , by a similar procedure as in the appendix , it can be shown that the energy - work law ( [ energyprincipal_ito ] ) contradicts the solution ( [ solutions_ito ] ) . therefore , the stratonovich integral instead of the ito integral should be used so that this nonlinear random oscillator model satisfies the energy conservation law . this implies that when gaussian noise is present in this nonlinear vibration system , the sde model should be interpreted in the sense of the stratonovich stochastic integral , not in the sense of the ito stochastic integral . when and , the stochastic process as expressed in ( [ impulsivemodel ] ) reduces to a compound poisson process . note that the jump size of at time can be expressed as , where is the left limit of at . suppose jumps at times ( ) ; then the solution ( [ section1_tmp1 ] ) can be written as where , as shown in ( [ section1_tmpp1 ] ) , represents the number of jumps up to time . in the following , we shall derive the stochastic integral with respect to jumps such that the energy conservation law is satisfied . first , let us examine the changes in the system at the -th jump , which occurred at time ( ) . from ( [ section22_tmp2 ] ) , the displacement is continuous while the velocity undergoes a jump given by the change in the total energy ( [ energyprincipal_1 ] ) due to the -th jump is that in the kinetic energy , given by , due to the continuity of the displacement across a jump . if the integrals with respect to jumps are defined in the sense of ito , then ( [ section22_tt2 ] ) and ( [ section22_tmp3 ] ) become and , respectively . since , it is obvious that ( [ 1-may152012 ] ) contradicts ( [ 2-may152012 ] ) , which indicates that the energy conservation law can not be satisfied when the integrals with respect to jumps are interpreted in the sense of ito .
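as a numerical aside on the gaussian case just discussed , a predictor - corrector ( stochastic heun ) step converges to the stratonovich solution , and the accumulated work should then match the energy increment . the sketch below is our own , with illustrative choices of g and f ( a linear damping and a parametric noise term ) , not the paper s example :

    import numpy as np

    def heun_energy_check(T=5.0, dt=1e-4, m=1.0, k=1.0, seed=0):
        # Stochastic Heun scheme for m x'' + k x = g + f * dB/dt; converges to
        # the stratonovich solution. Accumulate the work with midpoint
        # increments and compare with the change in E = m v^2/2 + k x^2/2.
        g = lambda x, v: -0.1 * v           # illustrative damping
        f = lambda x, v: 0.5 * v            # illustrative parametric noise
        rng = np.random.default_rng(seed)
        x, v, work = 1.0, 0.0, 0.0
        E0 = 0.5 * m * v**2 + 0.5 * k * x**2
        for _ in range(int(T / dt)):
            dB = rng.normal(0.0, np.sqrt(dt))
            # predictor: plain Euler-Maruyama step
            xp = x + v * dt
            vp = v + (-k * x + g(x, v)) / m * dt + f(x, v) / m * dB
            # corrector: average the coefficients at both endpoints (Heun)
            xn = x + 0.5 * (v + vp) * dt
            vn = v + 0.5 * ((-k * x + g(x, v)) + (-k * xp + g(xp, vp))) / m * dt \
                   + 0.5 * (f(x, v) + f(xp, vp)) / m * dB
            xm, vm = 0.5 * (x + xn), 0.5 * (v + vn)   # midpoint values
            work += g(xm, vm) * vm * dt + f(xm, vm) * vm * dB
            x, v = xn, vn
        E1 = 0.5 * m * v**2 + 0.5 * k * x**2
        return E1 - E0, work    # should agree up to discretisation error

with dt small , the two returned quantities agree , whereas keeping only the euler - maruyama predictor ( the ito interpretation ) leaves a systematic gap between energy increment and work , mirroring the argument above .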
in the following , we shall show that the integrals should be interpreted as a kind of riemann integral on the imaginary path along the jump to satisfy the energy conservation law . let be the value of at time if jumped from to . then and . with the integrals being interpreted as the riemann integral on the imaginary path along the jump , the energy - work law ( [ section22_tmp3 ] ) can be written as and the solution ( [ section22_tt2 ] ) becomes since the jump size can be any value , it follows from ( [ section22_tt1 ] ) and ( [ 3-may152012 ] ) that for any , it is true that and taking derivatives of both sides of ( [ section22_tmp4 ] ) and ( [ 4-may152012 ] ) with respect to , respectively , we get the identical ordinary differential equation ( ode ) therefore , the energy conservation law is satisfied . using the fact that and , it follows from ( [ section22_tmp7 ] ) that where is determined by the initial or terminal value problem of the ode . note that in ( [ section22_tt3 ] ) , for or for . comparing the original solution expression ( [ section22_tt2 ] ) with the new formula ( [ section22_tmp5 ] ) , it can be seen that the last term in ( [ section22_tt2 ] ) should be defined as , and hence ( [ section22_tmp2 ] ) should be interpreted as where is the solution to the ode ( [ section22_tt3 ] ) . as will become clear in sec . [ sec.rel ] , this implies that when pure jump noise , such as poisson white noise , is present in this nonlinear vibration system , the sde model should be interpreted in the sense of the di paola - falsone stochastic integral . when both and , the excitation is a combined gaussian and poisson white noise . combining the results in subsections [ sec.gwn ] and [ sec.pwn ] , we find that , in order to satisfy the energy - work conservation law , one has to interpret the stochastic integrals with respect to brownian motions as stratonovich integrals , and the integrals with respect to jumps as di paola - falsone integrals . therefore , the solution to is given by the expression , where the stochastic integral is defined as where =c \delta c(t_i)} ] , denote integrals in the stratonovich sense , is the number of jumps up to time , and is the solution to the following ode , with taking value of for or for . it can be seen clearly from fig . [ fig3 ] that the energy increment of the system agrees with the work very well , indicating that the energy conservation law is satisfied . * case 2 * in this case , ( [ example_e1 ] ) ( or ( [ no ] ) ) is interpreted by using ito stochastic integrals . now ( [ section3_tmp1 ] ) and ( [ section3_tmp2 ] ) become and , respectively . in the simulation , the driving process and all the simulation parameters are taken the same as in case 1 . figures [ fig5 ] and [ fig6 ] present the corresponding numerical solutions of the displacement and velocity , respectively . comparison of the energy increment , defined by ( [ may222012 - 1 ] ) , and the work done , now defined by , is presented in fig . [ fig7 ] . we can see clearly from fig . [ fig7 ] that there is a significant difference between the energy increment and the work done by the force . note that all the curves in fig . [ fig7 ] tend to have a very small variance in the time span . this is a consequence of the fact that the velocity is very small for , as shown in fig . [ fig6 ] . since , it follows from ( [ may222012 - 1 ] ) , ( [ may222012 - 4 ] ) and ( [ may222012 - 6 ] ) that both the energy increment and the work done change slowly for very small velocity .
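a sketch of the jump treatment follows . we assume here that the ode determining the post - jump velocity takes the di paola - falsone form dq/dl = f(x , q)/m along the imaginary path of the jump , which is consistent with the derivation above ; the rk4 discretisation and the helper name are ours :

    def jump_update(x, v_minus, dC, f, m=1.0, steps=100):
        # Propagate the velocity across a jump of size dC at fixed position x
        # by integrating dq/dl = f(x, q) / m for l in [0, dC], q(0) = v(t-),
        # with classical RK4 on the imaginary path of the jump.
        h = dC / steps
        q = v_minus
        for _ in range(steps):
            k1 = f(x, q) / m
            k2 = f(x, q + 0.5 * h * k1) / m
            k3 = f(x, q + 0.5 * h * k2) / m
            k4 = f(x, q + h * k3) / m
            q += (h / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
        return q    # v(t+), the post-jump velocity

note that in this sketch , when f does not depend on the velocity the update reduces to v(t+) = v(t-) + f(x) dC / m , which coincides with the ito update ; the correction therefore matters only for velocity - dependent ( parametric ) jump excitation .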
by comparing fig .[ fig7 ] with fig .[ fig3 ] , we can see that stratonovich integral and di paola - falsone integral should be used for excitations of gaussian and poisson white noises , respectively , in order for the model to satisfy the underlining physical laws .for cosmetic purpose , we introduce the following simplified notations : , , . moreover , in this appendix, all the stochastic integrals with respect to brownian motions are in sense of ito ( we have dropped notation ) . then the energy - work law ( [ energy_stratonovich_2 ] ) is equivalent to - \frac{1}{2 } \left[\ \dot x^2(0 ) + \omega^2 x^2(0 ) \right]\nonumber \\ & = \int_0^t \left[\left(g(s ) + \dfrac{1}{2 } f(s)f_{\dot x } ( s ) \right)\dot x(s)+\frac{1}{2 } f^2(s ) \right ] \,{\rm d}s + \int_0^t f(s ) \dot x(s)\ , { \rm d}b(s).\end{aligned}\ ] ] next , we show the solution given in ( [ solutions_stratonovich_2 ] ) satisfies the energy - work law ( [ 6-may152012 ] ) .denote the right and left hand sides of ( [ 6-may152012 ] ) as and , respectively .substitute ( [ solutions_stratonovich_2 ] ) into the left hand side of ( [ 6-may152012 ] ) , we get \,{\rm d}s \right)^2+\frac{1}{2 } \left ( \int_0^t \sin(\omega ( t - s ) )f(s ) \,{\rm d}b(s ) \right)^2\nonumber \\ & \quad + \frac{1}{2 } \left ( \int_0^t \cos \omega ( t - s)\left[g(s ) + \frac{1}{2}f(s ) f_{\dot x } ( s ) \right ] \,{\rm d}s \right)^2+\frac{1}{2}\left ( \int_0^t \cos(\omega ( t - s ) ) f(s ) \,{\rm d}b(s ) \right)^2 \nonumber\\ & \quad+ ( \omega \cos(\omega t ) x_0 + \sin(\omega t ) \dot x_0)\int_0^t \sin \omega ( t - s)\left[g(s ) + \frac{1}{2}f(s ) f_{\dot x } ( s ) \right ] \,{\rm d}s\nonumber\\ & \quad + ( \omega \cos(\omega t ) x_0 + \sin(\omega t ) \dotx_0)\int_0^t \sin(\omega ( t - s ) ) f(s)\,{\rm d}b(s)\nonumber\\ & \quad+ \left(\int_0^t \sin \omega ( t - s ) \left[g(s ) + \frac{1}{2}f(s ) f_{\dot x } ( s ) \right ] \,{\rm d}s \right ) \left ( \int_0^t \sin(\omega(t - s ) ) f(s ) \,{\rm d}b(s)\right ) \nonumber\\ & \quad+ ( -\omega \sin(\omega t ) x_0 + \cos(\omega t ) \dotx_0)\int_0^t \cos \omega ( t - s)\left[g(s ) + \frac{1}{2}f(s ) f_{\dot x } ( s ) \right ] \,{\rm d}s\nonumber\\ & \quad + ( -\omega \sin(\omega t ) x_0 + \cos(\omega t ) \dotx_0)\int_0^t \cos(\omega ( t - s ) ) f(s ) \,{\rm d}b(s)\nonumber\\ & \quad+ \left ( \int_0^t \cos(\omega ( t - s ) \left[g(s ) + \frac{1}{2}f(s ) f_{\dot x } ( s ) \right ] \,{\rm d}s\right ) \left(\int_0^t \cos(\omega(t - s ) ) f(s ) \,{\rm d}b(s)\right).\end{aligned}\ ] ] substituting ( [ solutions_stratonovich_2 ] ) into the right - hand side of ( [ 6-may152012 ] ) , we get \,{\rm d}p\,{\rm d}s\notag\\ & \quad+ \int_0^t\int_0^s \left[\cos(\omega(s - p ) ) \left(g(s)+ \frac{f(s)f_{\dot x}(s)}{2}\right ) f(p)\right]\,{\rm d}b(p ) \,{\rm d}s + \frac{1}{2}\int_0^t f^2(s ) \,{\rm d}s\nonumber\\ & \quad + \int_0^t f(s ) \left(-\omega \sin(\omegas ) x_0 + \cos(\omega s ) \dot x_0 \right)\,{\rm d}b(s)\nonumber\\ &\quad + \int_0^t \int_0^s \cos(\omega(s - p ) ) f(s)\left(g(p)+ \frac{f(p)f_{\dot x}(p)}{2}\right ) \,{\rm d}p \,{\rm d}b(s)\notag\\ & \quad + \int_0^t \int_0^s \cos(\omega ( s - p ) ) f(s)f(p ) \,{\rm d}b(p)\,{\rm d}b(s).\end{aligned}\ ] ] to prove in is equal to in , we claim the following facts ^ 2 \notag\\ & \quad + \frac{1}{2 } \left [ \int_0^t \cos(\omega ( t - s ) )\left(g(s)+ \frac{f(s)f_{\dot x}(s)}{2}\right)\,{\rm d}s \right]^2,\end{aligned}\ ] ] f(p ) \,{\rm d}b(p ) \,{\rmd}s \nonumber\\ & \quad + \int_0^t\int_0^s \left[\cos(\omega(s - p)\left(g(p)+ \frac{f(p)f_{\dot x}(p)}{2}\right ) \right ] 
f(s ) \,{\rm d}p\,{\rm d}b(s)\notag\\ & = \int_0^t \cos(\omega ( t - s ) ) \left(g(s)+ \frac{f(s)f_{\dot x}(s)}{2}\right)\,{\rm d}s \int_0^t \cos(\omega(t - s ) ) f(s ) \,{\rm d}b(s ) \notag\\ & \quad + \int_0^t \sin(\omega ( t - s ) ) \left(g(s)+ \frac{f(s)f_{\dot x}(s)}{2}\right)\,{\rm d}s \int_0^t \sin(\omega(t - s))f(s ) \,{\rm d}b(s)\,\end{aligned}\ ] ] and one can easily see that ( [ a1_6 ] ) and ( [ a1_7 ] ) are true by using the trignometric identities and in the following , we give the proofs for ( [ a1_3 ] ) and ( [ a1_4 ] ) . the proof of ( [ a1_5 ] ) is similar to those for ( [ a1_3 ] ) and ( [ a1_4 ] ) and is not given here . to prove ( [ a1_3 ] ) is true , we rewrite the right - hand side of ( [ a1_3 ] ) as double integrals similarly , to prove ( [ a1_4 ] ) , we rewrite the right - hand side ( [ a1_4 ] ) as the integral domain for the right - hand side of ( [ a1_8 ] ) is a square given by , p\in [ 0 , t]\}$ ] . decompose the square into three parts : , , and , then the right - hand side of ( [ a1_8 ] ) becomes note that and it follows from ( [ a1_9 ] ) and ( [ a1_10 ] ) that ( [ a1_4 ] ) is true .
nonlinear random vibration under excitations of both gaussian and poisson white noises is considered . the model is based on stochastic differential equations , and the corresponding stochastic integrals are defined in such a way that the energy conservation law is satisfied . it is shown that the stratonovich integral and the di paola - falsone integral should be used for excitations of gaussian and poisson white noises , respectively , in order for the model to satisfy the underlying physical laws ( e.g. , energy conservation ) . numerical examples are presented to illustrate the theoretical results . * keywords * : random vibration , nonlinear systems , poisson noise , gaussian noise , stochastic differential equations , stochastic integrals .
dyadic data are central in social science applications ranging from international relations to `` speed dating . '' a challenge with dyadic data is to account for the complex dependency structure that arises due to the connections between dyad members . for instance , in a study of international conflict , a change in leadership in one country may affect relations with all countries with which that country is paired in the data , thereby inducing a correlation between these dyadic observations . this generates a cluster of dependent observations associated with that country . as leadership changes occur in multiple countries , the correlations emanating from each of these countries will overlap into a web of interwoven clusters . we refer to such interwoven dependency in dyadic data as `` dyadic clustering . '' by ignoring the dyadic clustering , the analysis would take the dyad - level changes emanating from a single leadership change as independently informative events , rather than as a single , clustered event . an analysis that only accounts for correlations in repeated observations of dyads ( whether by clustering standard errors or using dyad fixed effects ) would fail to account for such inter - dyad correlation . statistical analysis of dyads typically estimates how dyad - level outcomes ( e.g. , whether there is open conflict between countries or the decision for one person to ask another on a date ) relate to characteristics of the individual units as well as to the dyad as a whole ( e.g. , measures of proximity between units ) . the usual approach is to regress the dyad - level outcome on unit- and dyad - level predictors . due to dyadic clustering , the observations contributing to such an analysis are not independent . failure to account for dyadic clustering may result in significance tests with incorrect size or confidence intervals that are far too narrow relative to the true sampling distribution of the parameters of interest . we establish sufficient conditions for the consistency of a non - parametric sandwich estimator for the variance of regression coefficients under dyadic clustering . cluster - robust sandwich estimators are common for addressing dependent data . cameron , gelbach and miller provide a sandwich estimator for `` multi - way '' clustering , accounting , for example , for clustering between people by geographic location and age category . we extend these methods to dyadic clustering , accounting for the fact that dyadic clustering does not decompose neatly into a few crosscutting and disjoint groups of units ; rather , each _ unit _ is the basis of its own cluster that intersects with other units ' clusters . fafchamps and gubert ( sec . 2.5 ) propose a sandwich estimator for dyadic clustering that is very similar to what we propose below . their derivation is constructed through analogy to the multi - way results . however , neither paper establishes conditions for consistency under dyadic clustering . we establish such consistency conditions . we also provide extensions to the longitudinal case and generalized linear models such as logistic regression .
we evaluate performance with simulations and a reanalysis of a classic study on interstate disputes . the appendix generalizes to weighted data and generalized linear models , and the supporting information provides another illustration with a speed dating experiment . current statistical approaches to handling dyadic clustering include the use of parametric restrictions in mixed - effects models , the spatial error - lag model , or permutation inference for testing against sharp null hypotheses . these approaches have important limitations that our approach overcomes . first , mixed effects models and spatial lag models impose a parametric structure to address the clustering problem . this makes them sensitive to misspecification of the conditional mean ( that is , deviations between the data and the linearity assumption ) . our approach is robust in that it provides asymptotically valid standard errors even under such misspecification . this is valuable in itself to the extent that models are typically approximations , although readers should not take this to mean that they are free from the obligation to fit as good an approximation as possible . it is also valuable in providing a reliable benchmark to use in evaluating model specification . second , the existing non - parametric randomization inference solution does not provide a procedure for obtaining valid confidence intervals , while our procedure does . third , all three of these alternatives require considerable computational power , possibly exceeding available computational resources even for data analyses that are common in international relations or network analysis . we demonstrate below that a mixed - effects or spatial error - lag approach is infeasible in a typical international relations example . in contrast , our proposed estimator is easy to compute . the variance estimation methods we develop are a natural complement to non- and semi - parametric approaches to regression with dyadic data . we work within the `` agnostic '' or `` assumption lean '' regression framework developed by , , and . we begin with a cross - section of undirected dyads and derive the basic convergence results for this case . below we extend these results to repeated dyads , which covers directed dyads and longitudinal data . proofs are in the appendix . begin with a large population from which we take a random ( i.i.d . ) sample of units , with the sample members indexed by and grouped into dyads . pairs of unit indices within each dyad map to dyadic indices , , through the index function , with an inverse correspondence , and we assume no type dyads . consider a linear regression of on a -length column vector of regressors ( which includes the constant ) : where is the slope that we would obtain if we could fit this model to the entire population , allowing for possible non - linearity in the true relationship . ( when the conditional mean is nonlinear in the regressors , the population residual will itself vary with them , which undermines inference that assumes the residual is independent of the regressors ; the agnostic regression approach avoids such an assumption . ) to lighten notation , for the remainder of the discussion we use . define the sample data as and . the ordinary least squares ( ols ) estimator is with residual . since the values of and are determined by the characteristics of the units and , and for dyads containing either unit or are allowed to be correlated by construction . however , by random sampling of units , for all dyads that do not share a member .
the number of pairwise dependencies between and for which is .let the support for and be bounded in a finite interval and assume the usual rank conditions on .[ lemma - var ] suppose data and as defined above . as , the asymptotic distribution of has mean zero and variance ^{-1}\var\left[\sum_{d=1}^d x_d\epsilon_d\right]\e[x_dx_d']^{-1 } , \label{varb}\ ] ] with , = \sum_{d=1}^d \left ( \underbrace{\e\{x_{d}x'_d \var[\epsilon_d|{\mathbf{x}}]\}}_{a } + \underbrace{\sum_{d ' \in \mathcal{s}(d ) } \e\{x_{d}x'_{d ' } \cov[\epsilon_d,\epsilon_{d'}|{\mathbf{x}}]\}}_{b } \right ) , \label{varxe}\ ] ] where , the set of dyads other than that share a member from . is the dyad - specific contribution to the variance , and is the contribution due to inclusion of common units in multiple dyads .note that these features of the distribution of establish the consistency of as well , which is not surprising given standard results for the consistency of ols coefficients on dependent data .given data as defined above , we examine the properties of a plug - in variance estimator that is analogous to the sandwich estimators defined for heteroskedastic or clustered data .we establish sufficient conditions for consistency of the plug - in variance estimator .we consider the cross - sectional case and repeated observations case .the appendix also contains extensions to weighted observations and generalized linear models .define the variance estimator under the conditions of lemma [ lemma - var ] and if and have bounded support with non - zero second moments and cross - moments , then as , [ prop1 ] the proposition indicates that provides a consistent estimator to characterize the true asymptotic sampling distribution of the regression coefficients , .standard error estimates for are obtained from the square roots of the diagonals of .the assumption of bounded support for and merely rules out situations that are unlikely to arise in real world data anyway ( e.g. , where the mass of the data pushes out toward infinity as grows ) .repeated dyad observations are common in applied settings . for example , the data may include multiple observations for dyads over time .dyadic panels are very common in studies of international relations . or ,if the dyadic information is directional , then the data will contain two observations for each dyad , with one observation capturing outcomes that go in the to direction , and the other capturing outcomes that go in the to direction .this is conceptually distinct from repeated observations over time .but if there are dyadic dependences for both senders and receivers of a directed dyad , then the dependence structure for a pair of directed - dyads will be the same as if we had repeated observations of an undirected - dyad .the results above translate straightforwardly to the repeated dyads setting . 
formally, suppose that for each dyad observations are indexed by , where the values are fixed and finite .let represent the data for observation from dyad , and and , that is , the stacked vectors and matrices .let and denote , respectively , the population slope and ols estimator as defined above but now applied to the repeated dyads data .for the repeated dyads case , assume the same conditions as in proposition [ prop1 ] and consider the following variance estimator then as , - v_r \overset{p}{\rightarrow } 0,\ ] ] and where ^ 2 } \e[x_dx_d']^{-1 } \e\left ( \e\left[x_d'\cov[\epsilon_d|{\mathbf{x}}]x_d\right ] + \sum_{d ' \in \mathcal{s}(d)}\e\left [ x_d'\cov[\epsilon_d , \epsilon_{d'}|{\mathbf{x}}]x_{d'}\right]\right ) \e[x_dx_d']^{-1}.\ ] ] some remarks are in order when it comes to the repeated observations case .first , our analysis of the repeated observations case takes the values to be fixed and finite and demonstrates consistency as grows . in a cross - national study, this would mean that one should pay most attention to the number of countries , rather than the amount of time , that are in the data . under fixed and growing further assumptions would be required for consistency , such as serial correlation of fixed order or , more generally , strong mixing over time .second , with repeated observations , a common strategy for identifying effects is to use dyad - specific fixed effects . with fixed and finite , this does not introduce any new complications .as in for the case of serial correlation with fixed effects , the results for dyadic clustering translate directly to centered data , and dyadic fixed - effects regression amounts to centering the data on dyad - specific means . with fixed and growing ,however , additional assumptions are needed for consistency . for efficient computation ,we follow to perform a `` multi - way decomposition '' of the dyadic clustering structure : we have the following algebraic equivalence , where and [ prop - decomp ] is the usual asymptotically consistent cluster - robust variance estimator ( with no degress - of - freedom adjustment ) that clusters all dyads containing unit and assumes all other observations to be independent . is the same cluster robust estimator but clustering all repeated dyad observations . is the usual asymptotically consistent heteroskedasticity robust ( hc ) variance estimator that assumes all observations ( even within repeated dyad groupings ) are independent . 
to understand this decomposition , note that dyadic clustering involves clustering on each of units . but in summing the contributions from unit - specific clusters ( the ) , we double count the dyad contributions ( ) and add in the independent contributions ( ) times . we can correct for this by subtracting , which also removes the component , and then subtracting of the components . ( the cross - section case simply sets . ) this decomposition shows that one can compute the dyadic cluster robust estimator using readily available robust standard error commands . our dyadic cluster robust variance estimator allows one to perform valid inference under less restrictive dependence assumptions than those used to identify random effects or spatial error - lag models . our multi - way decomposition also shows that it is computationally much simpler than those alternatives . the latter point is relevant when one has many units or time periods , in which case random effects and spatial error - lag models , despite their restrictions , may be infeasible with current computational resources given the challenges of evaluating a likelihood with as many dependencies as emerge in a dyadic analysis . we use monte carlo simulations to evaluate the finite sample properties of the proposed estimators under the cross - sectional and repeated dyads settings . we suppose that population values obey the following , where is the observed outcome for the dyad that includes units and , , , , , and are independent draws from standard normal distributions , and the compound error , , is unobserved . in the cross - sectional case , we only observe one outcome per dyad , so for all observations ( that is , the subscript is extraneous for the cross - sectional case ) . in the repeated observations case , we have two observations per dyad , so for all dyads . we fix and . we use ols to estimate and . in the supplemental information we show results for mixed effects models as well . the fact that and are constant across dyads that include unit ( same for ) implies non - zero intra - class correlation in both and among sets of dependent dyads , in which case ignoring the dependence structure will tend to understate the variability in and . this is the dyadic version of moulton s problem . results from 500 simulation runs are shown in figure [ fig : sim1 ] . the -axis shows the sample size , where we show results for samples with 20 , 50 , 100 , and 150 units ( implying 190 , 1,225 , 4,950 , and 11,175 dyads , respectively ) . the -axis is on the scale of the standard error of the coefficients . the black diamonds plot the empirical standard errors for each of the sample sizes ( that is , the standard deviation of the simulation replicates of the coefficient estimates , and ) . the black box plots show the distribution of our proposed standard error estimator . the gray box plots show , in the top figures , the distribution of the `` hc2 '' heteroskedasticity robust variance estimator , which does not account for either dyadic or repeat observation clustering , and , for the bottom figures , the standard `` cluster robust '' estimator accounting for dependence in repeated dyad observations . [ figure fig : sim1 : standard error estimates as the number of units goes from 20 to 150 ( implying 190 to 11,175 dyads ) for a cross section of dyads ( top ) and repeated dyad observations with t = 2 ( bottom ) . black box plots : the proposed dyadic cluster robust estimator ; gray box plots : a heteroskedasticity robust estimator ( top ) and a `` naive '' cluster - robust estimator that clusters by dyad over repeated observations but does not cluster by units across dyads ( bottom ) . ]
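for concreteness , the following python sketch implements one data generating process consistent with the description above and feeds it to the dyadic_robust_vcov helper sketched earlier ; the variance components and parameter values are our own illustrative choices , not the paper s exact design :

    import numpy as np

    def simulate_dyads(n=50, b0=1.0, b1=1.0, rng=None):
        # Unit-level draws enter both the regressor and the error of every
        # dyad containing the unit, inducing dyadic clustering in x and in
        # the compound error (all components standard normal, as in the text).
        rng = np.random.default_rng() if rng is None else rng
        i, j = np.triu_indices(n, k=1)          # the n(n-1)/2 undirected dyads
        w, a = rng.normal(size=n), rng.normal(size=n)
        x = w[i] + w[j] + rng.normal(size=i.size)
        eps = a[i] + a[j] + rng.normal(size=i.size)
        y = b0 + b1 * x + eps
        return y, np.column_stack([np.ones(i.size), x]), i, j

    y, X, u1, u2 = simulate_dyads()
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    V = dyadic_robust_vcov(X, y - X @ beta, u1, u2)
    print(np.sqrt(np.diag(V)))                  # dyadic cluster robust s.e.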
convergence of our proposed standard errors based on and ( black box plots ) to the true standard error ( black diamonds ) is quick . the alternative estimators , which do not account for dyadic clustering , grossly understate the variability in the coefficient estimates . these results suggest that , in finite samples from data generating processes that resemble our simulations , the proposed estimator is quite reliable for inference on ols coefficients so long as the sample size is on the order of 50 to 100 units . most applications in political science operate with samples of at least this size . changes to the shape of the error distributions ( e.g. , uniform or bimodal ) yield the same results . in the supporting information , we demonstrate comparisons between mixed effects models and our `` robust '' estimator . we sought also to include a comparison to spatial error - lag models , but the models failed to converge with . ( we note that in the original paper , the authors limited their analysis of dyadic data to small cross sections ; our experience suggests that it may be infeasible to fit such models with substantially larger datasets . ) specifically , we compare ols with dyadic cluster robust standard errors to a two - way random effects model that incorporates normal random effects for each of the units . the data generating process captured above ( that is , expression [ eq : dgp ] with normal data ) corresponds precisely to the assumptions of a two - way normal random effects model , and so it is no surprise that we find this model to be more efficient than ols and to produce consistent standard errors . however , as discussed above , such models are sensitive to misspecification . for example , in fitting a linear approximation model to mildly non - linear data , the dyadic cluster robust standard errors continue to be consistent for the coefficients of the linear approximation . the degree of misspecification in this case is not extreme ( as illustrated in the appendix ) and resembles the kind of approximation that is common in applied work . nonetheless , the random effects standard errors are inconsistent and substantially anti - conservative .
such sensitivity to misspecification is one problem for the random effects model . another problem arises when there is unobserved country - level heterogeneity that could confound estimates of the coefficients of interest . in such situations , our dyadic cluster robust estimator would be a natural complement to a fixed effects analysis , analogous to what has been proposed for non - dyadic repeated observation settings . the third problem is computational feasibility , an issue that arises in the application to which we now turn . our application is based on the classic study by russett and oneal on the determinants of international militarized conflicts . they study how the likelihood of a militarized dispute between states in a dyad relates to various dyad - level attributes , including an indicator of whether the two states are formal allies , the log of the ratio of an index of military capabilities , the lowest score in the dyad on a democracy index , the ratio of the value of trade and the larger gdp of the two states , an indicator of whether both states are members of a common international organization , an indicator of whether the states are geographically noncontiguous , the log of the geographic ( euclidean ) distance between the two states ' capitals , and an indicator of whether both states are `` minor powers '' in the international system . dyadic clustering could arise in many ways with these data , for example if a country entered into an alliance , thereby changing the joint alliance indicators , or if the military capabilities of a country changed , thereby changing the power ratios . we replicate russett and oneal s primary analysis as reported in their table a5.1 . they use annual data on 146 states in the international system paired into 1,178 dyads ( out of 10,585 possible ) and observed for as few as one and as many as ninety years between 1885 and 1991 , for a total of 39,988 observations . in their original analysis , russett and oneal fit a gee model assuming ar(1 ) errors within dyads over time . table [ tab : russett ] shows the results of our reanalysis . columns ( 1 ) and ( 2 ) replicate the original published results . columns ( 3 ) - ( 6 ) show coefficients and various standard error estimates for a simple ( pooled ) logistic regression . there is little difference in the coefficient estimates from the gee - ar(1 ) model as compared to the simple logistic regression , so we focus on the standard error estimates . column ( 4 ) contains estimates that account only for dyad - year heteroskedasticity , column ( 5 ) also accounts for arbitrary dependence over time for each dyad , and then column ( 6 ) also accounts for dyadic clustering . accounting for the dyadic and repeated - observation clustering results in standard error estimates that are sometimes an order of magnitude larger than what we obtain when we ignore all clustering , and also considerably larger than what one would estimate were one to account for repeated dyads clustering but ignore the inter - dyadic clustering . the latter result is also relevant when comparing the standard errors from the original gee - ar(1 ) model , as they resemble the estimates in column ( 5 ) . in their original analysis , russett and oneal found that all of the predictors had a statistically significant relationship to the likelihood of militarized conflict . but when one takes into account dyadic clustering , the coefficient for international organizations would fail to pass a conventional significance test .
& naive cluster & dyadic cluster + predictor of militarized conflict & coef . & s.e . & coef . & robust s.e . &robust s.e . & robust s.e .+ alliances & -0.539 & 0.159 & -0.595 & 0.069 & 0.175 & 0.265 + power ratio & -0.318 & 0.043 & -0.328 & 0.020 & 0.047 & 0.070 + lower democracy & -0.061 & 0.009 & -0.064 & 0.005 & 0.010 & 0.015 + lower dependence & -52.924 & 13.405 & -67.668 & 10.734 & 17.560 & 24.749 + international organizations & -0.013 & 0.004 & -0.011 & 0.002 & 0.005 & 0.008 + noncontiguity & -0.989 & 0.168 & -1.048 & 0.074 & 0.181 & 0.185 + log distance & -0.376 & 0.065 & -0.375 & 0.026 & 0.068 & 0.102 + only minor powers & -0.647 & 0.178 & -0.618 & 0.078 & 0.188 & 0.344 + constant & -0.128 & 0.536 & -0.181 & 0.211 & 0.562 & 0.840 + + [ tab : russett ] we attempted also to study how inferences might change with a random effects approach and a spatial error lag approach .the appropriate random effects model would be a three - way random effects model to account for each state in a dyad as well as the dyad as a whole over time .we attempted to fit three - way random effects logit models using the lmer package in r and the melogit function in stata .we posted the jobs to a university cluster with 24 intel xeon cores and 48 gigabytes of ram per node . in neither casedid the random effects models converge .in fact , in neither case could we obtain estimates of any form after days on the cluster .for the spatial error lag model , we sought to implement both the spatial error lag model in the spdep package in r and the two - step estimator of in stata .the latter take as an input dyadic dependence information that can be produced using the spundir package in stata .the spatial error lag model in the spdep failed to produce estimates and spundir package failed to resolve within reasonable time ( a week ) while running on the same cluster .. the computational complexity is due to the fact that the model attempts to account for dependence across all countries in the dataset ( rather than small subsets , as would be the case in a typical spatial analysis ) and the fact that in this application the set of cross - section units changes over time and therefore does not allow for a stable adjacency matrix over time . ]this brings to the foreground the issue of computational complexity , by which our cluster robust estimator is much less demanding .we have established convergence properties for a non - parametric variance estimator for regression coefficients that accounts for dyadic clustering .the estimator applies no restrictions on the dependency structure beyond the dyadic clustering assumption .the estimator is robust to the regression model being misspecified for the conditional mean .such robustness is important because regression analysis typically relies on linear ( in coefficients ) approximations of unknown conditional mean functions .of course our analysis in no way excuses analysts from their responsibility to obtain as good an approximation as possible nor does a robust fix for standard errors also solve the problem of finding a good approximation . 
given a reasonable approximation for the conditional mean , the methods we have developed here allow for more accurate asymptotic statistical significance tests and confidence intervals .the estimator is consistent in the number of units that form the basis of dyadic pairs .simulations show that the estimator approaches the true standard error with modestly - sized samples from a reasonable data generating process .applications show that inferences can be seriously anti - conservative if one fails to account for the dyadic clustering .this estimator is a natural complement to the non - parametric and semi - parametric regression analyses that are increasingly common in the social sciences .given that we can express the estimator as the sum of simpler and easy - to - compute robust variance estimators , it could be applied to any estimator for which a cluster - robust option is available .a cost of the robust approach is efficiency .our simulations show that a two - way ( for cross - sectional data ) random effects model can be considerably more efficient and provide reliable inference when the conditional mean is correctly specified .our robust estimator could provide the basis of a test for misspecification .if there is little evidence of misspecification , the random effects estimator would be a reasonable choice given its efficiency .however , problems of computational non - convergence may make the random effects estimator infeasible we encountered this in our application when we tried to use a three - way random effects model for dyadic panel data .of course , accounting for dynamic dyadic clustering may fail to fully account for all relevant dependencies in the data .for example , units may exhibit higher - order network effects : for example a shock to unit may affect unit through connections that run via a third unit , . in such cases , the methods developed here will likely yield standard errors that are too small , although they should still outperform methods that fail even to account for dyadic clustering .we have ^{-1 } \frac{\sqrt{n}}{d } \sum_{d=1}^d x_d \epsilon_d.\ ] ] take ^{-1} ] , which has mean zero and variance , ^{-1}\var\left[\sum_{d=1}^d x_d\epsilon_d\right]\e[x_dx_d']^{-1}. \label{varb}\ ] ] then , & = \sum_{d=1}^d \sum_{d'=1}^d \cov[x_d\epsilon_d , x_{d'}\epsilon_{d'}]\nonumber \\ & = \sum_{d=1}^d\sum_{d'=1}^d \e\{x_{d}x'_{d ' } \cov[\epsilon_d,\epsilon_{d'}|{\mathbf{x}}]\}\nonumber \\ & = \sum_{d=1}^d \left ( \underbrace{\e\{x_{d}x'_d \var[\epsilon_d|{\mathbf{x}}]\}}_{a } + \underbrace{\sum_{d ' \in \mathcal{s}(d ) } \e\{x_{d}x'_{d ' } \cov[\epsilon_d,\epsilon_{d'}|{\mathbf{x}}]\}}_{b } \right ) \label{varxe}\end{aligned}\ ] ] where , the set of dyads other than that share a member from .by chebychev s inequality , two conditions are sufficient for as : ( i ) \rightarrow 0 ] ( * ? ? ?. for ( i ) , consider the interior ( `` meat '' ) of : and the corresponding term for : \ } + \sum_{d ' \in \mathcal{s}(d ) } \e\{x_{d}x'_{d ' } \cov[\epsilon_d,\epsilon_{d'}|{\mathbf{x}}]\ } \right).\ ] ] we have = \sum_{d=1}^d \left ( \e\{x_{d}x'_d \e [ e_d^2|{\mathbf{x}}]\ } + \sum_{d ' \in \mathcal{s}(d ) } \e \{x_{d}x'_{d ' } \e[e_d e_{d'}|{\mathbf{x}}]\}\right ) \rightarrow \sigma\ ] ] by consistency of , an implication of lemma [ lemma - var ] .lemma [ lemma - var ] also established that ^{-1} ] . taking these elements together, uniform continuity implies \rightarrow 0 ] , so we can treat this as . given bounded support , ] , it is sufficient to establish that is at most . 
consider quadruples of dyads indexed by such that . suppose there are quadruples and define . the covariance terms take positive values when , which occurs at rate ( that is , when we have pairs of terms that are actually summed into ) , and we have dependence across and . then , the six distinct ways that such dependence can occur , and the associated proportion of quadruples for which these occur for a sample of size , are as follows : therefore the proportion of quadruples yielding a positive covariance is the proportion of cases contributing non - zero values to the sum . this proportion is bounded by the proportion of cases where the indicator functions are both one and the order of the proportion of dependent quadruples from across the six cases characterized above , that is . then \] is at most . consider the interior ( `` meat '' ) of : taking the second component , define then working with , , and as defined in the proposition , then \nonumber \\ & = \sum_{i=1}^n \hat \sigma_{c , i } - \hat \sigma_d - ( n-2 ) \hat \sigma_0 , \nonumber \end{aligned}\ ] ] and by linear distributivity for with , , and as defined in the proposition . weighting is a common way to adjust for unequal probability sampling of dyadic interactions , among other applications . the extension to the weighted case is straightforward . assume weighted directed dyad observations with weights finite and fixed ; denote the weight for dyad as . define the sample data as and and the matrix of weights as . then the weighted least squares estimator is for the weighted dyads case , assume the same conditions as in proposition [ prop1 ] and consider the following variance estimator then as , - v_w \overset{p}{\rightarrow } 0,\ ] ] and where ^{-1 } \e\left ( \e\left[x_dx'_d w_{d}^{2}\var[\epsilon_d|{\mathbf{x}}]\right ] + \sum_{d ' \in \mathcal{s}(d)}\e\left[x_dx'_{d'}w_{d}w_{d'}\cov[\epsilon_d , \epsilon_{d'}|{\mathbf{x}}]\right]\right ) \e[w_{d}x_dx_d']^{-1}.\ ] ] an implementation for generalized linear models follows the usual m - estimation results ( ; , ch . 12 ) . given an estimating equation , , on data , and with parameters and parameter estimates , the sandwich approximation for the variance is , where \text{\hspace{1em } and \hspace{1em}}{\mathbf{b}}= \e \psi(d;\hat \theta)\psi(d;\hat \theta)'.\ ] ] for logistic regression the estimating equation is , where , the predicted probability for dyad . the plug - in variance estimator for logistic regression coefficients is given by where the extensions to repeated and weighted observations follow analogously .
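as a minimal cross - sectional sketch of the logistic case ( our own helper , not the authors ' code ) , the scores ( y - p ) x replace the ols score rows , the bread is the inverse information , and the dyadic `` meat '' is built exactly as in the ols sketch above :

    import numpy as np

    def dyadic_robust_vcov_logit(X, y, beta, unit1, unit2):
        # Sandwich variance for logistic regression under dyadic clustering.
        # Scores psi_d = (y_d - p_d) x_d; bread is the inverse of the
        # information A = sum_d p_d (1 - p_d) x_d x_d'.
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        psi = X * (y - p)[:, None]
        A = X.T @ (X * (p * (1.0 - p))[:, None])
        meat = np.zeros((X.shape[1],) * 2)
        for u in np.unique(np.concatenate([unit1, unit2])):
            g = psi[(unit1 == u) | (unit2 == u)].sum(axis=0)
            meat += np.outer(g, g)
        meat -= psi.T @ psi   # cross-section: subtract per-dyad outer products
        A_inv = np.linalg.inv(A)
        return A_inv @ meat @ A_inv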
angrist , joshua d. , and guido w. imbens . 2002 . `` comment on ` covariance adjustment in randomized experiments and observational studies ' by paul r. rosenbaum . '' _ statistical science _ 17(3):304 - 307 .
angrist , joshua d. , and jorn - steffen pischke . 2009 . _ mostly harmless econometrics : an empiricist s companion _ . princeton , nj : princeton university press .
arellano , manuel . 1987 . `` computing robust standard errors for within - group estimators . '' _ oxford bulletin of economics and statistics _ 49(4):431 - 434 .
beck , nathaniel , kristian skrede gleditsch , and kyle beardsley . 2006 . `` space is more than geography : using spatial econometrics in the study of political economy . '' _ international studies quarterly _ 50:27 - 44 .
buja , andreas , richard berk , lawrence brown , edward george , emil pitkin , mikhail traskin , linda zhao , and kai zhang . `` models as approximations : a conspiracy of random predictors and model violations against classical inference in regression . '' manuscript , the wharton school , university of pennsylvania , philadelphia .
cameron , a. colin , jonah b. gelbach , and douglas l. miller . 2011 . `` robust inference with multi - way clustering . '' _ journal of business and economic statistics _ 29(2):238 - 249 .
chamberlain , gary . 1982 . `` multivariate regression models for panel data . '' _ journal of econometrics _ 18(1):5 - 46 .
conley , timothy g. 1999 . `` gmm estimation with cross sectional dependence . '' _ journal of econometrics _ 92:1 - 45 .
davidson , russell , and james g. mackinnon . 2004 . _ econometric theory and methods _ . oxford : oxford university press .
erikson , r. s. , p. m. pinto , and k. t. rader . 2014 . `` dyadic analysis in international relations : a cautionary tale . '' _ political analysis _ 22(4):457 - 463 .
fafchamps , marcel , and flore gubert . 2007 . `` the formation of risk sharing networks . '' _ journal of development economics _ 83:326 - 350 .
fisman , raymond , sheena s. iyengar , emir kamenica , and itamar simonson . 2006 . `` gender differences in mate selection : evidence from a speed dating experiment . '' _ quarterly journal of economics _ 121:673 - 697 .
gelman , andrew , and jennifer hill . 2007 . _ data analysis using regression and multilevel / hierarchical models _ . cambridge : cambridge university press .
goldberger , arthur s. 1991 . _ a course in econometrics _ . cambridge , ma : harvard university press .
green , donald p. , soo yeon kim , and david h. yoon . 2001 . `` dirty pool . '' _ international organization _ 55(2):441 - 468 .
greene , william h. 2008 . _ econometric analysis _ . upper saddle river , nj : pearson .
hansen , christian b. 2007 . `` asymptotic properties of a robust variance matrix estimator for panel data when _ t _ is large . '' _ journal of econometrics _ 141:597 - 620 .
hoff , peter d. 2005 . `` bilinear mixed - effects models for dyadic data . '' _ journal of the american statistical association _ 100(469):286 - 295 .
hubbard , alan e. , jennifer ahern , nancy l. fliescher , mark van der laan , sheri a. lippman , nicholas jewell , tim bruckner , and william a. satariano . 2010 . `` to gee or not to gee : comparing population average and mixed models for estimating the associations between neighborhood risk factors and health . '' _ epidemiology _ 21(4):467 - 474 .
huber , peter j. 1967 . `` the behavior of maximum likelihood estimates under nonstandard conditions . '' in _ proceedings of the fifth berkeley symposium on mathematical statistics and probability _ , vol . 1 , pp . 221 - 233 .
kenny , david a. , deborah a. kashy , and william l. cook . 2006 . _ dyadic data analysis _ . new york , ny : guilford press .
king , gary , and margaret e. roberts . `` how robust standard errors expose methodological problems they do not fix , and what to do about it . '' _ political analysis _ ( online early view ) : 1 - 12 .
lehmann , erich l. 1999 . _ elements of large - sample theory _ . new york : springer - verlag .
liang , kung - yee , and scott l. zeger . 1986 . `` longitudinal data analysis using generalized linear models . '' _ biometrika _ 73(1):13 - 22 .
lin , winston . 2013 . `` agnostic notes on regression adjustments to experimental data : reexamining freedman s critique . '' _ annals of applied statistics _ 7(1):295 - 318 .
mackinnon , james g. , and halbert white . 1985 . `` some heteroskedasticity - consistent covariance matrix estimators with improved finite sample properties . '' _ journal of econometrics _ 29(3):305 - 325 .
moulton , brent r. 1986 . `` random group effects and the precision of regression estimates . '' _ journal of econometrics _ 32:385 - 397 .
neumayer , eric , and thomas pluemper . 2010 . `` spatial effects in dyadic data . '' _ international organization _ 64(1):145 - 165 .
russett , bruce m. , and john r. oneal . 2001 . _ triangulating peace : democracy , interdependence , and international organizations _ . new york , ny : norton .
stefanski , leonard a. , and dennis d. boos . 2002 . `` the calculus of m - estimation . '' _ the american statistician _ 56(1):29 - 38 .
stock , james h. , and mark w. watson . 2008 . `` heteroskedasticity - robust standard errors for fixed effects panel data regression . '' _ econometrica _ 76(1):155 - 174 .
'' _ econometrica _ 76(1):155174 .white , halbert .1980_a_. `` a heteroskedasticity - consistent covariance matrix estimator and a direct test for heteroskedasticity . '' _ econometrica _ 48(4):817838 .white , halbert .1980_b_. `` using least squares to approximate unknown regression functions . '' _ international economic review _21(1):149170 .white , halbert .consequences and detection of misspecified nonlinear regression models . ''_ journal of the american statistical association _ 76(374):419433 .white , halbert .`` maximum likelihood estimation of misspecified models . '' _ econometrica _ 50(1 - 25 ) .white , halbert . 1984 . .new york , ny : academic press .wooldridge , jeffrey m. 2010 . .cambridge , ma : mit press .zorn , christopher .`` generalized estimating equation models for correlated data : a review with applications . ''_ american journal of political science _ 45:470490. cluster robust variance estimation for dyadic data + supporting information not for publicationfigure [ fig : norm ] and [ fig : miss ] show additional simulation results where we make comparisons to mixed effects models specifically , to a two - way random effects model that incorporates random effects for each of units in the data .all random effects models were fit using the lmer package in r. in both sets of graphs , black box plots show the distribution of standard error estimates for ols coefficients using the proposed dyadic cluster robust estimator .gray box plots show standard error estimates for ols coefficients from a heteroskedasticity robust estimator .blue box plots show the distribution of standard error estimators from a two - way random effects model with a random effect for each of units .black diamonds show the true standard error of the ols coefficients and the blue diamonds show the true standard error of the random effects coefficients .figure [ fig : norm ] uses the same data generating process as was described in the main text , and presents the same results for ols with either the dyadic cluster robust standard error estimator or the heteroskedastic robust estimator . in addition , we have added random effects estimates . given that the data generating process conforms exactly to the assumptions of a two - way random effects model , we obtain the straightforward result that the random effects estimates are both more efficient ( the blue diamonds tend to be lower than the black ones ) and the random effects standard errors are consistent . figure [ fig : miss ] illustrates the robustness of the dyadic cluster robust estimator relative to a random effects estimator .in this case , we induce mild misspecification by assuming that the true data generating process is where all parameters are set as in the main text but we also have .we nonetheless fit a model that includes only , thereby mildly misspecifying the model such that we do not account for the nonlinearity induced by the term .figure [ fig : miss - scatter ] shows the scatter plot of against , the linear approximation ( in red ) for one of the simulation replicates ( ) , and the true conditional mean ( in blue ) .such linear approximations are the norm in social science research . 
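to make the misspecification experiment above concrete , here is a minimal , self - contained python sketch of a dyadic data - generating process of this kind . the exact functional form and parameter values are not given explicitly above , so the quadratic coefficient , the remaining parameters , and the sample size below are illustrative assumptions ; what the sketch demonstrates is the dependence structure the estimator targets : products of ols residuals are systematically positive for dyads sharing a member and near zero for disjoint dyads .

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_dyads(n, gamma=0.5):
    """Undirected dyads among n units with unit-level random effects a_i.

    Assumed illustrative DGP:  y_d = 1 + 2*x_d + gamma*x_d**2 + a_i + a_j + e_d
    """
    dyads = np.array([(i, j) for i in range(n) for j in range(i + 1, n)])
    a = rng.normal(size=n)                       # unit random effects
    x = rng.normal(size=len(dyads))
    e = rng.normal(size=len(dyads))
    y = 1.0 + 2.0 * x + gamma * x ** 2 + a[dyads[:, 0]] + a[dyads[:, 1]] + e
    return x, y, dyads

x, y, dyads = simulate_dyads(30)
X = np.column_stack([np.ones_like(x), x])        # misspecified: linear in x only
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta

# Residual products for dyad pairs that share a member vs. disjoint pairs.
share, disjoint = [], []
for d in range(len(dyads)):
    for dp in range(d + 1, len(dyads)):
        (share if set(dyads[d]) & set(dyads[dp]) else disjoint).append(
            resid[d] * resid[dp])

print(np.mean(share), np.mean(disjoint))         # the first is clearly positive
```

the heteroskedasticity - robust estimator ignores the first of these two averages ; that is precisely the term the dyadic cluster - robust estimator ( sketched after the application below ) adds back .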
in this case , we see that , despite the mild misspecification , the dyadic cluster robust standard error correctly characterizes the standard error of the linear approximation coefficients ( the black box plots still zero in on the black diamonds ) . the random effects model is still more efficient ( the blue diamonds still tend to be below the black diamonds ) . but the random effects standard error estimator ( as computed in the lmer package for r ) is inconsistent and anti - conservative : the blue box plots converge to a limit that is substantially below the true standard error ( the blue diamonds ) . our second application is based on fisman et al . 's seminal study of the determinants of mate selection in a `` speed - dating '' experiment . we replicate their primary analysis for female participants ( as reported in column 1 of their table iii ) using data from 21 dating sessions conducted by the experimenters . these data include 278 women paired into 3,457 female - male dyads . the authors regress a binary indicator for whether the female subject desired contact information for a male partner on the subject 's ratings of the partner 's ambition , attractiveness , and intelligence based on a 10-point likert scale . the regression controls for female - subject fixed effects and weights observations contributed by each female subject by the inverse of the number of partners with which she was paired . table [ tab : fisman ] shows the results of our re - analysis ; our estimates differ slightly from those that appear in the original paper . again , we see that accounting for the dyadic clustering ( column 4 ) yields standard error estimates that are larger than what one would get if one ignored all clustering ( column 2 ) , although in this case there are no pronounced differences with what one gets when one only accounts for within - subject clustering ( column 3 ) . this is to be expected given that the amount of dyadic dependence in this dataset is limited : dyads were formed only _ within _ sessions that included between 5 and 22 male partners for each female subject . in addition , the female subjects are never paired with each other . the only clustering that occurs is therefore within female subjects and then for male partners that appear across multiple female subjects . the former is addressed in part by the female subject - specific fixed effects . the latter is limited by the within - session pairings .

table [ tab : fisman ] . predictors of mate selection ( female subjects ) .

  predictor of mate selection   coef .    het . robust s.e .   naive cluster robust s.e .   dyadic cluster robust s.e .
  ambition                      0.0192    0.0052               0.0057                       0.0061
  attractiveness                0.1157    0.0041               0.0051                       0.0054
  intelligence                  0.0465    0.0062               0.0073                       0.0074
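to close the supporting information , the estimators used throughout ( the dyadic cluster - robust sandwich for ols and its plug - in analogue for logistic regression ) can be sketched compactly . this is a minimal python illustration of the formulas in the text , with a meat that sums score cross - products over all dyad pairs sharing a member , including each dyad with itself ; it is not the authors' code , the function names are ours , and no finite - sample or degrees - of - freedom corrections are applied .

```python
import numpy as np

def _member_index(dyads):
    """Map each unit to the list of dyad rows in which it appears."""
    members = {}
    for d, pair in enumerate(dyads):
        for unit in pair:
            members.setdefault(unit, []).append(d)
    return members

def _dyadic_meat(scores, dyads):
    """Sum of score cross-products over all ordered dyad pairs that share
    a member (each dyad is also paired with itself)."""
    members = _member_index(dyads)
    k = scores.shape[1]
    meat = np.zeros((k, k))
    for d, (i, j) in enumerate(dyads):
        for dp in set(members[i]) | set(members[j]):   # contains d itself
            meat += np.outer(scores[d], scores[dp])
    return meat

def ols_dyadic_vcov(X, y, dyads):
    """Dyadic cluster-robust sandwich for OLS: scores are x_d * e_d."""
    bread = np.linalg.inv(X.T @ X)
    e = y - X @ (bread @ X.T @ y)
    return bread @ _dyadic_meat(X * e[:, None], dyads) @ bread

def logit_dyadic_vcov(X, y, theta, dyads):
    """Plug-in sandwich for logistic regression: scores psi_d = (y_d - mu_d) x_d,
    bread A^{-1} with A = sum_d mu_d (1 - mu_d) x_d x_d' (the sign of A cancels)."""
    mu = 1.0 / (1.0 + np.exp(-X @ theta))
    A_inv = np.linalg.inv((X * (mu * (1.0 - mu))[:, None]).T @ X)
    return A_inv @ _dyadic_meat((y - mu)[:, None] * X, dyads) @ A_inv
```

standard errors are the square roots of the diagonal of the returned matrix . for the weighted case of the proposition , replace the bread by $(X'WX)^{-1}$ and each score by $w_d x_d \hat\epsilon_d$ .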
dyadic data are common in the social sciences , although inference for such settings involves accounting for a complex clustering structure . many analyses in the social sciences fail to account for the fact that multiple dyads share a member , and that errors are thus likely correlated across these dyads . we propose a nonparametric , sandwich - type robust variance estimator for linear regression to account for such clustering in dyadic data . we enumerate conditions for estimator consistency . we also extend our results to repeated and weighted observations , including directed dyads and longitudinal data , and provide an implementation for generalized linear models such as logistic regression . we examine empirical performance with simulations and an application to interstate disputes . keywords : cluster robust variance estimation , dyadic data , agnostic regression .
a central problem in quantum information theory is the formulation of appropriate measures that quantify the degree of entanglement in composite systems . particularly important entanglement measures are the concurrence and the entanglement of formation . these quantities have been widely used in many applications . examples include studies on the role of entanglement in quantum phase transitions , on the emergence of long - distance entanglement in spin systems , and on additivity properties of the holevo capacity of quantum channels . the explicit determination of most of the proposed entanglement measures for a generic state is an extremely demanding task that requires the solution of a high - dimensional optimization problem . the development of analytical lower bounds for the various entanglement measures is therefore of great interest . recently , chen , albeverio , and fei have derived such bounds for the concurrence $C(\rho)$ and for the entanglement of formation $E_F(\rho)$ . they achieved this by relating $C(\rho)$ and $E_F(\rho)$ to two important and strong separability criteria , namely to the peres criterion of positive partial transposition ( ppt ) and to the realignment criterion . according to these criteria a given state $\rho$ is entangled ( inseparable ) if the trace norm $\|\rho^{T_2}\|$ or $\|\mathcal{R}(\rho)\|$ is strictly larger than 1 , where $T_2$ denotes the partial transposition and $\mathcal{R}$ the realignment transformation . in refs . tight lower bounds for $C(\rho)$ and $E_F(\rho)$ have been formulated in terms of these trace norms . here , we extend the connection between separability criteria and entanglement measures to a new criterion which has been developed recently . this criterion is based on a universal nondecomposable positive map which leads to a class of optimal entanglement witnesses . employing these witnesses we derive analytical lower bounds for the concurrence and for the entanglement of formation that can be expressed in terms of a simple linear functional of the given state . the entanglement witnesses constructed here have the special feature of being nondecomposable optimal . this notion has been introduced in refs . to characterize optimality properties of entanglement witnesses . it means that the witnesses are able to identify entangled ppt states and that no other witnesses exist which can detect more such states . it follows that the bounds developed here can be sharper than those obtained from the ppt criterion and that they are particularly efficient near the border that separates the ppt entangled states from the separable states . in addition , we will demonstrate that they can also be stronger than the bounds given by the realignment criterion . hence , the new bounds complement and considerably improve the existing bounds . the paper is organized as follows . in sec . [ maps ] we introduce a new separability criterion which is based on a nondecomposable positive map that operates on the states of hilbert spaces with even dimension . we formulate and prove the most important properties of this map , and derive the associated class of optimal entanglement witnesses . analytical lower bounds for the concurrence are developed in sec . [ sec - concurrence ] . in sec . [ sec - example ] we discuss an example of a certain family of states in arbitrary dimensions . it will be demonstrated explicitly with the help of this example that the new bounds can be much sharper than the bounds of the ppt and of the realignment criterion . the new class of entanglement witnesses is used in sec . [ sec - eof ] to develop corresponding lower bounds for the entanglement of formation .
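as an aside before developing the new criterion : the two trace - norm criteria just recalled are straightforward to evaluate numerically . the python sketch below is our own illustration , not part of the original paper ; it implements the partial transposition and the realignment map for an $m\otimes n$ state ( $m\le n$ ) , and the prefactor $\sqrt{2/m(m-1)}$ in the resulting concurrence bound is our paraphrase of the chen - albeverio - fei result , to be checked against the original references .

```python
import numpy as np

def partial_transpose(rho, m, n):
    """Transpose the second (n-dimensional) factor of an (m*n x m*n) state."""
    return rho.reshape(m, n, m, n).transpose(0, 3, 2, 1).reshape(m * n, m * n)

def realign(rho, m, n):
    """Realignment map, R(rho)_{(i,k),(j,l)} = rho_{(i,j),(k,l)}; m^2 x n^2."""
    return rho.reshape(m, n, m, n).transpose(0, 2, 1, 3).reshape(m * m, n * n)

def trace_norm(a):
    """Sum of singular values."""
    return np.linalg.svd(a, compute_uv=False).sum()

def concurrence_lower_bound(rho, m, n):
    """max(PPT, realignment) trace-norm lower bound on the concurrence C(rho)."""
    t = max(trace_norm(partial_transpose(rho, m, n)),
            trace_norm(realign(rho, m, n)))
    return np.sqrt(2.0 / (m * (m - 1))) * max(t - 1.0, 0.0)

# Sanity check on the two-qubit singlet, whose concurrence equals 1:
psi = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2.0)
rho = np.outer(psi, psi)
print(concurrence_lower_bound(rho, 2, 2))   # -> 1.0
```

on the singlet both trace norms equal 2 , so the bound returns 1 and is tight for this state .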
finally , some conclusions are drawn in sec .[ conclu ] .we consider a quantum system with finite - dimensional hilbert space . without loss of generalityone can regard as the state space of a particle with a certain spin , where . as usual , the corresponding basis states are denoted by , where the quantum number takes on the values .we will develop a necessary condition for the separability of mixed quantum states which employs the symmetry transformation of the time reversal .in quantum mechanics the time reversal is to be described by an antiunitary operator . as for any antiunitary operator ,we can write , where denotes the complex conjugation in the chosen basis , and is a unitary operator . in the spin representation introduced above the matrix elements of given by .for even , i. e. , for half - integer spins , this matrix is not only unitary but also skew - symmetric , which means that , where denotes the transposition .it follows that which leads to this relation expresses a well - known property of the time reversal transformation which will play a crucial role in the following : for any state vector the time - reversed state is orthogonal to .this is a distinguished feature of even - dimensional state spaces , because unitary and skew - symmetric matrices do not exist in state spaces with odd dimension . the action of the time reversal transformation on an operator on can be expressed by this defines a linear map which transforms any operator to its time reversed operator .for example , if we take the spin operator of the spin- particle this gives the spin flip transformation .a positive map is a linear transformation which takes any positive operator on some state space to a positive operator , i. e. , implies that .a positive map is said to be completely positive if it has the additional property that the map operating on any composite system with state space is again positive , where denotes the unit map .the physical significance of positive maps in entanglement theory is provided by a fundamental theorem established in ref . . according to this theorem a necessary and sufficient condition for a state to be separableis that the operator is positive for any positive map .hence , maps which are positive but not completely positive can be used as indicators for the entanglement of certain sets of states .an important example for a positive but not completely positive map is given by the transposition map .the inequality represents a strong necessary condition for separability known as the peres criterion of positive partial transposition ( ppt criterion ) .the second relation of eq .( [ def - theta ] ) shows that the time reversal transformation is unitarily equivalent to the transposition map .hence , the ppt criterion is equivalent to the condition that the partial time reversal is positive : we define a linear map which acts on operators on as follows where denotes the trace of and is the unit operator .this map has first been introduced in ref . for the special case , in order to study the entanglement structure of su(2)-invariant spin systems . for any even the map defined by eq .( [ phi ] ) has the following features : ( a ) : : is a positive but not completely positive map .( b ) : : the map is nondecomposable .( c ) : : the entanglement witnesses corresponding to are nondecomposable optimal . in the following webriefly explain and prove these statements . * ( a ) * we first demonstrate that is a positive map . 
to this end, we have to show that the operator is positive for any normalized state vector . using definition ( [ phi ] ) we find : because of eq .( [ ortho ] ) the operator introduced here represents an orthogonal projection operator which projects onto the subspace spanned by and .it follows that also is a projection operator and , hence , that it is positive for any normalized state vector .this proves that is a positive map .we remark that for the projection is identical to the unit operator such that is equal to the zero map in this case . for this reasonwe restrict ourselves to the cases of even .it should be emphasized that would not be positive if we had used the transposition instead of the time reversal in the definition ( [ phi ] ) .the positivity of implies that the inequality provides a necessary condition for separability : any state which violates this condition must be entangled . to show that is not completely positive , i. e. , that the condition ( [ trc ] ) is nontrivial, we consider the tensor product space of two spin- particles .the total spin of the composite system will be denoted by . according to the triangular inequality takes on the values .the projection operator which projects onto the manifold of states corresponding to a definite value of will be denoted by . in particular, represents the one - dimensional projection onto the maximally entangled singlet state .we define a hermitian operator by applying to the singlet state : where the factor is introduced for convenience .more explicit expressions for can be obtained as follows .first , we observe that since is a maximally entangled state ( denotes the partial trace taken over subsystem 2 ) .second , we note that the partial time reversal of the singlet state is given by the formula where denotes the swap operator defined by using then definition ( [ phi ] ) we get another useful representation is obtained by use of the fact that the sum of the is equal to the unit operator . expressing as shown in eq .( [ theta - p0 ] ) we then find : we infer from eq .( [ def - w-3 ] ) that has the negative eigenvalue corresponding to the singlet state . therefore , the operator is not positive and , hence , the map is not completely positive . *( b ) * since is positive but not completely positive the operator defined in eq .( [ def - w ] ) is an entanglement witness .we recall that an entanglement witness is an observable which satisfies for all separable states , and for at least one inseparable state , in which case we say that detects .an entanglement witness is called nondecomposable if it can detect entangled ppt states , i. e. , if there exist ppt states that satisfy . we will demonstrate in sec .[ sec - example ] by means of an explicit example that there are always such states for the witness defined by eq .( [ def - w ] ) .it follows that our witness is nondecomposable .this implies that also the map is nondecomposable , and that the criterion ( [ trc ] ) is able to detect entangled ppt states . *( c ) * the observable introduced above has a further remarkable optimality property . to explain this property we introduce the following notation .we denote by the set of all entangled ppt states of the total state space which are detected by some given nondecomposable witness . a witness said to be finer than a witness if is a subset of , i. e. 
, if all entangled ppt states which are detected by are also detected by .a given witness is said to be nondecomposable optimal if there is no other witness which is finer , i. e. , if there is no other witness which is able to detect more entangled ppt states .it can be shown that the witness defined by ( [ def - w ] ) is always optimal in this sense .the proof can be carried out by showing that the set of product vectors satisfying spans the total hilbert space .the details of the proof are given in ref .the generalized concurrence of a pure state is defined by where represents the reduced density matrix of subsystem 1 , given by the partial trace taken over subsystem 2 .we consider the schmidt decomposition where and are orthonormal bases in and , respectively , and the are the schmidt coefficients satisfying and the normalization condition the concurrence can then be expressed in terms of the schmidt coefficients : for a mixed state the concurrence is defined to be where the minimum is taken over all possible convex decompositions of into an ensemble of pure states with probability distribution .let be an optimal decomposition of for which the minimum of eq .( [ concurrence ] ) is attained . denoting the schmidt coefficients of by then have : in the second line we have used eq .( [ concu - pure ] ) , and the third line is obtained with the help of the inequality which holds for any set of numbers .consider now any real - valued and convex functional on the total state space with the following property .for all state vectors with schmidt decomposition ( [ schmidt ] ) we have : given such a functional we can continue inequality ( [ concu-1 ] ) as follows : in the second line we have used inequality ( [ prop - f ] ) , and in the third line the convexity of .we conclude that any convex functional with the property ( [ prop - f ] ) leads to a lower bound for the concurrence : in ref . two example for such a functional have been constructed which are based on the ppt criterion and on the realignment criterion : where denotes the partial transposition and the realignment transformation .these functionals are convex because of the convexity of the trace norm which is defined by . moreover , for both functionals the equality sign of eq .( [ prop - f ] ) holds : consider the functional where is the entanglement witness introduced in eq .( [ def - w ] ) .this functional is linear and of course convex .we claim that also satisfies the bound ( [ prop - f ] ) , i. e. , for any state vector with schmidt decomposition ( [ schmidt ] ) we have to show this we first determine the expectation value of . from eq .( [ theta - p0 ] ) we have and , hence , the expression ( [ def - w-2 ] ) can be written as .this gives with the help of the definitions of the swap operator [ eq . ( [ swap ] ) ] and of the time reversal transformation [ eq .( [ def - theta ] ) ] it is easy to verify the formulae this leads to where hence , we have it is shown in appendix [ app - a ] that this leads immediately to the desired inequality : where we have used the normalization condition ( [ normalization ] ) .summarizing we have obtained the following lower bound for the concurrence : of course , this bound is only nontrivial if is detected by the entanglement witness , i. e. 
, if .it will be demonstrated in sec .[ sec - example ] that this bound can be much stronger than the bounds given by and , which is due to the fact identifies many entangled states that are neither detected by the ppt criterion nor by the realignment criterion .we illustrate the application of the inequality ( [ newbound - concu ] ) with the help of a certain family of states .this family contains a separable state , entangled ppt states , as well as entangled states whose partial transposition is not positive .the example will also lead to a proof of the claim that the map and the witness are nondecomposable .consider the following one - parameter family of states : these normalized states are mixtures of the singlet state and of the state where denotes the projection onto the symmetric subspace under the swap operation .we note that is a separable state which belongs to the class of the werner states .since can be written as a sum over the projections with odd , we immediately get with the help of eq .( [ def - w ] ) : hence , we find that for .it follows that all states of the family ( [ family ] ) corresponding to are entangled , and that is the only separable state of this family . employing eqs .( [ w - rho ] ) and ( [ newbound - concu ] ) we get the following lower bound for the concurrence : to compare this bound with those obtained from the ppt and the realignment criterion we have to determine the trace norms and .the details of the calculation are presented in appendix [ app - b ] .one finds that the ppt criterion gives the bounds : , & \frac{1}{n+2 } \leq \lambda \leq \frac{1}{2 } \\ \sqrt{\frac{2(n-1)}{n } } \frac{n\lambda-1}{n-1 } , & \frac{1}{2 } \leq \lambda \end{array } \right . \nonumber\end{aligned}\ ] ] while the realignment criterion yields : the relations ( [ new - bound])-([realign - bound ] ) lead to a number of important conclusions .first of all , we observe from eq .( [ ppt - bound ] ) that the states within the range have positive partial transposition ( in this range is equal to , see appendix [ app - b ] ) .but from eq .( [ w - rho ] ) we know that all states with must be entangled .it follows that all states in the range are entangled ppt states which are detected by the witness .this proves , as claimed in sec .[ maps ] , that the witness and , hence , also the map are nondecomposable . according to eq .( [ ppt - bound ] ) the ppt criterion only detects the entanglement of the states with .it is thus weaker than the criterion based on the witness . as can be seen from eq .( [ realign - bound ] ) the realignment criterion is even weaker because it only recognizes the entanglement of the states with ( the trace norm of is larger than if and only if , see appendix [ app - b ] ) . a plot of the various lower bounds for the example is shown in fig .[ figure1 ] .we see that the new bound ( [ new - bound ] ) is the best one within the range .the bounds given by the ppt and the realignment criterion coincide in the range . in this rangethey are better than the new bound .note that these features hold true for all .we remark that for large the concurrence approaches the limit . ) for .solid line : the new lower bound given by eq .( [ new - bound ] ) . dashed line :lower bound given by the ppt criterion [ eq .( [ ppt - bound ] ) ] .dashed - dotted line : lower bound given by the realignment criterion [ eq . 
( [ realign - bound ] ) ] .dotted line : upper bound given by .[figure1 ] ]for a pure state with schmidt decomposition ( [ schmidt ] ) one defines the entanglement of formation by where denotes the vector of the schmidt coefficients , and is the base logarithm .the quantity is the shannon entropy of the distribution , which is equal to the von neumann entropy of the reduced density matrices .this definition is extended to mixed states through the convex hull construction : where the minimum is again taken over all possible convex decompositions of .an analytical lower bound for the entanglement of formation has been constructed in ref . , which may be described as follows .first , for one defines the function the minimum is taken over all schmidt vectors , i. e. , is the minimal value of the entropy under the constraint .the solution of this minimization problem has been derived by terhal and vollbrecht : \log ( n-1),\ ] ] where ^ 2,\ ] ] and is the binary entropy .second , one introduces the convex hull ] ( see also ref . ) we get : \log ( n-1)\ ] ] for , and for .the general features of this result are similar to those discussed within the context of the concurrence . the special case is plotted in fig . [ figure2 ] .we finally note that represents the asymptotic limit of the entanglement of formation for large .by use of a universal positive map we have obtained a class of nondecomposable optimal entanglement witnesses . employingthese witnesses analytical bounds for the concurrence and for the entanglement of formation have been developed .similar bounds can be derived for other measures , e. g. , for the entanglement measure which is known as tangle . due to the fact that is a nondecomposable optimal entanglement witness , the bounds obtained here are particularly good near the boundary which separates the region of classically correlated states from the region of entangled states with positive partial transposition .it should be clear from the general considerations in secs .[ sec - concurrence ] and [ sec - eof ] and from the example of sec .[ sec - example ] that the bounds derived here are not intended to _ replace _ other known bounds , but rather to _complement _ these .in fact , the bounds based on the witness can be weaker than those given by ppt or the realignment criterion , in particular in those cases in which the optimal decomposition of consists entirely of maximally entangled states . to give an example, we consider the family of states which are invariant under all unitary product transformations of the form , were denotes the complex conjugation of .these states , known as isotropic states , can be parameterized by a single parameter , namely by their fidelity ] , which yields the desired inequality ( [ ineq - aij ] ) .since and are unitarily equivalent we have . using eqs .( [ family ] ) and ( [ theta - p0 ] ) , and the representation we get p_j.\ ] ] hence , the trace norm of is found to be : carrying out this sum one gets : , & \frac{1}{n+2 } \leq\lambda \leq \frac{1}{2 } \\ n\lambda-1 , & \frac{1}{2 } \leq \lambda \end{array } \right.\ ] ] which yields the lower bounds of eq . ( [ ppt - bound ] ) . to determine we note that the realignment transformation may be written as . using and one easily deduces that p_j,\ ] ] which yields : the evaluation of this sum leads to : which gives the lower bounds of eq .( [ realign - bound ] ) .99 g. alber , t. beth , m. horodecki , p. horodecki , r. horodecki , m. rtteler , h. weinfurter , r. werner , and a. 
zeilinger , _ quantum information _ ( springer - verlag , berlin , 2001 ) .k. eckert , o. ghne , f. hulpke , p. hyllus , j. korbicz , j. mompart , d. bru , m. lewenstein , and a. sanpera , in _ quantum information processing _ , edited by g. leuchs and t. beth ( wiley - vch , berlin , 2005 ) . w. k. wootters , phys .lett . * 80 * , 2245 ( 1998 ) .w. k. wootters , quant .inf . comp . * 1 * , 27 ( 2001 ) .p. rungta , v. buek , c. m. caves , m. hillery , and g. j. milburn , phys .a * 64 * , 042315 ( 2001 ) . c. h. bennett , d. p. divincenzo , j. a. smolin , and w. k. wootters , physa * 54 * , 3824 ( 1996 ) .g. vidal , j. mod . opt . * 47 * , 355 ( 2000 ) .a. osterloh , l. amico , g. falci , and r. fazio , nature * 416 * , 608 ( 2002 ) .t. j. osborne and m. a. nielsen , phys .a * 66 * , 032110 ( 2002 ) .wu , m. s. sarandy , and d. a. lidar , phys .lett . * 93 * , 250404 ( 2004 ) .l. campos venuti , c. degli esposti boschi , and m. roncaglia , .p. w. shor , commun .phys . * 246 * , 453 ( 2004 ) .k. chen , s. albeverio , and s .- m .fei , phys .lett . * 95 * , 040504 ( 2005 ) .k. chen , s. albeverio , and s .- m .fei , phys .lett . * 95 * , 210501 ( 2005 ) .a. peres , phys .lett . * 77 * , 1413 ( 1996 ) .m. horodecki , p. horodecki , and r. horodecki , phys .a * 223 * , 1 ( 1996 ) .k. chen and l .- a .wu , quant .comp . * 3 * , 193 ( 2003 ) .o. rudolph , phys .a * 67 * , 032312 ( 2003 ) .h. p. breuer , .m. lewenstein , b. kraus , j. i. cirac , and p. horodecki , phys .a * 62 * , 052310 ( 2000 ) .m. lewenstein , b. kraus , p. horodecki , and j. i. cirac , phys .a * 63 * , 044304 ( 2001 ) .a. galindo and p. pascual , _ quantum mechanics i _ ( springer - verlag , berlin , 1990 ) .h. p. breuer , phys .a * 71 * , 062330 ( 2005 ) .h. p. breuer , j. phys .a * 38 * , 9019 ( 2005 ) .j. schliemann , phys .a * 72 * , 012307 ( 2005 ) .b. m. terhal , phys .a * 271 * , 319 ( 2000 ) .s. l. woronowicz , rep .* 10 * , 165 ( 1976 ) .r. f. werner , phys .a * 40 * , 4277 ( 1989 ) .b. m. terhal and k. g. h. vollbrecht , phys .85 , 2625 ( 2000 ) . s .-fei and x. li - jost , phys .a * 73 * , 024302 ( 2006 ) .p. rungta and c. m. caves , phys .a * 67 * , 012307 ( 2003 ) .m. horodecki and p. horodecki , phys .a * 59 * , 4206 ( 1999 ) .
employing a recently proposed separability criterion we develop analytical lower bounds for the concurrence and for the entanglement of formation of bipartite quantum systems . the separability criterion is based on a nondecomposable positive map which operates on state spaces with even dimension , and leads to a class of nondecomposable optimal entanglement witnesses . it is shown that the bounds derived here complement and improve the existing bounds obtained from the criterion of positive partial transposition and from the realignment criterion .
extensive interest has recently been devoted to the understanding of molecular motors , which play pivotal roles in cellular processes by performing mechanical work using the energy - driven conformational changes .kinesin , myosin , f-atpase , groel , rna polymerase , and ribosome belong to a group of biological machines that undergoes a series of conformational changes during the mechanochemical cycle where the molecular conformation is directly coupled to the chemical state of the ligand . although substantial progress has been achieved in understanding the underlying physical principles that govern molecular motors during the last decade , major issues still remain to be resolved .specifically , some of the outstanding questions are as follows : ( a ) how is the chemical energy converted into mechanical work ? ( b ) how is the directionality of the molecular movement determined ?( c ) how is the molecular movement coordinated or regulated ?several biochemical experiments have quantified the kinetic steps , single molecule experiments using optical tweezers have measured the mechanical response of individual molecular motors , and an increasing number of crystal structures have provided glimpses into the mechanisms of molecular motors .these experimental evidences , however , are not sufficient to fully address all the questions above .for example , little is known not only about the structural details of each chemical state but also about the kinetic pathways connecting them .hence , if feasible , a computational strategy using the coordinates from x - ray and/or nmr structures can shed light on the allosteric dynamics of molecular motors .although some initial numerical studies have proceeded towards addressing issue ( a ) for a few cases where both open and closed structures are _ explicitly known _ , no previous attempt has been made to answer issue ( c ) . in this paperwe investigate this question in the context of the conventional kinesin where the mechanochemical coordination of the motor movement is best manifested among the motor proteins .one of the experimentally best studied molecular motors is the conventional kinesin ( kinesin-1 ) , a relatively small sized motor protein that transports cellular materials by walking in an unidirectional hand - over - hand manner along the microtubule ( mt ) filaments . 
compared to other motor proteins involved in material transport such as myosin and dynein , the conventional kinesin has a remarkable processivity : it can travel about a hundred ( .2 ) steps without being dissociated from the mt . the mechanochemical cycle conjectured from experiments suggests that there must be a dynamic coordination between the two motor domains in order to achieve such high processivity . the quest to identify the origin of this dynamic coordination has drawn extensive attention in the kinesin community . since hancock and howard first hypothesized that an `` internal strain '' was needed for processivity , strain - dependent mechanochemistry has become a popular subject in kinesin studies . with the aid of optical tweezers , guydosh and block recently revisited this issue by monitoring the real - time kinesin dynamics in the presence of atp and a tightly binding atp analog . they discovered that , when the analog was bound to the kinesin , the pause - time of the step increased substantially and that the normal step was restored only after an obligatory backstep . this suggests that the analog is released only when the head bound with it becomes the leading head ( l ) . supported by this observation , they advocated a kinetic model in which the rearward strain via the neck - linker facilitates the release of the ligand from the l . stated differently , the binding of the ligand to the l is inhibited because the rearward strain constitutes an unfavorable environment for the atp binding site of the l . in the present study , we focus on the elucidation of the structural origin of the coordinated motion in kinesin by adopting a simple computational strategy . more direct evidence for regulation of the nucleotide binding site could be obtained if a structure in which both kinesin heads are simultaneously bound to the mt binding sites were determined . such a structure would allow us to identify the structural differences between the leading ( l ) and the trailing ( t ) head . to date , however , this structure has not been reported . the only available structures include an isolated kinesin-1 without the mt , an isolated single - headed kinesin - like protein ( kif1a ) with various ligand states , and a single kif1a bound to the tubulin - dimer binding site . therefore , we utilized existing protein data bank ( pdb ) structures and manually built a model system of the two - headed kinesin molecule with both heads bound to the tubulin binding sites ( see fig.2 and legend ) . this model was used to generate an ensemble of structures via simulations . a direct comparison between the l and t equilibrium structures shows that the tension built on the neck - linker induces the disruption of the nucleotide binding site of the l , which directly supports inferences from experimental observations . * mechanochemical cycle of kinesin : * we begin by reviewing the mechanochemical cycle of the kinesin molecule on the mt to clarify the importance of dynamic coordination between the two motor domains for kinesin processivity . recent experiments using laser optical tweezers ( lot ) , cryo - electron microscopy , electron paramagnetic resonance , and fret , as well as the crystal structures at various states , provide glimpses into the structural and dynamical details of how the kinesin molecule walks on the microtubule filaments .
depending on the nucleotide state at the binding site , both the motor domain structure and the binding interface between kinesin and mt are affected . in particular , a minor change of the motor domain coupled to the nucleotide is amplified to a substantial conformational change of the neck linker between the ordered and the disordered state . experimental studies strongly suggest the mechanochemical cycle shown in fig.1 . the mechanical stepping cycle of kinesin initiates with the binding of atp to the empty kinesin head strongly bound to the mt [ (i)→(ii) ] . docking of the neck linker to the neck - linker binding motif on the leading head ( x in fig.1 ) propels the trailing head ( y in fig.1 ) in the ( + ) -direction of the mt , which leads to an 8-nm mechanical step [ (ii)→(ii') ] . the interaction with the mt facilitates the dissociation of adp from the catalytic core of l [ (ii')→(iii) ] . atp is hydrolyzed , producing the adp - pi state for the t [ (iii)→(iv) ] . when the phosphate is released and the trailing head is unbound from the mt , the half - cycle is completed [ (iv)→(i) ] . the mechanical step is achieved in a hand - over - hand fashion by alternating the binding of the two motor domains ( x and y in fig.1 ) to the mt . high processivity of the kinesin requires this kinetic cycle to be stable ( to remain within the yellow box in fig.1 ) . premature binding of atp to the leading head in state ( iii ) should be prevented , i.e. , the condition k_{bi}^{(iii)}[atp]/(k_r^{(iii)}+k_{diss}^{(iii)}) \ll 1 should be satisfied in fig.1 ( see supporting information for the master equation describing the kinetic cycle ) . atp binding in the ( iii ) or ( iv ) states can destroy the mechanochemical cycle of the kinesin . the binding of atp to the leading head should be suppressed before the $\gamma$-phosphate is released from the t . otherwise , both heads reach the adp - bound state , which has a weak binding affinity for the mt , and that leads to dissociation from the mt . since the kinesin has a high processivity compared to other molecular motors , effective communication is required between the two heads regarding the chemical state of each partner motor domain . + * two - headed kinesin bound to the microtubule : * in the absence of interactions with the mt , the individual kinesin monomers fold into identical conformations . to achieve its biological function , however , folding into the native structure alone is not sufficient . coupled with the nucleotide and the mt , the two kinesin monomers in the dimeric complex need to alternate the acquisition of the native structure in a time - coordinated fashion for the uni - directional movement . the currently available three - dimensional structure ( pdb id : 3kin , structure 2 in fig.2-a ) , in which each monomer is in its native state , does not provide such a dynamic picture because it fails to fulfill the geometrical requirement of simultaneous binding of both motor domains to the adjacent tubulin binding sites that have an 8-nm gap . inspection of the 3-d structure suggests that a substantial increase of the distance between the two motor domains can be gained by breaking a few contacts associated with the neck - linker ( , ) and the neck - linker binding site on the motor domain ( ) .
to this end, we manipulated the 3-d structure of 3kin around the neck - linker of the l and created a temporary structure whose two heads bind to the mt binding sites simultaneously .both l and t have energetic biases towards the identical native fold but the interactions with the tubulin binding sites adapt the dimeric kinesin structure into a different minimum structure , which is not known _ a priori_. we performed simulations ( see methods ) to relax this initial structure and to establish the thermal equilibrium ensemble of the kinesin molecule on mt ( see fig.3-a ) .transient dimeric kinesin conformations corresponding to the steps ( iii ) and ( iv ) during the cycle ( fig.1 ) allow us to investigate the structural deviation between l and t of kinesin molecule .this simple computational exercise can confirm or dismiss the experimental conjecture regarding whether the mechanochemical strain significantly induces regulation on the nucleotide binding site and also if it occurs in the l. + * catalytic core of the leading head is less native - like on the mt : * since the nucleotide binding and release dynamics is sensitively controlled by the kinesin structure , we assume that the nucleotide molecule has an optimal binding affinity to the kinesin motor domain in the native structure . for function there is a need to understand how the native structure of the kinesin motor domain is perturbed under the different topological constraints imposed on the dimeric kinesin configuration by interacting with the mt .the equilibrium ensemble of the structures shows that the neck - linker is in the docked state for the t but undocked for l. in comparison to the native structure , the overall shape of the nucleotide binding pocket in the t is more preserved . as long as the mt constrains the two heads 8-nm apart , this configuration is dominant in the thermal ensemble ( see fig.3-a ) ._ global shape comparison : _ there are in principle multiple ways to quantitatively compare the two motor domain structures . to assess the structural differences ,the radius of gyration ( ) of the two motor domain structures from the equilibrium ensemble are computed ( see fig.3-b ) . because the neck - linker and the neck - helix adopt different configurations relative to the motor domain in each monomer, we perform a analysis for the motor domains only ( residue 2 - 324 ) .the distributions show that the l is slightly bigger than the t both in the size ( ) and in the dispersion ( ) .meanwhile the for the native state ( 3kin ) is .clearly , the sizes of both of the heads in the thermal ensemble are expanded at as compared to the native structure .the size alone does not tell much about the difference between the structures .the rmsd relative to the native structure and between the two motor domains ( residues 2 - 324 ) computed over the equilibrium ensemble gives , , , where is the rmsd between conformations x and y. if the helix is excluded from the rmsd calculation of motor domain ( residues 2 - 315 ) , then , , and .the rmsd analysis shows that the helix significantly contributes more for the deviation of the leading head from its native state than the t. additional detailed comparisons with respect to the native state can be made using the structural overlap function of pair , , which is defined as where if , otherwise . is the distance of pair in native state , where . 
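a minimal python sketch of this overlap analysis , computed over a simulated ensemble , is given below ; the returned matrices are used exactly as in the next paragraph . since the explicit formulas are not reproduced above , the step - function form of $\chi_{ij}$ , the tolerance $\epsilon$ ( 2 angstrom here ) , and the normalization of the relative difference are assumptions following the standard convention rather than values taken from the paper .

```python
import numpy as np

def overlap_by_pair(traj, r_native, eps=2.0):
    """<chi_ij> over an ensemble: fraction of snapshots in which the distance
    of pair (i, j) stays within eps of its native value.

    traj     : (T, N, 3) conformations of one head (N residues, T snapshots)
    r_native : (N, N) pairwise distances in the reference native structure
    """
    diff = traj[:, :, None, :] - traj[:, None, :, :]
    r = np.sqrt((diff ** 2).sum(-1))                  # (T, N, N) distances
    return (np.abs(r - r_native[None]) < eps).mean(0)

def relative_difference(chi_T, chi_L):
    """Delta_ij between the trailing and leading heads, normalized pairwise
    (placeholder normalization; the paper's exact form is not given here)."""
    denom = np.maximum(np.maximum(chi_T, chi_L), 1e-12)
    return (chi_T - chi_L) / denom
```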
by setting values identical in both heads ( i.e., both heads have the same native state ) , we compute the values for the trailing and the leading heads , respectively . the relative difference of the value between t and l , is defined by which quantitatively measures the structural difference of the two heads . based on the value ( fig.3-c ) , the distances between the mt binding motif of the t ( l11 , l12 , , ) and other secondary structure units ( , , , , , ) are 50 % more native - like than in the l. _ conserved native contacts in trailing head reveals the strain propagation pathway in leading head : _ a direct measure of similarity to the native structure is the fraction of native contacts preserved in the thermal ensemble . since we assume that atp affinity is optimized in the native state , we can readily assess the quality of the structure using this measure .we quantify the nativeness of a pair using , where if , residues are in contact at the native state ( ) , and otherwise . ( with or ) is obtained by averaging over the thermal ensemble .when is averaged over all the native pairs , the average fraction of native contacts , is calculated as where is the total number of native pairs . for the t and l conformations , and , respectively .the relative difference of native contacts between the two kinesin heads at the pair level , , is quantified similarly to eq.[eqn : chi ] as in fig.4 , is color - coded based on its value . as expected from the equilibrium ensemble , conspicuous differences are found around the structural motifs having direct contacts with neck - linker , giving . quantitative inspection of the other contacts is illustrated in the structure .we color the kinesin head structure based on the value .the residue pairs are colored in magenta if , red if , where the positive signifies that the native contacts in trailing head are more intact .the residue pairs are colored in light - blue if , blue if .more intact contacts , when the trailing and the leading head are compared , are visualized by yellow line in fig.5-b .our analysis not only shows that there is higher probability of the formation of native contacts present in the t in comparison to the l , but also suggests how the tension is propagated towards the nucleotide binding site to disrupt the nativeness of the nucleotide binding pocket in the leading head .as expected , a dense network of intact contacts are found between the neck - linker ( ) and the neck - linker binding motif ( ) .this network continues along the helix , perturbing , , , and finally reaches the nucleotide binding site ( see si fig.6 for the nomencaltures of the secondary structures ) .it is surprising that the disruptions of native contacts are found particularly in the nucleotide binding site , which is believed to be the trigger point for the allosteric transition .all the important nucleotide binding motifs ( p - loop , switch-1 , switch-2 , and n4 ) are recognized by our simulational analysis using a nonlinear - hamiltonian ( see supporting information for the comparison with linear - harmonic potential represented as gaussian network model ) . + * estimate of the tension in the neck - linker : * the deformation of the leading motor domain is caused by the internal tension in the neck linker .the tension on the neck linker is estimated using the force ( ) versus extension ( ) relationship of a worm - like chain model , , \label{eqn : wlc}\ ] ] where is the persistence length of the polymer and is the contour length . 
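for concreteness , eq.[eqn : wlc ] ( the marko - siggia interpolation ) and its integral , the tensional energy used in the next paragraph , can be evaluated directly . in this python sketch every number ( persistence length , contour length per residue , mean extension ) is a placeholder for illustration rather than a value taken from the paper .

```python
import numpy as np
from scipy.integrate import quad

kBT = 4.1  # pN*nm at ~300 K

def wlc_force(x, lp, L):
    """Worm-like-chain interpolation: tension f at end-to-end extension x."""
    t = x / L
    return (kBT / lp) * (0.25 / (1.0 - t) ** 2 - 0.25 + t)

# Illustrative numbers only: a 15-residue linker has contour length
# L ~ 15 * 0.38 nm; lp and the extension x below are assumptions.
L, lp, x = 15 * 0.38, 0.5, 3.2   # nm

f = wlc_force(x, lp, L)
energy, _ = quad(wlc_force, 0.0, x, args=(lp, L))   # pN*nm stored in the linker
print(f"tension ~ {f:.1f} pN, stored energy ~ {energy:.1f} pN*nm "
      f"(~{energy / kBT:.1f} kT)")
```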
for the 15-amino acid neck - linker ( residue from 324 to 338 ) , and in the equilibrium ensemble of structures , . assuming that for this segment , we estimate a tension . by integrating eq.[eqn : wlc ] for . the tensional energy stored in the neck - linkeris obtained , which is 17 .about 20% of the atp hydrolysis energy ( ) is stored in the neck - linker and directly perturbs the nucleotide binding site of the l , whereas mechanical action to the t is dissipated through the dense network of contacts formed between the neck - linker ( ) and the neck - linker binding site ( ) . for a given extension ,when the length of the neck - linker is varied by , the variation in the length of the neck - linker can affect the effective tension as (\delta l / l) ], the correlation between the spatial fluctuation of two residues is expressed using the inverse of kirchhoff matrix , for , the mean square displacement of the residue , , corresponds to the b - factor ( debye - waller temperature factor ) as .a comparison between the b - factor and mean square displacement ( msd ) from the gnm determines the effective strength of the harmonic potential that stabilizes the structure .note that the quality of the msd in gnm is solely controlled by the value , thus we scaled the msd with for gnm analysis .we applied the gnm analysis with on the two - headed kinesin whose both heads fit to the adjacent tubulin binding site , and then computed the cross - correlation matrix as shown in fig.7a .the cross - correlation value , scaled by , shows that except for the neck - helix region the amplitude of correlation in leading head is always larger than that of the trailing head .this is expected since the neck - linker of the leading kinesin is detached from the motor domain .the residues in the network with less coordination number are subject to a larger fluctuation .the relative difference of the cross - correlation between the leading and the trailing kinesin using is illustrated in fig . 7b .7c shows the auto - correlations ( or mean square displacement ) , which are the diagonal elements of the matrix .gnm analysis is useful in analyzing the fluctuation dynamics of the stable structure at the residue level in the basin of attraction where the basin is modeled as a quadratic potential .however , the expansion of the potential minima up to the quadratic term is justified only if the fluctuation is small .the amplitude of fluctuation in biological systems at physiological temperatures ( ) is most likely to exceed the limit beyond which nonlinear response is no longer negligible . in order to take this effect into account, the hamiltonian should be expanded beyond the linear response regime .this procedure indeed reverses the simple idea that tirion and bahar et al . have proposed in the context of gnm analysis .however , _minimal _ inclusion of the nonlinear term can be useful by increasing the susceptibility of the structure . once the nonlinear term is included ,a simple analytical expression such as eq.[eqn : z_n ] is not available .thus , we resort to the simulations . 
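the gnm computation invoked here reduces to a kirchhoff ( connectivity ) matrix and its pseudo - inverse ; a minimal python sketch follows . the contact cutoff ( 10 angstrom , a typical choice ) and the unit spring constant are assumptions ( in the text the overall scale is fixed by matching the crystallographic b - factors ) , and the normalized cross - correlation in the final comment is our reading of the quantity plotted in fig . 7 .

```python
import numpy as np

def gnm_correlations(coords, cutoff=10.0):
    """Residue-fluctuation covariances in the Gaussian network model.

    coords : (N, 3) C-alpha coordinates; cutoff in angstrom.
    Returns the pseudo-inverse of the Kirchhoff matrix, whose entries are
    proportional to <dR_i . dR_j>; the diagonal gives the mean-square
    displacements (B-factors up to the factor 8*pi^2*kT/gamma).
    """
    diff = coords[:, None, :] - coords[None, :, :]
    dist = np.sqrt((diff ** 2).sum(-1))
    contact = (dist < cutoff) & ~np.eye(len(coords), dtype=bool)
    gamma = -contact.astype(float)
    np.fill_diagonal(gamma, contact.sum(1))   # Kirchhoff matrix
    return np.linalg.pinv(gamma)              # pinv drops the zero mode

# Normalized cross-correlation, as plotted in Fig. 7:
#   C_ij = Ginv_ij / sqrt(Ginv_ii * Ginv_jj)
```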
the analytically obtained quantities , , in fig .7 can also be calculated over the thermal ensemble of structures obtained from simulations using a nonlinear - hamiltonian ( see fig .the first conclusion drawn from the simulational analysis is similar to the gnm in that the leading head experiences larger fluctuations .secondly , the position and relative amplitude of the msd peaks , reproduced using the simulation results , shows a good agreement with gnm results .however , the direct comparison of ( or ) between fig . 7 and fig . 8shows that the simulation results from the nonlinear - hamiltonian display a more sensitive pattern of cross - correlations .the pronounced amplitude of ( or ) suggests a strong spatial correlation between residues and . + * alternative energy function : sop potential . *an alternative potential function for the sb potential used in the main text is the self - organized polymer ( sop ) potential that was recently adopted for simulations of the mechanical unfolding of large molecules of rna and proteins as well as the allosteric dynamics of groel .the energy hamiltonian is defined as \delta_{ij}\nonumber\\ & + \sum_{i=1}^{n_k-2}\epsilon_l\left(\frac{\sigma}{r_{i , i+2}}\right)^6+\sum_{i=1}^{n_k-3}\sum_{j = i+3}^{n_k}\epsilon_l\left(\frac{\sigma}{r_{ij}}\right)^6(1-\delta_{ij})\nonumber\\ & + \sum_{i=1}^{n_k}\sum_{k=1}^{n_{tub}}\left[\epsilon_h\left(\left(\frac{r^o_{ik}}{r_{ik}}\right)^{12}-2\left(\frac{r^o_{ik}}{r_{ik}}\right)^6\right)\delta^*_{ik}+\epsilon_l\left(\frac{\sigma}{r_{ik}}\right)^6(1-\delta^*_{ik})\right ] .\label{eq : sop } \tag{9}\end{aligned}\ ] ] the first term is for the chain connectivity of the kinesin molecule . the finite extensible nonlinear elastic ( fene ) potential is used with , , and is the distance between neighboring interaction centers and .the lennard - jones potential interactions stabilize the native topology . a native contact is defined as the pair of interaction centers whose distance is less than in native state for .if and sites are in contact in the native state , , otherwise .we used in the native pairs , and for non - native pairs . to ensure the non - crossing of the chain, we used a power potential in the repulsion terms and set , which is typical distance .the parameters determining the native topology , and , are adopted from the trailing kinesin ( x ) whose structure is shown in fig .2_c_. we transferred the topological information in the trailing head ( t ) to the leading head ( l ) by substituting and from the t to l. kinesin - tubulin interaction energies are similarly defined as kinesin intramolecular interaction energies with slightly different native contact distances .we set the cut - off distance for the native interactions between the kinesin and the tubulin as .the parameters , and , defining the interface topology between the kinesin head t and the tubulin is transfered to the kinesin head l and the next tubulin binding site . using the sop potential, we obtained qualitatively identical results as those obtained from the sb potential .the nucleotide binding pocket of the front head is disrupted in the dimeric kinesin configuration whose both heads are bound to the tubulin binding sites .the figures corresponding to fig .3_a _ and _ c _ and fig .5 are regenerated using sop model in fig .+ * master equations for the mechanochemical cycle of kinesin described in fig . 1 . 
* in the limit when the dissociation of dimeric kinesin from the microtubule is negligible , the kinetic equation describing the dynamic cycle shown in fig . 1 is written as {(i)}+k_rp_{(ii)}+k_{dmt}p_{(iv)}+k_ap_{(v)}\nonumber\\ \frac{dp_{(ii)}}{dt}&=-(k_r+k_d)p_{(ii)}+k_{bi}[atp]p_{(i)}\nonumber\\ \frac{dp_{(ii')}}{dt}&=-k_{dadp}p_{(ii')}+k_dp_{(ii)}\nonumber\\ \frac{dp_{(iii)}}{dt}&=-(k_h+k_{bi}^{(iii)}[atp])p_{(iii)}+k_{dadp}p_{(ii')}+k_r^{(iii)}p_{(iii')}\nonumber\\ \frac{dp_{(iv)}}{dt}&=-(k_{dmt}+k_{bi}^{(iv)}[atp])p_{(iv)}+k_hp_{(iii)}+k_r^{(iv)}p_{(iv')}\nonumber\\ \frac{dp_{(iii')}}{dt}&=-(k_r^{(iii)}+k^{(iii)}_{diss})p_{(iii')}+k_{bi}^{(iii)}[atp]p_{(iii)}\nonumber\\ \frac{dp_{(iv')}}{dt}&=-(k_r^{(iv)}+k^{(iv)}_{diss})p_{(iv')}+k_{bi}^{(iv)}[atp]p_{(iv)}\nonumber\\ \frac{dp_{(v)}}{dt}&=-k_ap_{(v)}+k_{diss}^{(iii)}p_{(iii)}+k_{diss}^{(iv)}p_{(iv ) } , \tag{10}\end{aligned}\ ] ] where is the probability of finding the molecule in a mechanochemical state with .the steady state solutions by setting leads to }\left(1+\frac{k_r}{k_d}\right)\frac{\mathcal{x}}{\mathcal{z}},\quad p_{(ii)}=\frac{1}{k_d}\frac{\mathcal{x}}{\mathcal{z}},\quad p_{(ii')}=\frac{1}{k_{dadp}}\frac{\mathcal{x}}{\mathcal{z}}\nonumber\\ p_{(iii)}&=\frac{\mathcal{y}}{\mathcal{z}},\quad\quad p_{(iv)}=\frac{1}{\mathcal{z}},\quad\quad p_{(iii')}=k_{m}^{(iii)}\frac{\mathcal{y}}{\mathcal{z}},\quad\quad p_{(iv')}=k_{m}^{(iv)}\frac{1}{\mathcal{z}}\nonumber\\ p_{(v)}&=\frac{1}{k_a}\left(k_{diss}^{(iii)}k_{m}^{(iii)}\mathcal{y}+k_{diss}^{(iv)}k_{m}^{(iv)}\right)\frac{1}{\mathcal{z}}\nonumber\\ \mathcal{x}&\equiv k_{dmt}\left(1+\frac{k^{(iii)}_{diss}}{k_h}k_{m}^{(iii)}\right)\left(1+\frac{k^{(iv)}_{diss}}{k_{dmt}}k_{m}^{(iv)}\right)\nonumber\\ \mathcal{y}&\equiv \frac{k_{dmt}}{k_h}\left(1+\frac{k_{diss}^{(iv)}}{k_{dmt}}k_{m}^{(iv)}\right)\nonumber\\ \mathcal{z}&\equiv\left[1+\left(1+\frac{k_{diss}^{(iii)}}{k_a}\right)k_{m}^{(iii)}\right]\mathcal{y}+\left[1+\left(1+\frac{k_{diss}^{(iv)}}{k_a}\right)k_{m}^{(iv)}\right]\nonumber\\ & + \left[\frac{1}{k_{bi}[atp]}\left(1+\frac{k_r}{k_d}\right)+k_d^{-1}+k_{dadp}^{-1}\right]\mathcal{x}\nonumber\\ k_{m}^{(iii)}&\equiv\frac{k_{bi}^{(iii)}[atp]}{k_r^{(iii)}+k_{diss}^{(iii)}},\quad k_{m}^{(iv)}\equiv\frac{k_{bi}^{(iv)}[atp]}{k_r^{(iv)}+k_{diss}^{(iv)}}. \label{eqn : ss } \tag{11}\end{aligned}\ ] ] when the average velocity at steady state is computed using {(i)}-k_r p_{(ii)})=\mathcal{x}/\mathcal{z} ] is negligible in fig .1 ) , then where is the dissociation rate . + 10 ma , y. z. & taylor , e. w. ( 1997 ) _ j. biol . chem . _ * 272 * , 724730 .moyer , m. l. , gilbert , s. p. , & johnson , k. a. ( 1998 ) _ biochemistry _ * 37 * , 800813 .visscher , k. , schnitzer , m. j. , & block , s. m. ( 1999 ) _ nature _ * 400 * , 184187 .shaevitz , j. w. , abbondanzieri , e. a. , landick , r. , & block , s. m. ( 2003 ) _ nature _ * 426 * , 684687 .chemla , y. r. , aathavan , k. , michaelis , j. , grimes , s. , jardine , p. j. , anderson , d. l. , & bustamante , c. ( 2005 ) _ cell _ * 122 * , 683692 .kozielski , f. , sack , s. , marx , a. , thormhlen , m. , schnbrunn , e. , biou , v. , thompson , a. , mandelkow , e. m. , & mandelkow , e. ( 1997 ) _ cell _ * 91 * , 985994 .nitta , r. , kikkawa , m. , okada , y. , & hirokawa , n. ( 2004 ) _ science _ * 305 * , 678683 .sablin , e. p. , case , r. b. , dai , s. c. , hart , c. l. , ruby , a. , vale , r. d. , & fletterick , r. j. ( 1998 ) _ nature _ * 395 * , 813816 .rayment , i. , rypniewski , w. r. , schmidt - base , k. , smith , r. , tomchick , d. r. 
, benning , m. m. , winkelmann , d. a. , wesenberg , g. , & holden , h. m. ( 1993 ) _ science _ * 261 * , 5058 .abrahams , j. p. , leslie , a. g. w. , lutter , r. , & walker , j. e. ( 1994 ) _ nature _ * 370 * , 621628 .xu , z. , horwich , a. l. , & sigler , p. b. ( 1997 ) _ nature _ * 388 * , 741 .cramer , p. , bushnell , d. a. , & kornberg , r. d. ( 2001 ) _ science _ * 292 * , 18631876 .yusupov , m. m. , yusupova , g. z. , baucom , a. , lieberman , k. , earnest , t. n. , cate , j. h. d. , & noller , h. f. ( 2001 ) _ science _ * 292 * , 883896 .koga , n. & takada , s. ( 2006 ) _ proc .sci . _ * 103 * , 53675372 .okazaki , k. , koga , n. , takada , s. , onuchic , j. n. , & wolynes , p. g. ( 2006 ) _ proc .sci . _ * 103 * , 1184411849 .hyeon , c. , lorimer , g. h. , & thirumalai , d. ( 2006 ) _ proc .sci . _ * 103 * , 1893918944 .yu , j. , ha , t. , & schulten , k. ( 2006 ) _ biophys .j. _ * 91 * , 20972114 .brady , s. t. ( 1985 ) _ nature _ * 317 * , 7375 .vale , r. d. , reese , t. s. , & sheetz , m. p. ( 1985 ) _ cell _ * 42 * , 3950 .hancock , w. d. & howard , j. ( 1999 ) _ proc .sci . _ * 96 * , 1314713152 .uemura , s. & ishiwata , s. ( 2003 ) _ nature struct .* 10 * , 308311 .klumpp , l. m. , hoenger , a. , & gilbert , s. p. ( 2004 ) _ proc ._ * 101 * , 34443449 .rosenfeld , s. s. , fordyce , p. m. , jefferson , g. m. , king , p. h. , & block , s. m. ( 2003 ) __ * 278 * , 1855018556 .guydosh , n. r. & block , s. m. ( 2006 ) _ proc ._ * 103 * , 80548059 .kikkawa , m. , okada , y. , & hirokawa , n. ( 2000 ) _ cell _ * 100 * , 241252 . rice , s. , lin , a. w. , safer , d. , hart , c. l. , naber , n. , carragher , b. o. , cain , s. m. , pechatnikova , e. , wilson - kubalek , e. m. , whittaker , m. , pate , e. , cooke , r. , taylor , e. m. , milligan , r. a. , & vale , r. d. ( 1999 ) _ nature _ * 402 * , 778784 .sindelar , c. v. , budny , m. j. , rice , s. , naber , n. , fletterick , r. , & cooke , r. ( 2002 ) _ nature struct .biol . _ * 9*(11 ) , 844848 .cross , r. a. ( 2004 ) _ trends biochem .* 29 * , 301309 .hackney , d. d. ( 1994 ) _ proc .sci . _ * 91 * , 68656869 .asbury , c. l. , fehr , a. n. , & block , s. m. ( 2003 ) _ science _ * 302 * , 21302134 .cho , s. s. , levy , y. , & wolynes , p. g. ( 2006 ) _ proc .sci . _ * 103 * , 586591 .marko , j. f. & siggia , e. d. ( 1996 ) _ macromolecules _ * 27 * , 981988 . schuler , b. , lipman , e. a. , steinbach , p. j. , kumke , m. , & eaton , w. a. ( 2005 ) _ proc . natl .sci . _ * 102 * , 27542759 .hackney , d. d. , stock , m. f. , moore , j. , & patterson , r. a. ( 2003 ) _ biochemistry _ * 42 * , 1201112018 .terada , t. p. , sasai , m. , & yomo , t. ( 2002 ) _ proc ._ * 99 * , 92029206. fisher , m. e. & kolomeisky , a. b. ( 2001 ) _ proc .sci . _ * 98 * , 77487753 .fisher , m. e. & kim , y. c. ( 2005 ) _ proc .sci . _ * 102 * , 1620916214 .reimann , p. ( 2002 ) _ phys . rep ._ * 361 * , 57265 .miyashita , o. , onuchic , j. n. , & wolynes , p. g. ( 2003 ) _ proc .sci . _ * 100 * , 1257012575 .clementi , c. , nymeyer , h. , & onuchic , j. n. ( 2000 ) _ j. mol .biol . _ * 298 * , 937953 .hyeon , c. & thirumalai , d. ( 2007 ) _ biophys .j. ( in press ) _ * 92 * , 731743 .hyeon , c. , dima , r. i. , & thirumalai , d. ( 2006 ) _ structure _ * 14 * , 16331645 .honeycutt , j. d. & thirumalai , d. ( 1992 ) _ biopolymers _ * 32 * , 695709 .sack , s. , muller , j. , alexander , m. , manfred , t. , mandelkow , e. m , brady , s. t. , & mandelkow , e. ( 1997 ) _ biochemistry _ * 36 * , 16155 16165 .sack , s. , kull , f. j. , & mandelkow , e. 
* fig . 6 * : kinesin structure . ( _ a _ ) top view when kinesin is bound to tubulin . the helices on the top with respect to the -sheet are colored in pink . ( _ b _ ) bottom view . the helices on the bottom with respect to the -sheet or on the side of the tubulin binding interface are colored in lightblue . the l11 loop , which is not observable in the crystal structure because of disorder , is tentatively drawn with a dashed line . ( _ c _ ) side view . note that the neck - linker ( , ) connects the neck - helix with the helix in the motor domain . ( _ d _ ) view around the nucleotide binding site . the p - loop , switch-1 , switch-2 , and n4 regions , which are relevant to nucleotide binding , are colored in red with annotation .
* fig . 7 * : analysis of the kinesin equilibrium dynamics using the gaussian network model ( gnm ) . ( _ a _ ) the cross - correlation map of the residue fluctuations for the two - headed kinesin structure shown in fig . 2c . ( _ b _ ) the amplitudes are color - coded by value . the relative difference between the trailing and the leading head with respect to the leading head ( ) is plotted on the right . ( _ c _ ) the mean square displacements of residues in the trailing head ( red ) and the leading head ( blue ) are shown on the same plot . a comparison of the amplitudes shows that the leading head fluctuates more than the trailing head . the relative difference between the two plots with respect to the leading head ( ) is plotted on the right panel .
* fig . 8 * : analysis of the kinesin equilibrium dynamics using an equilibrium ensemble generated from the simulations under the sb - hamiltonian . the legends for _ a _ - _ c _ are identical to si fig . 7 .
* fig . 9 * : results of the strain induced regulation in the kinesin dimer on the mt generated using the sop potential ( eq . [ eq : sop ] ) . comparisons between the results obtained with the sop potential and with the sb potential confirm qualitatively identical conclusions .
in the presence of atp , kinesin proceeds along the protofilament of the microtubule by alternating binding of its two motor domains to the tubulin binding sites . since the processivity of kinesin is much higher than that of other motor proteins , it has been speculated that there exists a mechanism for allosteric regulation between the two monomers . recent experiments suggest that atp binding to the leading head domain in kinesin is regulated by the rearward strain built on the neck - linker . we test this hypothesis by explicitly modeling a structure - based kinesin dimer both of whose motor domains are bound to the tubulin binding sites . the equilibrium structures of kinesin on the microtubule show disordered and ordered neck - linker configurations for the leading and the trailing head , respectively . the comparison of the structures between the two heads shows that several native contacts present at the nucleotide binding site in the leading head are less intact than those in the binding site of the rear head . the network of native contacts obtained from this comparison provides the internal tension propagation pathway , which leads to the disruption of the nucleotide binding site in the leading head . also , using an argument based on polymer theory , we estimate the internal tension built on the neck - linker to be pn . both of these conclusions support the experimental hypothesis .
learning the covariance structure of a stochastic process from data is a fundamental prerequisite for problems such as prediction , classification and control . for example , to do prediction for an ornstein - uhlenbeck ( ou ) process ( uhlenbeck and ornstein ) , the parameters of its covariance function have to be estimated from the observed data . having observed a brownian motion ( bm ) with measurement error at regularly spaced points , stein showed that a modified ml ( mml ) estimator of the ratio of the variance of the increments of the bm process to that of the measurement error is only fourth - root - consistent , whereas the corresponding mml estimator of the measurement - error variance still remains root - consistent . similar asymptotic results for the ml estimators of the two variances have also been established by at - sahalia , mykland and zhang . in this article , we shall superimpose a time trend ( regression ) term on an ou process with measurement error in order to accommodate a broader range of applications . specifically , we propose the following model for a real - valued stochastic process :

z(t_i) = \mathbf{x}(t_i)'\boldsymbol\beta + \eta(t_i) + \epsilon_i , \quad i = 1 , \dots , n , ( [ setupfullmodel ] )

where $\mathbf{x}(t)$ is a $p$ - dimensional time trend vector , $\eta(\cdot)$ is a zero - mean ou process with covariance function defined in ( [ expcovfun1dim ] ) , $\epsilon_i$ is a zero - mean gaussian measurement error with variance $\sigma_\epsilon^2$ for some unknown $\sigma_\epsilon > 0$ , $\boldsymbol\beta$ is a $p$ - dimensional constant vector , and $\eta(\cdot)$ and the $\epsilon_i$ are independent . in a computer experiment , $\eta$ in ( [ setupfullmodel ] ) can be used to describe the systematic departure of the response from the linear model and $\epsilon_i$ denotes the measurement error . for more details , we refer the reader to sacks , schiller and welch and ying . model ( [ setupfullmodel ] ) can also be applied to one - dimensional geostatistical modeling , in which the covariance function of $\eta$ corresponds to a commonly used exponential covariance model ; see ripley and cressie for numerous examples . denote the true time trend by $f(\cdot)$ , assumed independent of $\eta$ and $\epsilon$ , and define $\mathbf{f} = ( f(t_1) , \dots , f(t_n) )'$ . the time trend model in ( [ setupfullmodel ] ) is said to be correctly specified if $f(\cdot) = \mathbf{x}(\cdot)'\boldsymbol\beta_0$ for some $\boldsymbol\beta_0$ , and misspecified otherwise . in this article , we shall allow the time trend to be misspecified , which further increases the flexibility of model ( [ setupfullmodel ] ) . however , a misspecified time trend will usually create extra challenges in estimating covariance parameters . this motivates us to ask how the ml estimators of the covariance parameters in model ( [ setupfullmodel ] ) perform when the corresponding time trend model is subject to misspecification . to facilitate exposition , we assume in the sequel that the sampling points are $t_i = i / n^{1-\delta}$ , $i = 1 , \dots , n$ , for some $\delta \in [ 0 , 1 )$ , so that the observation domain is $( 0 , n^{\delta} ]$ . the case $\delta = 0$ , in which the domain is the fixed interval $( 0 , 1 ]$ , has been considered by the aforementioned authors , and the setup is called fixed domain asymptotics . on the other hand , when $\delta > 0$ , the domain grows to infinity as $n \to \infty$ , with a faster growing rate for a larger value of $\delta$ , and the setup is referred to as the increasing domain asymptotics , even though the minimum inter - data distance goes to zero . this is different from the increasing domain setup considered by zhang and zimmerman , in which the minimum distance between sampling points is bounded away from zero .
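to make the setup concrete , the following python sketch simulates data from the model above . the exponential ( ou ) covariance $\gamma(d) = \sigma^2 e^{-\kappa d}$ , the sampling design $t_i = i / n^{1-\delta}$ , the intercept - plus - scaled - linear trend , and all numerical values are assumptions made for illustration , not quantities fixed by the paper .

import numpy as np

rng = np.random.default_rng(0)

def simulate(n, delta, beta, sigma2, kappa, sigma2_eps):
    # z_i = x(t_i)'beta + eta(t_i) + eps_i on the domain (0, n**delta]
    t = np.arange(1, n + 1) / n ** (1 - delta)      # assumed sampling design
    X = np.column_stack([np.ones(n), t / t[-1]])    # intercept + scaled linear trend
    cov = sigma2 * np.exp(-kappa * np.abs(t[:, None] - t[None, :]))
    eta = rng.multivariate_normal(np.zeros(n), cov) # ou (exponential) process
    eps = rng.normal(0.0, np.sqrt(sigma2_eps), n)   # measurement error
    return t, X, X @ beta + eta + eps

t, X, z = simulate(n=500, delta=0.5, beta=np.array([1.0, 0.5]),
                   sigma2=2.0, kappa=1.5, sigma2_eps=0.25)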
by incorporating both fixed and increasing domains , our _ mixed _ domain asymptotic framework enables us to explore the interplay between the model misspecification / complexity and the growing rate of the domain on the asymptotic behaviors of the ml estimators , thereby leading to an intriguing answer to the above question . re - parameterizing the covariance function in ( [ expcovfun1dim ] ) , the covariance parameter vector in model ( [ setupfullmodel ] ) can be written as $\bolds\theta = ( \theta_1 , \theta_2 , \theta_3 )'$ . let $\theta$ , the parameter space , be a compact set and suppose $\bolds\theta_0 \in \theta$ . based on model ( [ setupfullmodel ] ) and the observations $\mathbf{z} = ( z(t_1) , \dots , z(t_n) )'$ , we estimate $\bolds\theta_0$ using the ml estimator $\hat{\bolds\theta}$ , which satisfies $l(\hat{\bolds\theta}) = \max_{\bolds\theta \in \theta} l(\bolds\theta)$ , where

l(\bolds\theta) = -\frac{n}{2}\log(2\pi) - \frac{1}{2}\log\det\bolds\sigma(\bolds\theta) - \frac{1}{2}\mathbf{z}'\bigl(\mathbf{i}-\mathbf{m}(\bolds\theta)\bigr)'\bolds\sigma^{-1}(\bolds\theta)\bigl(\mathbf{i}-\mathbf{m}(\bolds\theta)\bigr)\mathbf{z} ( [ loglikefuntrue ] )

is known as the profile log - likelihood function , in which $\mathbf{m}(\bolds\theta) = \mathbf{x}(\mathbf{x}'\bolds\sigma^{-1}(\bolds\theta)\mathbf{x})^{-1}\mathbf{x}'\bolds\sigma^{-1}(\bolds\theta)$ ( [ projectionmatrix ] ) , with the design matrix $\mathbf{x}$ being full rank almost surely ( a.s . ) . it is not difficult to show that the ml estimator of $\bolds\beta$ is given by $\hat{\bolds\beta} = (\mathbf{x}'\bolds\sigma^{-1}(\hat{\bolds\theta})\mathbf{x})^{-1}\mathbf{x}'\bolds\sigma^{-1}(\hat{\bolds\theta})\mathbf{z}$ . however , since model ( [ setupfullmodel ] ) can be misspecified , investigating the asymptotic properties of $\hat{\bolds\beta}$ is beyond the scope of this paper . let $\bolds\eta = ( \eta(t_1) , \dots , \eta(t_n) )'$ and $\bolds\epsilon = ( \epsilon_1 , \dots , \epsilon_n )'$ . by ( [ setupfullmodelf ] ) and ( [ loglikefuntrue ] ) , the profile log - likelihood decomposes into a term driven by the time trend and the log - density function for $\bolds\eta + \bolds\epsilon$ . as will be seen in section [ sec2 ] , the contribution of the time trend is mainly made by the first term of this decomposition , which vanishes when ( [ correct ] ) holds true and is due to model misspecification , whereas the second term , having a bounded order of magnitude uniformly over $\theta$ ( see lemma [ lemmaprojectionsup ] ) , is related to model complexity . we therefore introduce the assumption that ( [ bound ] ) is $o( n^{\xi} )$ uniformly over $\theta$ for some $\xi \geq 0$ ( [ assumptionofmisselected ] ) . the growing rates of the domain needed for $\hat\theta_2$ , $\hat\theta_3$ to achieve consistency are given in the next theorem in terms of the order of magnitude of $n^{\xi}$ . it provides a preliminary answer to the question of whether the covariance structures of $\eta$ and $\epsilon$ can be learnt from data under possible model misspecification . [ thmconsistency ] suppose that ( [ assumptionofmisselected ] ) holds for some $\xi < 1$ . then $\hat\theta_1$ is consistent ; $\hat\theta_2$ is consistent provided $\xi < (1+\delta)/2$ ; and $\hat\theta_3$ is consistent provided $\xi < \delta$ . theorem [ thmconsistency ] shows that as long as ( [ assumptionofmisselected ] ) holds true , $\hat\theta_1$ is a consistent estimator of $\theta_{0,1}$ , regardless of the value of $\delta$ . in contrast , in order for $\hat\theta_2$ and $\hat\theta_3$ to achieve consistency , one would require $\xi < (1+\delta)/2$ and $\xi < \delta$ , respectively . in fact , these two constraints can not be weakened because we provide counterexamples in section [ sec3 ] illustrating that $\hat\theta_3$ is no longer consistent when $\xi = \delta$ , and both $\hat\theta_2$ and $\hat\theta_3$ fail to achieve consistency if $\xi = (1+\delta)/2$ . it is worth mentioning that $l(\bolds\theta)$ is highly convoluted due to the involvement of regression terms , making it difficult to establish the consistency of $\hat{\bolds\theta}$ . our strategy is to decompose the nonstochastic part of $l(\bolds\theta)$ into several layers whose first three leading orders are $n$ , $n^{(1+\delta)/2}$ and $n^{\delta}$ , respectively , and express the remainder stochastic part as a sum of terms that can be uniformly controlled ; see ( [ lemmaconvergea.s.0 ] ) . one distinctive characteristic of these nonstochastic layers is that the coefficient associated with the $i$ th leading layer only depends on $\theta_1 , \dots , \theta_i$ . when ( [ assumptionofmisselected ] ) is assumed , this hierarchical layer structure , together with some uniform bounds established for the second moments of the stochastic terms , enables us to derive the consistency of $\hat\theta_1$ , $\hat\theta_2$ and $\hat\theta_3$ by focusing on one layer and one parameter at a time . let $\operatorname{tr}(\cdot)$ denote the trace of a matrix .
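for readers who want to experiment , here is a minimal numerical sketch of the profile log - likelihood ( [ loglikefuntrue ] ) and the resulting ml fit , reusing t , x , z from the simulation sketch above . the direct ( sigma2_eps , sigma2 , kappa ) parametrization and the derivative - free optimizer are assumptions of this illustration , not the re - parameterization used in the paper .

import numpy as np
from scipy.linalg import cho_factor, cho_solve
from scipy.optimize import minimize

def neg_profile_loglik(log_par, t, X, z):
    # par = (sigma2_eps, sigma2, kappa), optimized on the log scale for positivity
    s2e, s2, kap = np.exp(log_par)
    n = len(z)
    Sigma = s2 * np.exp(-kap * np.abs(t[:, None] - t[None, :])) + s2e * np.eye(n)
    c, low = cho_factor(Sigma)
    Si_z = cho_solve((c, low), z)
    Si_X = cho_solve((c, low), X)
    beta = np.linalg.solve(X.T @ Si_X, X.T @ Si_z)   # gls estimator beta(theta)
    r = z - X @ beta                                 # (i - m(theta)) z, in effect
    logdet = 2.0 * np.sum(np.log(np.diag(c)))        # log det Sigma(theta)
    return logdet + r @ cho_solve((c, low), r) + n * np.log(2 * np.pi)

res = minimize(neg_profile_loglik, x0=np.log([0.1, 1.0, 1.0]),
               args=(t, X, z), method="Nelder-Mead")
sigma2_eps_hat, sigma2_hat, kappa_hat = np.exp(res.x)
print(sigma2_eps_hat, sigma2_hat, kappa_hat)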
as shown in the proof of theorem [ thmconsistency ] , the uniform bounds for the stochastic terms are first expressed in terms of the supremums of certain quadratic forms , or other similar trace terms such as those given in ( [ pfthm01eq01 ] ) . these expressions are obtained using the idea that the sup - norms of a sufficiently smooth function can be bounded above by suitable integral norms , as suggested in lai , chan and ing and chan , huang and ing . we then carefully calculate the orders of magnitude of the aforementioned traces , yielding uniform bounds in terms of suitable powers of $n$ . note that dahlhaus has applied the chaining lemma ( see pollard ) to obtain uniform probability bounds for some quadratic forms of a discrete time long - memory process . however , since no rates have been reported in his bounds , his approach may not be directly applicable here . whereas theorem [ thmconsistency ] has demonstrated the performance of $\hat{\bolds\theta}$ from the perspective of consistency , the questions of what the convergence rates of $\hat{\bolds\theta}$ are and whether there are central limit theorems ( clts ) for the $\hat\theta_i$ still remain unanswered . the next section is devoted to these questions . in particular , it is shown in theorem [ thmctlunderincorrectmodel ] that for each $i$ , $\hat\theta_i$ attains an explicit convergence rate if the misspecification / complexity exponent $\xi$ is small enough , and has a limiting normal distribution under a slightly stronger restriction on $\xi$ . since the time trend is involved , our proof of theorem [ thmctlunderincorrectmodel ] is somewhat nonstandard . we first obtain the initial convergence rates of $\hat{\bolds\theta}$ using the standard taylor expansion and an argument similar to , but subtler than , the one used in the proof of theorem [ thmconsistency ] . using these initial rates , we can improve the convergence results through the same argument . we then repeat this iterative procedure until the final convergence results are established . the rest of this article is organized as follows . in section [ sec2 ] , we begin by establishing the clt for $\hat{\bolds\theta}$ in situations where the dimension $p$ of the time trend model is fixed and the regression model is correctly specified ( namely , ( [ correct ] ) is true ) ; see theorem [ thmctlundertruemodel ] . we subsequently drop these two restrictions and report in theorem [ thmctlunderincorrectmodel ] the most general convergence results of this paper . in section [ sec3 ] , we provide two counterexamples showing that the results obtained in theorem [ thmconsistency ] are difficult to improve . the proofs of all theorems and corollaries in the first three sections are given in section [ sectionproofoftheoremsandcorollaries ] . the proofs of the auxiliary lemmas used in section [ sectionproofoftheoremsandcorollaries ] are provided in the supplementary material ( chang , huang and ing ) in light of space constraints . before leaving this section , we remark that although our results are derived under the gaussianity of $\eta$ and $\epsilon$ , similar results can be obtained when either $\eta$ or $\epsilon$ is not ( but is pretended to be ) gaussian , provided some fourth moment information is available . on the other hand , while we allow the time trend to be misspecified , we preclude a misspecified covariance model . the interested reader is referred to xiu for some asymptotic results on the ml estimators when the covariance model considered in stein or at - sahalia , mykland and zhang is misspecified . in this section , we begin by establishing the asymptotic normality of $\hat{\bolds\theta}$ in situations where the regression model is correctly specified and $p$ is fixed . [ thmctlundertruemodel ] assume that ( [ correct ] ) holds and $p$ is a fixed nonnegative integer . ( note that these assumptions yield $\xi = 0$ in ( [ assumptionofmisselected ] ) . )
then clts hold for $\hat\theta_1$ , $\hat\theta_2$ and $\hat\theta_3$ , with normalizing rates $n^{1/2}$ , $n^{(1+\delta)/4}$ and $n^{\delta/2}$ , respectively ( for $\hat\theta_3$ , provided $\delta > 0$ ) ; see ( [ thmtheta1consistency ] ) - ( [ thmtheta3consistency ] ) . one of the easiest ways to understand theorem [ thmctlundertruemodel ] is to link the result to the fisher information matrix . straightforward calculations show that under the assumption of theorem [ thmctlundertruemodel ] , the diagonal elements of the fisher information matrix evaluated at $\bolds\theta_0$ are given by ( [ fisherview ] ) , where the trace terms are solely contributed by the log - density ( log - likelihood ) function for $\bolds\eta + \bolds\epsilon$ ( defined in ( [ logpdfe+e ] ) ) , and the remaining terms , which vanish if the time trend is known to be zero , are related to the model complexity . moreover , by ( [ sigma1sigmainverse2 ] ) , ( [ diffsigma1sigmainverse ] ) and ( [ sigma1sigmainverse4 ] ) , it is interesting to point out that the denominator on the right - hand side of the first equation of ( [ fisherview ] ) coincides exactly with the limiting variance in ( [ thmtheta1consistency ] ) . this is reminiscent of the conventional asymptotic theory for the ml estimate , which says that the limiting variance of the ml estimate is the reciprocal of the corresponding fisher information number . on the other hand , while the reciprocals of the right - hand sides of the second and third identities of ( [ fisherview ] ) are the same as the limiting variances in ( [ thmtheta2consistency ] ) and ( [ thmtheta3consistency ] ) , the divergence rates of the corresponding trace terms are much slower than $n$ . in fact , they are equal to the divergence rates of the second and third leading layers of the nonstochastic part of $l(\bolds\theta)$ ; see ( [ lemmaconvergea.s.0 ] ) . these findings reveal that the amounts of information related to the $\theta_{0,i}$ s have different orders of magnitude , thereby leading to different normalizing constants in the clts for the $\hat\theta_i$ s . the next theorem improves theorem [ thmctlundertruemodel ] by deriving rates of convergence of $\hat{\bolds\theta}$ without requiring $\xi = 0$ in ( [ assumptionofmisselected ] ) . it further shows that clts for $\hat\theta_2$ , $\hat\theta_3$ are still possible if the model misspecification / complexity associated with the time trend has an order of magnitude smaller than $n^{(1+\delta)/4}$ and $n^{\delta/2}$ , respectively . [ thmctlunderincorrectmodel ] suppose that ( [ assumptionofmisselected ] ) is true . then for each $i$ , rates of convergence of $\hat\theta_i$ are obtained , and in addition clts hold under correspondingly stronger conditions on $\xi$ . recall from ( [ fisherview0 ] ) and ( [ fisherview ] ) that , ignoring constants , the amount of information regarding $\theta_{0,i}$ contained in $\bolds\eta + \bolds\epsilon$ is $n$ , $n^{(1+\delta)/2}$ and $n^{\delta}$ for $i = 1 , 2 , 3$ , respectively . on the other hand , as will become clear later , $n^{\xi}$ can be used to measure the amount of information contaminated by model misspecification / complexity ( again ignoring constants ) . therefore , the first part of theorem [ thmctlunderincorrectmodel ] delivers nothing more than the simple idea that $\hat\theta_i$ behaves well provided that

amount of information contaminated by model misspecification / complexity < amount of information regarding $\theta_{0,i}$ contained in $\bolds\eta + \bolds\epsilon$ . ( [ maincondition ] )

note that the second term on the right - hand side of ( [ mainidea ] ) is the best rate one can expect when the time trend is known to be zero . the second part of theorem [ thmctlunderincorrectmodel ] further indicates that the clts for the $\hat\theta_i$ s in theorem [ thmctlundertruemodel ] carry over to situations where ( [ maincondition ] ) holds with the right - hand side replaced by its square root . to the best of our knowledge , this is one of the most general clts established for the $\hat\theta_i$ s . in the following , we present two specific examples illustrating how the asymptotic behavior of the $\hat\theta_i$ s is affected by the interaction between $\delta$ and $\xi$ . in the first example , the model misspecification yields $\xi = \delta$ , and hence $\xi < (1+\delta)/2$ for all $\delta \in [ 0 , 1 )$ .
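the trace quantities behind this fisher information view are easy to evaluate numerically . the sketch below uses the direct ( sigma2_eps , sigma2 , kappa ) parametrization and finite differences , both assumptions of this illustration , to show how the three diagonal information terms $\tfrac{1}{2}\operatorname{tr}((\bolds\sigma^{-1}\partial\bolds\sigma/\partial\theta_i)^2)$ grow at visibly different rates in $n$ .

import numpy as np

def sigma(par, t):
    s2e, s2, kap = par
    D = np.abs(t[:, None] - t[None, :])
    return s2 * np.exp(-kap * D) + s2e * np.eye(len(t))

def fisher_diag(par, t, h=1e-5):
    S = sigma(par, t)
    Si = np.linalg.inv(S)
    out = []
    for i in range(3):
        dp = np.array(par, float); dp[i] += h
        dS = (sigma(dp, t) - S) / h          # finite-difference d Sigma / d theta_i
        A = Si @ dS
        out.append(0.5 * np.trace(A @ A))    # gaussian fisher information term
    return out

delta = 0.5
for n in (200, 400, 800):
    t = np.arange(1, n + 1) / n ** (1 - delta)
    print(n, fisher_diag((0.25, 2.0, 1.5), t))

doubling n should roughly double the first term , while the other two grow like n**((1+delta)/2) and n**delta , mirroring the three layers discussed above .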
according to theorem [ thmctlunderincorrectmodel ] , the clts for $\hat\theta_1$ and $\hat\theta_2$ hold for a certain range of $\delta$ . [ coropolymle0 ] consider the intercept - only model of ( [ setupfullmodel ] ) with $\mathbf{x}(t) \equiv 1$ . suppose that $f(t) = \beta_{0,0} + \beta_{0,1} t / n^{\delta}$ , where $\beta_{0,0}$ and $\beta_{0,1}$ are nonzero constants . then for $\delta \in [ 0 , 1 )$ ,

\begin{aligned}
n^{1-\delta}(\hat\theta_1-\theta_{0,1}) &= o_p(1) , & \delta &\in [1/2,1) , \\
n^{(1+\delta)/4}(\hat\theta_2-\theta_{0,2}) &\stackrel{d}{\rightarrow} n\bigl(0,\,2^{5/2}\theta_{0,1}^{1/2}\theta_{0,2}^{3/2}\bigr) , & \delta &\in [0,1/3) , \\
n^{(1-\delta)/2}(\hat\theta_2-\theta_{0,2}) &= o_p(1) , & \delta &\in [1/3,1) .
\end{aligned} ( [ cor1theta1consistency ] ) - ( [ cor1theta2consistency ] )

we remark that the scaling factor $n^{-\delta}$ is introduced for the linear term so that the magnitude of the trend does not depend on $n$ . the model misspecification in the next example results in a misspecification term of order $n^{(1+\delta)/2}$ , yielding $\xi = (1+\delta)/2$ . therefore , only $\hat\theta_1$ is guaranteed to be consistent in view of theorem [ thmctlunderincorrectmodel ] . [ coroexp0 ] consider the same setup as in corollary [ coropolymle0 ] except that $f$ is generated from a zero - mean gaussian spatial process with an exponential - type covariance function , for some constants . then for $\delta \in [ 0 , 1 )$ , the convergence results ( [ cor3theta1 ] ) for $\hat\theta_1$ hold . it is worth noting that $\hat\theta_3$ is inconsistent under the setup of corollary [ coropolymle0 ] . moreover , both $\hat\theta_2$ and $\hat\theta_3$ are inconsistent under the setup of corollary [ coroexp0 ] . these inconsistency results will be reported in detail in the next section . before closing this section , we remark that our theoretical results on $\hat{\bolds\theta}$ can be used to make statistical inference about the regression function . for example , when ( [ correct ] ) holds and $p$ is a fixed integer , the convergence rate of $\hat{\bolds\theta}$ obtained in theorem [ thmctlundertruemodel ] plays an indispensable role in analyzing the convergence rate of the ml estimator , $\hat{\bolds\beta}$ , of $\bolds\beta_0$ . recently , by making use of theorems [ thmctlundertruemodel ] and [ thmctlunderincorrectmodel ] , chang , huang and ing established the first model selection consistency result under the mixed domain asymptotic framework . moreover , some technical results established in the proofs of theorems [ thmctlundertruemodel ] and [ thmctlunderincorrectmodel ] have been used by chang , huang and ing to develop a model selection consistency result under a misspecified covariance model . using the examples constructed in corollaries [ coropolymle0 ] and [ coroexp0 ] , we show in this section that the constraints $\xi < (1+\delta)/2$ and $\xi < \delta$ imposed in theorem [ thmconsistency ] for the consistency of $\hat\theta_2$ and $\hat\theta_3$ , respectively , can not be relaxed . [ coropolymle ] under the setup of corollary [ coropolymle0 ] , $\hat\theta_3$ fails to be consistent ; see ( [ cor1theta3inconsistent ] ) . [ coroexp ] under the setup of corollary [ coroexp0 ] , $\hat\theta_2$ and $\hat\theta_3$ fail to be consistent ; see ( [ cor3theta2inconsistent ] ) and ( [ cor3theta3inconsistent ] ) . all the above results can be illustrated by figure [ orderarea ] , in which some change point behavior of the $\hat\theta_i$ s ( in terms of modes of convergence ) is exhibited when $( \delta , \xi )$ runs through the region .

figure [ orderarea ] caption : modes of convergence of $\hat\theta_i$ with respect to $( \delta , \xi )$ , where $\delta$ , the growing rate of the domain , satisfies $\delta \in [ 0 , 1 )$ . note that $\hat\theta_i$ also possesses asymptotic normality when $( \delta , \xi )$ falls in the dark gray regions , but may fail to achieve consistency when it falls in the white regions or on the dashed lines . in addition , the points on the lines between the light and dark gray areas are referred to as change points merely in the modes of convergence , not in the convergence rate scenario .
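the fixed - domain ( $\delta = 0$ ) fourth - root phenomenon for $\hat\theta_2$ is easy to probe by simulation . the sketch below assumes that $\theta_2$ corresponds to the product $\sigma^2\kappa$ of the exponential covariance $\gamma(d) = \sigma^2 e^{-\kappa d}$ , an assumption of this illustration rather than a statement of the paper 's parametrization , and checks that the spread of its ml estimate shrinks roughly like $n^{-1/4}$ .

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

def fit_s2kappa(t, z, X):
    # ml over (sigma2_eps, sigma2, kappa); returns sigma2 * kappa, the quantity
    # that remains estimable under fixed-domain asymptotics (assumed mapping)
    def nll(lp):
        s2e, s2, kap = np.exp(lp)
        S = s2 * np.exp(-kap * np.abs(t[:, None] - t[None, :])) + s2e * np.eye(len(t))
        _, logdet = np.linalg.slogdet(S)
        Si = np.linalg.inv(S)
        beta = np.linalg.solve(X.T @ Si @ X, X.T @ Si @ z)
        r = z - X @ beta
        return logdet + r @ Si @ r
    res = minimize(nll, np.log([0.2, 1.0, 1.0]), method="Nelder-Mead")
    s2e, s2, kap = np.exp(res.x)
    return s2 * kap

for n in (100, 200):
    est = []
    for _ in range(30):
        t = np.arange(1, n + 1) / n              # delta = 0: fixed domain (0, 1]
        X = np.ones((n, 1))
        cov = 2.0 * np.exp(-1.5 * np.abs(t[:, None] - t[None, :]))
        z = 1.0 + rng.multivariate_normal(np.zeros(n), cov) + rng.normal(0, 0.5, n)
        est.append(fit_s2kappa(t, z, X))
    print(n, np.std(est))                        # expected ~ n**(-1/4) scaling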
in this section , we first prove the consistency of $\hat{\bolds\theta}$ in section [ sec41 ] . the proofs of clts for $\hat{\bolds\theta}$ with and without the restrictions of correct specification and fixed dimension on the time trend model are given in sections [ sec42 ] and [ sec43 ] , respectively . the proofs of corollaries [ coropolymle0 ] and [ coropolymle ] and those of corollaries [ coroexp0 ] and [ coroexp ] are provided in sections [ sec44 ] and [ sec45 ] , respectively . to prove theorem [ thmconsistency ] , we need a series of auxiliary lemmas , lemmas [ lemmaardecomposition ] - [ lemmauniform ] . lemma [ lemmaardecomposition ] gives a modified cholesky decomposition for $\bolds\sigma(\bolds\theta)$ , which can be used to prove lemma [ propositionmaxandminboundsofeigenvalues ] , asserting that the eigenvalues of $\bolds\sigma^{-1/2}(\bolds\theta)\bolds\sigma(\bolds\theta_0)\bolds\sigma^{-1/2}(\bolds\theta)$ are uniformly bounded above and below . lemmas [ lem02ttraces ] and [ lem03dcompsegetag ] provide the orders of magnitude of the cholesky factors of $\bolds\sigma(\bolds\theta)$ and of their products with related matrices . based on lemmas [ propositionmaxandminboundsofeigenvalues ] - [ lem03dcompsegetag ] , lemma [ lemmaexpequations ] establishes asymptotic expressions for the key components of the nonstochastic part of $l(\bolds\theta)$ , and lemma [ lemmaexpequations1 ] provides the orders of magnitude of related trace terms . lemmas [ propositionmaxandminboundsofeigenvalues ] and [ lemmaexpequations1 ] can be used in conjunction with lemma [ lemmauniform ] , which provides uniform bounds for quadratic forms in i.i.d . random variables , to analyze the asymptotic behavior of the stochastic part of $l(\bolds\theta)$ ; see ( [ lemmah0result ] ) . lemmas [ lemmaprojectionsup ] and [ lemmamisselectedmodel ] explore the effects of the time trend model on $l(\bolds\theta)$ . [ lemmaardecomposition ] let $\bolds\sigma$ be given by ( [ sigma ] ) . then $\bolds\sigma$ admits a modified cholesky decomposition $\bolds\sigma = \mathbf{l}\mathbf{d}\mathbf{l}'$ , where $\mathbf{l}$ is lower triangular with unit diagonal and $\mathbf{d}$ is diagonal . [ propositionmaxandminboundsofeigenvalues ] let $\lambda_{\max}(\cdot)$ and $\lambda_{\min}(\cdot)$ denote the maximum and minimum eigenvalues of a matrix .
for $\bolds\sigma$ given by ( [ sigma ] ) , suppose that $\theta$ is compact . then

0 < \liminf_{n\rightarrow\infty}\inf_{\bolds\theta\in\theta}\lambda_{\min}\bigl(\bolds\sigma^{-1/2}(\bolds\theta)\bolds\sigma(\bolds\theta_0)\bolds\sigma^{-1/2}(\bolds\theta)\bigr)
\leq \limsup_{n\rightarrow\infty}\sup_{\bolds\theta\in\theta}\lambda_{\max}\bigl(\bolds\sigma^{-1/2}(\bolds\theta)\bolds\sigma(\bolds\theta_0)\bolds\sigma^{-1/2}(\bolds\theta)\bigr) < \infty . ( [ propositionboundeigen ] )

[ lem02ttraces ] under the setup of lemma [ lemmaardecomposition ] , for any compact $\theta$ , the following estimates on the cholesky factors hold uniformly over $\theta$ . [ lem03dcompsegetag ] under the setup of lemma [ lem02ttraces ] , analogous uniform bounds hold for the products of the cholesky factors with related matrices . [ lemmaexpequations ] under the setup of lemma [ lem02ttraces ] , the following equations hold uniformly over $\theta$ :

\log\det\bolds\sigma(\bolds\theta) = \cdots - \frac{1-\delta}{2}\log n + o\bigl(n^{\delta}\bigr) + o(1) , ( [ logdetsigma ] )

\operatorname{tr}\bigl(\bolds\sigma(\bolds\theta_0)\bolds\sigma^{-1}(\bolds\theta)\bigr) = \frac{\theta_{0,1}}{\theta_1}n - \frac{\theta_{0,1}}{2\theta_1}\Bigl(\frac{2\theta_2}{\theta_1}\Bigr)^{1/2}n^{(1+\delta)/2} + \frac{\theta_{0,2}}{(2\theta_1\theta_2)^{1/2}}n^{(1+\delta)/2} + \frac{\theta_{0,2}(\theta_3^2-\theta_{0,3}^2)}{2\theta_2\theta_{0,3}}n^{\delta} + o\bigl(n^{\delta}\bigr) + o(1) . ( [ sigma1sigmainverse3 ] )

[ lemmaexpequations1 ] under the setup of lemma [ lem02ttraces ] , the following equations hold uniformly over $\theta$ :

\operatorname{tr}\bigl(\bolds\sigma_\eta(\bolds\theta_0)\bolds\sigma^{-1}(\bolds\theta)\bigr) = \frac{\theta_{0,2}}{(2\theta_1\theta_2)^{1/2}}n^{(1+\delta)/2} + \frac{\theta_{0,2}(\theta_3^2-\theta_{0,3}^2)}{2\theta_2\theta_{0,3}}n^{\delta} + o\bigl(n^{\delta}\bigr) + o(1) , ( [ sigma1sigmainverse ] )

\operatorname{tr}\Bigl(\bigl(\bolds\sigma^{-1}(\bolds\theta)\tfrac{\partial}{\partial\theta_3}\bolds\sigma(\bolds\theta)\bigr)^2\Bigr) = \frac{1}{\theta_3}n^{\delta} + o\bigl(n^{\delta}\bigr) . ( [ diffsigma1sigmainverse ] )

as will be shown later , ( [ propositionboundeigen ] ) , ( [ sigma1sigmainverse2 ] ) and ( [ sigma1sigmainverse ] ) can be used to derive bounds for the stochastic terms of the likelihood . these bounds , together with ( [ diffsigma1sigmainverse ] ) , play important roles in establishing the consistency of $\hat\theta_3$ . [ lemmaprojectionsup ] let $\mathbf{x}$ be full rank a.s . then under the setup of lemma [ lem02ttraces ] , the complexity term is uniformly bounded over $\theta$ , where $\mathbf{m}(\bolds\theta)$ is defined in ( [ projectionmatrix ] ) . [ lemmamisselectedmodel ] under the setup of lemma [ lem02ttraces ] , let $\mathbf{x}$ be full rank a.s . and suppose that ( [ assumptionofmisselected ] ) holds for some $\xi$ . before introducing lemma [ lemmauniform ] , we need some notation for sup - norms of functions over closed balls and spheres in the parameter space . [ lemmauniform ] assume that the errors are i.i.d . random variables with mean zero and finite variance , and consider quadratic forms whose coefficient matrices are indexed by $\bolds\theta$ .
then for any , \label{thmg1order } \\[-8pt ] \nonumber & & { } + o_p\bigl(n^\xi\bigr ) + o(1 ) , \\ g_2\bigl((\hat\theta_1,\theta_{0,2},\hat\theta_3)'\bigr ) & = & o_p\bigl(n^{(1+\delta)/4 } \bigr ) + o_p\bigl(n^{(1+\delta)/2-r_1}\bigr)+ o_p \bigl(n^{\delta - r_3}\bigr ) \nonumber \\[-8pt ] \label{thmg2order } \\[-8pt ] \nonumber & & { } +o_p\bigl(n^\xi\bigr ) + o(1),\end{aligned}\ ] ] and for , \label{thmg3order } \\[-8pt ] \nonumber & & { } + o_p\bigl(n^\xi\bigr ) + o(1).\end{aligned}\ ] ] in addition , for any , if and , if , and , furthermore , for any , if , and , [ lemmataylorsecondorder ] under the setup of lemma [ lemmamisselectedmodel ] , let let be an estimate of .suppose that .then for , there exists a constant satisfying such that in addition , suppose that and , then for , there exists a constant satisfying such that furthermore , suppose that , then for , there exists a constant satisfying such that we shall prove ( [ thmtheta1consistency])([thmtheta3consistency ] ) by iteratively applying ( [ thmg1order])([thmg33order ] ) . for the first iteration, we show that \label{thmtheta2iterat1 } \\[-8pt ] \nonumber \hat\theta_2 - \theta_{0,2 } & = & o_p \bigl(n^{-(1-\delta)/2}\bigr)\qquad \mbox{if } \delta\in[1/3,1 ) , \\n^{\delta/2 } ( \hat{\theta}_3-\theta_{0,3 } ) & \displaystyle\mathop { \rightarrow}^{d } & n(0,2\theta_{0,3})\qquad \mbox{if } \delta \in(0,1/2 ) , \nonumber \\[-8pt ] \label{thmtheta3iterate1 } \\[-8pt ] \nonumber \hat{\theta}_3-\theta_{0,3 } & = & o_p \bigl(n^{-(1-\delta)/2}\bigr)\qquad \mbox{if } \delta\in[1/2,1).\end{aligned}\ ] ] proof of ( [ thmtheta1iterate1 ] ) taking the taylor expansion of at yields where satisfies .therefore , for ( [ thmtheta1iterate1 ] ) to hold , it suffices to show that g_{11}\bigl(\hat{\bolds\theta_a^*}\bigr ) & = & \frac{n}{\theta_{0,1}^2 } + o_p(n),\end{aligned}\ ] ] where the first equation follows from ( [ lemmatheta2consistency ] ) and ( [ thmg1order ] ) with , and the second one is given by ( [ lemmatheta1consistency ] ) and ( [ thmg11order ] ) .proof of ( [ thmtheta2iterat1 ] ) let .taking the taylor expansion of at yields where satisfies .therefore , for ( [ thmtheta2iterat1 ] ) to hold , it suffices to show that where the first two equations follow from ( [ thmg2order ] ) with , ( [ thmg2asymptoticnormality ] ) and ( [ thmtheta1iterate1 ] ) , and the last one is ensured by ( [ lemmatheta2consistency ] ) , ( [ thmg22order ] ) and ( [ thmtheta1iterate1 ] ) . 
proof of ( [ thmtheta3iterate1 ] ) taking the taylor expansion of at yields where satisfies .therefore , for ( [ thmtheta3iterate1 ] ) to hold , it suffices to show that where the first two equations follow from ( [ thmg3order ] ) with , ( [ thmg3asymptoticnormality ] ) , ( [ thmtheta1iterate1 ] ) and ( [ thmtheta2iterat1 ] ) , and the last one is ensured by ( [ thmg33order ] ) .thus , ( [ thmtheta3iterate1 ] ) is established .for the second iteration , we show that \label{thmtheta1iterate2 } \\[-8pt ] \nonumber \hat{\theta}_1-\theta_{0,1 } & = & o_p \bigl(n^{-(1-\delta)}\bigr)\qquad \mbox{if } \delta\in[1/2,1 ) , \\n^{(1+\delta)/4 } ( \hat{\theta}_2-\theta_{0,2 } ) & \displaystyle\mathop { \rightarrow}^{d } & n \bigl(0 , 2^{5/2}\theta_{0,1}^{1/2 } \theta_{0,2}^{3/2 } \bigr)\qquad \mbox{if } \delta\in[0,3/5 ) , \nonumber \\[-8pt ] \label{thmtheta2iterat2 } \\[-8pt ] \nonumber \hat\theta_2 - \theta_{0,2 } & = & o_p \bigl(n^{-(1-\delta)}\bigr)\qquad \mbox{if } \delta\in[3/5,1 ) , \\n^{\delta/2 } ( \hat{\theta}_3-\theta_{0,3 } ) & \displaystyle\mathop { \rightarrow}^{d } & n(0,2\theta_{0,3})\qquad \mbox{if } \delta \in(0,2/3 ) , \nonumber \\[-8pt ] \label{thmtheta3iterate2 } \\[-8pt ] \nonumber \hat{\theta}_3-\theta_{0,3 } & = & o_p \bigl(n^{-(1-\delta)}\bigr)\qquad \mbox{if } \delta\in[2/3,1).\end{aligned}\ ] ] by ( [ thmg1order ] ) with , ( [ thmg1asymptoticnormality ] ) and ( [ thmtheta2iterat1 ] ) , we have the above two equations , ( [ thmg11order ] ) and ( [ thmtheta1taylor ] ) give ( [ thmtheta1iterate2 ] ) . by ( [ thmg2order ] ) with and , ( [ thmg2asymptoticnormality ] ) , ( [ thmtheta3iterate1 ] ) and ( [ thmtheta1iterate2 ] ), we have combining these two equations together with ( [ thmg22order ] ) and ( [ thmtheta2taylor ] ) yields ( [ thmtheta2iterat2 ] ) . by ( [ thmg3order ] ) with , ( [ thmg3asymptoticnormality ] ) , ( [ thmtheta1iterate2 ] ) and ( [ thmtheta2iterat2 ] ), we have which , together with ( [ thmg33order ] ) and ( [ thmtheta3taylor ] ) , lead immediately to ( [ thmtheta3iterate2 ] ) . following the same argument as in the second iteration , we can recursivelyshow that for each , \\n^{\delta/2 } ( \hat{\theta}_3-\theta_{0,3 } ) & \displaystyle\mathop { \rightarrow}^{d } & n(0,2\theta_{0,3})\qquad \mbox{if } \delta\in \bigl(0,i/(i+1)\bigr ) , \\ \hat{\theta}_3-\theta_{0,3 } & = & o_p \bigl(n^{-i(1-\delta)/2}\bigr)\qquad \mbox{if } \delta\in\bigl[i/(i+1),1\bigr).\end{aligned}\ ] ] thus ( [ thmtheta1consistency])([thmtheta3consistency ] ) are proved .we divide the proof into three parts corresponding to , and .first , we consider .we further divide the proof into six subparts with respect to in terms of a partition of , corresponding to , , , , and .we shall prove each of the following six subparts separately : for , for , for , for , for , for , and if in addition , then proof of ( a1 ) applying ( [ thmg1order ] ) with and , we have according to ( [ lemmatheta1consistency ] ) and ( [ thmg11order ] ) , we have the desired conclusion ( a1 ) now follows from plugging ( [ thm2g1order ] ) and ( [ thm2g11order ] ) into ( [ thmtheta1taylor ] ). proof of ( a2 ) applying ( [ thmg1order ] ) with and , we have combining this with ( [ thmtheta1taylor ] ) and ( [ thm2g11order ] ) gives applying ( [ thmg2order ] ) with , and , we obtain from ( [ lemmatheta2consistency ] ) and ( [ thmg22order ] ) , we have combining this with ( [ thmtheta2taylor ] ) and ( [ thm2g2order ] ) leads to ( [ thm2parti-2 - 2 ] ) . 
in addition , applying ( [ thmg1order ] ) with and , we have this together with ( [ thmtheta1taylor ] ) and ( [ thm2g11order ] ) gives ( [ thm2parti-2 - 1 ] ) .proof of ( a3 ) following the same arguments as the one used in the proof of ( [ thm2parti-2 - 2 ] ) leads to ( [ thm2parti-3 - 2 ] ) . applying ( [ thmg1asymptoticnormality ] ) with , , and , we have this together with ( [ thmtheta1taylor ] ) and ( [ thm2g11order ] ) gives ( [ thm2parti-3 - 1 ] ) .proof of ( a4 ) applying ( [ thmg2asymptoticnormality ] ) with and , we have this , ( [ thmtheta2taylor ] ) and ( [ thm2g22order ] ) imply ( [ thm2parti-4 - 2 ] ) .moreover , ( [ thm2parti-4 - 1 ] ) can be shown by an argument similar to that used to prove ( [ thm2parti-3 - 1 ] ) .proof of ( a5 ) the proofs of ( [ thm2parti-5 - 1 ] ) and ( [ thm2parti-5 - 2 ] ) are similar to those of ( [ thm2parti-4 - 1 ] ) and ( [ thm2parti-4 - 2 ] ) , respectively . applying ( [ thmg3order ] ) with , and , we have from ( [ lemmatheta1consistency])([lemmatheta3consistency ] ) and ( [ thmg33order ] ) , we obtain combining this with ( [ thmtheta3taylor ] ) and ( [ thm2g3order ] ) leads to ( [ thm2parti-5 - 3 ] ) .proof of ( a6 ) equations ( [ thm2parti-6 - 1 ] ) and ( [ thm2parti-6 - 2 ] ) can be proved in a way similar to the proofs of ( [ thm2parti-4 - 1 ] ) and ( [ thm2parti-4 - 2 ] ) . applying ( [ thmg3asymptoticnormality ] ) with , and , we have this together with ( [ thmtheta3taylor ] ) and ( [ thm2g33order ] ) gives ( [ thm2parti-6 - 3 ] ) .second , we consider .following an argument similar to that used in the first part , we obtain for , for , for , for , for , for , third , for , one can similarly show that for , for , for , for , for , for , the proof of the theorem is complete . to prove corollaries [ coropolymle0 ] and [ coropolymle ] , the following lemma , which provides the order of magnitude of defined in ( [ eqrisk ] ) ,is needed .[ lemmapolyequations ] under the setup of lemma [ lem02ttraces ] , let and .then for any , the following equations hold uniformly in : we first prove corollary [ coropolymle0 ] .note that \label{cor1misselectorder } \\[-8pt ] \nonumber & & \quad= \beta_{0,1}^2\mathbf{x}'\bolds \sigma^{-1}(\bolds\theta)\mathbf{x } -\beta_{0,1}^2 \mathbf{x}'\bolds\sigma^{-1}(\bolds\theta)\mathbf{1}\bigl ( \mathbf{1}'\bolds\sigma^{-1}(\bolds\theta)\mathbf{1 } \bigr)^{-1 } \mathbf{1}'\bolds\sigma^{-1}(\bolds \theta)\mathbf{x } \\ & & \quad= \frac{\beta_{0,1}^2\theta_3 ^ 2}{24\theta_2}n^\delta+ o\bigl(n^\delta \bigr),\nonumber\end{aligned}\ ] ] uniformly in , where and the last equality is obtained from ( [ polyresult1])([polyresult3 ] ) .therefore , ( [ cor1xi = theta ] ) holds . with the help of ( [ cor1xi = theta ] ) ,( [ cor1theta1consistency ] ) and ( [ cor1theta2consistency ] ) follow directly from theorem [ thmctlunderincorrectmodel ] .second , we prove corollary [ coropolymle ] . by ( [ loglikedecompose ] ) , ( [ lemmamis2 ] ) and ( [ cor1misselectorder ] ) , we have uniformly in , noting that . 
therefore , by ( [ logdetsigma ] ) and ( [ sigma1sigmainverse3 ] ) , \label{loglikedecomposepoly } \\[-8pt ] \nonumber & = & n\log(2\pi ) - \frac{1-\delta}{2}\log{n } + \biggl(\log\theta_1 + \frac{\theta_{0,1}}{\theta_1 } \biggr)n \\ & & { } + \biggl(\frac{2\theta_2}{\theta_1 } \biggr)^{1/2 } \biggl(1-\frac { \theta_{0,1}}{2\theta_1 } + \frac{\theta_{0,2}}{2\theta_2 } \biggr)n^{(1+\delta)/2}\nonumber \\ & & { } - \biggl\{\frac{\theta_2}{\theta_1}+\theta_3 \biggl(1-\frac{\theta _ { 0,2}}{\theta_2 } \biggr ) + \frac{\theta_{0,2}\theta_{0,3}+\theta_{0,2}\theta _ { 0,3}^*}{2\theta_2}\nonumber \\ & & { } -\frac{\theta_{0,2}}{2\theta_2\theta_{0,3}^ * } \bigl(\theta_3-\theta _ { 0,3}^ * \bigr)^2 \biggr\ } n^\delta+h(\bolds\theta)+o_p \bigl(n^\delta\bigr)+o_p(1),\nonumber\end{aligned}\ ] ] uniformly in , where .it follows from ( [ loglikedecomposepoly ] ) and the same argument as in the proof of ( [ theta3consistent ] ) that for any , there exist such that as , where . thus ( [ cor1theta3inconsistent ] ) is established , and hence the proof is complete .we first prove ( [ cor1xi=1+theta2 ] ) .let . by an argument similar to that used to prove ( [ lemmamis2 ] ) , it can be shown that this , together with ( [ polyresult3 ] ) and ( [ sigma1sigmainverse ] ) , gives uniformly in , where .in addition , an argument similar to that used to prove ( [ lemmaconvergea.s.5 ] ) yields hence ( [ cor1xi=1+theta2 ] ) follows . in view of( [ cor1xi=1+theta2 ] ) and theorem [ thmctlunderincorrectmodel ] , we obtain ( [ cor3theta1 ] ) .thus , the proof of corollary [ coroexp0 ] is complete . to prove ( [ cor3theta2inconsistent ] ) , note first that by the same line of reasoning as in ( [ loglikedecomposepoly ] ) , one gets uniformly in , where and . moreover , using arguments similar to those used in the proofs of ( [ theta2consistent ] ) and ( [ theta3consistent ] ) , respectively , one can show that for any , there exists an such that and for any , there exist such that where and . combining ( [ loglikedecomposeexp])([ing0507b ] ) yields ( [ cor3theta2inconsistent ] ) and ( [ cor3theta3inconsistent ] ) .this completes the proof of corollary [ coroexp ] .the authors would like to thank the associate editor and an anonymous reviewer for their insightful and constructive comments , which greatly improve the presentation of this paper .the research of chih - hao chang and hsin - cheng huang was supported by ministry of science and technology of taiwan under grants most 103 - 2118-m-390 - 005-my2 and most 100 - 2628-m-001 - 004-my3 , respectively .the research of ching - kang ing was supported by academia sinica investigator award .chang , c .- h ., huang , h .- c . and ing , c .- k .( 2014 ) , `` asymptotic theory of generalized information criterion for geostatistical regression model selection , '' _ the annals of statistics _ , * 42 * , 2441 - 2468 .
we consider a stochastic process model with time trend and measurement error . we establish consistency and derive the limiting distributions of the maximum likelihood ( ml ) estimators of the covariance function parameters under a general asymptotic framework , including both the fixed domain and the increasing domain frameworks , even when the time trend model is misspecified or its complexity increases with the sample size . in particular , the convergence rates of the ml estimators are thoroughly characterized in terms of the growing rate of the domain and the degree of model misspecification / complexity .
in just the united states each year approximately 795,000 people are affected by stroke . in france strokes number around 150,000 each year . similar trends are shown all around the industrialized world . a stroke can cause disabilities , such as paralysis , speech difficulties , and emotional problems . these disabilities can significantly affect the quality of daily life for survivors . several researchers in the field have shown that important variables in relearning motor skills and in changing the underlying neural architecture after a stroke are the quantity , duration , and intensity of training sessions . in this paper we present a mixed reality approach for upper limb rehabilitation aimed at increasing the quantity , duration , and quality of training sessions . in particular the approach answers the following challenges : ( i ) increase motivation of patients by making the training a personalized experience , for example adjusting exercise difficulty according to patients cognitive and physical capabilities ; ( ii ) take into account patients impairments by offering intuitive and easy to use interaction modalities that do not require a learning curve and that are adapted to elderly people ; ( iii ) make it possible for therapists to track patients activities and evaluate progress in order to adapt therapeutic strategies ; ( iv ) open opportunities for telemedicine and tele - rehabilitation when patients leave the hospital ; ( v ) provide an economically acceptable system by reducing both equipment and management costs . it is worth noting that the above mentioned issues address two targets : the therapist and the patient . it is our opinion that it is important to evaluate acceptance of these systems from both the patient s point of view and the therapist s point of view . otherwise , even if clinical efficiency is demonstrated , developed systems will be used only for academic studies and may not be widely accepted in real rehabilitation centers . for this reason , before testing the system directly with patients we decided to conduct a pilot study in conjunction with a french hospital . the aim of this pilot was to understand the potential and benefits of mixed reality from the therapist s point of view . the pilot involved 3 therapists who played the role of patients . three sessions were held : one using conventional rehabilitation , another using an ad hoc developed game on a pc , and another using a mixed reality version of the same game . before entering into the details of the pilot study , in the following subsections we describe in more depth what a stroke is and what its consequences are , and why a virtual approach can be useful for such rehabilitation . after a stroke there is a loss of brain functions due to a disturbance in the blood supply to the brain . the affected area of the brain is unable to function , leading to an inability to move one or more limbs on one side of the body and perhaps to cognitive problems such as aphasia or the so - called neglect effect .
because of these impairments stroke sufferers are often unable to independently perform day - to - day activities such as bathing , dressing , and eating . nearly three - quarters of all strokes occur in people over the age of 65 . the specific abilities that will be lost or affected by stroke depend on the extent of the brain damage and , most importantly , where in the brain the stroke occurred . for example , a stroke in the right hemisphere often causes paralysis in the left side of the body . this is known as left hemiplegia . survivors of right - hemisphere strokes may have problems with their spatial and perceptual abilities . this may cause them to misjudge distances ( leading to a fall ) or be unable to guide their hands to pick up an object , button a shirt or tie their shoes . they may even be unable to tell right - side up from upside - down when trying to read . survivors of right - hemisphere strokes may also experience left - sided neglect . stemming from visual field impairments , left - sided neglect causes the survivor of a right - hemisphere stroke to forget or ignore objects or people on their left side . on the other hand , someone who has had a left - hemisphere stroke may develop aphasia . aphasia is a catch - all term used to describe a wide range of speech and language problems . these problems can be highly specific , affecting only one component of the patient s ability to communicate , such as the ability to move their speech - related muscles to talk properly . the same patient may be completely unimpaired when it comes to writing , reading or understanding speech . by contrast with survivors of right - hemisphere stroke , patients who have had a left - hemisphere stroke often develop a slow and cautious behavioral style . they may need frequent instruction and feedback to complete tasks . as we can see from these two examples , the consequences of two stroke accidents can be deeply different . for this reason each stroke rehabilitation program is personal , designed for a particular patient , and not a generic one . rehabilitation after a stroke involves intensive and continuous training to regain as much function as possible , depending on several factors including the severity of brain lesions and the degree of cerebral plasticity . following the hypothesis that the theories of motor learning can be applied to motor relearning , most rehabilitation techniques are founded on principles of motor learning and skill acquisition established for the healthy nervous system . acknowledged features are , among others : ( i ) the motivation of the participant ; ( ii ) the use of variable practice ( i.e. practicing a variety of related tasks ) ; ( iii ) training with high intensity / many repetitions ; ( iv ) providing feedback . in addition , therapy on the lower extremity ( i.e. , legs ) is the primary concern in early inpatient stroke therapy in order to enable mobility of the patient . recovery of the upper extremity ( i.e. , arms ) has a slower progression and is usually gained through outpatient and home therapy . patients with upper extremity paralysis typically regain motion starting from their shoulder . over time , they may gradually regain motion in the elbow , wrist , and , finally , the hand . the most important part of stroke rehabilitation is conducted during the first 6 months after the stroke , and due to the cost , only 6 - 8 weeks of rehabilitation is done under the continued direct supervision of an expert ( i.e. , in the hospital ) .
because of limitations on therapy , patients must do much of the work necessary to recover arm function at home . to summarize , after a stroke the affected area of the brain is unable to function , leading to an inability to move one or more limbs on one side of the body and also perhaps to cognitive problems such as aphasia or the so - called neglect effect . as a result of these impairments stroke sufferers are often unable to independently perform day - to - day activities such as bathing , dressing , and eating . as a side effect they can develop depression or aggressiveness due to the trauma of reduced capabilities . depression and aggressiveness also imply that stroke survivors may find it difficult to focus on a therapy programme . in addition , while these programmes attempt to stimulate the patient with a variety of rehabilitation exercises , stroke victims commonly report that traditional rehabilitation tasks can be boring due to their repetitive nature . we can then say that motivation is an important factor for rehabilitation success . several researchers have shown , both in animals and humans , that important variables in relearning motor skills and in changing the underlying neural architecture are the quantity , duration , and intensity of training sessions . in particular , research in animal models suggests that with intensive therapy ( repeating individual motions hundreds of times per day ) , animals that experience strokes can recover a significant amount of their lost motor control . similarly , recent guidelines for treatment of human patients recommend high - intensity , repetitive motions while keeping patients informed about their progress . looking at the effects of different intensities of physical therapy treatment , several authors have reported significant improvement in activities of daily living as a result of higher intensities of treatment . to experience significant recovery , stroke patients must perform a substantial number of daily exercises . unfortunately , typical sessions with therapists include a relatively small number of motions . one of the possible solutions is doing rehabilitation at home . however , while therapists prescribe a home exercise regimen for most patients , a study indicates only 31% of patients actually perform these exercises as recommended . thus , home based stroke rehabilitation has the potential to help patients in recovering from a stroke . on the other hand , a main problem arises when choosing which kind of home - based technology can both help and motivate patients to perform therapeutic exercises . the challenge for post stroke rehabilitation is to create exercises able to decrease the monotony of hundreds of repeated motions .
in order to overcome the above mentioned problems , different kinds of non traditional therapies have been proposed . for example , the possibility of using virtual rehabilitation has been the subject of experiments by several authors . although most studies on this topic are linked to the study of virtual reality environments , recent studies have focused on the use of videogames and consoles for rehabilitation . the results of these studies can be summarized as follows :
1 . _ personalization _ : virtual rehabilitation technology creates an environment in which the intensity of feedback and training can be systematically manipulated and enhanced in order to create the most appropriate , individualized motor learning paradigm . rehabilitation using games can take advantage of adaptation in order to create ad hoc personalized games .
2 . _ interactivity _ : an advantage present in all forms of virtual rehabilitation is the use of interactivity . for example , it has been suggested that integrating gaming features in virtual environments for rehabilitation could enhance user motivation . virtual rehabilitation exercises can be made to be engaging , such that the patient feels immersed in the simulated world . this is extremely important in terms of the patient s motivation , which , in turn , is key to recovery . a person who enjoys what she is doing spends more time developing her skills in a given activity .
3 . _ feedback _ : interactive feedback can contribute to motivation . for example , by providing visual and auditory rewards , such as displaying gratifying messages in real time , patients are motivated to exercise .
4 . _ tracking _ : the evolution of the patient s performance can be easily stored , accessed and analyzed without the patient s or therapist s input . in addition , the internet can be used for data transfer , allowing a therapist to remotely monitor progress and to modify the patient s therapy program .
5 . _ telerehabilitation _ : virtual rehabilitation can stimulate the patient using a variety of rehabilitation exercises at a low cost . this means that rehabilitation costs can be contained if the technology used for virtual rehabilitation ( consoles and games ) is easily accessible . in addition , lower cost personal equipment ( for example pc - based ) will eventually allow rehabilitation stations to be placed in locations other than the rehabilitation center , such as a patient s home .
on the other hand , virtual rehabilitation raises important challenges that may limit its widespread adoption , such as :
* clinical acceptance , which relies on proved medical efficacy
* the therapist s attitude towards the technology ( e.g. , the therapist agrees that technology is able to replace therapists and the like )
* the patient s attitude towards the technology ( e.g. , the patient may not consider a game to be real rehabilitation )
* challenges linked to the kind of technology used ( challenges differ from virtual reality to consoles ) .
in order to overcome the last three challenges we propose an approach based on mixed reality ( see section [ sytems ] of this paper for the description of such a system ) .
however , while evaluating usability of systems has become standard practice , attitude is a more complex element which requires an in - depth assessment . for this reason we propose to use an approach meant to evaluate acceptance of a tool based on shackel . shackel defines a model where product acceptance is the highest concept . the user has to make a trade off between utility ( the match between the user s needs and functionality ) , usability ( the ability to use functionality in practice ) and likeability ( affective evaluation ) versus costs ( see fig . [ fig : shackel ] ) . it is worth noting that in a system for stroke rehabilitation we are dealing with two kinds of acceptance . the first one is the patient s acceptance of the system . in order to be accepted by our users , the system has to be considered usable , useful and likeable . the second one is the therapist s acceptance . in addition to the above mentioned elements , the system also has to show an adequate cost for the therapist . while in this paper we address directly the therapist s acceptance , and the patient s perspective only as a simulated one ( see section [ protocol ] ) , it is our opinion that both perspectives are very important . this section reviews existing approaches for post stroke rehabilitation . the presentation of works is not exhaustive , and the focus has been put on three types of rehabilitation systems : robot based systems , virtual reality based systems and mixed reality based systems . besides , only early stages of rehabilitation have been considered , which excludes , for instance , systems dedicated to the rehabilitation of fingers . to avoid ambiguities , hereafter the definition of each type of system is given .
[ [ definition - of - virtual - reality ] ] definition of virtual reality
a virtual reality system is commonly defined as an environment that can simulate real world situations and allows users to interact within this simulated environment using different types of devices such as a head mounted device ( hmd ) and gloves .
[ [ definition - of - mixed - reality ] ] definition of mixed reality
a mixed reality system merges real and virtual worlds to create a new context of interaction where both physical and digital objects co - exist and interact consistently in real time . this contrasts with virtual reality , where the interaction is directional from the physical world where the user is situated towards the virtual environment where the interaction holds . within a mixed reality system , the interaction holds among objects of both the virtual and the physical environments . a classical example of a mixed reality game is the mr pong , where players play the pong game with a virtual ball colliding with physical objects on a table .
[ [ definition - of - robot - assisted - rehabilitation ] ] definition of robot assisted rehabilitation
robot based rehabilitation uses a training robot . the robot complements or induces the patient s movement and provides feedback . robot systems are often coupled with a computer program used to create a virtual context for actions and deliver visual feedback . an evaluation framework is used to analyze the usability of each system from the therapist , patient and financier points of view .
from the therapist s point of view , we are interested in the following evaluation criteria :
[ [ therapist - intervention ] ] therapist intervention
this criterion informs whether the therapist can intervene during the rehabilitation sessions . in fact , we have noticed from observations and preliminary interviews with therapists the importance of assistance and guidance delivered by the therapist . the therapist has to intervene physically during the rehabilitation sessions to support the patient and to prevent him from developing incorrect compensating gestures such as chest balancing . possible values of this criterion are : yes , if the therapist can intervene , and no otherwise .
[ [ changes - on - therapist - habits ] ] changes on therapist habits
this criterion informs about changes induced by the introduction of the rehabilitation system to the therapist s working habits . the introduction of a new rehabilitation system may induce changes in the way therapists were conducting therapeutic sessions . the goal of this criterion is to evaluate the amount of the induced changes . possible values are : negligible , if the therapist does not have to change her usual way of working ; moderate , when the therapist has to change some of her working habits ; and important , when the therapist has to change her way of conducting therapies .
[ [ system - setup ] ] system setup
this criterion informs about the setup phase of the system . often , the rehabilitation system needs some setup phase in order to get devices installed and configured correctly before starting rehabilitation sessions . this criterion informs whether the therapist alone can perform this setup phase or a specialized assistant is required to set up the system . possible values of this criterion are : therapist , if the therapist can perform the setup phase , and assistant , if the intervention of a specialized technician is required .
[ [ location ] ] location
this criterion concerns the place where the rehabilitation sessions can be performed . in fact , for some systems such as robot based systems and some virtual reality simulators , a special room has to be dedicated to performing rehabilitation sessions . other lightweight systems do not require a fixed infrastructure and can be used almost anywhere .
from the patient s point of view , we are interested in two evaluation criteria :
[ [ eye - hand - focus ] ] eye - hand focus
observations from therapeutic sessions and interviews with therapists have revealed an important point related to the interaction of the patient with the system . in fact , post stroke patients during early stages may be reluctant to use any system that requires an important cognitive effort .
ideally , the attention of the patient has to be attracted and focused on a single point . for instance , when given a mouse , patients look at their hand when trying to move it , and not at the screen . consequently , they find it difficult to follow two actions at the same time : the hand that is moving and the game displayed on the screen . this criterion evaluates whether the eye and hand of the patient are attracted to the same place or to different places . possible values for this criterion are : same place , when the hand and eyes of the patient are directed to the same place , and different places , when the hand and eyes of the patient are attracted to different spaces .
[ [ invasiveness - of - the - system ] ] invasiveness of the system
this criterion informs about the invasiveness of the system from the patient s perspective . in fact , systems may require the patient to wear special devices to track their movement or to deliver feedback . these devices are more or less convenient for patients to wear and tolerate , especially those who suffer from impairment and muscular spasticity . possible values for this criterion are : convenient , if the system is considered convenient to use ; invasive , if the system is considered as interfering with the patient .
for a rehabilitation system to be used in rehabilitation centers , it is necessary to demonstrate its medical efficiency and also to show that the system is economically sustainable . the economical aspects of the rehabilitation system are evaluated through two criteria :
[ [ unitary - cost - of - the - system ] ] unitary cost of the system
this criterion evaluates the unitary cost of the system in kilo euros ( ke ) . the objective is not to deliver a precise price , but to evaluate an order of magnitude of the cost of each system . thus , possible values of this criterion are : less than 1 ke per unit , 1 - 5 ke per unit , 5 - 10 ke per unit , and more than 10 ke per unit .
[ [ extra - required - resources ] ] extra required resources
while the previous criterion estimates the unitary cost of the system , this criterion evaluates all extra resources that must be provided for running the system , such as recruiting specialized personnel , making available a special room and so on .
robot based approaches for rehabilitation have been used since the late 90s to assist patients . the manus robot developed at mit is an exemplar system ( figure [ fig : manus ] ) . in this system the patient is in front of the robot and her shoulder is strapped to a chair .
The patient's impaired arm is strapped to a wrist carrier and attached to the manipulandum. A video screen is placed above the training table to create a virtual context for movements and provide visual feedback to the patients. Several clinical trials have been conducted to evaluate the efficiency of robot-based rehabilitation when compared to classical therapies. For instance, in it has been demonstrated that robot-assisted therapy improved outcomes for long periods of training when compared with usual care. [ tab : theraper ] . Therapist perspective. Still in their patient's role, the therapists were asked to answer the following question: "In general, which therapy did you prefer? (classify the exercises in order of preference)". This means that we asked them to order the therapies as they thought the patients would do. All three gave the following order: 1) mixed reality game, 2) PC game, 3) physical therapy. Hereafter are the results of the therapists' questionnaire (without any order and with duplicates). In short, they were asked, as therapists, to list the most negative and positive aspects of all the systems used. _ List of the most negative aspect(s) _: + For the PC game: + - the patient had to look at the screen while playing + - the patient had to grab the mouse + - the fact of using only the upper part of the triangle was upsetting (here the therapist is referring to the game) + - the screen was small + - it is difficult for the patient to move the mouse + - it is difficult for the patient to use the mouse + For the MR game: + - it takes more time to start up (here the therapist is referring to the system / Wii calibration) + For classical therapy: + - tiresome and painful + - motivation is low + - the patient has to confront his own limitations _ List of the most positive aspect(s) _: + For the PC game: + - games are fun + - the patient can forget his environment + - different and maybe fun + For the MR game: + - accessible: you can use whatever surface you want (for example a table) + - not as difficult to use as in the mouse case + - less material needed once started + None of the therapists gave particular comments on the classical therapy. Finally, as therapists, they had to answer the following question: "In general, which therapy did you prefer? (classify the exercises in order of preference)". This time they were thus asked about their own preferences on the therapies.
All three gave the following order: 1) mixed reality game, 2) PC game, 3) physical therapy. As described in section [ methods ], the main goal of the experiment is to compare a mixed reality system (MRS) with two alternative single-user tools through a pilot study. The two alternative tools were classical physical rehabilitation and an ad-hoc post stroke PC game. We expected the MRS and the PC game to be more accepted, in Shackel's sense, than classical rehabilitation when therapists are playing the patient role. The ease of use of the MRS is also expected to be higher than that of the PC. We are then analyzing perceived utility, usability, and likeability (affective evaluation) (see figure [ fig : shackel ]). It is important to note that this is a pilot study involving a limited number of participants ( ) to prepare a larger scale experiment. Consequently, quantitative data are presented as a support for discussion and no statistically valid generalizations can be inferred at this stage. As we can see from figure [ fig : graph ], elements linked to forgetting the environment (immersion and flow) have different ratings. Flow is rated low for classical rehabilitation ( ), while PC ( ) and mixed reality ( ) have higher scores. Sensory and imaginative immersion ratings are definitely lower for classical rehabilitation ( ); next comes PC ( ) and finally mixed reality ( ). We can conclude that, within the context of this experiment, both MRS and PC have been considered as more immersive, followed by classical therapy. This is an expected result since it is already known that using games for rehabilitation creates a sense of meaningfulness for repeated movements and encourages the flow experience by making the patient forget everything around and focus only on the task to be performed. Lack of tension is an element characterizing tiredness and boredom. Results show that this component is almost similar for PC, MRS and classical therapy, with respectively ( ), ( ) and ( ). So, within the context of this experiment, all systems have been considered equivalent in terms of tiredness and boredom. When asked about this point, therapists highlight the fact that post stroke rehabilitation demands a lot of effort from the patient. So, game-based rehabilitation can facilitate, to some extent, the context of rehabilitation, but patients will sooner or later show fatigue and pain and the therapeutic session has to stop. If we look at negative affect, PC ( ) is the one rated lowest, followed by mixed reality ( ), and classical therapy ( ) is the one rated with the highest negative affect. When asked qualitatively about this fact, therapists mentioned the problem of exertion when using the MRS. In fact, there is a one-to-one mapping of patient movements between the physical and the virtual world. So the scaling and artificial compensation of movements is not possible, as it could be performed within a virtual environment by changing the speed and sensitivity of the pointing device, for instance. This was considered a very interesting remark that we were not aware of before conducting the study. Consequently, within the MRS the exertion of the patient is an important challenge to take into account, and innovative solutions have to be provided to compensate the patient's limited movement, such as dynamically modifying the size of objects or moving objects closer to the patient when she is stuck. What is very interesting is that while MR is seen as a possible cause of moderately negative affect, it is rated as the system with the
highest positive affect (( ) for MR; ( ) for PC; ( ) for classical therapy). This reinforces the idea that MR has the potential to increase the volume of rehabilitation by increasing the number of exercises patients do. However, as said previously, this should be done while taking into account the patient's exertion. Finally, competences and challenges are rated in a very similar way. This means that the effort the patient had to put in to understand and perform the exercises was the same. To summarize, when playing the role of patients, therapists have considered: * that PC, MRS and classical therapy were equivalent on the competence, tension and challenge components. * that both the PC and MRS systems are better than classical therapy on immersion, flow, negative affect and positive affect. * that the MRS is worse than the PC on negative affect, due to worries expressed about fatigue. If we switch to the therapists' questionnaire, we can describe why the MRS has a higher potential to be accepted by patients. In fact, one of the most underlined problems with PC rehabilitation was the use of the mouse as a pointing device. This limitation becomes one of the strengths of the MRS (see the comment: not as difficult to use as in the mouse case). In addition, comments on PC usage are also related to the limitations of the screen dimension. Finally, the most interesting comment is linked to the fact that the patients have to look in another place (the PC screen) in order to perform the task. This could be a serious problem when using this kind of device with some patients. The MRS, on the other hand, can project on any surface. Finally, it is interesting to note that all three subjects gave the same order of preferences, (1) MRS, (2) PC, (3) classical therapy, both as patients and as therapists. The objective of this paper was to present a mixed reality system (MRS) for the rehabilitation of the upper limb after stroke. The system answers the following challenges: (i) increase the motivation of patients by making the training a personalized experience; (ii) take into account patients' impairments by offering intuitive and easy to use interaction modalities; (iii) make it possible for therapists to track patients' activities and to evaluate / track patients' progress; (iv) open opportunities for telemedicine and tele-rehabilitation; (v) provide an economically acceptable system by reducing both equipment and management costs. It is our opinion that it is important to evaluate the acceptance of these systems not only from the patient's point of view but also from the therapists' point of view. We then decided to test this system with therapists in a pilot study conducted in conjunction with a French hospital. The main assumption behind this experiment was that, even if clinical efficiency is demonstrated, if therapists do not accept the system it will be used only for academic studies and may not be widely accepted in real rehabilitation centers. The pilot described in this paper involved 3 therapists who played the role of patients. The main idea was that, because of the years they have spent with patients, they are able to simulate patients' reactions to the system. Three sessions were held: one using conventional rehabilitation, another using an ad hoc game developed on a PC, and another using a mixed reality version of the same game. We expected the MRS and the PC game to be accepted more than physical rehabilitation by the therapists in the patient's role (H1), and the ease of use of the MRS to be considered higher than that of the PC (H2).
Results show that while the effort the patient has to put into the exercises was practically the same for all the systems, mixed reality can be a means to help patients forget exertion and potentially augment the number of exercises they do. We can say that mixed reality and PC games require the same amount of effort as classical rehabilitation. However, because of the characteristics of mixed reality and PC games, we can suppose that patients will prefer to use these two systems, with a preference for the MRS (H2). In addition, analyzing the therapists' questionnaire, we can say that the mixed reality system has a higher potential to be accepted by patients (H1). In fact, one of the most underlined problems with PC rehabilitation was the use of a mouse as a device. This limitation becomes one of the strengths of mixed reality. In addition, the MRS can project on whatever surface is available and avoids the problem of looking in one place while performing the task in another. The pilot study described in this paper was not intended to deliver statistical evidence, but simply to give approximate values guiding the set-up of the following experiments. For this reason we scheduled two more experiments. The first one is an in-depth version of the pilot study presented in this paper. For this experiment we want to gather a greater amount of data using the same role playing protocol. The second one will drop the role playing and will analyze actual stroke rehabilitation of the upper limb, with patients and therapists each in their own role. The main idea behind these two experiments is to compare the results of the two in order to understand the discrepancy between patients' attitudes and therapists' expectations towards the system. Al Mahmud, A., Mubin, O., Shahid, S., Martens, J.-B., 2008. Designing and evaluating the tabletop game experience for senior citizens. Proceedings of the 5th Nordic Conference on Human-Computer Interaction: Building Bridges - NordiCHI '08, 403. Burke, J. W., McNeill, M. D. J., Charles, D. K., Morrow, P. J., Crosbie, J. H., McDonough, S. M., Aug. 2009. Optimising engagement for stroke rehabilitation using serious games. The Visual Computer 25 (12), 1085-1099. Cameirão, M. S., Badia, S. B. i., Oller, E. D., Verschure, P. F., Jan. Neurorehabilitation using the virtual reality based rehabilitation gaming system: methodology, design, psychometrics, usability and validation. Journal of NeuroEngineering and Rehabilitation 7 (1), 48. http://www.ncbi.nlm.nih.gov/pubmed/20860808 Chen, Y., Huang, H., Xu, W., Wallis, R. I., Sundaram, H., Rikakis, T., Ingalls, T., Olson, L., He, J., 2006. The design of a real-time, multimodal biofeedback system for stroke patient rehabilitation. Proceedings of the 14th Annual ACM International Conference on Multimedia - MULTIMEDIA '06, 763. http://portal.acm.org/citation.cfm?doid=1180639.1180804 Flynn, Sheila, Palma, Phyllis, Bender, Anneke, December 2007. Feasibility of using the Sony PlayStation 2 gaming platform for an individual poststroke: a case report. Journal of Neurologic Physical Therapy 31 (4), 180-189. Goude, D., Björk, S., Rydmark, M., Jan. 2007. Game design in virtual reality systems for stroke rehabilitation. Studies in Health Technology and Informatics 125 (4), 1468. http://www.ncbi.nlm.nih.gov/pubmed/17377254 Jack, D., Boian, R., Merians, A., Tremaine, M., Burdea, G., Adamovich, S., Recce, M., Poizner, H., Sep. 2001.
Virtual reality-enhanced stroke rehabilitation. IEEE Transactions on Neural Systems and Rehabilitation Engineering: a publication of the IEEE Engineering in Medicine and Biology Society 9 (3), 308-318. Lo, A. C., Guarino, P. D., Richards, L. G., Haselkorn, J. K., Wittenberg, G. F., Federman, D. G., Ringer, R. J., Wagner, T. H., Krebs, H. I., Volpe, B. T., Bever, C. T., Bravata, P., 2010. Robot-assisted therapy for long-term upper-limb impairment after stroke. New England Journal of Medicine 362 (19), 1772-1783. Micera, S., Carrozza, M. C., Guglielmelli, E., Cappiello, G., Zaccone, F., Freschi, C., Colombo, R., Mazzone, A., Delconte, C., Pisano, F., Minuco, G., Dario, P., 2005. A simple robotic system for neurorehabilitation. Autonomous Robots 19 (3), 271-284. Poels, K., de Kort, Yvonne, IJsselsteijn, W., 2007. "It is always a lot of fun!": exploring dimensions of digital game experience using focus group methodology. In: Future Play '07: Proceedings of the 2007 Conference on Future Play. ACM, New York, NY, USA, pp. 83-89. Reid, D., Hirji, T., 2004. The influence of a virtual reality leisure intervention program on the motivation of older adult stroke survivors: a pilot study. Physical & Occupational Therapy in Geriatrics 21 (4), 1-19. Shackel, B., 1991. Usability - context, framework, definition, design and evaluation. In: Shackel, B., Richardson, S. J. (Eds.), Human Factors for Informatics Usability. Cambridge University Press, pp. 21-31. van der Lee, J. H., Wagenaar, R. C., Lankhorst, G. J., Vogelaar, T. W., Deville, W. L., Bouter, L. M., November 1999. Forced use of the upper extremity in chronic stroke patients: results from a single-blind randomized clinical trial. Stroke 30 (11), 2369-2375. Yavuzer, G., Senel, A., Atay, M., Stam, H. J., 2008. "PlayStation EyeToy games" improve upper extremity-related motor functioning in subacute stroke: a randomized controlled clinical trial. European Journal of Phys Rehabil Med 44 (3), 237-244.
The objective of this paper is to present a mixed reality system (MRS) for the rehabilitation of the upper limb after stroke. The system answers the following challenges: (i) increase the motivation of patients by making the training a personalized experience; (ii) take into account patients' impairments by offering intuitive and easy to use interaction modalities; (iii) make it possible for therapists to track patients' activities and to evaluate / track their progress; (iv) open opportunities for telemedicine and tele-rehabilitation; and (v) provide an economically acceptable system by reducing both equipment and management costs. In order to test this system, a pilot study has been conducted in conjunction with a French hospital to understand the potential and benefits of mixed reality. The pilot involved 3 therapists who played the role of patients. Three sessions were held: one using conventional rehabilitation, another using an ad hoc game developed on a PC, and another using a mixed reality version of the same game. Results have shown the MRS and the PC game to be accepted more than physical rehabilitation. Mixed reality, post stroke rehabilitation, serious games.
We will consider in this paper _ Las Vegas algorithms _, introduced a few decades ago by , i.e. randomized algorithms whose runtime might vary from one execution to another, even with the same input. An important class of Las Vegas algorithms is the family of _ stochastic local search _ methods. They have been used in combinatorial optimization for finding optimal or near-optimal solutions for several decades, stemming from the pioneering work of Lin on the traveling salesman problem. These methods are now widely used in combinatorial optimization to solve real-life problems when the search space is too large to be explored by a complete search algorithm, such as mixed integer programming or constraint solving, cf. . In recent years, several proposals for implementing local search algorithms on parallel computers have been made, the most popular being to run several competing instances of the algorithm on different cores, with different initial conditions or parameters, and let the fastest process win over the others. We thus obtain an algorithm whose execution time is the minimum among those of the launched processes. This led to the so-called independent multi-walk algorithms in the local search community and portfolio algorithms in the SAT community (satisfiability of Boolean formulas). This parallelization scheme can of course be generalized to any Las Vegas algorithm. The goal of this paper is to study the parallel performance of Las Vegas algorithms under this independent multi-walk scheme, and to predict the performance of the parallel execution from the runtime distribution of the sequential runs of a given algorithm. We will confront these predictions with the actual speedups obtained for a parallel implementation of a local search algorithm and show that the prediction can be quite accurate, matching the actual speedup very well up to 100 parallel cores and then with a deviation limited to about 20% up to 256 cores. The paper is organized as follows. Section 2 is devoted to presenting the definition of Las Vegas algorithms, their parallel multi-walk execution scheme, and the main idea for predicting the parallel speedups. Section [ probabilistic - model ] will detail the probabilistic model of Las Vegas algorithms and of their parallel execution scheme. Section [ local - search ] will present the example of local search algorithms for combinatorial optimization, while section [ benchmarks ] will detail the benchmark problems and the sequential performance. Then, section [ prediction ] will apply the general probabilistic model to the benchmark results and thus predict their parallel speedup, which will be compared to the actual speedups of a parallel implementation in section [ analysis ]. A short conclusion and future work will end the paper. We borrow the following definition from , chapter 4. An algorithm A for a problem class is a (generalized) Las Vegas algorithm if and only if it has the following properties: 1. If, for a given problem instance, algorithm A terminates returning a solution, it is guaranteed to be a correct solution of . 2.
For any given instance, the run-time of A applied to is a random variable. This is a slight generalization of the classical definition, as it includes algorithms which are not guaranteed to return a solution. A large class of Las Vegas algorithms is the so-called family of _ metaheuristics _, such as simulated annealing, genetic algorithms, tabu search, swarm optimization, ant-colony optimization, etc., which have been applied to different sets of problems ranging from resource allocation, scheduling, packing, layout design, to frequency allocation. Parallel implementation of local search metaheuristics has been studied since the early 1990s, when parallel machines started to become widely available. With the increasing availability of PC clusters in the early 2000s, this domain became active again. Apart from domain-decomposition methods and population-based methods (such as genetic algorithms), distinguishes between single-walk and multi-walk methods for local search. Single-walk methods consist in using parallelism inside a single search process, _ e.g., _ for parallelizing the exploration of the neighborhood (see for instance for such a method making use of GPUs for the parallel phase). Multi-walk methods (parallel execution of multi-start methods) consist in developing concurrent explorations of the search space, either independently or cooperatively with some communication between the concurrent processes. Sophisticated cooperative strategies for multi-walk methods can be devised by using solution pools, but they require shared memory or the emulation of a central memory in distributed clusters, thus impacting performance. A key point is that a multi-walk scheme is easier to implement on parallel computers without shared memory and can lead, in theory at least, to linear speedups. However, this is only true under certain assumptions, and we will see that we need to develop a more realistic model in order to cope with the performance actually observed in parallel executions. Let us now formally define a parallel multi-walk Las Vegas algorithm. An algorithm A for a problem class is a (parallel) multi-walk Las Vegas algorithm if and only if it has the following properties: 1. It consists of instances of a sequential Las Vegas algorithm A for , say . 2. If, for a given problem instance, there exists at least one instance that terminates, let , , be the instance of A terminating with the minimal runtime and let be the solution it returns. Then algorithm A terminates in the same time as and returns that solution. 3. If, for a given problem instance, no instance terminates, then A does not terminate. In the following, let \(Y\) denote the random variable corresponding to the runtime of the algorithm; it can be characterized by its cumulative distribution \(F_Y\), or by its density \(f_Y\), which is by definition the derivative of \(F_Y\): \(f_Y = F_Y'\). It is often more convenient to consider distributions with values in \(\mathbb{R}\) because it makes calculations easier. For the same reason, although \(f_Y\) is defined on \(\mathbb{R}^+\), we will use its natural extension to \(\mathbb{R}\). The expectation of the computation is then defined as \(\mathbb{E}[Y] = \int_0^\infty t \, f_Y(t) \, dt\). In the following, we will study it for different specific distributions.
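To make the multi-walk scheme above concrete, here is a minimal Python sketch of the independent multi-walk execution of a Las Vegas algorithm; the function `solve_once` is a hypothetical stand-in for one sequential run, and its exponential runtime model is an illustrative assumption, not part of the algorithms studied in this paper.

```python
import random

def solve_once(seed):
    # Hypothetical stand-in for one sequential Las Vegas run:
    # returns the (random) runtime needed to reach a solution.
    # The exponential runtime model is an illustrative assumption.
    rng = random.Random(seed)
    return rng.expovariate(1.0)

def multi_walk_runtime(n_cores, base_seed=0):
    # Independent multi-walk: n_cores instances run with different
    # seeds; by the definition above, the parallel wall-clock time
    # is the minimum of the sequential runtimes.
    runtimes = [solve_once(base_seed + i) for i in range(n_cores)]
    return min(runtimes)
```

The wall-clock time returned by the sketch is exactly the minimum over the launched instances, which is why the analysis below reduces to order statistics.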
To measure the gain obtained by parallelizing the algorithm on \(n\) cores, we will study the speed-up defined as: \[ \mathcal{G}_n = \mathbb{E}[Y] / \mathbb{E}[Z^{(n)}] , \] where \(Z^{(n)}\) denotes the runtime of the parallel algorithm over \(n\) cores. Again, no general formula can be computed, and the expression of the speed-up will depend on the distribution of \(Y\). However, it is worth noting that our computation of the speed-up is related to order statistics; see for a detailed presentation. For instance, the first order statistic of a distribution is its minimal value, and the \(k\)-th order statistic is its \(k\)-smallest value. For predicting the speedup, we are indeed interested in computing the expectation of the distribution of the minimum draw. As the above formula suggests, this may lead to heavy calculations, but recent studies such as give explicit formulas for this quantity for several classical probability distributions. Assume that \(Y\) has a shifted exponential distribution, as has been suggested by : \[ f_Y(t) = \lambda \, e^{-\lambda (t - x_0)} \ \text{for } t \ge x_0 , \qquad \mathbb{E}[Y] = x_0 + \frac{1}{\lambda} . \] Then the above formula can be symbolically computed by hand: \[ f_{Z^{(n)}}(t) = n\lambda \, e^{-n\lambda (t - x_0)} \ \text{for } t \ge x_0 . \] The intuitive observation of section [ min - distribution ] is easily seen on the expression of the parallel distribution, which has an initial value multiplied by \(n\) but an exponential factor decreasing \(n\) times faster, as shown on the curves of figure [ lois - exponentielle ]. (Figure [ lois - exponentielle ]: an exponential distribution, with simulations of the distribution of the minimum for three values of \(n\).) And in this case, one can symbolically compute both the expectation and the speed-up for any \(n\): \[ \mathbb{E}[Z^{(n)}] = n\lambda \int_{x_0}^\infty t \, e^{-n\lambda (t - x_0)} \, dt = x_0 + \frac{1}{n\lambda} , \qquad \mathcal{G}_n = \frac{x_0 + \frac{1}{\lambda}}{x_0 + \frac{1}{n\lambda}} . \] Figure [ speedup - exponentielle ] shows the evolution of the speed-up when the number of cores increases. With such a rather simple formula for the speed-up, it is worth studying what happens when the number of cores tends to infinity. Depending on the chosen algorithm, \(x_0\) may be null or not. If \(x_0 = 0\), then the expectation tends to \(0\) and the speed-up is equal to \(n\). This case has already been studied by . For \(x_0 > 0\), the speed-up admits a finite limit, which is \(1 + \frac{1}{\lambda x_0}\). Yet, this limit may be reached slowly, depending on the values of \(x_0\) and \(\lambda\): obviously, the closer \(x_0\) is to zero, the higher the limit will be. Another interesting value is the coefficient of the tangent at the origin, which approximates the speed-up for a small number of cores; in the case of an exponential, it is \(1 + \lambda x_0\). The higher \(x_0\) and \(\lambda\), the bigger the speed-up at the beginning. In the following, we will see that, depending on the combinations of \(x_0\) and \(\lambda\), different behaviors can be observed. (Figure [ speedup - exponentielle ]: predicted speed-up in the case of an exponential distribution, as a function of the number of cores.) Other distributions can be considered, depending on the behavior of the base algorithm. We will study the case of a lognormal distribution, i.e. the distribution of a variable whose logarithm is Gaussian, because it will be shown in section [ ms200 ] that it fits one experiment. It has two parameters, the mean \(\mu\) and the standard deviation \(\sigma\) of the underlying Gaussian.
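As a sanity check of the closed formulas above, the following Python sketch estimates the speed-up by Monte Carlo sampling of a shifted exponential runtime and compares it with the predicted value; the parameter values in the example call are arbitrary illustrations.

```python
import random

def empirical_speedup(x0, lam, n, trials=20000, seed=1):
    # Monte Carlo estimate of the speed-up G_n = E[Y] / E[Z^(n)]
    # for a shifted exponential runtime Y = x0 + Exp(lam), compared
    # with the closed formula derived above.
    rng = random.Random(seed)
    draw = lambda: x0 + rng.expovariate(lam)
    mean_y = sum(draw() for _ in range(trials)) / trials
    mean_zn = sum(min(draw() for _ in range(n))
                  for _ in range(trials)) / trials
    predicted = (x0 + 1.0 / lam) / (x0 + 1.0 / (n * lam))
    return mean_y / mean_zn, predicted

# Example (arbitrary parameters): the two returned values should
# agree up to sampling noise.
print(empirical_speedup(x0=0.1, lam=0.05, n=64))
```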
In the same way as for the shifted exponential, we shift the distribution so that it starts at a given parameter \(x_0\). Formally, a (shifted) lognormal distribution is defined by its cumulative distribution \[ F_Y(t) = \frac{1}{2}\,\operatorname{erfc}\!\left( -\frac{\ln(t - x_0) - \mu}{\sigma\sqrt{2}} \right) \quad \text{for } t > x_0 , \] where erfc is the complementary error function, defined by \(\operatorname{erfc}(x) = \frac{2}{\sqrt{\pi}} \int_x^{\infty} e^{-u^2} \, du\). Figure [ loi - lognormale ] depicts lognormal distributions for several values of the parameters. The computations for the distribution of \(Z^{(n)}\), its expectation and the theoretical speed-up lead to quite complicated formulas, but gives an explicit formula for all the moments of lognormal order statistics with only a numerical integration step, from which we can derive a computation of the speed-up (since the expectation we need is the first moment of the first order statistic). This allows us to draw the general shape of the speed-up, an example being given on figure [ speedup - lognormale ]. Due to the numerical integration step, which requires numerical values for the number of cores, we restrict the computation to integer values of \(n\). This is a reasonable limitation, as the number of cores is indeed an integer. (Figure [ loi - lognormale ]: a lognormal distribution, with simulations of the distribution of the minimum for three values of \(n\).) (Figure [ speedup - lognormale ]: predicted speed-up in the case of a lognormal distribution, depending on the number of cores.) For about a decade, the interest in the family of local search methods and metaheuristics for solving large combinatorial problems has been growing and has attracted much attention from both the operations research and the artificial intelligence communities for solving real-life problems. Local search starts from a random configuration and tries to improve this configuration, little by little, through small changes in the values of the problem variables. Hence the term "local search", as, at each time step, only new configurations that are "neighbors" of the current configuration are explored. The definition of what constitutes a neighborhood will of course be problem-dependent, but basically it consists in changing the value of only a few variables (usually one or two). The advantage of local search methods is that they will usually quickly converge towards a solution (if the optimality criterion and the notion of neighborhood are defined correctly...) and not exhaustively explore the entire search space. Applying local search to constraint satisfaction problems (CSP) has been attracting some interest for about a decade, as it can tackle CSP instances far beyond the reach of classical propagation-based constraint solvers. A generic, domain-independent constraint-based local search method, named Adaptive Search, has been proposed by . This metaheuristic takes advantage of the structure of the problem in terms of constraints and variables and can guide the search more precisely than a single global cost function to optimize, such as, for instance, the number of violated constraints. The algorithm also uses a short-term adaptive memory, in the spirit of tabu search, in order to prevent stagnation in local minima and loops. An implementation of Adaptive Search (AS) has been developed in the C language as a framework library and is available as freeware at the URL: + We used this reference implementation for our experiments. The Adaptive Search method can be applied to a large class of constraints (_ e.g. _ linear and non-linear arithmetic constraints, symbolic constraints, etc.
) and naturally copes with over-constrained problems. The input of the method is a constraint satisfaction problem (CSP for short), which is defined as a triple (X; D; C), where X is a set of variables, D is a set of domains, i.e., finite sets of possible values (one domain for each variable), and C a set of constraints restricting the values that the variables can simultaneously take. For each constraint, an _ error function _ needs to be defined; it gives, for each tuple of variable values, an indication of how much the constraint is violated. This idea has also been proposed independently by , where it is called "penalty functions", and then reused by the Comet system, where it is called "violations". For example, the error function associated with an arithmetic constraint, for a given constant, can be . Adaptive Search relies on iterative repair, based on variable and constraint error information, seeking to reduce the error on the worst variable so far. The basic idea is to compute the error function for each constraint, then combine, for each variable, the errors of all constraints in which it appears, thereby projecting constraint errors onto the relevant variables. This combination of errors is problem-dependent (see for details and examples), but it is usually a simple sum or a sum of absolute values, although it might also be a weighted sum if constraints are given different priorities. Finally, the variable with the highest error is designated as the "culprit" and its value is modified. In this second step, the well-known min-conflict heuristic is used to select the value in the variable domain which is the most promising, that is, the value for which the total error in the next configuration is minimal. In order to prevent being trapped in local minima, the Adaptive Search method also includes a short-term memory mechanism to store configurations to avoid (variables can be marked tabu and "frozen" for a number of iterations). It also integrates reset transitions to escape stagnation around local minima. A reset consists in assigning fresh random values to some variables (also randomly chosen). A reset is guided by the number of variables being marked tabu. It is also possible to restart from scratch when the number of iterations becomes too large (this can be viewed as a reset of all variables, but it is guided by the number of iterations). The core ideas of Adaptive Search can be summarized as follows: * to consider for each constraint a heuristic function that is able to compute an approximated degree of satisfaction of the goals (the current _ error _ on the constraint); * to aggregate constraints on each variable and project the error on variables, thus trying to repair the _ worst _ variable with the most promising value; * to keep a short-term memory of bad configurations to avoid looping (_ i.e. _ some sort of _ tabu list _) together with a reset mechanism. A schematic sketch of this repair loop is given below. We have chosen to test this method on two problems from the CSPLib benchmark library, and on a hard combinatorial problem abstracted from radar and sonar applications. After briefly introducing the classical benchmarks, we detail the latter problem, called the Costas array.
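The following Python sketch illustrates the iterative repair loop described above. It is not the reference C implementation: the error functions, tabu marking and reset mechanism are deliberately simplified, and `error_on` is a hypothetical user-supplied function projecting constraint errors onto a variable (assumed non-negative).

```python
import random

def min_conflict_repair(variables, domains, error_on,
                        max_iters=100000, seed=0):
    # Schematic iterative-repair loop: pick the variable with the
    # highest projected error (the "culprit"), then assign it the
    # domain value minimizing the total error (min-conflict).
    # Tabu marking and resets are omitted from this sketch.
    rng = random.Random(seed)
    config = {v: rng.choice(domains[v]) for v in variables}
    for _ in range(max_iters):
        errors = {v: error_on(v, config) for v in variables}
        worst = max(errors, key=errors.get)
        if errors[worst] == 0:  # errors assumed non-negative: solved
            return config
        def total_error(value):
            trial = dict(config)
            trial[worst] = value
            return sum(error_on(v, trial) for v in variables)
        config[worst] = min(domains[worst], key=total_error)
    return None  # no solution found within the iteration budget
```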
Then we show the performance and the speed-ups obtained with both the sequential and the multi-walk Adaptive Search algorithm on these problems. We use two classical benchmarks from CSPLib consisting of: * the all-interval series problem (prob007 in CSPLib), * the magic-square problem (prob019 in CSPLib). Although these benchmarks are academic, they are abstractions of real-world problems and can involve very large combinatorial search spaces, _ e.g., _ the 200x200 magic-square problem requires 40,000 variables whose domains range over 40,000 values. Indeed, the search space in the Adaptive Search model (using permutations) is , _ i.e., _ more than configurations. Classical propagation-based constraint solvers cannot solve this problem for instances larger than 20x20. Also note that we are tackling constraint _ satisfaction _ problems as optimization problems: that is, we want to minimize the global error (representing the violation of constraints) to the value zero; therefore, finding a solution means that we actually reach the bound (zero) of the objective function to minimize. This problem is described as ` prob007 ` in the CSPLib. This benchmark is in fact a well-known exercise in music, where the goal is to compose a sequence of notes such that all notes are different and the tonal intervals between consecutive notes are also distinct. This problem is equivalent to finding a permutation of the first integers such that the absolute differences between consecutive pairs of numbers are all different. This amounts to finding a permutation of such that the list of differences is a permutation of . A possible solution for n = 8 is: 3 6 0 7 2 4 5 1, since all consecutive distances are different. The magic-square problem is catalogued as ` prob019 ` in CSPLib and consists in placing the numbers 1 to n^2 on an n x n square such that each row, column and main diagonal sum to the same value (the magic constant). For instance, a well-known solution for n = 4 is the one depicted by Albrecht Dürer in his engraving _ Melancholia I _ (1514). A Costas array is a grid containing marks such that there is exactly one mark per row and per column and the vectors joining the marks are all different. We give here an example of a Costas array of size 5. It is convenient to see the Costas array problem (CAP) as a permutation problem, by considering an array of variables which forms a permutation of . The above Costas array can thus be represented by the corresponding array. Historically, these arrays were developed in the 1960s to compute a set of sonar and radar frequencies avoiding noise. A very complete survey on Costas arrays can be found in . The problem of finding a Costas array of size n is very complex, since the required time grows exponentially with n. In the 1980s, several algorithms were proposed to build a Costas array given n (methods to produce Costas arrays of order 24 to 29 can be found in ), such as the Welch construction and the Golomb construction, but these methods cannot build Costas arrays of size and some higher non-prime sizes. Nowadays, after many decades of research, it remains unknown whether there exist any Costas arrays of size or . Another difficult problem is to enumerate all Costas arrays for a given size.
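Both benchmark definitions can be checked mechanically. The following minimal Python sketch verifies the all-interval property of the sequence quoted above and the difference-triangle characterization of a Costas array; the function names are ours, not from CSPLib.

```python
def is_all_interval(seq):
    # seq must be a permutation of 0..n-1; the n-1 absolute
    # differences of consecutive elements must equal {1, ..., n-1}.
    n = len(seq)
    diffs = {abs(a - b) for a, b in zip(seq, seq[1:])}
    return sorted(seq) == list(range(n)) and diffs == set(range(1, n))

def is_costas(perm):
    # perm[i] is the column of the unique mark in row i (perm is
    # assumed to be a permutation, i.e. one mark per row and column).
    # The grid is a Costas array iff, for every row distance d, the
    # column differences perm[i+d] - perm[i] are pairwise distinct.
    n = len(perm)
    return all(
        len({perm[i + d] - perm[i] for i in range(n - d)}) == n - d
        for d in range(1, n)
    )

# The all-interval solution quoted above:
assert is_all_interval([3, 6, 0, 7, 2, 4, 5, 1])
```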
Using the Golomb and Welch constructions, Drakakis _ et al. _ present in all Costas arrays for . They show that among the permutations, there are only 164 Costas arrays, and 23 unique Costas arrays up to rotation and reflection. We ran our benchmarks in a sequential manner in order to have about 650 runtimes for each. Sequential experiments, as well as parallel experiments, have been done on the _ Griffon _ cluster of the Grid'5000 platform. The following tables [ tab : seqtime ] and [ tab : seqiter ] show the minimum, mean, median and maximum of, respectively, the runtimes and the number of iterations of our benchmarks. (Table [ tab : seqtime ]: sequential execution times, in seconds.) It is worth noticing that our model approximates the behavior of the experimental results very closely, as shown by the predicted speed-ups matching closely the real ones. Moreover, we can see that on the three benchmark programs, we needed to use three different types of distribution (exponential, shifted exponential and lognormal) in order to approximate the experimental data most closely. This shows that our model is quite general and can accommodate different types of parallel behaviors. A quite interesting behavior is exhibited by the Costas 21 problem. Our model predicts a linear speedup, up to 10,000 cores and beyond, and the experimental data gathered for this paper confirm this linear speed-up up to 256 cores. Would it scale up with a larger number of cores? Indeed, such an experiment has been done up to 8,192 cores on the JUGENE IBM BlueGene/P at the Jülich Supercomputing Center in Germany (with a total of 294,912 cores), and reported in , of which figure [ fig : speedup - jugene ] is adapted. We can see that the speed-up is indeed linear up to 8,192 cores, thus showing the adequacy of the prediction model with the real data. [ fig : speedup - jugene ] Finally, let us note that our method exhibits an interesting phenomenon. For the three problems considered, the probability of returning a solution in _ no _ iterations is non-null: since they start with a uniform random draw on the search space, there is a very small, but not null, probability that this random initialization directly returns the solution. Hence, in theory, \(x_0 = 0\) and the speed-up should be linear, with an infinite limit when the number of cores tends to infinity. Intuitively, if the number of cores tends to infinity, at some point it will be large compared to the size of the search space and one of the cores is likely to immediately find the solution. Yet, in practice, observations show that the experimental curves may be better approximated by a shifted exponential with \(x_0 > 0\), as is the case for AI 700. With such an exponential distribution, this leads to a non-linear speed-up with a finite limit. Indeed, the experimental speed-up for AI 700 is far from linear. On the contrary, Costas 21 has a linear speed-up due to its , which makes the statistical test succeed for . Firstly, this suggests that the comparison between and on a number of observations is a key element for the parallel behavior. It also means that the number of observations needed to properly approximate the sequential distribution probably depends on the problem. We have proposed a theoretical model for predicting and analyzing the speed-ups of Las Vegas algorithms.
It is worth noticing that our model mimics the behavior of the experimental results very closely, as shown by the predicted speedups matching closely the real ones. Our practical experiments consisted in testing the accuracy of the model with respect to three instances of a local search algorithm for combinatorial optimization problems. We showed that the parallel speed-ups predicted by our statistical model are accurate, matching the actual speed-ups very well up to 64 parallel cores and then with a deviation of about 10%, 15% or 30% (depending on the benchmark problem) up to 256 cores. However, one limitation of our approach is that, in practice, we need to be able to compute the expectation of the minimum distribution. Nevertheless, apart from the exponential distribution, for which this computation is easy, recent results in the field of order statistics give explicit formulas for a number of useful distributions: Gaussian, lognormal, gamma, beta. This provides a wide range of tools to analyze different behaviors. In this paper we validated our approach on classical combinatorial optimization and CSP benchmarks, but further research will consider a larger class of problems and algorithms, such as SAT solvers and other randomized algorithms (e.g. quicksort). Another interesting extension of this work would be to devise a method for predicting the speed-up from scratch, that is, without any knowledge of the algorithm's distribution. Preliminary observations suggest that, given a problem and an algorithm, the general shape of the distribution is the same when the size of the instances varies. For example, the different instances of all-interval that we tested all admit a shifted exponential distribution. If this property is valid on a wide range of problems / algorithms, then we can develop a method for predicting the speed-up for large instances by learning the distribution shape on small instances (which are easier to solve), and then estimating the parallel speed-up for larger instances with our model. J. Beard, J. Russo, K. Erickson, M. Monteleone, and M. Wright. Costas array generation and search methodology. _ Aerospace and Electronic Systems, IEEE Transactions on _, 43 (2): 522-538, April 2007. ISSN 0018-9251. doi: 10.1109/taes.2007.4285351. Y. Caniou, D. Diaz, F. Richoux, P. Codognet, and S. Abreu. Performance analysis of parallel constraint-based local search. In _ PPoPP 2012, 17th ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming _, New Orleans, LA, USA, 2012. ACM Press. Poster paper. D. Diaz, F. Richoux, Y. Caniou, P. Codognet, and S. Abreu. Parallel local search for the Costas array problem. In _ IEEE Workshop on New Trends in Parallel Computing and Optimization (PCO'12), in conjunction with IPDPS 2012 _, Shanghai, China, May 2012. IEEE Press. S. Minton, M. D. Johnston, A. B. Philips, and P. Laird. Minimizing conflicts: a heuristic repair method for constraint satisfaction and scheduling problems. _ Artificial Intelligence _, 58 (1-3): 161-205, 1992. P. M. Pardalos, L. S. Pitsoulis, T. D. Mavridou, and M. G. C. Resende. Parallel search for combinatorial optimization: genetic algorithms, simulated annealing, tabu search and GRASP. In _ Proceedings of IRREGULAR _, pages 317-331, 1995. T. Van Luong, N. Melab, and E.-G. Talbi. Local search algorithms on graphics processing units.
In _ Evolutionary Computation in Combinatorial Optimization _, pages 264-275. LNCS 6022, Springer Verlag, 2010.
We propose a probabilistic model for the parallel execution of _ Las Vegas algorithms _, _ i.e., _ randomized algorithms whose runtime might vary from one execution to another, even with the same input. This model aims at predicting the parallel performance (_ i.e., _ speedups) by analyzing the runtime distribution of the sequential runs of the algorithm. Then, we study in practice the case of a particular Las Vegas algorithm for combinatorial optimization on three classical problems, and compare with an actual parallel implementation up to 256 cores. We show that the prediction can be quite accurate, matching the actual speedups very well up to 100 parallel cores and then with a deviation of about 20% up to 256 cores. Theory, Algorithms, Performance. Las Vegas algorithms, prediction, parallel speed-ups, local search, statistical modeling, runtime distributions.
In complex control systems containing sampled-data elements, it is possible that these elements operate asynchronously. In some cases, the asynchronous character of the operation of the sampled-data elements does not influence the stability of the system. In other cases, any small desynchronization of the updating moments of the sampled-data elements leads to dramatic changes in the dynamics of the control system, and the system loses stability. In recent years, intensive study of the effects connected with the asynchronous operation of control systems has begun (see, e.g., ); both necessary and sufficient stability conditions for various classes of asynchronous systems have been obtained. At the same time, no one has succeeded in finding general, effectively verifiable criteria for the stability of asynchronous systems, similar to those known for synchronous systems. The problem of the stability of linear asynchronous systems has turned out to be more difficult than the problem of the stability of synchronous systems. In this paper, an attempt at a formal explanation of the complexity of the stability analysis problem for linear asynchronous systems is undertaken. It is shown that there are no criteria of absolute stability of linear asynchronous systems consisting of a finite number of arithmetic operations. Consider a discrete-time linear control system whose dynamics is described by the vector difference equation where is the state vector of the system and is a square matrix of dimension with the elements . The system will be called synchronous if . If , and the set consists of finitely many elements, then the system will be called asynchronous or desynchronized. Let be a finite totality of square matrices of dimension . The system will be called absolutely stable with respect to the class of matrices (cf. ) if there exists such that for any sequence of matrices the following estimates hold: Let us call the system absolutely exponentially stable with respect to the class of matrices if there exist and such that for any sequence of matrices the following estimates hold: If the class of matrices consists of the square matrices , , , of dimension , then, for its description, it suffices to specify numbers (the elements of all the matrices). Therefore, each class consisting of square matrices of dimension can be treated as a point in some space . Denote by the set of those classes in the space with respect to which the system is absolutely stable. By denote the set of those classes with respect to which the system is absolutely exponentially stable. Now, the problem of studying the absolute stability of the system can be reformulated as the problem of describing the sets and ; the simpler, in some sense, the structure of the sets, the easier it is to obtain a criterion of absolute stability or absolute exponential stability. The sets and allow a simple description. Indeed, each class consists of a single matrix. Therefore, we need to obtain conditions for the stability or asymptotic stability of some difference equation. The Routh-Hurwitz stability criterion allows these conditions to be represented as a finite system of polynomial inequalities involving the elements of the matrix. Verification of the obtained inequalities can be performed by a finite number of arithmetic operations over the elements of the matrix.
In other words, the question whether an arbitrary class belongs to the set or may be resolved by a finite number of arithmetic operations. Is it possible, for , by a finite number of arithmetic operations, to resolve the question whether an arbitrary class belongs to the set or ? The answer to this question will be given in the next section. Let be an element of the coordinate space . A finite sum with numerical coefficients is called a polynomial in the variable . A set is said to have the _ SA _-property if there exists a finite number of polynomials , , , , , such that coincides with the set of elements satisfying the condition . A set is called semialgebraic if it is a union of a finite number of sets possessing the _ SA _-property. [ t:1 ] Let . If a subset of the space satisfies conditions , then it is not semialgebraic. The proof of the theorem is given in the appendix. The semialgebraicity of a set is equivalent to the existence of a criterion (consisting in the verification of a finite number of conditions of the above form) which allows one, by a finite number of the arithmetic operations of addition, subtraction, multiplication and comparison of numbers, to establish whether an element belongs to the given set. As seen from theorem [ t:1 ], neither the set nor the set is semialgebraic. So, the meaning of theorem [ t:1 ] is that, in general, by a finite number of arithmetic operations it is impossible to ascertain whether a desynchronized system is absolutely stable (absolutely exponentially stable) or not. The problem of the existence of algebraic criteria of stability is acute also for classes of desynchronized systems different from those considered above. For example, in the theory of continuous-time desynchronized systems there arises the problem of the stability of the so-called regular systems (i.e., the systems with an infinite number of updating moments for each component). The discrete-time system, and the associated sequence of matrices, will be called regular if each matrix from the class appears in the sequence infinitely many times. Denote by the greatest integer having the property that the set of matrices can be decomposed into subsets , , , such that each of them contains all the matrices. Clearly, the system is regular if and only if . The system will be called absolutely exponentially regularly stable with respect to the class of matrices if there exist , such that for any regular sequence of matrices the following estimates hold: Denote by the set of all the classes with respect to which the system is absolutely exponentially regularly stable. [ t:2 ] Let . Then the set is not semialgebraic. To prove theorem [ t:2 ], it suffices to note that the set contains , and is contained in . Then, by theorem [ t:1 ], it is not semialgebraic.
In other words, for there are no semialgebraic criteria of absolutely exponentially regular stability of discrete-time desynchronized systems. As shown above, the problem of the absolute stability of the system can be reduced to the analysis of the behaviour of infinite products of the matrices. Theorem [ t:3 ] below reduces the same problem to the descriptive-geometric question of the existence in the space of a norm in which each matrix is contractive. [ t:3 ] The system is absolutely stable in a class of matrices if and only if there is a norm in for which the following inequalities hold: The system is absolutely exponentially stable in a class of matrices if and only if there is a norm in and a number for which the following inequalities hold: The proof of the theorem is given in the appendix. Let us note several important properties of the sets and from theorem [ t:3 ]. For example, the set is open in ; the set belongs to the interior of the set . Due to the openness of the set, if the system is absolutely exponentially stable with respect to some class, then it is also absolutely exponentially stable with respect to any class , , of matrices sufficiently close to the corresponding matrices. Theorems [ t:1 ] and [ t:3 ] imply that the problem of constructing, for a given set of square matrices, a norm satisfying conditions or is algebraically unresolvable. Theorem [ t:1 ] states that in general there are no effective criteria of absolute stability of desynchronized systems. Nevertheless, such criteria may exist for some particular desynchronized systems. Let us present examples. [ ex:1 ] Denote by the subset of the space consisting of the classes of matrices of the form . The problem of the absolute stability of the system with respect to the classes arises in in the process of studying continuous-time systems with special types of desynchronization of updating moments. [ t:4 ] The system is absolutely stable with respect to the class of matrices of the form if and only if one of the following systems of relations holds: , , ; , , , is arbitrary; , is arbitrary, , ; , ; , , . The proof of theorem [ t:4 ] is cumbersome and so is skipped. Let us point out that the criterion of absolute stability of the system with respect to the classes of matrices from , given in theorem [ t:4 ], is semialgebraic. [ ex:2 ] Denote by the subset of the space consisting of the classes of matrices of the form for which . The set is semialgebraic. The criterion of absolute stability of the system with respect to the classes of matrices from consists in verifying that the maximal eigenvalue of the matrix does not exceed . This assertion is proved similarly to theorem 2 from the first part of . _ Proof of theorem [ t:1 ]. _ It suffices to present the proof for the case . The idea of the proof is simple. We construct two families of matrices depending on the real parameter , whose validity is justified by direct calculations, see fig. [ semialg2 ], under the action of the map . [ l : a3 ] Let , and or for . Then , where the integers and are non-negative, . We prove the lemma by induction. For , the assertion of the lemma is evident; suppose that it is valid for . Then for the matrix can be represented as , where , or , , . If , then , and for the matrix the representation holds in which , , , . If and , then , and for the matrix the representation holds in which , , , . If and , then . Here the factor, according to lemma [ l : a2 ], can be replaced by . Then . Therefore, for the matrix the representation holds in which , , , . In addition, since , .
The inductive step is completed. Lemma [ l : a3 ] is proved. The proof of the corollary follows immediately from the representation and the unitarity of the rotation matrix. [ l : a4 ] Let . Then . Let be a sequence of matrices from . Then, for each , one of the two equalities or holds. By , , where . Therefore, the product of matrices , , , can be represented in the form: , where or . Then, by the corollary of lemma [ l : a3 ], , which implies the absolute stability of the class of matrices. Lemma [ l : a4 ] is proved. [ l : a5 ] Let . Then for all sufficiently large . Proof. Clearly, the lemma will be proved if, for each sufficiently large , there can be found a sequence of matrices such that . Define the sequence of matrices as follows: = g(s_n), = g(s_n). Let us set . Then . Since and , where , then . Consequently, , by lemma [ l : a1 ]. Recall that is a projector and so . Therefore, direct calculations show that . Hence, for sufficiently large values of , the inequality holds. From this and from the relation , lemma [ l : a5 ] is proved. Let us complete the proof of the theorem. Since, by lemmas [ l : a4 ] and [ l : a5 ], and , then . But because the points and interleave with each other, the set contains infinitely many different components of connectedness (different classes belong to different components of connectedness). Therefore, by the theorem of Whitney, the set is not semialgebraic. But since is an algebraic set, the set is not semialgebraic. Theorem [ t:1 ] is proved. _ Proof of theorem [ t:3 ]. _ In one direction, the assertion of theorem [ t:3 ] is obvious: the absolute stability and absolute exponential stability of the system with respect to the class immediately follow from inequalities and . Let us show that the absolute exponential stability of the system with respect to the class of matrices implies . Let, for some , the relation be valid. Set , , where is the Euclidean norm on , and the maximum is taken over all possible collections of the matrices. Define the norm as follows: . The function is semiadditive and, due to , it satisfies the relations . Hence, if and only if . The other properties of a norm are obvious for . Let us justify inequalities . Clearly, for any and , the estimate is valid. Therefore, , from which for . Inequalities are proved. The construction of the norm and the proof of inequalities in the case of absolute stability of the system are carried out similarly. Theorem [ t:3 ] is proved. Kleptsyn A. F., Kozyakin V. S., Krasnoselskii M. A., and Kuznetsov N. A., _ Effect of small synchronization errors on stability of complex systems. I-III _, Automat. Remote Control, 1983, vol. 44, no. 7, pp. 861-867; 1984, vol. 45, no. 3, pp. 309-314; 1984, vol. 45, no. 8, pp. 1014-1018. Kleptsyn A. F., _ Investigation of stability of two-component desynchronized systems _, in _ IXth All-Union Workshop on Control Problems. Theses _, pp. 27-28, Moscow: Nauka, 1983, in Russian. Kleptsyn A. F., Kozyakin V. S., Krasnoselskii M. A., and Kuznetsov N. A., _ Stability of desynchronized systems _, Dokl. Akad. Nauk SSSR, 1984, vol. 274, no. 5, pp. 1053-1056, in Russian; translation in Soviet Phys. Dokl. 29 (1984), 92-94. Kleptsyn A. F., Krasnoselskii M. A., Kuznetsov N. A., and Kozyakin V. S., _ Desynchronization of linear systems _, Math. Comput. Simulation, 1984, vol. 26, no. 5. http://www.sciencedirect.com/science/article/pii/037847548490106x
Kleptsyn A. F., _ Stability of desynchronized complex systems of a special type _, Avtomat. i Telemekh., 1985, no. 4, pp. 169-171. Tsypkin Ya. Z., _ Theory of linear sampling systems _, Moscow: Fizmatgiz, 1963, in Russian. Aizerman M. A. and Gantmacher F. R., _ Absolute stability of regulator systems _, translated by E. Polak, San Francisco, Calif.: Holden-Day Inc., 1964. Treves J. F., _ Lectures on linear partial differential equations with constant coefficients _, Notas de Matemática, no. 27, Instituto de Matemática Pura e Aplicada do Conselho Nacional de Pesquisas, Rio de Janeiro, 1961. Milnor J., _ Singular points of complex hypersurfaces _, Annals of Mathematics Studies, no. 61, Princeton, N.J.: Princeton University Press, 1968. Kozyakin V. S., _ Algebraic unsolvability of problem of absolute stability of desynchronized systems _, Avtomat. i Telemekh., 1990, no. 6, pp. 41-47, in Russian; translation in Automat. Remote Control 51 (1990), no. 6, part 1, 754-759. Asarin E. A., Kozyakin V. S., Krasnoselskii M. A., and Kuznetsov N. A., _ Analiz ustoichivosti rassinkhronizovannykh diskretnykh sistem _ (Stability Analysis of Desynchronized Discrete Systems), Moscow: Nauka, 1992, in Russian. http://eqworld.ipmnet.ru/ru/library/books/asarinkozyakinkrasnoselskijkuznecov1992ru.pdf Kozyakin V. S., _ Indefinability in o-minimal structures of finite sets of matrices whose infinite products converge and are bounded or unbounded _, Avtomat. i Telemekh., 2003, no. 9, pp. 24-41, in Russian; translation in Autom. Remote Control 64 (2003), no. 9, 1386-1400. http://dx.doi.org/10.1023/a:1026091717271 In the foregoing text, a few misprints that occurred in the proof of theorem [ t:1 ] in the original journal version of the article have been corrected, and two figures have been added. The improved text was included in the monograph. Generalizations of the presented results can be found in .
in the author's article ``algebraic unsolvability of the problem of absolute stability of desynchronized systems'' (automat. remote control 51 (1990), no. 6, pp. 754-759), it was shown that, in general, there are no algebraic criteria for the absolute stability of linear desynchronized systems. in this paper, a few misprints that occurred in the original version of the article are corrected, and two figures are added.
friction phenomena take place across a broad range of time and length scales, from microscopic atomistic processes, as in the gliding motion of a nanocluster or a nanomotor, up to extremely macroscopic instances, as in fault dynamics and earthquake events. due to the ubiquitous nature of mechanical dissipative processes and their enormous practical relevance, friction has been investigated over the centuries. while the empirical laws of macroscopic friction are well known, the fundamental understanding of tribological phenomena at the microscopic scales is still lacking from many points of view. the basic difficulty of friction is intrinsic, involving the dissipative dynamics of large systems, often across ill-characterized interfaces, and generally violent and nonlinear. the severity of the task is also related to the experimental difficulty of probing systems with many degrees of freedom under forced spatial confinement, which leaves very limited access to the buried sliding interface. thanks to remarkable developments in nanotechnology, new inroads are being pursued and new discoveries are being made. at the nanometer scale, state-of-the-art ultra-high-vacuum systems and local-probe studies show a dynamical behavior which is often significantly different, not just quantitatively but even qualitatively, from the one observed in macroscopic tribology. bridging the gap among the different length scales in tribological systems still remains an open challenge. the phenomenological descriptions that apply to macroscopic friction cannot yet be derived from the fundamental atomic principles and the interplay of processes occurring at the molecular level. nanofriction is in somewhat better shape. together with the current experimental possibility to perform well-defined measurements on well-characterized materials at the fundamental microscopic level of investigation of the sliding contacts, advances in the computer modeling of interatomic interactions in materials science and complex systems encompass molecular-dynamics (md) simulations of medium to large scale for the exploration of the tribo-dynamics with atomic resolution. despite the benefits brought about by numerical simulations of realistic 3d sliding systems, the resulting proliferation of detailed complex data and the requirement of ever-growing computational efforts have stimulated, in parallel, the concurrent search for simpler modeling schemes, such as, e.g., the generalized prandtl-tomlinson (pt) and frenkel-kontorova (fk) models for nanofriction, and the burridge-knopoff and earthquake-like models for mesoscale and macroscale friction, suitable to describe the essence of the physics involved in highly nonlinear and non-equilibrium tribological phenomena in a more immediate fashion. here we discuss current progress and open problems in the simulation and modeling of tribology at the microscopic scale, and its connection to the macroscale. neither the pt model, described in detail in several surveys with several applications to concrete tip-based physical systems, nor the phenomenological approach based on rate-and-state models will be considered here. with a view to emphasizing the role of nonlinearity, the present topical review is restricted to the following theoretical approaches to sliding friction. section [linear:sec] reviews the simple case of near-equilibrium linear friction in classical mechanics.
section [fk:sec] focuses on nonlinearity in crystal sliding in the framework of the fk model and its generalizations. atomistic models and md nanofriction simulations are presented in sec. [md:sec]. mesoscopic multicontact earthquake-like models are finally examined in sec. [multicontact:sec]. statistical mechanics accounts for the intimate mechanism of friction: a system at equilibrium has its kinetic energy uniformly distributed among all its degrees of freedom. a sliding macroscopic object clearly is not at equilibrium: one of its degrees of freedom (the center-of-mass motion) has far more kinetic energy than any other. the tendency of the system toward equilibrium will lead to the transfer of energy from that degree of freedom to all the other ones: as a result, the macroscopic object will slow down, and its energy will be transferred to the disordered motion of the other degrees of freedom, resulting in warming up. this is all sliding friction really is: the tendency of systems toward equilibrium, with energy equipartitioned among many interacting degrees of freedom. thus, in the course of friction under an applied external force, energy is injected into the system and converted into frictional heat. the frictional heat is generally dissipated by some form of heat bath, such as that provided by a thermostat at temperature . in a frictional steady state, caused for example by submitting a slider to an external force, the slider dissipates energy to the bath, and therefore does not accelerate indefinitely; it reaches instead a steady state characterized by an average drift velocity . when both and are infinitesimal, the relationship between the two quantities is linear; in this so-called ``viscous friction'' regime the proportionality constant is the linear friction coefficient. it is known from classical statistical mechanics, for example of brownian motion as described by the langevin equation, that for linear friction systems which obey eq. ([erio-1]), the einstein relationship is generally valid, connecting the friction coefficient, which measures dissipation, to the diffusion coefficient, which measures fluctuations. this expresses the fluctuation-dissipation theorem of linear, viscous friction. to simulate the classical motion of a macroscopic object moving in contact with an equilibrium bath such as the molecules of a gas or a liquid, or the phonons of a solid, the standard implementation requires adapting newton's equations of motion with the addition of a damping force plus a random force. the damping force represents the transfer of energy from the macroscopic object of mass to the heat bath, i.e. dissipation: this formula, equivalent to eq. ([erio-1]), assumes that the deviation from equilibrium is small, so that _linear response_ holds: the restoring stokes force is linear in the perturbing velocity, and acts opposite to it to restore the equilibrium regime. this linear dependence is purely the lowest-order term in a taylor expansion: there is no reason to expect the linear relation ([eq_langevin]) to extend to large velocity, and indeed, e.g., the drag friction of speeding objects in gases is well known to follow rayleigh's quadratic dependence on speed. the random-force term represents statistically the ``kicks'' that the object experiences due to its interaction with the thermal bath.
in the frame of reference of the thermal bath, of course. the random term is the result of many very frequent collision events, resulting in random forces uncorrelated with themselves except over very short time spans. more precisely, we assume there is some maximum time beyond which any correlation vanishes: in addition, the assumption of thermal equilibrium ensures us that the bath is in a steady state, so that is independent of , and depends on only: the statistical properties of the random force are constant in time. in most practical situations, one can safely ignore the dynamics over a time scale of the order of or shorter. we are interested instead in the integral effect of over some time period that is long compared to . we can break up that integral into many pieces, each covering a duration : this integral is then a sum of many independent random terms, each drawn from the same distribution, whose only relevant property is that it has zero mean value. as a result of the central-limit theorem, the total integral obeys a gaussian distribution with null mean, and whose standard deviation scales with the number of terms in the sum, i.e. . by taking the equation of motion in the absence of any external driving, and integrating it in time, we obtain . the first term at the right-hand side of eq. ([eq_diffint]) becomes negligible for a time long enough for the object to equilibrate with the thermostat and lose memory of its initial condition. in this large- limit, by taking the square modulus of eq. ([eq_diffint]) and executing the ensemble average, we have . the term at the left side, multiplied by , yields the average kinetic energy, which by standard equipartition needs to equal . by rearranging the exponentials on the right-hand side and substituting and , we obtain : where we have used the steadiness of the stochastic process discussed after eq. ([bathuncorr]). assuming, as is commonly the case, that the autocorrelation time of the random term is short compared to , the exponential factor in the integrand of eq. ([equipart]) remains essentially equal to unity in the whole region of delays where the correlation factor is significantly different from zero. this observation further simplifies eq. ([equipart]) to the _fluctuation-dissipation_ relation . this expression draws an explicit link between the autocorrelation amplitude of the fluctuations and the product of the dissipation coefficient and the thermostat temperature. note that the integral in eq. ([fluctdiss]) depends on both the amplitude of the fluctuations and the time over which they remain self-correlated. the effect on the mesoscale object increases if the random force is larger and/or if the time interval over which pushes in the same direction before changing is longer. the relation ([fluctdiss]) can be equally well satisfied by weaker random forces acting for longer correlation times, or by stronger forces with shorter , which thus lead to the same statistical effects. as is the shortest time scale around, for all practical purposes one can satisfy eq. ([fluctdiss]) assuming a sort of limit: , providing a simple recipe for computer simulations. for simulations of models such as the pt or fk ones, the phenomenological degrees of freedom are often coupled to a langevin thermostat of this kind, implying that each degree of freedom is actually coupled to a vast number of other bath degrees of freedom.
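to make this recipe concrete, here is a minimal numerical sketch (python; all parameter values are illustrative assumptions, not taken from the text) of a langevin integrator with a white-noise random force whose variance realizes the limit above; the final line checks equipartition.

```python
import numpy as np

# minimal langevin integrator (euler-maruyama); parameter values are
# illustrative assumptions chosen only for this sketch.
m, gamma, kT = 1.0, 0.1, 0.5     # mass, damping rate, thermal energy k_B T
dt, nsteps = 1e-2, 200_000

rng = np.random.default_rng(0)
v = 0.0
v2sum = 0.0
for _ in range(nsteps):
    # white-noise force: variance 2*m*gamma*kT/dt reproduces the
    # delta-correlated limit of the fluctuation-dissipation relation
    f_rand = rng.normal(0.0, np.sqrt(2.0 * m * gamma * kT / dt))
    v += (-gamma * v + f_rand / m) * dt
    v2sum += v * v

# equipartition check: <m v^2>/2 should approach kT/2
print(0.5 * m * v2sum / nsteps, "vs", 0.5 * kT)
```

with a weaker damping the same recipe still equilibrates, only more slowly, consistently with the trade-off between force amplitude and correlation time discussed above.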
even in md simulations of atomic-scale friction, langevin thermostats are applied to all or to a part of the atoms involved. of course, this approach is not rigorous, since the relevant particles colliding with each given simulated atom are already all included in the conservative and deterministic forces explicitly accounted for by the ``force field''. the langevin approach is quite accurate in describing small perturbations away from equilibrium, but it may fail quite badly in the strongly out-of-equilibrium nonlinear phenomena which are the target of the present paper. in the rest of this review we will deal with nonlinear frictional phenomena, which deviate violently from linearity and near-equilibrium conditions, and where therefore eqs. ([erio-1]) and ([erio-2]) do not generally apply. as has, surprisingly, been realized only in the last few decades, even arbitrarily violent non-equilibrium and nonlinear driven phenomena adhere to an extension of the fluctuation-dissipation theorem. that is the jarzynski (or jarzynski-crooks) relation, whose simplest form can be briefly summarized as follows. suppose we start from a system in state a at temperature , and apply an external force of arbitrary form and strength, causing it to evolve, for example to slide, to another state b; assume for simplicity b to be also a state of equilibrium. call the work done by the external force, and call the difference of equilibrium free energy between states b and a. clearly, must be valid, because some work will be wasted in going from a to b, unless that was done infinitely slowly (adiabatically). suppose now repeating the forced motion many times. each time, the work will be different. the jarzynski equality states that . it can be shown that, in near-equilibrium conditions, eq. ([erio-4]) is completely equivalent to the fluctuation-dissipation theorem. the beauty of it is, however, that eq. ([erio-4]) is totally general. one particular case is useful in order to underline its far-reaching power. suppose we take b = a, that is, a final state identical to the starting one. in that case . this equation appears at first sight impossible to satisfy, because surely all : all forced motion must cost work. the answer is that the probability is indeed a distribution centered around a positive , but with a nonzero tail extending to . this tail represents rare events where work is gained rather than spent; we can think of them as a sort of ``free lunch''. jarzynski's theorem requires that ``free lunches'' must occur in precisely such a measure as to satisfy eq. ([erio-5]). however, it is easy to convince ourselves that they will be frequent only in microscopic systems, where is broad. the larger the system involved, the narrower will be, and the rarer and rarer the occurrence of ``free lunches''. in a macroscopic friction experiment, the occurrence of a ``free lunch'' will be virtually impossible. in nanoscale tribology, extensive attention has focused on the time-honored pt model, which describes a point-like tip sliding over a space-periodic crystalline surface in a minimal fashion. we shall omit this model from the present review, since it is covered in great detail elsewhere. we concentrate instead on its natural extension, the one-dimensional fk model, which provides a prototypical description of the mutual sliding of two perfect, extended crystalline surfaces. first studied analytically in ref.
and later introduced independently to address the dynamics of dislocations in crystals, this model subsequently became the paradigm describing the structure and dynamics of adsorbed monolayers in the context of surface physics. the standard fk model consists of a 1d chain of classical particles (``atoms''), interacting via harmonic forces and moving in a sinusoidal potential, as sketched in fig. [fig:fkmodel]. the hamiltonian is
\[
h = \sum_i \left[ \frac{p_i^2}{2m} + \frac{k}{2}\left(x_{i+1}-x_i-a_0\right)^2 + \frac{u_0}{2}\left(1-\cos\frac{2\pi x_i}{a}\right) \right].
\]
in eq. ([fkhamil]), the first term represents the kinetic energy of the particles, and the next term describes the harmonic interaction, with elastic constant $k$, of nearest-neighboring atoms at equilibrium distance $a_0$. the final cosine term describes the ``substrate corrugation'', i.e. the periodic potential of amplitude $u_0$ and period $a$, as experienced by all particles alike. to probe static friction, all atoms are driven by an external force, which is increased adiabatically until sliding starts. the continuum limit of the fk model, appropriate for large , is the exactly integrable sine-gordon (sg) equation, and this mapping contributed to the great success of the fk model. the solutions of the sg model include nonlinear topological solitons (known as ``kinks'' and ``antikinks''), plus dynamical solitons (``breathers''), beside linear vibration waves (phonons). in the fk model, the sliding processes are entirely governed by its topological excitations, the kinks. let us consider the simplest ``commensurate'' case, where before sliding the chain is in a trivial ground state (gs), with atoms fitting one in each of the minima of the substrate potential, so that the coverage (i.e. the relative atomic concentration) equals . in this case, the addition (or subtraction) of a single atom results in configurations of the chain characterized by one kink (or antikink) excitation. still at zero applied force, in order to reach a local minimum of the total potential energy in eq. ([fkhamil]), the kink expands in space over a finite length, so that the resulting relaxed chain configuration consists in a local compression (or expansion, for an antikink). upon application of a force, it is far easier for kinks to move along the chain than for individual atoms, since the activation energy for a kink displacement [known as the peierls-nabarro (pn) barrier] is systematically smaller, and often much smaller, than the amplitude of the energy barrier that single atoms experience in the substrate corrugation. the motion of kinks (antikinks), i.e. the displacement of the extra atoms (vacancies), represents the mechanism for mass transport along the chain. these displacements are responsible for the mobility, diffusivity, and conductivity within this model. generally, therefore, a larger concentration of kinks is associated to a larger overall mobility. for the simple commensurate gs (e.g., ), which contains neither kinks nor antikinks, the onset of sliding motion under a driving force occurs via the creation of a kink-antikink pair, e.g. induced by a thermal fluctuation, see fig. [kinks]. if the fk chain is of finite length, kinks/antikinks are usually created at one free end of the chain; they then advance along the chain, eventually disappearing at the opposite end. every kink running from one end of the chain to the other produces the advancement of the entire chain by one lattice spacing.
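to illustrate the adiabatic driving protocol just described, here is a minimal sketch (python with numpy/scipy; the chain size and all parameter values, including the symbols $k$, $u_0$, $a$, $a_0$ of the hamiltonian above, are illustrative assumptions) that slowly ramps the force on a commensurate chain and reports the value at which the pinned minimum is lost.

```python
import numpy as np
from scipy.optimize import minimize

# small commensurate fk chain: one atom per corrugation well; all
# parameters (N, K, U0, a) are illustrative assumptions.
N, K, U0 = 16, 1.0, 0.2
a = a0 = 1.0                      # substrate period = chain spacing

def energy(x, F):
    springs = 0.5 * K * np.sum((np.diff(x) - a0) ** 2)
    corrug = 0.5 * U0 * np.sum(1.0 - np.cos(2.0 * np.pi * x / a))
    return springs + corrug - F * np.sum(x)    # tilt by the driving force

x = a0 * np.arange(N, dtype=float)             # commensurate ground state
for F in np.linspace(0.0, 1.0, 101):           # adiabatic force ramp
    res = minimize(energy, x, args=(F,), method="L-BFGS-B")
    if np.mean(res.x - x) > a:                 # local minimum lost: sliding
        print("estimated static friction per atom ~", round(F, 2))
        break
    x = res.x
```

for this single-well-per-atom geometry the depinning threshold should sit near the maximum corrugation slope, $\pi u_0/a$, which the scan indeed recovers; mismatched (kink-carrying) chains depin at much smaller forces, as argued above.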
for a finite film confined between two surfaces, or for an island deposited on a surface, the general expectation is that sliding initiates likewise with the formation and entrance of a kink or antikink at the boundary. in this two-dimensional (2d) case, and more generally in -dimensional systems, the zero-dimensional kinks of the fk model are replaced by -dimensional misfit dislocations or domain walls, whose qualitative physics and role are essentially the same. incommensurability between the periods and plays an important role in the fk model. assume, in the limit of an infinite chain length, the ratio of the substrate period to the average spacing of the chain to be irrational. the gs of the resulting incommensurate fk model is characterized by a sort of ``staircase'' deformation, with a regular sequence of regions where the chain is compressed (or expanded) to match the periodic potential, separated by kinks (or antikinks), where, at regular intervals, the misfit stress is released through a localized expansion (compression). the incommensurate fk model exhibits, under fairly general conditions on , a critical elastic constant: for stiffness above this critical value the chain can slide freely on the substrate at no energy cost, i.e. the static friction drops to zero (and the low-velocity kinetic friction becomes extremely small), while, remarkably, this is no longer true below it. in the early 1980s a rigorous mathematical theory of this phenomenon, called ``the transition by breaking of analyticity'' and now widely known as the _aubry transition_, was developed. a simple explanation of free sliding in the unpinned state is the following. for every atom climbing up toward a corrugation potential maximum, there always is another atom moving down, with an exact energy balance between these processes. quite generally, incommensurability guarantees that the total energy (we are at zero temperature) is independent of the relative position of the chain and the periodic lattice. however, in order for the chain to slide with continuity between two consecutive positions, it is necessary that particles should be able to occupy a maximum of the potential, the worst possible position.
at the aubry transition, however, realized by a relative increase of the periodic potential magnitude, or equivalently by a softening of the chain stiffness, the probability for a particle to occupy that position drops from a finite value to exactly zero. the nature of this transition, which is structural but without any other static order parameter (besides energy, of course), is dynamical, similar in that to a glass transition: simply, a part of phase space becomes unavailable, in this case to sliding. the chain is unpinned and mobile as long as, in its gs, atoms may occupy with a finite probability all positions, including those arbitrarily close to the maxima of the substrate potential, but is immobilized when that possibility ceases. the critical chain stiffness marks the crossing of the aubry transition, where the chain turns from the free-sliding state to the locked (``pinned'') state with a nonzero static friction. the value is in turn a discontinuous function of the length ratio characterizing the model. the minimum value (in units of ) is achieved for the golden-mean ratio. the stiff-spring chain with can explore adiabatically the full infinite and continuous set of gs configurations by means of displacements at no energy cost. this zero-frequency freely-sliding mode is the goldstone mode consistent with an emerging continuous translational invariance of the model, connecting continuously with an acoustical phase mode (phason) at finite wavelength. by contrast, in the pinned soft-chain region, all particles remain trapped close to the substrate-potential minima, a configuration which exhibits a finite energy barrier against motion over the corrugation. the locking is provided here, despite translational invariance, by the inaccessibility of forbidden configurations, which act as dynamical constraints. above , incommensurate-chain sliding can therefore be initiated by an arbitrarily small driving force, whereas for the chain and the corrugation lock together through the pinning of the kinks or superkinks separating locally lattice-matched regions. note that the locking of the free ends in a finite-size fk chain necessarily leads to pinning, even when is irrational and regardless of how large is. even for a finite chain, it is nevertheless still possible to define and detect a symmetry-breaking aubry-like transition. for the characterization of the aubry transition, a ``disorder'' parameter was conveniently defined as the smallest distance of the atoms from the nearest maximum of the corrugation potential. this quantity vanishes in the freely-sliding state, and is nonzero in the pinned state.
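the disorder parameter just introduced is easy to evaluate numerically. the sketch below (python; the fibonacci approximant, the chain size, and the scanned stiffness values are illustrative assumptions) relaxes a golden-mean-like chain and prints the smallest atom distance from a corrugation maximum for a few stiffnesses.

```python
import numpy as np
from scipy.optimize import minimize

# golden-mean fk chain via the fibonacci approximant 144/89; all numbers
# are illustrative assumptions for a rough scan across the transition.
N, U0, a = 89, 1.0, 1.0
a0 = 144.0 / 89.0                    # chain spacing / period ~ golden mean

def energy(x, K):
    return (0.5 * K * np.sum((np.diff(x) - a0) ** 2)
            + 0.5 * U0 * np.sum(1.0 - np.cos(2.0 * np.pi * x / a)))

for K in [0.2, 0.5, 1.0, 2.0, 5.0]:  # scan stiffness across the transition
    res = minimize(energy, a0 * np.arange(N), args=(K,), method="L-BFGS-B")
    r = (res.x - 0.5 * a) % a        # corrugation maxima sit at x = a/2 mod a
    psi = np.min(np.minimum(r, a - r))
    print(K, psi)                    # finite when pinned, -> 0 when unpinned
```

for a finite open chain, psi never vanishes exactly, consistently with the finite-size pinning noted above; the sharp drop with increasing stiffness is nevertheless clearly visible.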
at the critical pinned-to-sliding point, the disorder parameter exhibits a power-law behavior . here the values of the critical exponents are functions of the irrational length ratio. specifically, for the golden-mean ratio, and . equation ([criticalexp:eq]) characterizes the continuous aubry transition with a scaling behavior typical of critical phenomena, here at but as a function of the stiffness parameter. it is common to refer to the exponents in eq. ([criticalexp:eq]) as super-critical, since they are specific to the pinned side of the transition . sub-critical exponents were introduced for the freely-sliding state as well. to describe the response of the model to an infinitesimally small dc force applied to all atoms, an extra damping term has to be included in the equation of motion to prevent unlimited acceleration, and to achieve instead a steady state. the resulting effective viscosity in the subcritical region is defined as , in terms of the steady-state average velocity resulting in response to . at the aubry critical point, the effective viscosity diverges. for the golden-mean ratio, the scaling behavior of is , with . as is the case for all scaling relations, eq. ([gammaaubry]) provides the leading divergence close to the aubry point; at a larger distance from , deviates from eq. ([gammaaubry]). eventually, in the sg limit, decreases toward . in general, in the unpinned phase at , the incommensurate fk model exhibits an effective viscosity systematically larger, and thus a mobility consistently smaller, than its maximum value. this observation is illustrated by the limiting values of the curves of fig. [fig:aubryfig]. exclusively in the sg limit, the incommensurate system moves under an infinitesimal force without any extra dissipation added to the base value, i.e. in frictionless sliding motion, despite the finite corrugation magnitude. the first prediction of vanishing static friction was formulated for the incommensurate, infinite-size, sufficiently hard fk chain by peyrard and aubry. this phenomenon was subsequently re-discovered for incommensurate tribo-contacts, and named _superlubricity_. this name has drawn criticism, because it could misleadingly suggest the vanishing of _kinetic_ friction too, in analogy to superfluidity or superconductivity. actually, instead, the depinning of sliding interfaces closes just one of the channels for energy dissipation, namely the one associated to the stick-slip elastic instability at low speed. additional dissipation channels, including the emission of vibrations such as sound waves, remain active, with the result that the actual kinetic friction force remains nonzero and grows with increasing sliding speed. all the same, the superlubric state does attain a significant reduction of the kinetic friction force (and thus an increased mobility) compared to the pinned state, see fig.
[fig:aubryfig]. the driven fk model was usefully employed to describe the onset of sliding of a crystalline contact, even though this model cannot describe rigorously the experimentally significant real-life plastic deformations of the contact. experimentally, superlubricity has been studied for a graphite flake sticking to the tip of an atomic force microscope (afm) sliding over an atomically flat graphite surface. extremely weak friction forces of less than pn were detected in the vast majority of the relative flake-substrate orientations, namely those orientations generating incommensurate contacting surfaces, see fig. [fig:flakesuperlub]. stick-slip motion, associated with a much higher friction force (typically pn), was instead found in the narrow ranges of orientation angles where the flake-substrate contact was commensurate. the above discussion ignores temperature, assuming zero temperature so far. at nonzero temperature, the sliding-friction response of the fk model requires of course the addition of a thermostat (see sec. [linear:sec]). the common choice of a langevin thermostat, for example, simulates all dissipation mechanisms through a viscous force, and includes fluctuations by the addition of gaussian random forces whose variance is proportional to temperature, as sketched in sec. [fluctdiss:sec]. at any nonzero temperature, thermal fluctuations can always overcome all sorts of pinning, and will thus initiate sliding by nucleation of mobile defects even in the fully commensurate (pinned) condition, see fig. [kinks]. more generally, in the fk model the dimensionless coverage plays a central role, because it defines the concentration of geometrical kinks (close to ) and of ``superkinks'', which arise when deviates slightly from a background commensurate pattern that is not , but rather a rational , with and mutually prime integers. if is only slightly different from , the gs of the fk model consists of extended domains with the commensurate pattern associated to , separated by superkinks (super-antikinks), in the form of locally mismatched regions of compression (expansion) relative to . the above concepts of pinning or superlubricity apply for an infinitesimal applied force. additional interesting physics arises at finite force. by increasing the driving force, a fk model with a pinned gs (either commensurate, or incommensurate but past the aubry transition) is known to show a hierarchy of first-order dynamical phase transitions, starting from the completely immobile state, passing through several intermediate stages characterized by different running states of the kinks, to eventually reach a totally running state. consider, for example, the ratio : initially the mass transport along the chain is supported by superkinks constructed on top of the background structure . since the average superkink-superkink distance is large, they interact weakly, and the atomic flow is restricted by the need for these rarefied superkinks to negotiate their pn barriers (see fig. [fig:th-01]a). for larger driving, the effective pn barriers are tilted and lowered (in analogy to the barriers of the corrugation potential), producing an increased single-kink mobility. as a result, the zero-temperature transition from the locked state () to the running state takes place at the force , where the factor depends on the shape of the pn potential. in terms of the dimensionless superkink concentration , the mobility becomes .
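a brute-force way to visualize this kind of depinning behavior is to integrate the damped, driven chain directly and record its mobility as the force is ramped up adiabatically; below is a minimal sketch (python; all parameters, the kink concentration, and the run lengths are illustrative assumptions).

```python
import numpy as np

# damped driven fk chain: mobility b = <v>/F versus an adiabatically
# increasing force F; every parameter value here is an assumption.
N, K, U0, gamma, dt = 60, 2.0, 1.0, 0.5, 0.02
a, a0 = 1.0, 0.95                   # slight mismatch: a few kinks present
x = a0 * np.arange(N, dtype=float)
v = np.zeros(N)

def force(x, F):
    f = -np.pi * U0 * np.sin(2.0 * np.pi * x / a)   # -dV/dx of corrugation
    f[:-1] += K * (x[1:] - x[:-1] - a0)
    f[1:] -= K * (x[1:] - x[:-1] - a0)
    return f + F

for F in np.linspace(0.05, 1.0, 20):                # adiabatic force ramp
    vbar = 0.0
    for i in range(30_000):
        v += (force(x, F) - gamma * v) * dt
        x += v * dt
        if i >= 25_000:                             # average the late steady state
            vbar += np.mean(v) / 5_000
    print(round(F, 2), vbar / F)                    # mobility b(F)
```

with these assumed parameters, the kink-mediated channel unpins well below the single-atom depinning force, so the mobility rises in a step-like fashion rather than smoothly, in line with the hierarchy described above.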
beyond , further possibilities depend on the damping coefficient. at very small damping, , the driven-model transition leads directly into the fully running state, because running superkinks self-destroy soon after they start to move, causing an avalanche, thus driving the whole chain to a total running state similar to that shown at the right side of fig. [kinks]. when the dissipation rate is larger, , one instead finds intermediate stages with stable running superkinks, see fig. [fig:th-01]. the mechanism for a second rapid increase of the mobility after depinning depends again on the value of (for details see refs. ). in between the initial superkink-sliding stage and the fully running state, a sort of ``traffic-jam'' intermediate regime may emerge. a qualitatively similar picture was confirmed also for different and more complex kink patterns, such as that shown for the example in fig. [fig:th-01]b. in this case, the gs consists of domains of the commensurate structure , separated by superkinks at an average spacing . even the pattern could itself be viewed as a dense array of trivial kinks constructed on top of the simple background structure . the force dependence of the mobility bears a trace of this double nature, with a state of running superkinks preceding a state of running kinks. for not too small , therefore, the mobility increases in two distinct steps as the driving force is increased. a first step, at (here defines the depinning force for the fully commensurate model), occurs when the superkinks begin to slide; then a second step, at , occurs in correspondence to the unpinning of the trivial kinks. several extensions of the fk model have been proposed to describe a broad range of frictionally relevant phenomena. most of these generalizations involve modifications of either the interactions or the system dimensionality. to address more realistic systems, anharmonicity of the chain interatomic potential has been studied in detail. the resulting features include mainly new types of dynamical solitons (supersonic waves), a modification of the kink-kink interaction, the breaking of the kink-antikink symmetry, and even the possibility of a chain rupture associated to the excessive stretching of an antikink. the large kink-antikink asymmetry consistent with friction experiments on layers of repulsive colloids was attributed to the strong anharmonicity of the colloid-colloid interaction. the essence of this asymmetry is the same as that between the physical parameters of a vacancy and those of an interstitial. research has addressed also substrates with a complex corrugation pattern, including quasiperiodic and random/disordered corrugation profiles. modifications of the plain fk model may generate qualitatively different types of excitations, e.g. phonon branches and kinks of different kinds, as well as modifications in the kink-antikink collisions.
at small driving force, where the dynamics and tribology are dominated by moving kink-like structures, different sliding modes appear. the frenkel-kontorova-tomlinson (fkt) model introduces a harmonic coupling of the sliding atomic chain to a driving support, thus making it possible to investigate stick-slip features in a 1d extended simplified contact (a minimal numerical sketch of such a support-driven chain is given at the end of this passage). the fkt framework provided the ideal platform to investigate the tribological consequences of combined interface incommensurability, finite-size effects, mechanical stiffness of the contacting materials, and normal-load variations. important generalizations involving increased dimensionality compared to the regular fk model bear significant implications for tribological properties such as critical exponents, size scaling of the friction force, depinning mechanisms, and others. in particular, 2d extensions of the fk model have been applied to the modeling of the (unlubricated) contact of two crystals. such is the case, for example, in quartz-crystal microbalance (qcm) experiments, where single-layer adsorbate islands are made to slide over a crystalline substrate. another case is that of recent experiments carried out with 2d monolayers of colloids driven over a laser-generated optical lattice. interestingly, the 2d aubry transition of incommensurate colloids was shown by mandelli _et al._ to be of first order, rather than of second order as in 1d. as a consequence, in 2d the free-sliding and the pinned phases retain local stability for a range of parameters that extends beyond the transition point, a point where the total energy has a crossing singularity instead of a smooth stiffness dependence as in 1d. it is likely, although not proven to our knowledge, that the 2d fk model should possess a first-order aubry transition too. among generalized 2d fk models we recall the two coupled fk chains, the ``balls and springs'' layer of particles linked in 2d by harmonic springs and moving in a 2d periodic corrugation potential, the scalar anisotropic 2d fk model consisting of a coupled array of 1d fk chains, the 2d vector anisotropic model (namely the zigzag fk model, where the transverse motion of atoms is included), the 2d vector isotropic fk model, and finally the 2d tribology model (see also ref. and references therein). these approaches generalizing the fk model have been of use for the study of the transient dynamics at the onset of sliding. capturing these transient phenomena is often highly nontrivial in fully realistic md simulations (see e.g. ref. ). an interesting example of such a transient is the depinning of an atomic monolayer driven across a 2d periodic substrate profile of hexagonal symmetry. the formation, by nucleation, of an island of moving atoms in a ``sea'' of quasi-stationary particles mediates the transition from the locked to the running state. the moving island expands rapidly along the direction of the driving force, and grows at a slower rate in the orthogonal direction. within the island, the 2d crystal retains its approximate ordered hexagonal structure, thanks to its stiffness supported by the interatomic forces. as a result, at the onset of depinning, the model exhibits regions of almost perfect hexagonal-lattice order delimited by a closed boundary of dislocations. kinks in 1d and dislocation lines in 2d exhibit peculiar tribological properties.
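the support-driven sketch announced above follows (python; the chain size, stiffnesses, damping, and support speed are all illustrative assumptions); it records the instantaneous support-spring force, whose slow-loading/sudden-drop sawtooth shape is the stick-slip signature.

```python
import numpy as np

# fkt-style sketch: harmonic chain in a corrugation, each atom also tied by
# a spring Ks to a support sliding at speed vs; the mean spring force gives
# a stick-slip trace. all parameter values are illustrative assumptions.
N, K, Ks, U0, gamma, dt, vs = 32, 1.0, 0.3, 1.0, 1.0, 0.05, 0.01
a = a0 = 1.0                                   # commensurate: strong pinning
base = a0 * np.arange(N, dtype=float)
x = base.copy()
v = np.zeros(N)

fmax = fsum = 0.0
nsteps = 200_000
for step in range(nsteps):
    sup = base + vs * step * dt                # support positions
    f = -np.pi * U0 * np.sin(2.0 * np.pi * x / a)
    f[:-1] += K * (x[1:] - x[:-1] - a0)
    f[1:] -= K * (x[1:] - x[:-1] - a0)
    f += Ks * (sup - x)                        # springs to the moving support
    v += (f - gamma * v) * dt
    x += v * dt
    fr = Ks * np.mean(sup - x)                 # instantaneous friction force
    fmax = max(fmax, fr)
    fsum += fr

print("peak force:", fmax, " mean (kinetic) friction:", fsum / nsteps)
```

the commensurate geometry and soft support spring are chosen here to favor the elastic instability; stiffening the support spring or reducing the corrugation should smooth the trace out, in line with the fkt phenomenology summarized above.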
in noncontact experiments, an oscillating afm tip was seen to dissipate significantly more when hovering above a dislocation line of an incommensurate adsorbate than above in-registry regions. the larger softness and mobility of the dislocation regions account for this effect. an explicit demonstration of this mechanism was carried out by the study of a mismatched fk chain, whose dynamics was forced and simultaneously probed by an oscillating localized model tip. this approach illustrates the ability of the fk model to capture the local modifications of the dissipation properties. in contrast, if retardation effects related to the finite speed of sound across a material need to be taken into account, more sophisticated models are called for. the investigation of systems _confined_ between two shearing sliders, such as single particles or harmonic chains embedded between competing periodic potentials, has led to the discovery of several nonlinear tribological phenomena involving either stick-slip dynamics or the formation of peculiar ``synchronized'' sliding regimes. the fk model can be generalized with the addition of a second, different sinusoidal corrugation potential, as sketched in fig. [velcm:fig]c. when the second potential is spatially advanced relative to the first as a function of time, the model realizes the simplest idealization of a slider-solid lubricant-slider confined geometry. in this extended model the lattice mismatch was shown to generate peculiar and robust ``quantized'' sliding regimes, where the chain deformations are synchronized to the relative motion of the two corrugations, in such a way that the chain's (i.e. the solid lubricant's) average drift velocity acquires nontrivial fixed ratios to the externally imposed sliding velocity. specifically, the ratio of the lubricant speed to that of the slider remains locked to specific ``plateau'' values across broad ranges of most model parameters, including the potential magnitude of the two sliders, the chain stiffness (see fig. [velcm:fig]a), the dissipation rate, and even the external velocity itself. the speed ratio is ultimately determined by geometry alone: , where is the incommensurability ratio between the chain spacing and that, , of the closest slider. the plateau mechanism is the fact that the kinks formed by the mismatch of the chain with one slider (the slider whose spatial periodicity is closest to that of the chain) are rigidly dragged at velocity by the other slider. the kink density being geometrically determined and lower than the chain density implies that the overall velocity ratio shares exactly the same properties. the exactness of the velocity plateaus implies a sort of ``dynamical incompressibility'', an identically zero compliance to perturbations trying to modify from its quantized-plateau value. this robustness of the plateaus can be demonstrated e.g. by adding a constant force pushing all particles in the chain: as long as is small enough, it does perturb the dynamics of the velocity-plateau attractor, but not the value of . eventually, above a critical force, the driven model abandons the plateau dynamics. this transition, explored by increasing the external driving force, exhibits a broad hysteresis, and shares many features of the _static_-friction depinning transition, except that here it takes place ``on the fly''.
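the velocity plateau lends itself to a direct numerical check. the following sketch (python; the periods, stiffness, damping, and speeds are illustrative assumptions, and the run lengths are kept short) drives the top corrugation at a fixed external velocity and measures the time-averaged drift of the chain.

```python
import numpy as np

# chain between a static and a moving sinusoidal corrugation; measures
# the ratio w = v_cm / v_ext. every parameter value is an assumption.
N, K, gamma, dt, v_ext = 100, 4.0, 0.5, 0.01, 0.1
a_bot, a_top, a0 = 1.0, (1 + 5 ** 0.5) / 2, 1.25    # mismatched periods
x = a0 * np.arange(N, dtype=float)
v = np.zeros(N)

def force(x, t):
    f = -np.pi * np.sin(2.0 * np.pi * x / a_bot)                 # static bottom
    f -= (np.pi / a_top) * np.sin(2.0 * np.pi * (x - v_ext * t) / a_top)  # moving top
    f[:-1] += K * (x[1:] - x[:-1] - a0)
    f[1:] -= K * (x[1:] - x[:-1] - a0)
    return f

t = 0.0
for _ in range(100_000):                 # discard the initial transient
    v += (force(x, t) - gamma * v) * dt; x += v * dt; t += dt
x0, t0 = np.mean(x), t
for _ in range(200_000):                 # measure the steady drift
    v += (force(x, t) - gamma * v) * dt; x += v * dt; t += dt

print("w = v_cm/v_ext ~", (np.mean(x) - x0) / ((t - t0) * v_ext))
```

on the plateau, repeating the run with a different stiffness or damping should leave the measured ratio unchanged, while an extra constant force beyond a critical value makes the system abandon the plateau dynamics, as described above.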
disregarding details, this transition is then formally equivalent to the standard aubry depinning transition, with the moving kinks of the lubricant-substrate interface taking here the role of particles. the robustness of the quantized plateau stands even after replacement of the sinusoidal corrugation potential of eq. ([fkhamil]) with a deformed profile: the remoissenet-peyrard non-sinusoidal potential even extends the velocity plateau in the space of model parameters. the quantized sliding regime of the crystalline solid lubricant was also investigated in a significantly less idealized 2d model, including the perpendicular degree of freedom. md simulations carried out for a monolayer or multilayer lubricant film, where atoms interacting via lennard-jones potentials can also move perpendicularly to the sliding direction, as sketched in fig. [model_quantized:fig], showed quantized plateaus in this case too. these plateaus were shown to be resilient against variations of the loading forces across a broad range, against thermal fluctuations, and also against the presence of quenched disorder in the substrates. this quantized sliding state was also characterized by significantly lower values of kinetic friction (the average force needed to maintain the advancement of the top slider) than the regular non-quantized regime, see fig. [hyst-diss-kink:fig]. quantized sliding has been demonstrated again more recently in a 3d model where the lubricant is represented by a layer of lennard-jones atoms. the quantized-sliding state and its boundaries were fully characterized in the special case of perfectly aligned crystalline layers. we note, however, that there are reasons to expect that incommensurately mismatched epitaxial layers could relax to a mutually rotated alignment. the quantized-sliding state in such rotated arrangements is the subject of active investigation. more generally, no experimental observation of the quantized sliding predicted for solid lubricants has appeared so far. layered systems such as graphene and bn appear to offer a good opportunity for the future study of these curious phenomena. the simple models considered above yielded precious qualitative and often semi-quantitative understanding of several features of friction. to address subtler physical behavior in specific systems, it is nevertheless necessary and desirable to include the atomistic structural and mechanical details of the interface. md simulations can help make inroads into such detail, offering a level of description that can in some instances replace experiment. thanks to advances in computing algorithms and hardware, recent years have witnessed a remarkable increase in our ability to simulate tribological processes in realistic nano-frictional systems, and to obtain detailed microscopic information. an md simulation is _de facto_ a controlled computational experiment, where the overall atomic dynamics is provided by the numerical solution of suitably generalized newton's equations of motion, relying on interatomic forces derived from specific realistic interparticle-interaction potentials. tribological simulations require a careful selection of the geometric arrangement of the sliding interface, e.g. as in fig.
[simulation:fig], and of the applied boundary conditions. influential review articles cover the atomistic md simulation of friction, with a focus on technical aspects such as the construction of a realistic interface and suitable techniques for the application of load, shear, and temperature control. the simplest approach to temperature control, namely adding a langevin thermostat to newton's equations, has been adopted broadly, but more refined approaches have been proposed and adopted for friction simulations, as discussed in sect. [thermostat:sect] below. physically relevant quantities, including the average friction force, the slider and lubricant mean velocities, several correlation functions, and the heat flow, can be evaluated numerically by carrying out suitable averages over the model dynamics of a sliding interface, as long as it is followed for a sufficiently long time. the modeling of friction must first of all address correctly ordinary equilibrium and near-equilibrium phenomena, where the fluctuation-dissipation theorem (sec. [linear:sec]) governs the smooth conversion of mechanical energy into heat; but most importantly it must also deal with inherently nonlinear dissipative phenomena such as instabilities, stick-slip, and all kinds of hysteretic response to external driving forces, characteristic of non-equilibrium dynamics. the choice of realistic interatomic forces is often a major problem. indicating with $e(\{\mathbf{r}_j\})$ the total interaction energy as a function of all atomic coordinates, the force on atom $i$ is $\mathbf{f}_i = -\partial e/\partial \mathbf{r}_i$, fully determined in terms of $e$. unfortunately, the adiabatic energy results from the solution of the quantum ground state of the electrons: a practically complicated problem whose quantitative outcome may moreover be of uncertain quality. the reason why _ab-initio_ md, e.g. of the car-parrinello type, is not generally used in sliding friction is that it can neither handle large systems, exceeding a few hundred atoms, nor run for tribologically significant durations, usually in excess of ns. on the other hand, the physical situations where a first-principles description of interatomic forces is mandatory are not too common in the frictional phenomena studied so far. as a consequence, most md models for friction rely on more or less refined interatomic ``force fields'', ranging from sophisticated energy surfaces modeled on calculations at the _ab-initio_ density-functional or tight-binding level, to empirical distance- and angle-dependent many-body classical potentials, to basic pairwise potentials (e.g. morse or lennard-jones), to the simplest elastic-spring models, which represent generalizations of the fk model. concretely, the scientific literature documents many realistic force fields, ready to address several classes of materials and their combinations. while these force fields allow qualitative atomistic simulations of tribological systems, their limitations often prevent quantitative accuracy.
in particular, in the course of such a violent frictional process as wear, atoms are likely to modify their chemical coordination and even their charge state: phenomena and radical chemical changes that are usually impossible to describe with empirical force fields. mechanochemistry and tribochemistry are time-honored areas offering obvious examples where empirical force fields would fail, and simulations must by necessity be conducted by electronic-structure-based first-principles methods. also, even if for a specific system, element, or compound a satisfactory force field has been arrived at, the mere replacement of one atomic species with another one generally requires a complete, and usually difficult, re-parameterization of the whole force field. as a result, quantitatively accurate nanofrictional investigations remain a substantial challenge, because of the opposite limitations of first-principles versus empirical force fields. a promising compromise could possibly be provided by the so-called reactive potentials, capable of describing some chemical reactions, including interface wear, with satisfactory computational efficiency in large-scale atomic simulations, compared to semi-empirical and first-principles approaches. retardation effects due to the finiteness of the speed of sound are usually irrelevant in slow-speed experiments ( mm/s). for larger speeds, retardation effects related to the finite speed of sound can be taken into account explicitly in md modeling, provided that rigid layers are either omitted or introduced with special care. other effects of nonlocality in time, such as retardation due to the finite speed of light, are usually omitted in md force fields altogether, as they lead to negligible corrections in all conditions where sliding involves a proper material contact. such retardation effects do play a role in noncontact geometries, such as in experiments probing lateral casimir forces, whose strength can become relevant at large sliding speed. as we already mentioned above in sec. [linear:sec], any kind of sliding friction involves mechanical work, some of which is then transformed into heat (the rest going into structural transformations, wear, etc.). the heat is then transported away by phonons (and by electrons in the case of metallic sliders) and eventually dissipated to the environment. likewise, all excitations generated at the sliding interface in simulations should be allowed to propagate away from it, and to disperse into the bulk of both sliders. instead, due to the small simulation size, this energy may unphysically pile up in the rather small portion of solid representative of the ``bulk'' of the substrates, where these excitations are scattered and back-reflected by the simulation-cell boundary, instead of being properly dissipated away. in order to prevent continuous heating and attain a steady state of the tribological system, the joule heat must then be removed at a steady rate. in the fk and pt models, a viscous damping term, eq. ([eq_langevin]), is generally introduced for this purpose. in these minimal models, however, the value of is well known to affect the dynamical and frictional properties, but there is unfortunately no clear prescription for the choice of . in md atomistic simulations, the heat removal is often achieved by means of equilibrium thermostats, e.g. nosé-hoover or langevin, see sec. [fluctdiss:sec].
in this way, however, an unphysical energy sink is spread throughout the simulation cell. as a result, the atoms at the interface fail to follow their actual conservative trajectories, and evolve instead through an unphysically damped dynamics, with unknown and generally undesired effects on the overall tribological properties. in order to address and mitigate this problem, modifications of the equations of motion for the atoms inside the microscopically small simulation cell were proposed, with the target of reproducing the frictional dynamics of a realistic macroscopic system after the integration of extra ``environment'' variables. one possible approach is the application of langevin equations with a damping coefficient that changes as a function of the position and velocity of each atom in the lubricant, in accordance with the dissipation known for atoms adsorbed on a surface. this method involves modifying the standard langevin equations. another approach to improve the simulation of dissipation within blocks in reciprocal motion requires modifying the damping term ([eq_langevin]) to a form where is the average center-of-mass velocity of the atoms forming the sliding block to which particle belongs locally. another, more rigorous and physically appealing approach is the recently implemented dissipation scheme, drawing on earlier, long-known formulations and subsequent derivations, describing the correct embedding of the newtonian simulation cluster inside a larger heat bath made of the same material. upon integrating out the heat-bath degrees of freedom, atoms in the boundary layer that borders between the cluster and the heat bath are subjected to additional non-conservative and non-markovian forces that mimic the surrounding bath through a so-called memory kernel. an approximate but very practical scheme replaces this memory kernel by a simple viscous damping, here applied exclusively to atoms in the boundary layer. the magnitude of the parameter is optimized variationally by minimizing, with surprising accuracy, the energy reflected across the boundary. this dissipation scheme has been implemented recently in nanofriction simulations, where it was shown to improve greatly over other conceptually and practically inadequate thermostats. besides the limitations of system size and simulation times, which are obvious and will be discussed later, there is another limitation, concerning temperature, that is rarely mentioned. all classical frictional simulations, atomistic or otherwise, are only valid at sufficiently high temperature. they become in principle invalid at low temperatures, where the mechanical degrees of freedom of solids progressively undergo ``quantum freezing'', and both mechanics and thermodynamics deviate from classical behavior. unfortunately, there is at the moment no available route to include these quantum effects appropriately in dynamical and frictional simulations. each core of a present-day cpu executes floating-point operations per second (flops). md simulations usually benefit effectively from medium-scale parallelization. approximately linear scaling can be achieved up to cores, thus a md simulation can execute flops routinely. the evaluation of the forces is usually the most cpu-intensive part of a md simulation. for each atom, depending on the force-field complexity and range, this evaluation can require operations, or even more.
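a back-of-the-envelope version of this bookkeeping reads as follows (python; every number is an assumption inserted purely for illustration, since the text's own figures are not reproduced here):

```python
# order-of-magnitude md throughput estimate; all numbers are assumptions.
flops_per_core = 1e9        # sustained flops per core (assumed)
cores = 100                 # medium-scale parallel job (assumed)
flops_per_atom_step = 1e3   # force evaluation cost per atom (assumed)
atoms = 1e6                 # simulated particles (assumed)
dt_fs = 1.0                 # femtosecond time step

steps_per_s = flops_per_core * cores / (flops_per_atom_step * atoms)
print(steps_per_s * dt_fs, "fs of dynamics per wall-clock second")
print(steps_per_s * dt_fs * 86_400 / 1e6, "ns per simulation day")
```

with these assumed figures, one obtains of the order of a hundred femtoseconds of dynamics per wall-clock second, i.e. roughly ten nanoseconds per simulation day.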
as a result, the number of time-integration steps multiplied by the number of simulated particles is per computer runtime second. given that simulations of atomic-scale friction require time steps in the femtosecond region, a medium-size simulation involving simulated particles can advance at an estimated speed of fs each real-life second, namely each simulation day. clearly, the speed scales down for more refined force fields and for larger system sizes, although this increase may be mitigated by larger-scale parallelization. we can compare these estimates with typical sizes, durations, and speeds in frictional experiments. in macroscopic tribology experiments, sliding speeds often range in the m/s region: each microsecond the slider would progress by to m, namely lattice spacings of standard crystalline surfaces. in such conditions, may suffice to generate good statistics of atomic-scale events, although it may still be insufficient to address e.g. the diffusion of wear particles or additives in the interface, or phenomena associated to surface steps and/or point defects. by contrast, in nanoscale afm experiments the tip usually advances at much lower speeds: over a typical run it is possible to simulate a tiny pm displacement, far too small to explore even a single atomic-scale event, let alone to average over a steady state. for this reason, in all conditions where long equilibration times and/or slow diffusive phenomena and/or long-distance correlations can be expected, models should be preferred to realistic but expensive md. however, md simulations can provide so much physical insight that they make sense even if carried out at much higher speeds than in real-life afm or surface-force-apparatus (sfa) experiments: in practice, currently the sliding speeds of most atomistic tribology simulations are in the m/s region. here, however, we should distinguish between static and kinetic friction, and for the latter between smooth-sliding and stick-slip regimes. smooth kinetic friction generally increases with speed (velocity strengthening), but sometimes decreases with increasing speed in certain intervals. in the former case, simulating smooth high-speed frictional sliding is not fundamentally different from real sliding at low speed, with appropriate changes in the frictional forces with . velocity-weakening conditions, alternatively, tend to lead to an intrinsic instability of smooth sliding, which is therefore not often pertinent to real situations. as a result, for nanoscale systems, md simulation is of value in the description of smooth dry kinetic friction despite the huge velocity gap. on the other hand, static friction (the smallest force needed to set a slider in motion) is also dependent on the simulation time (a longer wait may lead to depinning when a short wait might not), and generally dependent on system size, often increasing with sublinear scaling with the slider's contact area. to address this kind of behavior in md simulations, it is often necessary to resort to scaling arguments in order to extrapolate the large-area static friction from small-size md simulations. returning to the simulation-time problem, let us come to stick-slip in md simulation, and to the desirability of describing the stick-slip to smooth-sliding transition as a function of parameters such as speed.
in afm and sfa experiments, stick-slip and its associated characteristically high friction and mechanical hysteresis tend to transition into smooth sliding when the speed exceeds m/s; in contrast, in md modeling the same transition is observed in the m/s region. this six-order-of-magnitude discrepancy in speed between experiments and simulations is well known, and has been largely discussed in connection with the effective mass distributions and spring-force constants, which are vastly different, and highly simplified, in simulations. attempts to fill the time and speed gaps can rely on methods such as hyperdynamics, parallel-replica dynamics, on-the-fly kinetic monte carlo, and temperature-accelerated dynamics, which have been developed in the last decades. however, caution should generally be exerted, in that some of these schemes and methods are meant to accelerate the establishment of equilibrium, but not always to treat properly the actual frictional-loss mechanisms. concerning stick-slip friction, another problem is that, unlike simulations, real experiments contain mesoscale or macroscale components intrinsically involved in the mechanical instabilities of which stick-slip consists. here the comforting observation is that stick-slip is nearly independent of speed, so that, as long as a simulation is long enough to realize a sufficient number of slip events, the results may already be good enough. one can even describe stick-slip friction adiabatically, e.g., from a sequence of totally static calculations, where a periodic back-and-forth sliding path is trodden, the area of the hysteresis cycle generated by the two different to-and-from instabilities representing the friction. a serious aspect of stick-slip friction which md simulation is unable to attack is ageing. the slip is a fast event, well described by md, but sticking involves a long waiting time, during which the frictional contact settles very slowly. the longer the sticking time, the larger the static friction force necessary to cause the slip. typically, experiments show a logarithmic increase of static friction with time. rate-and-state friction approaches, widely used in geophysics, describe frictional ageing phenomenologically, but a quantitative microscopic description is still lacking. mechanisms invoked to account for contact ageing include chemical strengthening at the interface in nanoscale systems, and plastic creep phenomena in macroscopic systems. contact ageing is observed also in other disordered systems out of equilibrium, including glasses and granular matter. in seismology, finally, as will be discussed later, it is generally accepted that ageing is responsible for aftershocks, as also shown by some models.
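as an illustration of the logarithmic ageing law just mentioned, the sketch below (python) fits $f_s(t) = f_0 + b\,\ln(1 + t/\tau)$ to toy data; the ``measurements'' are deliberately synthetic numbers (an explicit assumption), whose only purpose is to show the fitting step.

```python
import numpy as np

# toy fit of the logarithmic ageing law f_s(t) = f0 + b*ln(1 + t/tau);
# the data below are synthetic toy values, not experimental results.
rng = np.random.default_rng(2)
tau = 0.5                                      # assumed cutoff time
t_wait = np.logspace(-1, 4, 30)                # sticking times (arb. units)
f_meas = 1.0 + 0.08 * np.log1p(t_wait / tau) + rng.normal(0, 0.01, 30)

# linear least squares in the variable ln(1 + t/tau)
A = np.vstack([np.ones_like(t_wait), np.log1p(t_wait / tau)]).T
(f0, b), *_ = np.linalg.lstsq(A, f_meas, rcond=None)
print("f0 ~", round(f0, 3), "  ageing rate b ~", round(b, 3))
```

the fitted slope quantifies the ageing rate; what md alone cannot supply is the microscopic mechanism behind it, as discussed above.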
using finite element methods , one can increase the coarse - graining level while moving away from the sliding interface , thus keeping the computational effort under control . several groups combined the md description of the sliding interface , where local deformations at the atomic length scale and highly nonlinear phenomena occur , with a continuum - mechanics description in the `` bulk '' regions where strains are continuous and small . the main difficulty faced by this class of approaches is the correct choice of the matching between the atomistic region and the continuum part . because at the continuum level the detail of lattice vibrations can not be represented in full , the matching conditions should at least minimize the reflection of acoustic phonons at the atomistic - continuum interface . in other words , the matching should allow the transmission of sound deformations in both directions with sufficient accuracy : this is necessary for a proper disposal of the joule heat into the bulk . simulations can provide direct insight into the dynamical processes at the atomistic level that are at the origin of friction , allowing a connection of these microscopic facts with their macroscopic counterparts . case studies , in which the system is well describable from both the experimental and the theoretical side , are of extreme importance , firstly to permit a crosscheck between the two , and then to make use of simulations in order to highlight particular aspects that can not be accessed by experiments . here we summarize the main results of a few selected simulations sampled from the expanding literature of friction simulation , certainly not claiming an exhaustive review of the field . the sliding of rare - gas overlayers deposited on metallic substrates at low temperature has contributed much to the understanding of how friction scales with the contact - area size , the substrate corrugation , and the sliding velocity . rare - gas atoms condense into 2d solid islands showing a faceted - circular shape , arranged on multiple layers at low temperatures or on a single layer at diffusion - enabled temperatures . the friction characteristics of these solid islands on the substrate , resulting from inertial sliding , have been probed experimentally by qcm apparatuses , revealing a complex interplay among friction , coverage , and temperature . the rare - gas lattice spacing inside the island , generally incommensurate but sometimes commensurate with that of the substrate , plays a very important role in determining the pinning or free sliding that controls the frictional behavior . a generally overlooked aspect , which has been highlighted only recently , is the larger thermal expansion coefficient of rare - gas layers compared to that of a metal substrate , causing a temperature - dependent lattice mismatch at the interface , with possible incommensurate - commensurate transitions . due to this mechanism , md simulations have predicted the possible appearance of static - friction peaks in correspondence with a long - range commensurate phase occurring at a particular temperature .
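the mechanism just described can be illustrated with a minimal sketch : two materials with different thermal expansion coefficients drift past a commensurate point as temperature changes . all lattice constants and expansion coefficients below are invented for illustration and do not come from the cited studies .

# sketch: temperature-dependent misfit between an adsorbate layer and a
# substrate with different thermal expansion coefficients. the misfit is
# measured against a hypothetical 5:4 commensurate ratio; a sign change
# marks an incommensurate-commensurate transition temperature.

a_ads0, alpha_ads = 0.358, 3.0e-4   # assumed adsorbate spacing (nm), expansion (1/K)
a_sub0, alpha_sub = 0.288, 1.5e-5   # assumed substrate values

def misfit(T):
    a_ads = a_ads0 * (1.0 + alpha_ads * T)
    a_sub = a_sub0 * (1.0 + alpha_sub * T)
    return a_ads / a_sub - 5.0 / 4.0

for T in range(0, 101, 20):          # temperature in kelvin above a reference
    print(T, f"{misfit(T):+.4f}")    # the misfit crosses zero near T ~ 20 in this toy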
in the case of periodic monolayers , a change in the lattice mismatch can also be induced by an adhesion - driven densification of the adsorbate , again eventually encountering a commensurate phase . simulations of rare - gas incommensurate adsorbates , whose linear substrate - induced misfit dislocations ( `` solitons '' ) must flow during sliding , have revealed the role of their entrance in the depinning of the island , and of their dissipation through anharmonic coupling to phonons , in kinetic friction . finite - size effects are in this case of absolute relevance , effectively generating ( or enhancing ) static friction through a pinning barrier arising at the interface edge , which solitons must overcome to establish motion . the edge - related origin of the pinning mechanisms implies that static friction can grow with the island size at most as , i.e. as its perimeter , if the pinning points were uniformly distributed along the island or cluster edge . as shown in fig . [ varini.fig2 ] , a different shape of the deposited nano - object can generally lead to a different scaling exponent . similar sublinear scaling exponents were identified in dynamic friction experiments in which gold nanoclusters of variable size / shape were dragged at low speed over a graphite substrate by an afm tip . the scaling exponents of both the rare - gas islands on metal surfaces ( theoretical ) and the dragged gold clusters on graphite ( experimental ) are in the order of . this indicates that not all points at the boundary provide pinning with equal efficiency . a scaling close to might rather indicate a random efficiency of the boundary points , whereby only a fraction of them provide effective pinning . nowadays computational capabilities even permit the atomistic simulation of an entire afm tip , enabling the understanding of several mechanisms which are not describable by simplified pt - like models ( see fig . [ slufi2009_fig1 ] ) . for example , it is possible to highlight the formation / rupture dynamics of contacts in multi - asperity interfaces , and consequently to estimate the true contact area as a function of the apparent one . besides , it is possible to investigate the effects of tip plasticity and elasticity , which are of fundamental importance in defining the load - dependent contact area , and as channels for dissipation and wear . this approach enables the bottom - up derivation of the linear scaling laws of macroscopic friction with size , and of their transition to the sublinear ones for incommensurate nanosized contacts . we can now understand that such a transition takes place when the contact roughness becomes large compared to the range of the interfacial interactions . in the study of repeated scratching of metallic surfaces by hard afm tips , widely employed in the field of micro / nano machining , md simulations have uncovered strongly non - linear trends of the frictional force with the feed ( i.e.
the distance from the first groove ) , induced by the lateral forces exerted on the tip due to the substrate plasticity . it is also important to mention the simulations of nanotubes ( nt ) , either made of carbon or of hexagonal bn , which , due to their extraordinary mechanical and electronic properties , have been investigated with enormous interest in the last decades . almost defectless nts can nowadays be formed with lengths of the order of 1 cm , and precise measurements of their mechanical and frictional properties have started to appear in the literature . simulations of concentric nanotubes in relative motion ( telescopic sliding ) have revealed the occurrence of well - defined velocities at which friction is enhanced , corresponding to a washboard frequency resonating with longitudinal or circular phonon modes , leading to enhanced energy dissipation ( a rough order - of - magnitude estimate of such resonant speeds is sketched below ) . the frictional response becomes highly non - linear while approaching the critical velocity and , contrary to macroscopic systems , washboard resonances can arise at multiple velocities , especially for incommensurate interfaces , where more than one length scale may be common to the contacting surfaces . the exceptional electro - mechanical properties of nts have also been investigated by various tip - based techniques , revealing a strong friction anisotropy dictated by the nt orientation . in this respect , simulation - assisted experiments of a nanosized tip sliding over cnts reveal that transversal friction is enhanced by a hindered - rolling motion of the nt , with a consequent frictional dissipation that is absent in the longitudinal sliding . the same elastic deformations have reportedly been responsible for a reverse stick - slip effect in the case of an afm probe sliding over a super - lattice cnt forest . here , simulations reveal that the fast sticking is induced by the penetration of the tip into the valley between the nts and its interaction with the tubes on both sides , causing an elastic shell buckling of the cnts . in contrast , the gradual slipping occurs over a much longer distance , because it includes both the sliding on top of the nt and the energy release at both sides of the graphitic wall . when two sliding surfaces are separated by a thick lubricant film , as ordinarily happens under weak - load conditions , the tribological response of the confined system is typically determined by the fluid viscosity . in these cases of hydrodynamic lubrication , friction can be computed based on the navier - stokes equations , which prescribe a monotonically increasing kinetic friction as a function of the relative sliding speed . by contrast , at high load and low driving velocity , the lubricant may not maintain a broad gap between the sliding surfaces , with the result that solid - solid contact eventually occurs . prior to full squeezeout under pressure , as confirmed by experiments and simulations , the intervening boundary film usually changes from liquid to solid or nearly solid , exhibiting a layered structure prone to develop finite static friction and a highly dissipative stick - slip dynamics in this `` boundary - lubrication '' regime . both sfa measurements and md investigations have demonstrated sharp upward jumps of friction at squeezeout transitions where the number of lubricant layers decreases from to .
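as referenced in the nanotube discussion above , the resonant sliding speeds follow from matching the washboard frequency v / a to a phonon frequency ; here is a rough sketch with assumed values for the periodicity and the mode frequencies .

# washboard-resonance estimate for telescopic nanotube sliding: resonance
# occurs where the washboard frequency v / a matches a phonon mode.
# the periodicity and frequencies below are assumed round numbers.

a_nm = 0.25                      # assumed inter-tube periodicity along the axis (nm)
phonon_thz = [1.0, 3.5, 7.0]     # assumed longitudinal / circular mode frequencies

for f in phonon_thz:
    v_res = a_nm * 1e-9 * f * 1e12          # v = a * f, in m/s
    print(f"{f} THz mode -> resonant speed ~ {v_res:.0f} m/s")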
returning to the squeezeout transitions , as the load increases it becomes harder and harder to squeeze out an extra lubricant layer . this hardening and increased difficulty of squeezeout reflect the tendency of the initially liquid lubricant to crystallize , and the increased cost of the `` crater '' whose formation constitutes the nucleation barrier of the transition . once it has happened , this `` re - layering '' transition generally gives rise to an upward friction jump . in principle , however , upon re - layering of the solidified ( structured ) confined film , the two - dimensional ( 2d ) parallel crystalline - like order could occasionally change under pressure toward a more favorable mismatched ( incommensurate ) substrate - lubricant geometry . in that case , sliding friction might actually switch downward from a highly frictional stick - slip to the smooth dynamical regimes characteristic of incommensurate superlubric interfaces , with a highly mobile 2d soliton pattern of the type sketched in fig . [ solid_lub ] . so far this type of event has only been observed in simulations . in sfa experiments , boundary - lubricated systems often display stick - slip dynamics during tribological measurements , associated with a significant value of the friction dissipation , but one can not directly access the detailed film and interface rearrangements giving rise to the mesoscopic intermittent stick - slip dynamics . the mechanisms at play in the stick - slip dynamics of the boundary - lubrication regime have therefore been studied by md investigations , and several realistic models for lubrication layers have been simulated . the issue of whether frictional shearing occurs through the middle of the solid lubricant film , possibly accompanied by melting - freezing , or whether it forms a smooth shear band , or occurs at the substrate - lubricant boundary , is one which can be addressed by computer modeling . depending on the relative strength of the potentials governing the lubricant - lubricant and lubricant - substrate interactions , a thin confined film may exhibit a solid - like or liquid - like behavior under shear . if the interaction with the substrates is weaker than the lubricant - lubricant one , then sliding takes place mainly at the surface - lubricant interfaces . the lubricant film is then allowed to maintain or acquire a solid order . if both the solid lubricant and the substrate are characterized by nearly - perfect crystalline structures , and these structures are mismatched and/or misaligned , then smooth superlubric sliding with reduced kinetic friction ensues : in such conditions , solid lubrication can provide quite low friction . in practice , however , neither the substrates nor the lubricant are likely to maintain undefected crystalline order . defects and/or impurities between the sliding surfaces , even if diluted to a weak concentration , may suffice to induce pinning and finite static friction , thereby eliminating superlubricity . in the opposite condition of prevailing lubricant - substrate interactions , the surfaces are covered and protected from wear by lubricant monolayers : sliding occurs inside the lubricant bulk . in such conditions the lubricant film can be led to melt during sliding ; alternatively , the layering imposed by the surfaces can remain solid , with slips occurring in a layer - over - layer sliding . simulations are of particular value in the exploration of extreme frictional regimes that are difficult to access experimentally .
among such extreme regimes , researchers have investigated or are investigating high temperature , high speed , high pressure , and high plate charging in ionic - liquid lubrication . although for most of these conditions there still is no experimental evidence to discuss , simulation has made some interesting predictions that should become of future reference . _ high temperature ._ close to the substrate melting point , the crystal surface may or may not undergo surface melting , i.e. the formation , in full thermal equilibrium , of a thin liquid or quasi - liquid film at the substrate - vacuum interface . either outcome importantly influences the contact of an afm tip with the surface . surface melting gives rise to a local jump - to - contact of the film with the afm tip , as was found both in experiments and in md simulations . in that case , friction is expected to become hydrodynamic and uninteresting . more interesting is the case where the substrate does not undergo surface melting , as happens for particularly well packed , stable surfaces like pb(111 ) or nacl(100 ) . for an afm tip sliding on nacl(100 ) , frictional md simulations suggested two quite different outcomes depending on the frictional mode . a sharp penetrating tip plowing the solid surface experiences a large friction , which drops sharply when the substrate temperature is only slightly below , so that the joule heat suffices to raise the temperature locally and form a liquid drop accompanying and lubricating the moving tip . a blunt tip sliding wearlessly experiences instead a very small friction at low temperature , counterintuitively surging and becoming large close to , where the nonmelting surface lattice softens , a phenomenon analogous to that exhibited by flux lattices in type ii superconductors . _ high speed ._ friction at high speed , of the order of m / s or higher , is common in several technologically relevant situations , but is rarely addressed in nanoscale , atomistically characterized situations , where the velocity is more typically m/s , many orders of magnitude smaller . as anticipated in sec . [ sizeissues : sec ] , md simulation is an ideal tool for the study of friction in the fast sliding of nanosized systems . using gold clusters on graphite as a test system , simulation has explored high - speed friction , and especially its differences from and similarities to low - speed friction , by examining the slowing down of a ballistically kicked cluster . both kinetic frictions are similarly viscous , i.e. proportional to velocity . however , they show just the opposite thermal dependence . whereas the low - speed ( diffusive ) friction decreases upon heating , as diffusion increases , the high - speed ( ballistic ) friction rises with temperature , as the thermal fluctuations of the contact increase . _ high pressure ._
the local uniaxial pressure transmitted to a local contact by the overall load on a slider may reach a hundred kbar , but is generally not very well characterized , and the effects of pressure are insufficiently explored . md simulation makes suggestions of different kinds . first , pressure may provoke a structural transformation of a crystalline substrate ( or slider ) from its initial crystal structure to another . as a recent simulation has shown , this is reflected in a frictional jump , either up or down . second , pressure may bring a solid compound close enough to its chemical stability limit for the frictional perturbation to cause bond breaking and the beginning of chemical decomposition . third , pressure may lead to electronic or magnetic transformations , such as insulator - metal transitions , and this may also in principle influence friction . _ high plate charging in ionic - liquid lubrication ._ ionic liquids are salts whose ions have such a large size that the melting point falls below room temperature . experimental data have shown that friction across contacts lubricated by ionic liquids depends on the state of electrical charging of the sliders . md simulations applied to heavily simplified ionic - liquid models indicate how this dependence can be ascribed to electrically induced structural modifications at the slider - lubricant interface . for extreme plate charging , these modifications may even modify the lubricant thickness , and also affect its whole molecular structure , with strong predicted consequences on friction . on meso- and macro - scales the interface between two bodies is quite generally far from uniform . when rough surfaces come into dry contact , the actual contacts occur at asperities of different sizes , typically characterized by a fractal distribution . even for a contact of ideally flat surfaces of polycrystalline bodies , different regions of the interface will be characterized by different local values of the static friction due to structural or orientational domains . for a lubricated contact , different values of the local static friction may appear due to patches of solidified lubricant or due to capillary bridges . all these cases can be rationalized with the help of an earthquakelike ( eq ) model , first introduced by burridge and knopoff to describe real earthquakes . the nature of the two problems , earthquakes and friction , is very similar : the differences are restricted to their spatial and temporal scales , kilometers and years to millennia in geology compared to nanometers and seconds to hours in tribology . the eq model , also known as the spring - and - block model or the multi - contact model , has been successfully used in many studies of friction ; similar schemes have been used also to model the failure of fiber bundles and faults . in eq models , two corrugated surfaces make contact only at a discrete set of points , as shown schematically in fig . [ earthquakemodel ] . when the slider moves , a single point contact elongates elastically as a spring , as long as the local shear force ( is the contact stretching and is its elastic constant ) remains below a threshold value ; then the contact breaks and slips for some distance , as indeed was observed in tip - based microscopy experiments as well as in md simulations .
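the break - and - reform cycle just described is easy to simulate directly . below is a minimal , self - contained sketch of the eq model with randomly distributed thresholds ; all parameter values are illustrative assumptions , not taken from a specific study .

import random

# minimal eq (spring-and-block) multi-contact model: a rigid plate pulled at
# constant speed over N contacts, each an elastic spring that snaps at its own
# random threshold and immediately re-forms unstretched.

random.seed(0)
N, k, dx, steps = 500, 1.0, 1e-3, 5000
stretch = [0.0] * N
thresholds = [random.gauss(1.0, 0.2) for _ in range(N)]  # assumed threshold distribution

force_trace = []
for _ in range(steps):
    for i in range(N):
        stretch[i] += dx                            # the plate advances by dx
        if stretch[i] >= thresholds[i]:             # the contact snaps ...
            stretch[i] = 0.0                        # ... and re-forms with zero stretching
            thresholds[i] = random.gauss(1.0, 0.2)  # ... and a new random threshold
    force_trace.append(k * sum(stretch))            # total elastic shear force on the plate

print("mean kinetic friction force:",
      sum(force_trace[steps // 2:]) / (steps - steps // 2))

in this toy version the mean of the force trace plays the role of the kinetic friction force ; richer versions add delay times , threshold ageing and thermal activation , as discussed next .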
after breaking , either immediately or after some delay time , the contact re - forms , with zero or lower stretching and a new threshold value . the simplified version of the eq model assumes that all contacts have the same threshold ; such a model however corresponds to a singular case and may lead to unphysical results . in a real situation , the contacts always have different thresholds , with a continuous distribution of their static threshold elongations . therefore , when the upper block begins to advance , the forces acting locally on each contact increase , and at successive moments the contacts begin to snap in a sequence : weaker contacts break earlier , while the strongest contacts resist to the last . eq - like models are usually studied by simulation . nonetheless , the kinetics of the eq model can be described by a master equation ( me ) , occasionally known as the boltzmann equation or kinetic equation . concretely , indicate by \(Q(x;X)\) the distribution of contact stretchings \(x\) when the sliding plate reaches position \(X\) ( see fig . [ earthquakemodel ] ) : the evolution of \(Q\) is described by the equation \[ \left[ \frac{\partial}{\partial X} + \frac{\partial}{\partial x} + P(x) \right] Q(x;X) = \delta(x) \, \int_{-\infty}^{\infty} d\xi \, P(\xi) \, Q(\xi;X) \, , \label{int-eq02}\] where \(P(x)\,dX\) is the fraction of contacts which break at stretching \(x\) as a consequence of the plate advancing by \(dX\) . the `` rate '' \(P(x)\) and the distribution \(P_c(x)\) of the breaking thresholds are connected by the relation \(P(x) = P_c(x) \big/ \int_x^{\infty} P_c(\xi)\, d\xi\) , indicating that the fraction of contacts which snap when \(x\) increases by \(dx\) equals those that have their thresholds between \(x\) and \(x+dx\) , divided by the total fraction of contacts still unbroken at stretching \(x\) . the eq model can be extended to account for thermal effects as well as for the ageing of contacts ; the latter requires an additional equation describing the increase of the threshold values with the time of stationary contact . analytic solutions of the me are available and , in the smooth - sliding regime , they provide us with the velocity and temperature dependence of the kinetic friction force . contrary to the amontons - coulomb laws , which state that ( macroscopic ) friction is independent of velocity , the friction force in the eq model depends on the sliding speed . at small driving velocity the kinetic friction increases linearly with speed , . indeed , if the slider moves slowly enough , thermal fluctuations will sooner or later break all the contacts : the slower the slider , the longer the time any contact is given to undergo a fluctuation exceeding its respective threshold , and therefore the smaller the resulting kinetic friction force . this linear dependence could be represented as a ( characteristically large ) effective viscosity of an ultrathin lubricant film . at high driving velocities the friction force exhibits the opposite behavior : it decreases as the velocity grows , , due to relaxation , which one can also call an ageing effect . after snapping , a contact slips for a short time , then it stops and is reformed again , growing in size and strength . the faster the slider moves , the shorter the time the contacts are left to re - form and grow . overall , therefore , the kinetic friction force increases with at low , up to a maximum at , and then decreases . at very high velocities the friction force should eventually grow again due to an increased damping in the slider bulk .
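in the smooth - sliding regime the me above admits a simple stationary solution : setting the \(X\) - derivative to zero gives \(Q'(x) + P(x)\,Q(x) = 0\) for \(x > 0\) , so that \(Q(x)\) is proportional to the unbroken fraction \(J_c(x) = \int_x^{\infty} P_c(\xi)\,d\xi\) . the sketch below evaluates this steady state numerically for a gaussian threshold distribution ( an illustrative assumption , with essentially all thresholds positive ) ; the mean contact stretching then sets the kinetic friction force .

import math

# steady-state (smooth-sliding) solution of the master equation: with
# P(x) = Pc(x) / Jc(x), the stationary stretching distribution Q(x) is
# proportional to Jc(x), the fraction of contacts still unbroken at x.

def pc(x, mu=1.0, sigma=0.2):          # assumed gaussian distribution of thresholds
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

dx, xmax = 1e-3, 3.0
xs = [i * dx for i in range(int(xmax / dx))]
jc, acc = [], 0.0                      # Jc(x) by backward summation of Pc
for x in reversed(xs):
    acc += pc(x) * dx
    jc.append(acc)
jc.reverse()

norm = sum(jc) * dx                    # normalization of Q(x) = Jc(x) / norm
mean_stretch = sum(x * j for x, j in zip(xs, jc)) * dx / norm
print("mean contact stretching:", round(mean_stretch, 4))   # friction ~ k N <x>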
at intermediate speeds , typical of experiments , the interplay of thermal and ageing effects generates a weak ( approximately logarithmic ) dependence , approximately consistent with the amontons - coulomb laws , although the proper dependence is rather difficult to detect in experiments . on the decreasing ( `` velocity - weakening '' ) branch of the dependence at , the slider motion may become unstable and change from smooth sliding to stick - slip motion : if the slider velocity increases due to a fluctuation , the friction force decreases , and the slider accelerates further . this effect is usually studied with the help of phenomenological approaches ( e.g. , see and references therein ) . a detailed study of the eq model shows that stick - slip at a multi - contact interface may appear if and only if two necessary conditions are satisfied . first , the interface must exhibit an elastic instability . when the slider moves , the contacts break but are then formed again ; only if the reformed contacts build up a force sufficient to balance the driving force will the motion be stable . otherwise the slider will develop an elastic instability , and will keep accelerating until the overall pulling spring force ( of elasticity , see fig . [ earthquakemodel ] ) decreases enough to regain stability . second , the contacts must undergo ageing . once these conditions are satisfied , stick - slip will exist for an interval of driving velocities only , , while for lower ( ) and higher ( ) speeds the motion is smooth . the me approach discussed above , however , assumed a rigid slider , which is not a proper model of a realistic extended system . for a nonrigid slider , its elasticity produces a contact - contact interaction : as soon as a contact fails , the forces on nearby contacts must increase by an amount . this was shown to depend on the distance from the failed contact as at short distance , and as at long distance . the crossover length , known as the elastic correlation length , depends on the properties of both the slider ( its young modulus ) and the interface ( the mean separation between nearby contacts and their average rigidity ) . accordingly , a simpler model can be formulated that considers the slider as rigid across regions of lateral size , with the micro - contacts inside each -sized area treated as a single effective macro - contact . for this `` -contact '' the parameters can be evaluated by solving a specific me . numerics also showed that a large fraction of the extra inter - contact force concentrates behind and in front of the snapped contact , implying that effectively the interface can be treated , at least approximately , as a one - dimensional chain of -contacts . now , if a -contact undergoes an elastic instability , namely if at a certain threshold stress the -contact snaps and then advances , then the surrounding -contacts acquire an extra chance to also fail : this mechanism results in a sequence of snaps propagating forward and backward along the interface , as in a domino effect . the resulting dynamics of the chain of -contacts could then be described by the frenkel - kontorova model of sec . [ fk : sec ] above , but replacing the sinusoidal substrate potential with a sawtooth profile , i.e.
a periodically - repeated array of inclined lines . this approach allows one to calculate the maximum and minimum shear stress for the propagation of this self - healing crack ( the minimum shear stress coincides with the griffith threshold ) , and also the dependence of the crack velocity on the applied stress . when a -sized contact fails at some point along the chain subject to a uniform shear stress , two self - healing cracks leave the initial snapping contact , propagating in opposite directions as divergent solitons , similar to the kink - antikink pair of fig . [ kinks ] , until these cracks either reach the boundary or meet another crack generated elsewhere . on the other hand , a nonuniform shear stress is relevant for experiments such as those carried out by fineberg s group , with the slider pushed at its trailing edge : at this location the shear stress is maximum , and across the block it decreases with the distance from the trailing edge . in this system , the leftmost -contact is the most likely to fail first as the pushing force is increased . this failure will result in an increased stress concentrating on the successive -contact , which will fail as well . this process will repeat itself until the self - consistent stress remains below the breaking threshold everywhere in the slider . as a result , the self - healing crack initiated at the trailing edge will propagate along the interface for a certain distance ( that can be calculated ) , releasing the stress at its tail side , while accumulating extra stress at its forward side . when the pushing force further increases , a second crack starts at the trailing edge , and can trigger a failure sequence in the pre - formed stressed state , thus propagating to some extra distance . these multiple cracks repeat themselves until they reach the slider leading edge , resulting in a major collective slip . thus , at the sliding onset , several cracks advance along the interface , with the whole slider undergoing multiple small forward slips ( the so - called precursors ) .
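a toy version of this precursor dynamics can be written down in a few lines : a 1d chain of -contacts is loaded at the trailing edge , each failure hands a fraction of its stress to the next contact , and each loading step launches a crack that arrests after a finite distance . the transfer rule and all parameters below are invented for illustration only .

import random

# toy 1-d chain of lambda-contacts loaded from the trailing edge (contact 0).
# a contact whose stress exceeds its threshold fails, passes a fraction of
# its stress forward and re-forms stress-free, so each loading step launches
# a precursor crack of finite length.

random.seed(1)
N = 200
threshold = [1.0 + 0.3 * random.random() for _ in range(N)]
stress = [0.0] * N
transfer = 0.7                       # assumed fraction of stress passed forward

def load_edge(amount):
    stress[0] += amount
    i, length = 0, 0
    while i < N and stress[i] > threshold[i]:
        if i + 1 < N:
            stress[i + 1] += transfer * stress[i]   # stress handed to the next contact
        stress[i] = 0.0                             # failed contact re-forms stress-free
        i += 1
        length += 1
    return length                                   # how far this precursor crack ran

for event in range(8):
    print("precursor length:", load_edge(1.5))     # lengths grow from event to event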
in experiments these precursors could be detected , and could help predict the eventual large `` earthquake '' . as mentioned at the beginning of this section , the eq model was formulated initially to explain earthquakes . actual earthquakes follow two approximate empirical laws : the one named after gutenberg - richter states that the number of earthquakes of a given magnitude scales with that magnitude according to a power law ; the omori law states that aftershocks occur with a frequency decreasing roughly with the inverse of the time after the main shock . the eq - like models discussed above can provide a rationale for both these laws of seismology . specifically , the gutenberg - richter law can be understood as a direct consequence of contact ageing ; the omori law can be interpreted in terms of cracks propagating to a finite distance : after a major earthquake , the stress is not released in full , and a certain amount of stress remains stored at a distance from the main shock , where an aftershock is likely to occur some time later . the fascinating and multidisciplinary topic of microscopic friction , where physics , engineering , chemistry and materials science meet to study the process of converting mechanical energy irreversibly into heat , still lacks fundamental understanding , and increasingly calls for well - designed experiments and simulations carried out at well - characterized interfaces . although afm , sfa , and qcm setups are providing insight into the highly nonlinear out - of - equilibrium interface processes at small length scales , these advanced experimental techniques still provide only averaged tribological data . overall physical quantities , such as the average static and kinetic friction , the mean velocity and slip times , do not make it easy to tackle the problem of relating the mesoscopic frictional response of a driven system to the detailed microscopic dynamics and structural rearrangements occurring at the confined interface under shear . in this respect , by explicitly following and analyzing the dynamics of all the degrees of freedom at play in controlled numerical `` experiments '' , where the interface geometry , sliding conditions , and interparticle interactions can be tuned , mathematical modeling and computer simulations have proven remarkably useful in the investigation of tribological processes at the atomic scale , and are likely to extend their role in future frictional studies . even though a large number of open questions remains to be addressed , these modeling frameworks have provided effective insight into the nonlinear microscopic mechanisms underlying the complex nature of static and kinetic friction , the role of metastability , of crystalline incommensurability , and of the interface geometry . each theoretical approach , from simplified descriptions to extended realistic md and hybrid multiscale simulations , has limitations and strengths , with specific abilities to address specific aspects of the physical problem under consideration . thus , a robust prior understanding of the theoretical background is a basic first step in deciding which modeling features deserve specific attention and which ones are rather irrelevant details , and then in selecting the best methodological approach for a given problem .
concluding , it is worth recalling that novel experimental approaches address the intrinsic tribological difficulty of dealing with a buried interface over whose physical parameters one has very limited control : artificial systems consisting of optically trapped charged particles , either cold ions in empty space or colloidal particles in a fluid solvent , forced to slide over a laser - generated periodic potential profile . indeed , especially in 2d colloid sliding , it is possible to follow each particle in real time , as in md simulations . by knowing and , on top of that , tuning the properties of a sliding interface , our physical understanding can expand significantly and open up possibilities to control friction in nano- and micro - sized systems and devices , with serious possibilities of bridging between nanoscale and mesoscale sizes and phenomena . useful discussions and collaboration with s. zapperi , m. urbakh , j. scheibert , m. peyrard , b. persson , g.e . santoro , r. capozza , a.r . bishop , and a. benassi are gratefully acknowledged . this work is partly funded by the erc advanced grant no . 320796-modphysfrict , the swiss national science foundation sinergia crsii2_136287 , and by cost action mp1303 . o.b . acknowledges partial support from the egide / dnipro grant no . 28225uh and from the nasu `` resurs '' program .
the nonlinear dynamics associated with sliding friction forms a broad interdisciplinary research field that involves complex dynamical processes and patterns covering a wide range of time and length scales . progress in experimental techniques and computational resources has stimulated the development of more refined and accurate mathematical and numerical models , capable of capturing many of the essentially nonlinear phenomena involved in friction .
the optimal base problem is about finding an efficient representation for a given collection of positive integers . one measure for the efficiency of such a representation is the sum of the digits of the numbers . consider for example the decimal numbers . the sum of their digits is 25 . taking the binary representation we have , and the sum of digits is 13 , which is smaller . taking the ternary representation gives , with an even smaller sum of digits , 12 . considering the _ mixed radix _ base , the numbers are represented as , and the sum of the digits is 9 . the optimal base problem is to find a ( possibly mixed radix ) base for a given sequence of numbers which minimizes the size of the representation of the numbers . when measuring size as `` sum of digits '' , the base is indeed optimal for the numbers of . in this paper we present the optimal base problem and illustrate why it is relevant to the encoding of pseudo - boolean constraints to sat . we also present an algorithm and show that our implementation is superior to current implementations . pseudo - boolean constraints take the form , where are integer coefficients , are boolean literals ( i.e. , boolean variables or their negation ) , and is an integer . we assume that constraints are in pseudo - boolean normal form , that is , the coefficients and are always positive and boolean variables occur at most once in . pseudo - boolean constraints are well studied and arise in many different contexts , for example in verification and in operations research . typically we are interested in the satisfiability of a conjunction of pseudo - boolean constraints . since 2005 there is a series of pseudo - boolean evaluations which aim to assess the state of the art in the field of pseudo - boolean solvers . we adopt these competition problems as a benchmark for the techniques proposed in this paper . pseudo - boolean constraint satisfaction problems are often reduced to sat . many works describe techniques to encode these constraints as propositional formulas . the pseudo - boolean solver minisat ( cf . http://minisat.se ) chooses between three techniques to generate sat encodings for pseudo - boolean constraints . these convert the constraint to : ( a ) a bdd structure , ( b ) a network of sorters , and ( c ) a network of ( binary ) adders . the network of adders is the most concise encoding , but it has the weakest propagation properties and often leads to higher sat solving times than the bdd - based encoding , which , on the other hand , generates the largest encoding . the encoding based on sorting networks is often the one applied , and is the one we consider in this paper . to demonstrate how sorters can be used to translate pseudo - boolean constraints , consider the constraint where the sum of the coefficients is 8 . on the right , we illustrate a sorter where are each fed into a single input , into two of the inputs , and into three of the inputs . the outputs are . first , we represent the sorting network as a boolean formula , , which in general , for inputs , will be of size . then , to assert we take the conjunction of with the formula . but what happens if the coefficients in a constraint are larger than in this example ? how should we encode ? how should we handle very large coefficients ( larger than 1,000,000 ) ?
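before turning to that question , the sorter - based encoding principle itself can be checked exhaustively on a small instance : replicating each literal according to its coefficient and sorting , the constraint holds exactly when output k+1 of the sorter is 0 . the concrete constraint below is an assumed one with coefficient sum 8 , not necessarily the article s example .

from itertools import product

# semantic check of the sorter-based encoding of sum(c_i * x_i) <= k:
# feed each literal x_i into c_i inputs of a sorter over sum(c_i) wires;
# the constraint holds iff output k+1 (0-indexed: k) of the sorted,
# non-increasing output sequence is 0.

coeffs, k = [1, 2, 2, 3], 4            # assumed instance: x1 + 2*x2 + 2*x3 + 3*x4 <= 4

for xs in product([0, 1], repeat=len(coeffs)):
    inputs = [x for c, x in zip(coeffs, xs) for _ in range(c)]   # replicate the bits
    outputs = sorted(inputs, reverse=True)                       # what the sorter computes
    encoded_ok = (outputs[k] == 0)
    direct_ok = sum(c * x for c, x in zip(coeffs, xs)) <= k
    assert encoded_ok == direct_ok
print("sorter-based encoding agrees with the constraint on all assignments")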
to handle such large coefficients , the authors in generalize the above idea and propose to decompose the constraint into a number of interconnected sorting networks . each sorter represents a digit in a mixed radix base . this construction is governed by the choice of a suitable mixed radix base , and the objective is to find a base which minimizes the size of the sorting networks . here the optimal base problem comes in , as the size of the networks is directly related to the size of the representation of the coefficients . we consider the sum of the digits ( of the coefficients ) and other measures for the size of the representations , and study their influence on the quality of the encoding . in minisat the search for an optimal base is performed using a brute - force algorithm , and the resulting base is constructed from prime numbers up to 17 . the starting point for this paper is the following remark from ( footnote 8) : `` this is an ad - hoc solution that should be improved in the future . finding the optimal base is a challenging optimization problem in its own right . '' in this paper we take up the challenge and present an algorithm which scales to find an optimal base consisting of elements with values up to 1,000,000 . we illustrate that in many cases finding a better base leads also to better sat solving times . section [ section : obp ] provides preliminary definitions and formalizes the optimal base problem . section [ sec : encoding ] describes how minisat decomposes a pseudo - boolean constraint with respect to a given mixed radix base to generate a corresponding propositional encoding , so that the constraint has a solution precisely when the encoding has a model . section [ section:4 ] is about ( three ) alternative measures with respect to which an optimal base can be found . sections [ sec : ob1][sec : ob3 ] introduce our algorithm based on classic ai search methods ( such as cost underapproximation ) in three steps : heuristic pruning , best - first branch and bound , and base abstraction . sections [ sec : exp ] and [ relwork ] present an experimental evaluation and some related work . section [ sec : conc ] concludes . proofs are given in the appendix . in the classic base radix system , positive integers are represented as finite sequences of digits where for each digit , and for the most significant digit , . the integer value associated with is . a mixed radix system is a generalization where a base is an infinite radix sequence of integers where for each radix , and for each digit , . the integer value associated with is where and for , . the sequence specifies the weighted contribution of each digit position and is called the _ weight sequence _ of .
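for concreteness , here is a direct transcription of these definitions ; the sample multiset is an assumed reconstruction , chosen to be consistent with the digit sums quoted in the introduction ( 25 in decimal , 13 in binary , 12 in ternary ) .

# mixed-radix representation: digits of v in a finite base, least significant
# first, with an unbounded most significant digit as in the definition above.

def digits(v, base):
    ds = []
    for r in base:          # bounded digit positions
        ds.append(v % r)
        v //= r
    ds.append(v)            # unbounded most significant digit
    return ds

def sum_digits(S, base):    # the "sum of digits" cost function
    return sum(sum(digits(v, base)) for v in S)

S = [16, 30, 54, 60]        # assumed example multiset
for B in [[2, 2, 2, 2, 2], [3, 3, 3], [2, 3, 2]]:
    print(B, [digits(v, B) for v in S], "cost =", sum_digits(S, B))

running this prints cost 13 for the binary base and 12 for the ternary one , matching the sums above , together with one further mixed radix base for comparison .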
a finite mixed radix base is a finite sequence with the same restrictions as in the infinite case , except that numbers always have digits ( possibly padded with zeroes ) and there is no bound on the value of the most significant digit , . in this paper we focus on the representation of finite multisets of natural numbers in finite mixed radix bases . let denote the set of finite mixed radix bases and the set of finite non - empty multisets of natural numbers . we often view multisets as ordered ( and hence refer to their first element , second element , etc . ) . for a finite sequence or multiset of natural numbers , we denote its length by , its maximal element by , its element by , and the multiplication of its elements by ( if is the empty sequence then ) . if a base consists of prime numbers only , then we say that it is a prime base . the set of prime bases is denoted . let with . we denote by the representation of a natural number in base . the most significant digit of , denoted , is . if then we say that is redundant for . let with . we denote the matrix of digits of elements from in base as . namely , the row in is the vector . the most significant digit column of is the column of the matrix , denoted . if , then we say that is redundant for . this is equivalently characterized by . [ def : nrb ] let . we denote the set of non - redundant bases for by . the set of non - redundant prime bases for is denoted . the set of non - redundant ( prime ) bases for containing elements no larger than is denoted ( ) . the set of bases in each of these classes is often viewed as a tree with root ( the empty base ) and an edge from to if and only if is obtained from by extending it with a single integer value . [ def : sumdigits ] let and . the sum of the digits of the numbers from in base is denoted . the usual binary `` base 2 '' and ternary `` base 3 '' are represented as the infinite sequences and . the finite sequence and the empty sequence are also bases . the empty base is often called the `` unary base '' ( every number in this base has a single digit ) . let . then , , , , and . let . a cost function for is a function which associates bases with real numbers . an example is . in this paper we are concerned with the following _ optimal base problem _ . let and a cost function . we say that a base is an _ optimal base for _ with respect to , if for all bases , . the corresponding _ optimal base problem _ is to find an optimal base for . the following two lemmata confirm that for the cost function , we may restrict attention to non - redundant bases involving prime numbers only . [ l1 ] let and consider the cost function . then , has an optimal base in . [ lem : primes ] let and consider the cost function .
then , has an optimal base in . how hard is it to solve an instance of the optimal base problem ( namely , for ) ? the following lemma provides a polynomial ( in ) upper bound on the size of the search space . this in turn suggests a pseudo - polynomial time brute - force algorithm ( to traverse the search space ) . [ zeta ] let with . then , where and where is the riemann zeta function . chor _ et al . _ prove in that the number of ordered factorizations of a natural number is less than . the number of bases for all of the numbers in is hence bounded by , which is bounded by . this section presents the construction underlying the sorter - based encoding of pseudo - boolean constraints applied in minisat . it is governed by the choice of a mixed radix base , the optimal selection of which is the topic of this paper . the construction sets up a series of sorting networks to encode the digits , in base , of the sum of the terms on the left side of a constraint . the encoding then compares these digits with those of from the right side . we present the construction , step by step , through an example where and . + the coefficients of form a multiset , and their representation in base is a matrix , , depicted on the right . the rows of the matrix correspond to the representations of the coefficients in base . representing the coefficients as four - digit numbers in base and considering the values of the digit positions , we obtain a decomposition for the left side of : to encode the sums at each digit position ( ) , we set up a series of four sorting networks as depicted below . given values for the variables , the sorted outputs from these networks represent unary numbers such that the left side of takes the value . for the outputs to represent the digits of a number in base , we need to encode also the `` carry '' operation from each digit position to the next . the first 3 outputs must represent valid digits for , i.e. , unary numbers less than respectively . in our example the single potential violation to this restriction is , which is represented in 6 bits . to this end we add two components to the encoding : ( 1 ) each third output of the second network ( and in the diagram ) is fed into the third network as an additional ( carry ) input ; and ( 2 ) clauses are added to encode that the output of the second network is to be considered modulo 3 . we call these additional clauses a _ normalizer _ . the normalizer defines two outputs and introduces clauses specifying that the ( unary ) value of equals the ( unary ) value of . the outputs from these four units now specify a number in base , each digit represented in unary notation .
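the carry bookkeeping of this construction can be summarized in a few lines : position i of the chain sorts all i - th digits of the coefficients plus the carries arriving from position i-1 , and forwards floor(s_i / r_i) carry bits . the multiset and base below are an assumed instance consistent with the narrative above ( four networks , a 6 - bit second network , two carries fed into the third ) , not necessarily the article s exact numbers .

# per-position sorting-network sizes for the sorter-chain construction.

def mixed_radix_digits(v, base):
    ds = []
    for r in base:
        ds.append(v % r)
        v //= r
    ds.append(v)                        # unbounded most significant digit
    return ds

def network_input_sizes(S, base):
    ncols = len(base) + 1
    digit_rows = [mixed_radix_digits(v, base) for v in S]
    sizes, carry = [], 0
    for i in range(ncols):
        s = sum(row[i] for row in digit_rows) + carry   # digits plus incoming carries
        carry = s // base[i] if i < len(base) else 0    # carries forwarded to position i+1
        sizes.append(s)
    return sizes

S, B = [2, 2, 2, 2, 5, 18], [2, 3, 3]   # assumed coefficients and base
sizes = network_input_sizes(S, B)
print("network sizes:", sizes, " total inputs:", sum(sizes))  # here: [1, 6, 2, 1] -> 10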
the number specified by these outputs is now compared ( via an encoding of the lexicographic order ) to ( the value from the right - hand side of ) . we now return to the objective of this paper : for a given pseudo - boolean constraint , how can we choose a mixed radix base with respect to which the encoding of the constraint via sorting networks will be optimal ? we consider here three alternative cost functions with respect to which an optimal base can be found . these cost functions capture , with increasing degrees of precision , the actual size of the encodings . the first cost function , as introduced in definition [ def : sumdigits ] , provides a coarse measure of the size of the encoding . it approximates ( from below ) the total number of input bits in the network of sorting networks underlying the encoding . an advantage of using this cost function is that there always exists an optimal base which is prime . the disadvantage is that it ignores the carry bits in the construction , and as such is not always a precise measure of optimality . in , the authors propose to apply a cost function which also considers the carry bits . this is the second cost function we consider , and we call it . [ cost2 ] let , with and the corresponding matrix of digits . denote the sequences ( sums ) and ( carries ) defined by : for , , and for + . the `` sum of digits with carry '' cost function is defined by the equation on the right . the following example illustrates the cost function and shows that it provides a better measure of base optimality for the ( size of the ) encoding of pseudo - boolean constraints . [ runningd ] consider the encoding of a pseudo - boolean constraint with coefficients with respect to the bases : , , and . figure [ fig:3cost ] depicts the sizes of the sorting networks for each of these bases . the upper tables illustrate the representation of the coefficients in the corresponding bases . in the lower tables , the rows labeled `` sum '' indicate the number of bits per network and ( to their right ) their total sum , which is the cost . with respect to the cost function , all three bases are optimal for , with a total of 9 inputs . the algorithm might as well return . the rows labeled `` carry '' indicate the number of carry bits in each of the constructions and ( to their right ) their totals . with respect to the cost function , bases and are optimal for , with a total of bits , while involves bits . the algorithm might as well return . the following example shows that when searching for an optimal base with respect to the cost function , one must consider also non - prime bases . consider again the pseudo - boolean constraint from section [ sec : encoding ] . the encoding with respect to results in 4 sorting networks with 10 inputs from the coefficients and 2 carries , so a total of 12 bits . the encoding with respect to is smaller : it has the same 10 inputs from the coefficients but no carry bits . base is optimal and non - prime . we consider a third cost function , which we call the cost function . sorting networks are constructed from `` comparators '' , and in the encoding each comparator is modeled using six cnf clauses . this function counts the number of comparators in the construction . let denote the number of comparators in an sorting network . for small values of , takes the values and respectively , which correspond to the sizes of the optimal networks of these sizes .
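for larger networks the construction falls back on batcher s odd - even sorting networks , introduced next ; their comparator count , which the cost function needs , can be computed by the standard recursion . the sketch below assumes power - of - two input sizes , smaller inputs being padded .

# comparator counts for batcher's odd-even merge-sort networks (n a power of two).

def merge_comparators(n):            # comparators to merge two sorted n/2-sequences
    if n <= 1:
        return 0
    if n == 2:
        return 1
    return 2 * merge_comparators(n // 2) + n // 2 - 1

def sort_comparators(n):             # comparators in an n-input odd-even merge sort
    if n <= 1:
        return 0
    return 2 * sort_comparators(n // 2) + merge_comparators(n)

for k in range(1, 6):
    n = 2 ** k
    print(n, sort_comparators(n))    # prints 2:1, 4:5, 8:19, 16:63, 32:191

for n up to 8 these counts happen to coincide with or stay close to the optimal - network sizes quoted above ; the article s cost function uses the optimal sizes for small n and the batcher counts beyond .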
for larger values , the construction uses batcher s odd - even sorting networks , for which . [ num_comparators ] consider the same setting as in definition [ cost2 ] . then , consider again the setting of example [ runningd ] . in figure [ fig:3cost ] the rows labeled `` comp '' indicate the number of comparators in each of the sorting networks and their totals . the construction with the minimal number of comparators is the one obtained with respect to the base , with 10 comparators . it is interesting to remark on the following relationship between the three cost functions : the function is the most `` abstract '' ; it is only based on the representation of numbers in a mixed radix base . the function considers also properties of addition in mixed radix bases ( resulting in the carry bits ) . finally , the function considers also implementation details of the odd - even sorting networks applied in the underlying minisat construction . in section [ sec : exp ] we evaluate how the alternative choices of a cost function influence the size and quality of the encodings obtained with respect to the corresponding optimal bases . this section introduces a simple , heuristic - based , depth - first tree search algorithm to solve the optimal base problem . the search space is the domain of non - redundant bases , as presented in definition [ def : nrb ] . the starting point is the brute - force algorithm applied in minisat . for a sequence of integers , minisat applies a depth - first traversal of to find the base with the optimal value for the cost function . our first contribution is to introduce a heuristic function and to identify branches in the search space which can be pruned early on in the search . each tree node encountered during the traversal is inspected to check whether , given the best node encountered so far , , it is possible to determine that all descendants of are guaranteed to be less optimal than . in this case , the subtree rooted at may be pruned . the resulting algorithm improves on the one of minisat and provides the basis for the further improvements introduced in sections [ sec : ob2 ] and [ sec : ob3 ] . we first need a definition . let , , and a cost function . we say that : ( 1 ) extends , denoted , if is a prefix of ; ( 2 ) is a partial cost function for if ; and ( 3 ) is an admissible heuristic function for and if . the intuition is that signifies a part of the cost of which will be a part of the cost of any extension of , and that is an under - approximation of the additional cost of extending ( in any way ) given the partial cost of . we denote . if is a partial cost function and is an admissible heuristic function , then is an under - approximation of . the next lemma provides the basis for heuristic pruning using the three cost functions introduced above . [ lem : h ] the following are admissible heuristics for the cases when : 1 . : . : . : .
in the first two settings we take . in the case of we take the trivial heuristic estimate .

/*input*/   multiset s
/*init*/    base bestb =
/*dfs*/     depth - first traverse : at each node , for the next value p b.extenders(s ) do
                base newb = b.extend(p )
                if ( ) * prune *
                else if ( ) bestb = newb
/*output*/  return bestb ;

the algorithm , which we call ` dfshp ` for depth - first search with heuristic pruning , is now stated as figure [ fig : alg1 ] , where the input to the algorithm is a ` multiset ` of integers ` s ` and the output is an optimal base . the algorithm applies a depth - first traversal of in search of an optimal base . we assume given : a cost function , a partial cost function , and an admissible heuristic . we denote . the abstract data type ` base ` has two operations : ` extend(int ) ` and ` extenders(multiset ) ` . for a base ` b ` and an integer ` p ` , ` b.extend(p ) ` is the base obtained by extending ` b ` by ` p ` . for a multiset ` s ` , ` b.extenders(s ) ` is the set of integer values ` p ` by which ` b ` can be extended to a non - redundant base for ` s ` , i.e. , such that . the definition of this operation may take additional arguments to indicate whether we seek a prime base or one containing elements no larger than . initialization ( /*init*/ in the figure ) assigns to the variable ` bestb ` a finite binary base of size . this variable will always denote the best base encountered so far ( or the initial finite binary base ) . throughout the traversal , when visiting a node ` newb ` , we first check if the subtree rooted at ` newb ` should be pruned . if this is not the case , then we check if a better `` best base so far '' has been found . once the entire search space ( with pruning ) has been considered , the optimal base is in ` bestb ` . to establish a bound on the complexity of the algorithm , denote the number of different integers in by and . the algorithm has space complexity , for the depth - first search on a tree with height bounded by ( an element of will have at most elements ) . for each base considered during the traversal , we have to calculate , which incurs a cost of . to see why , consider that when extending a base by a new element , giving base , the first columns of are the same as those in ( and thus also the costs incurred by them ) . only the cost incurred by the most significant digit column of needs to be recomputed for , due to the base extension of to . performing the computation for this column , we compute a new digit for each of the different values in . finally , by lemma [ zeta ] , there are bases , and therefore the total runtime is . given that , we can conclude that the runtime is bounded by . in this section we further improve the search algorithm for an optimal base . the search algorithm is , as before , a traversal of the search space , using the same partial cost and heuristic functions as before to prune the tree . the difference is that instead of a depth - first search , we maintain a priority queue of nodes for expansion and apply a best - first , branch - and - bound search strategy . figure [ fig : alg2 ] illustrates our enhanced search algorithm . we call it ` b&b ` .
the abstract datatype ` priority_queue ` maintains bases prioritized by the value of . the operations ` popmin ( ) ` , ` push(base ) ` and ` peek ( ) ` ( which peeks at the minimal entry ) are the usual ones . the reason for boxing the text `` ` priority_queue ` '' in the figure will become apparent in the next section .

base findbase(multiset s )
/*1*/  base bestb = ; q = ;
/*2*/  while ( )
/*3*/      base b = q.popmin ( ) ;
/*4*/      foreach ( p b.extenders(s ) )
/*5*/          base newb = b.extend(p ) ;
/*6*/          if ( )
/*7*/              { q.push(newb ) ; if ( ) bestb = newb ; }
/*8*/  return bestb ;

on line /*1*/ in the figure , we initialize the variable ` bestb ` to a finite binary base of size ( the same as in figure [ fig : alg1 ] ) and initialize the queue to contain the root of the search space ( the empty base ) . as long as there are still nodes to be expanded in the queue that are potentially interesting ( line /*2*/ ) , we select ( at line /*3*/ ) the best candidate base ` b ` from the frontier of the tree under construction for further expansion . now the search tree is expanded for each of the relevant integers ( calculated at line /*4*/ ) . for each child ` newb ` of ` b ` ( line /*5*/ ) , we check if pruning at ` newb ` should occur ( line /*6*/ ) , and if not , we check if a better bound has been found ( line /*7*/ ) . finally , when the loop terminates , we have found the optimal base and return it ( line /*8*/ ) . this section introduces an abstraction on the search space , classifying bases according to their product . instead of maintaining ( during the search ) a priority queue of all bases ( nodes ) that still need to be explored , we maintain a special priority queue in which there will only ever be at most one base with any given product . so , the queue will never contain two different bases and such that . in case a second base , with the same product as one already in , is inserted into the queue , then only the base with the minimal value of is maintained on the queue . we call this type of priority queue a _ hashed priority queue _ , because it can conveniently be implemented as a hash table . the intuition comes from a study of the cost function , for which we can prove the following * property 1 * on bases : consider two bases and such that and such that . then for any extension of and of by the same sequence , . in particular , if one of or can be extended to an optimal base , then can . a direct implication is that when maintaining the frontier of the search space as a priority queue , we only need one representative of the class of bases which have the same product ( the one with the minimal value of ) . a second * property 2 * is more subtle , and holds for any cost function that satisfies the first property : assume that in the algorithm described as figure [ fig : alg2 ] we at some stage remove a base from the priority queue . this implies that if in the future we encounter any base such that , then we can be sure that , and immediately prune the search tree from . our third and final algorithm , which we call ` hashb&b ` ( best - first , branch and bound , with hashed priority queue ) , is identical to the algorithm presented in figure [ fig : alg2 ] , except that the boxed priority queue introduced at line /*1*/ is replaced by a hashed priority queue . the abstract data type ` hash_priority_queue ` maintains bases prioritized by the value of . the operations ` popmin ( ) ` and ` peek ( ) ` are as usual . the operation ` push(b_1 ) ` works as follows : ( a ) if there is no base in the queue such that , then add .
otherwise , ( b ) if then do not add .otherwise , ( c ) remove from the queue and add . [ algisgood ] + ( 1 ) the cost function satisfies * property 1 * ; and ( 2 ) the ` hashb&b ` algorithm finds an optimal base for any cost function which satisfies * property 1*. we conjecture that the other cost functions do not satisfy * property 1 * , and hence can not guarantee that the ` hashb&b ` algorithm always finds an optimal base .however , in our extensive experimentation , all bases found ( when searching for an optimal prime base ) are indeed optimal .a direct implication of the above improvements is that we can now provide a tighter bound on the complexity of the search algorithm .let us denote the number of different integers in by and .first note that in the worst case the hashed priority queue will contain elements ( one for each possible value of a base product , which is never more than ) .assuming that we use a fibonacci heap , we have a cost ( amortized ) per ` popmin ( ) ` operation and in total a cost for popping elements off the queue during the search for an optimal base .now focus on the cost of operations performed when extracting a base from the queue . denoting , has at most children ( integers which extend it ) . for each childwe have to calculate which incurs a cost of and possibly to insert it to the queue . pushingan element onto a hashed priority queue ( in all three cases ) is a constant time operation ( amortized ) , and hence the total cost for dealing with a child is . finally , consider the total number of children created during the search which corresponds to the following sum : so , in total we get .when we restrict the extenders to be prime numbers then we can further improve this bound to by reasoning about the density of the primes . a proof can be found in the appendixexperiments are performed using an extension to minisat where the only change to the tool is to plug in our optimal base algorithm .the reader is invited to experiment with the implementation via its web interface .all experiments are performed on a quad - opteron 848 at 2.2 ghz , 16 gb ram , running linux .our benchmark suite originates from 1945 pseudo - boolean evaluation instances from the years 20062009 containing a total of 74,442,661 individual pseudo - boolean constraints . after normalizing and removing constraints with coefficientswe are left with 115,891 different optimal base problems where the maximal coefficient is .we then focus on 734 pb instances where at least one optimal base problem from the instance yields a base with an element that is non - prime or greater than 17 .when solving pb instances , in all experiments , a 30 minute timeout is imposed as in the pseudo - boolean competitions . when solving an optimal base problem , a 10 minute timeout is applied .[ [ experiment-1-impact - of - optimal - bases ] ] experiment 1 ( impact of optimal bases ) : + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + the first experiment illustrates the advantage in searching for an optimal base for pseudo - boolean solving .we compare sizes and solving times when encoding w.r.t . the binary base vs. w.r.t . an optimal base ( using the ` hashb&b ` algorithm with the cost function ) .encoding w.r.t . the binary base , we solve 435 pb instances ( within the time limit ) with an average time of 146 seconds and average cnf size of 1.2 million clauses . 
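a compact python sketch of `hashb&b` under the same assumptions as the `dfshp` sketch above (it reuses `partial_cost`, `full_cost`, and the non-redundancy convention from there); the hashed priority queue is realized here as a dict plus lazy deletion on a binary heap rather than a fibonacci heap:

```python
import heapq
from math import prod

def hash_bnb(s):
    """best-first branch and bound with a hashed priority queue: at most
    one live entry per base product, and products already expanded are
    pruned outright (property 2)."""
    m = max(s)
    best_base = (2,) * max(m.bit_length() - 1, 1)
    best_cost = full_cost(best_base, s)

    heap = [(0, ())]          # entries (lower bound, base); root = empty base
    queued = {1: 0}           # product -> lower bound of its live representative
    expanded = set()          # products already popped (property 2)

    while heap and heap[0][0] < best_cost:        # peek(): nothing can improve
        bound, base = heapq.heappop(heap)         # popmin()
        pr = prod(base)
        if pr in expanded or queued.get(pr) != bound:
            continue                              # stale or superseded entry
        expanded.add(pr)
        for p in range(2, m // pr + 1):           # b.extenders(s)
            newb, newpr = base + (p,), pr * p
            nb = partial_cost(newb, s)            # lower bound, h = 0
            if nb >= best_cost or newpr in expanded:
                continue                          # prune
            if queued.get(newpr, float("inf")) <= nb:
                continue                          # case (b): keep queued one
            queued[newpr] = nb                    # cases (a)/(c) of push()
            heapq.heappush(heap, (nb, newb))
            c = full_cost(newb, s)
            if c < best_cost:
                best_base, best_cost = newb, c
    return best_base, best_cost
```

with a fibonacci heap, `popmin()` matches the amortized bound used in the complexity analysis above; the binary heap used in this sketch only adds a logarithmic factor.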
using an optimal base instead, we solve 445 instances with an average time of 108 seconds and an average cnf size of 0.8 million clauses.

experiment 2 (base search time): here we focus on the search time for an optimal base in six configurations using the cost function . configurations `m17`, `dfshp17`, and `b&b17` are, respectively, the minisat implementation, our `dfshp` algorithm, and our `b&b` algorithm, all three searching for an optimal base with prime elements up to 17. configurations `hashb&b1,000,000`, `hashb&b10,000`, and `hashb&b17` are our `hashb&b` algorithm searching for a base with bounds of 1,000,000, 10,000, and 17, respectively. results are summarized in fig. [fig:results1], which is obtained as follows. we cluster the optimal base problems according to the values , where is the maximal coefficient in a problem. then, for each cluster, we take the average runtime for the problems in the cluster. the value is chosen to minimize the standard deviation from the averages (over all clusters). these are the points on the graphs. configuration `m17` times out on 28 problems. for `dfshp17`, the maximal search time is 200 seconds. configuration `b&b17` times out on 1 problem. the `hashb&b` configurations have maximal runtimes of 350 seconds, 14 seconds, and 0.16 seconds for the bounds 1,000,000, 10,000, and 17, respectively.

[fig:results1 caption: -axis with 50k ms on the left and 8k ms on the right; configuration `dfshp17` (yellow) is lowest on the left and highest on the right, setting the reference point to compare the two graphs.]

fig. [fig:results1] shows that: (left) even with primes up to 1,000,000, `hashb&b` is faster than the algorithm from minisat with its limit of 17; and (right) even with primes up to 10,000, the search time using `hashb&b` is essentially negligible.

experiment 3 (impact on pb solving): fig. [fig:results2] illustrates the influence of improved base search on sat solving for pb constraints. both graphs depict the number of instances solved (the -axis) within a time limit (the -axis).
on the left, total solving time (with base search) is shown, and on the right, sat solving time only. [fig:results2] both graphs consider the instances of interest and compare sat solving times with bases found using five configurations. the first is minisat configuration `m17`, the second is with respect to the binary base, and the third to fifth are `hashb&b` searching for bases with the cost functions , , and , respectively. the average total/solve run-times (in sec) are 150/140, 146/146, 122/121, 116/115 and 108/107 (left to right). the total numbers of instances solved are 431, 435, 442, 442 and 445 (left to right). the average cnf sizes (in millions of clauses) for the entire test set / the set where all algorithms solved / the set where no algorithm solved are 7.7/1.0/18, 9.5/1.2/23, 8.4/1.1/20, 7.2/0.8/17 and 7.2/0.8/17 (left to right). the graphs of fig. [fig:results2] and the average solving times clearly show: *(1)* sat solving time dominates base finding time, *(2)* minisat is outperformed by the trivial binary base, *(3)* total solving times with our algorithms are faster than with the binary base, and *(4)* the most specific cost function (comparator cost) outperforms the other cost functions both in solving time and size. finally, note that sum of digits, with its nice mathematical properties, simplicity, and application independence, solves as many instances as cost carry.

experiment 4 (impact of high prime factors): this experiment is about the effects of restricting the maximal prime value in a base (i.e. the value of minisat). an analysis of our benchmark suite indicates that coefficients with small prime factors are overrepresented. to introduce instances where coefficients have larger prime factors, we select 43 instances from the suite and multiply their coefficients to introduce the prime factor 31 raised to the power . we also introduce a slack variable to avoid gcd-based simplification. this gives us a collection of 258 new instances. we used the `b&b` algorithm with the cost function , applying the limit (as in minisat) and . results indicate that for , both cnf size and sat-solving time are independent of the factor introduced for . however, for , both measures increase as the power increases. results on cnf sizes are reported in fig. [fig:resultsgencactus], which plots, for 4 different settings, the number of instances encoded (-axis) within a cnf with that many clauses (-axis). recent work encodes pseudo-boolean constraints via "totalizers" similar to sorting networks, determined by the representation of the coefficients in an underlying base. here the authors choose the standard base 2 representation of numbers. it is straightforward to generalize their approach to an arbitrary mixed base, and our algorithm is directly applicable.
in the author considers the cost function and analyzes the size of representing the natural numbers up to with ( a particular class of ) mixed radix bases .our lemma [ lem : primes ] may lead to a contribution in that context .it has been recognized now for some years that decomposing the coefficients in a pseudo - boolean constraint with respect to a mixed radix base can lead to smaller sat encodings .however , it remained an open problem to determine if it is feasible to find such an optimal base for constraints with large coefficients . in lack of a better solution, the implementation in the minisat tool applies a brute force search considering prime base elements less than 17 . to close this open problem ,we first formalize the optimal base problem and then significantly improve the search algorithm currently applied in minisat .our algorithm scales and easily finds optimal bases with elements up to 1,000,000 .we also illustrate that , for the measure of optimality applied in minisat , one must consider also non - prime base elements .however , choosing the more simple measure , it is sufficient to restrict the search to prime bases . with the implementation of our search algorithm it is possible , for the first time , to study the influence of basing sat encodings on optimal bases .we show that for a wide range of benchmarks , minisat does actually find an optimal base consisting of elements less than 17 .we also show that many pseudo - boolean instances have optimal bases with larger elements and that this does influence the subsequent cnf sizes and sat solving times , especially when coefficients contain larger prime factors .we thank daniel berend and carmel domshlak for useful discussions .10 o. bailleux , y. boufkhad , and o. roussel .a translation of pseudo boolean constraints to sat ., 2(1 - 4):191200 , 2006 .o. bailleux , y. boufkhad , and o. roussel .new encodings of pseudo - boolean constraints into cnf . in _ proc . theory and applications of satisfiability testing ( sat 09 ) _ , pages 181194 , 2009 .p. barth . .kluwer academic publishers , norwell , ma , usa , 1996 .k. e. batcher .sorting networks and their applications . in _afips spring joint computing conference _ , volume 32 of _ afips conference proceedings _ , pages 307314 .thomson book company , washington d.c ., 1968 .r. e. bixby , e. a. boyd , and r. r. indovina . : a test set of mixed integer programming problems ., 25:16 , 1992 .r. e. bryant , s. k. lahiri , and s. a. seshia .deciding clu logic formulas via boolean and pseudo - boolean encodings .in _ proc .workshop on constraints in formal verification ( cfv 02 ) _ , 2002 .b. chor , p. lemke , and z. mador . on the number of ordered factorizations of natural numbers . , 214(1 - 3):123133 , 2000 .n. en and n. srensson .translating pseudo - boolean constraints into sat ., 2(1 - 4):126 , 2006 .d. knuth . .addison - wesley , 1973 .v. m. manquinho and o. roussel .the first evaluation of pseudo - boolean solvers ( pb05 ) ., 2(1 - 4):103143 , 2006 .n. sidorov .sum - of - digits function for certain nonstationary bases ., 96(5):36093615 , 1999 .let and let be an optimal base for with .let be the base obtained by removing the last element from . we show that and that .the claim then follows .assume falsely that for , .then we get the contradiction . 
from the definition of and the above contradiction we deduce that .[ pnib ] let be a base with and let .then , the unique representation of in is obtained as such that : ( 1 ) , ( 2 ) for , and ( 3 ) .[ pprimesonly ] let and let and be bases which are identical except that at some position , two consecutive base elements in are replaced in by their multiplication .formally : for , , and for .then , the sum of the digits in is greater or equal to the sum of the digits in .let be a base with elements of the form where the element at position is non - prime and .so , taking , we are in the setting of proposition [ pprimesonly ] .the result follows : consider the notation of definition [ cost2 ] .let , with and . denote the sequences ( sums ) and ( carries ) defined by : for , , and for .we denote also .[ psubbase2 ] let , and a base .then for every such that there exists let be any monotonically increasing function such that for , .let .then given by : let and be as defined above and , bases such that , , and . for every base we can see that ( the size of a finite set is a natural number ) .therefore it is sufficient to prove that . from proposition [ psubbase1 ]we get that . from the definition of and of we see that for , denote . + let .by proposition [ psubbase2 ] there exists such that and therefore .this implies that in total we have the proof that is similar noting that .( of lemma [ lem : h ] ) + if , then . by proposition [ lhc ] both definitions give a heuristic cost function .the proof for the case of and is of the same structure as proposition [ lhc ] .the case of is the most complicated one .( * property 1 * ) [ dbme ] a heuristic cost function is called _ base mul equivalent _ if for every and for every bases , such that and the following holds : [ pbestrep ] let and let be a base mul equivalent heuristic cost function . for every base extracted from the queue and for every base such that then .assume that the claim is true for every and assume that during the run of the algorithm we extract a base from the queue with .assume falsely that there exists a base such that and .this means that was not evaluated yet ( otherwise ) .therefore the father of ( in the tree of bases ) was never extracted from the queue .let be the closest ancestor of that was extracted . denote by the child of which is also the ancestor of ( potentially itself ) .so , was evaluated .observe that and are unique because the search space ( of bases ) is a tree and .so , and that is a contradiction to the existence of . for ,first notice that .this follows directly from the definition of admissible heuristics ( for the case ) .hence , . from propositions [ pnib ] and [ pbaseextenstion ] , we have that for , we define .so by the complete induction assumption for we get that . by the fact that we can deduce that . by the definition of admissible heuristics for : therefore , . combining it with the fact that we have that .finally from the inductive assumption we get that .\2 ) let a base mul equivalent heuristic cost function and .we denote by the best base found by the algorithm at each point of the run .let be the first base extracted from the queue such that .this is the condition that terminates the main loop of the algorithm , so we need to prove that is the optimal base for .assume falsely that there exists an optimal base such that .let be the nearest ancestor of such that the its base equivalence class representative was extracted from queue ( otherwise ) . 
by proposition [ pbestrep ]we know that and by * property 1 * that for any base c .in particular for the case where . by choice of get that the equivalence class representative of was not extracted ( and it is the same class of ) .therefore , , which is a contradiction .we use the prime number theorem which states that the density of the primes near is .the number of prime bases evaluated in the worst case scenario is : but and so the total number of bases evaluated during a run of the algorithm is bounded by and the overall complexity is .let with .it is standard to define and as natural numbers such that and . in our proofswe note that is the maximal number such that there exists with . by definition if , and .then , , and .now , because otherwise it would be a contradiction to the maximality of . if then .assume that .then and from that we deduce ( otherwise it would be a contradiction to the maximality of ) . on the other hand , . from the definition of modulus we get that and so , and therefore . in totalwe get that and because and are natural numbers we get the equality .let , , , and . by definitionwe can see that , , and . therefore and this is true by proposition [ pdiv ] .assume the assumption is true for every base such that .let such that .define the base with .we can see that for , , and that for , . from thiswe can see that for , and also that . back to the main claim by the inductionwe know that . therefore . + by proposition [ pdivmod ] we can see that . andtherefore we get that .( of proposition [ psubbase1 ] ) + first we notice that because then for we get that and therefore . by proposition [ pnib ]we see that . by definition of and by proposition [ pnib ] and the definition of we prove this proposition by induction on .( of proposition [ psubbase2 ] ) + let and be as defined above .let such that .if there exists an such that then we get that and in any case ( or ) .otherwise let be the maximal index such that .if then .consider the case when then such that and so by dividing both sides by we get that , which means that . 1 . for by definition which means that and therefore . for we get that and which again means that ) .we can see that . by proposition [ pnib ]we know that + + + + therefore + because we deduce that + and in total we get that + ( of proposition [ pbaseextenstion ] ) + let and , bases such that .let ( , otherwise there is no such index ) . for by proposition [ pnib ] we get that .if then by proposition [ pnib ] and definition of we can deduce .
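as a companion to the proofs above, a randomized sanity check (not a proof) of proposition [pprimesonly] — merging two consecutive base elements into their product never decreases the sum of digits; a minimal self-contained sketch:

```python
import random

def sum_of_digits(n, base):
    t = 0
    for b in base:
        n, d = divmod(n, b)
        t += d
    return t + n

random.seed(0)
for _ in range(10_000):
    base = [random.randint(2, 9) for _ in range(4)]
    i = random.randrange(3)                     # merge positions i and i+1
    merged = base[:i] + [base[i] * base[i + 1]] + base[i + 2:]
    n = random.randrange(1_000_000)
    assert sum_of_digits(n, merged) >= sum_of_digits(n, base)
print("proposition [pprimesonly] holds on all sampled cases")
```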
this paper formalizes the _optimal base problem_, presents an algorithm to solve it, and describes its application to the encoding of pseudo-boolean constraints to sat. we demonstrate the impact of integrating our algorithm within the pseudo-boolean constraint solver minisat. experimentation indicates that our algorithm scales to bases involving numbers up to 1,000,000, improving on the restriction in minisat to prime numbers up to 17. we show that, while primes up to 17 suffice for many examples, encoding with respect to an optimal base reduces cnf sizes and improves subsequent sat solving times on many instances.
consider a trader who wants to buy or sell a european option on asset with maturity and payoff .the trader wants to hedge this position , but the underlying asset is illiquid .however , some liquid proxies of are available in the marketplace .first , there is a financial index ( or simply an _ index _ ) ( such as e.g. s&p500 or cdx.na ) whose market price is correlated with . in addition , there is another correlated asset which has a liquidly traded option with a payoff similar to that of , and with the same maturity .the market price of is also known .our trader realizes that hedging -derivative with the index alone may not be sufficient for a number of reasons .first , she might be faced with a situation where correlation coefficients ( which for simplicity are assumed to be constant ) are such that . in this casewe would intuitively expect a better hedge produced by using or as the hedging instruments .second , if we bear in mind a stochastic volatility - type dynamics for , the stochastic volatility process may be `` unspanned '' , i.e. the volatility risk of the option may not be traded away by hedging in option s underlying .if that is the case , one might want to hedge the unspanned stochastic volatility by trading in a `` similar '' option with on the proxy asset .so our trader is contemplating a hedging strategy that would use both and . to capture an `` unspanned '' stochastic volatility, the trader wants to use a derivative written on rather than asset directly . as transaction costs are usually substantially higher for options than for underlyings, our trader sets up a static hedge in and a dynamic hedge in .the static hedging strategy amounts to selling units of options at time . an _ optimal _ hedging strategy would be composed of a pair where is the optimal static hedge , and ( where ) is an optimal dynamic hedging strategy in index .the pair should be obtained using a proper model .the same model should produce the highest / lowest price for which the trader should agree to buy or sell the -option . in this paperwe develop a model that formalizes the above scenario by supplementing it with the specific dynamics for asset prices and , and providing criteria of optimality for pricing options . for the former ,we use a standard correlated log - normal dynamics . for the latter ,we employ the utility indifference framework with an exponential utility , pioneered by and others , see e.g. for a review . as will be shown below , this results in a tractable formulations with analytical ( in quadratures ) expressions for optimal hedges and option prices . as the above setting of pricing and hedging an illiquid option position using a pair of liquid proxies ( e.g. a stock and an option on a different underlying ) is quite general, one could visualize its potential applications for various asset classes such as equities , commodities , fx etc .for definiteness , in this paper we concentrate on a problem of practical interest for counterparty credit risk management .namely , we consider the problem of pricing and hedging an exposure to a counterparty with an illiquid debt , and in the absence of liquidly traded cds referencing this counterparty .for such situation , no market - implied spreads are available for the counterparty in question . instead, one should rely on a model to come up with _ theoretical _ credit spreads for the counterparty . 
to this end, we use a version of the classical merton equity - credit model which is set up in a multi - name setting , and under the physical ( i.e. `` real '' , not `` risk - neutral '' ) measure .most importantly , unlike the classical merton s model , we do not intend to use firm s equity to hedge firm s debt . instead ,illiquid debt is hedged with a proxy liquid debt , and a proxy credit index . in what follows , to differentiate our framework from that of the classical merton model, we will refer to it as the hedged incomplete - market merton s dynamics , or himd for short .our model unifies three strands in the literature on indifference pricing .the fist strand deals with hedging an option with a proxy asset , as developed in , and others . in this setting , one typically hedges an option on an illiquid underlying with a liquid proxy asset. the second strand develops generalizations of the classical merton credit - equity model to an incomplete market setting .typically , this is achieved by de - correlating asset value and equity price at the level of a single firm , see e.g. .as long as we do not use firms equity to hedge firm s debt but instead use a liquid proxy bond as a hedge , such modification of the merton model is not needed in our setting .the third strand is presented by who develop a static - dynamic indifference hedging approach for barrier options .ilhan and sircar considers hedging a barrier option under stochastic volatility using static hedges in vanilla options on the same underlying plus a dynamic hedge in the underlying .this results in a two - dimensional hamilton - jacobi - bellmann ( hjb ) equation .our construction is similar but our hedges are a proxy asset and a proxy option , while volatility is taken constant for simplicity .borrowing from an approach of for a similar ( but not identical ) setting , we now show how the method of indifference utility pricing can be generalized to incorporate our scenario of a mixed dynamic - static hedge . to this end , let be the final payoff of the portfolio consisting of our option positions , i.e. as long as both european options pay at the same maturity , we can view this as the payoff of a combined ( `` static hedge portfolio '' ) option , which involves payoffs and of both derivatives and .such option may be priced using the standard utility indifference principle .the latter states that the derivative price is such that the investor should be indifferent to the choice between two investment strategies . with the first strategy, the investor adds the derivatives to her portfolio of bonds and stocks ( or indices in the setting of the merton s optimal investment problem . ] ) , thus taking from , and adding to her initial cash . with the second strategy, the investor stays with the optimal portfolio containing bonds and the stocks / indices .the value of each investment is measured in terms of the _ value function _ defined as the conditional expectation of utility of the terminal wealth optimized over trading strategies . in this work, we use an exponential utility function where is a risk - aversion parameter . in our case, the terminal wealth is given by the following expression : with be the total wealth at time in bonds and index . in turn, the value function reads \ ] ] where is a set of admissible trading strategies that require holding of initial cash .the expectation in the eq.([value_function ] ) is taken under the `` real - world '' measure . 
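for reference, the standard displays behind these definitions read as follows (a reconstruction, since the original displays were lost: the exponential utility with risk aversion γ, the value function optimized over admissible strategies, and — quoted again just below — the classical merton value function, with λ the sharpe ratio of the index and μ, σ its drift and volatility):

```latex
\begin{gather*}
U(x) = -\,e^{-\gamma x}, \qquad \gamma > 0, \\[4pt]
V(t,x) = \sup_{\pi \in \mathcal{A}(x)} \mathbb{E}\big[\, U(W_T) \mid W_t = x \,\big], \\[4pt]
v(t,x) = -\exp\!\Big(-\gamma\, x\, e^{r(T-t)} - \tfrac{1}{2}\,\lambda^{2}\,(T-t)\Big),
\qquad \lambda = \frac{\mu - r}{\sigma}.
\end{gather*}
```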
for a portfolio made exclusively of stocks / indices and bonds ,the value function for the exponential utility is known from the classical merton s work : where , is the risk free interest rate assumed to be constant , and is the stock sharpe ratio . in our setting , in addition to bonds and stocks / indices , we want to long option and short units of option to statically hedge our position , or , equivalently , buy the option .the value function in our problem of optimal investment in bonds , index and the composite option has the following form : where is a cash equivalent of the total wealth in bonds and the index at time .we represent it in a form similar to eq.([merton_vv ] ) : where function will be calculated in the next sections .the indifference pricing equation reads plugging this in eq.([merton_vv ] ) and eq.([vv_him ] ) and re - arranging terms , we obtain the highest price of the -derivative is given by choosing the optimal static hedge given by the number of the -derivatives , i.e. where we temporarily introduced subscript in to emphasize that the value function depends on through a terminal condition .to use eq.([indif_price_2 ] ) and thus be able to compute both the option price and optimal static hedge , we need to find the `` reduced '' value function . to this end, we first derive the hamilton - jacobi - bellman ( hjb ) equation for our model , and then obtain its analytical ( asymptotic ) solution .let be the dynamic investment strategy in the index at time starting with the initial cash , and be the markov generator of price dynamics corresponding to strategy .both the optimal dynamic strategy and the value function should be obtained as a solution of the hjb equation we assume that all state variables follow a geometric brownian motion process with constant drifts and volatilities if our total wealth at time is and we invest amount of this wealth into the index and the rest in a risk - free bond , the stochastic differential equation for is obtained as follows : then reads where is defined on the domain ] . the initial condition for this equationis obtained from eq.([valuefunction ] ) . inwhat follows , we choose a specific payoff of the form eq.([payoff ] ) with that corresponds to a portfolio of bonds of firms and with notionals within the merton credit - equity model .then the terminal condition for reads \ ] ] where and .we were not able to find a closed form solution of eq.([gpdews ] ) with the initial condition eq.([initcondws ] ) .on the other hand , a numerical solution of this equation is expensive , especially when it should be used many times for calibration to market data .therefore , we proceed with aasymptotic solutions of eq.([gpdews ] ) .we suggest two approaches to construct asymptotic solutions .as we want to statically hedge option with options on another underlying , we look for an asset that is strongly correlated with asset .further , if we have a `` similar '' option on asset ( i.e. similar maturity , type , strike , etc . ) , we expect that such option provides a good static hedge for our option . , but . we may want to hedge an illiquid option with strike ( say , deep otm ) with a liquid option on the same underlying but with a different strike . under this setup, we have , i.e. a prefect correlation case . 
] therefore , a natural assumption would be to consider to be a small parameter under our setup .utilizing this idea we represent the solution of eq.([gpdews ] ) as a formal perturbative expansion in powers of : where is the small parameter to be precisely defined in the next section . as we shown below , eq.([gpdews ] ) can be solved analytically ( in quadratures ) to any order of this expansion , thus significantly reducing the computation time .we start with a change of variables defined as follows : simultaneously , we change the dependent variable as follows : using eq.([gpdews ] ) , eq.([uv ] ) and eq.([changeoffun1 ] ) , we obtain the following pde for function : further we will show that is a slow ( `` adiabatic '' ) variable of our asymptotic method , while becomes a `` fast '' variable . in what follows we need an inverse of eq.([uv ] ) at : using this in eq.([initcondws ] ) , we obtain the initial condition in variables for the function : \ ] ] where for any real .recall that a correlation matrix of assets can be represented as a gram matrix with matrix elements where are unit vectors on a dimensional hyper - sphere . using the 3d geometry , it is easy to establish the following _ cosine law _ for correlations between three assets : with being an angle between and its projection on the plane spanned by . as discussed e.g. by ,three variables are independent , but are not .therefore , one of them , e.g. , has to be found using eq.([cosine ] ) given .further we define as and also define the following constants using eq.([cosine ] ) and definitions in eq.([eps ] ) , eq.([theta_12 ] ) , coefficients at and in eq.([phi_uv ] ) are evaluated as follows : accordingly using this notation eq.([phi_uv ] ) takes the form in the limit this equation does not contain any derivatives wrt , therefore enters the equation only as a parameter ( since is a function of ) .we call this limit the _ adiabatic limit _ in a sense that will be explained below .it should be noted that our expansion in powers of can diverge if is very small .we exclude such situations on the `` financial '' grounds assuming that all pair - wise correlations in the triplet are reasonably high ( of the order of 0.4 or higher in practice ) , for our hedging set - up to make sense in the first place .thus , parameter is treated as is always and positive , parameter could be both positive and negative for typical values of correlations .for example , if , then , while for it is .the cosine law can also be used to find proper values of correlation parameters in the limit .to this end , we first use eq.([cosine ] ) to convert the estimated triplet into a triplet of independent variables , and then take the limit while keeping and constant . ] .it turns out that the last equation of the previous section could be further simplified . introducing new independent variables we can transform eq.([phi_uv_2 ] ) into the following equation it is seen that in new variables the mixed derivative drops from from the equation , as so does .however , further let us formally introduce a multiplier in the term which transforms eq.([phi_vv_2 ] ) into let us also formally assume that is small under certain conditions .the idea of this trick is as follows .one way to construct an asymptotic solution of the eq.([phi_vv_2 ] ) is to assume that all derivatives are of the order , and then estimate all the coefficients . 
if one manages to find a coefficient which is , then it is possible to build an asymptotic expansion using that coefficient ( or itself ) as a small parameter .if , however , all the coefficients in the considered pde are of order we need to check if perhaps some of the derivatives in the eq.([phi_vv_2 ] ) are small , e.g. . if this is the case , in order to apply standard asymptotic methods we formally have to add a small parameter as a multiplier to the derivative which is , , where make an asymptotic expansion on , solve the obtained equations in every order on , and at the end in the final solution put .that is exactly the way we want to proceed with .this means that instead of the eq.([expan ] ) , we now have the following expansion : to find conditions when could be small as compared with the other terms in the eq.([phi_vv_2 ] ) , we use an inverse map at this is a regular map . ] and rewrite the payoff function eq.([initcondws ] ) in the form , \\\omega_1 & = \bar{v}\frac{\theta_1}{\sqrt{\theta_2 } } , \qquad \omega_2 = \bar{v } \frac{\theta_3}{\beta \sqrt{\theta _ 2 } } , \qquad \zeta = \frac{1}{\sqrt{\theta_2 } } \nonumber\end{aligned}\ ] ] suppose that or .differentiating the payoff twice by and twice by and computing the ratio of the first and second terms in the rhs of the eq.([phi_vv_2 ] ) , one can see that in the limit this ratio becomes .typical values give rise to , therefore the second term is small as compared with the first one , and is a good small parameter .this , however changes if is small or / and is close to 1 , and then . still in this casewe have which can be used as a small parameter .therefore , our approach is as follows : 1 . if we use eq.([phi_vv_2 ] ) and find its asymptotic solutions using eq.([expan1 ] )this is better than using eq.([phi_uv_2 ] ) because first , is typically smaller then , and second , the term in our first method eq.([expan ] ) appears only in the second order of approximation while in the second method it is taken into account already in the first order on .if , however , , then is not anymore a small parameter , therefore we use eq.([phi_uv_2 ] ) and solve it asymptotically using eq.([expan ] ) . in general , this argument can not be applied if or because then both derivatives of the payoff vanish .however , the above argument is intended to provide an intuition as to why could be much smaller that the other terms in the rhs of eq.([phi_vv_2 ] ) .this intuition can be verified numerically , and our test examples clearly demonstrate that smallness of often takes place .below we discuss under which conditions this could occur . note that at the first glance , the described method looks similar to the quasi - classical approximation in quantum mechanics ( ) .the similarity comes from the observation that transformation eq.([uchange ] ) is singular in which is similar to the quasi - classical limit . if we would construct an asymptotic expansion on we would expand the rhs of the eq.([phi_vv_2 ] ) on , but not the payoff function .after getting the solution of eq.([phi_vv_2 ] ) in zero - order approximation on as a function of , we would apply the inverse transformation which is non - singular .therefore , the final result would not contain any singularity .since is defined via and , it could seem that , and we face a `` quasi - classical '' situation .however , as explained above , the independent parameters are and . 
therefore , by definition does nt depend on , or on .thus , our two methods actually correspond to different assumptions .the first one utilizes a strong correlation between assets z and y. the second assumes a strong correlation between index and asset while at the same time the vector of correlation in 3d space is not collinear to the vector of correlation . by financial sensethis means ( see eq.([cosine ] ) ) the following . 1 .either is about 1 and , therefore , . in other words ,index strongly correlates to asset z , so is almost , therefore correlation of y and z ( ) is close to correlation of y and x ( ) . that , in turn , means that asset z can be dynamically hedged with , and extra static hedge with y does nt bring much valuein contrast , under the former assumption static hedge plays an essential role . which means that . for instance , .this is an interesting case , since it differs from two previously considered assumptions on high value of either or .indeed , all correlations could be relatively moderate while providing a smallness of . in what follows , we describe in detail the asymptotic solutions for zero and first order approximations in , and outline a generalization of our approach to an arbitrary order in .asymptotic solutions in are constructed in a very similar way and are given in appendix [ a ] . also to make our notation lighter in the next section wewill us instead of since that should not bring any confusion .in the zero order approximation we set , so that eq.([phi_vv_2 ] ) does not contain derivatives wrt : where .therefore , dependence of the solution on is determined by the terminal condition . in other words ,our system changes along variable , but it remains static ( i.e. of the order of slow ) in variable .using analogy with physics , we call this limit the _ adiabatic limit_. the last equation can be solved by a change of dependent variable ( closely related to the hopf - cole transform , see e.g. ) : ^{1/(1- \bar{\rho}_{xz}^2)}\ ] ] which reduces eq.([zeroeq ] ) to the heat equation subject to the initial condition .the latter can be obtained from the eq.([initconduv1 ] ) if one replaces with the `` correlation - adjusted '' risk aversion parameter .it can also be written as a piece - wise analytical function having a different form in different intervals of -variable . if , we have , & u < - \omega_1 \\ \exp\left [ - \bar{\gamma } \left(k_z - \alpha k_y e^ { \zeta \beta \sigma_y\left(\omega_2 + u\right)}\right ) \right ] , & - \omega_1 \le u < -\omega_2 \\ \exp\left [ - \bar{\gamma } \left(k_z - \alpha k_y\right ) \right ] , & u \ge -\omega_2 , \end{cases}\end{aligned}\ ] ] while for we have , & u < -\omega_2\\ \exp \left [ - \bar{\gamma } \left(k_z e^{\zeta \sigma_z\left(\omega_1 + u \right ) } - \alpha k_y \right ) \right ] , & -\omega_2 \le u < - \omega_1 \\ \exp\left [ - \bar{\gamma } \left(k_z - \alpha k_y \right ) \right ] , & u \ge - \omega_1 \\ \end{cases}\end{aligned}\ ] ] using the well - known expression for the green s function of our heat equation ( see e.g. ) , the solution of eq.([linpde2 ] ) is then the explicit zero - order solution thus reads ^{1/(1-\bar{\rho}_{xz}^2)}\ ] ] note that eq.([phi_0_sol ] ) provides the general zero - order solution for the hjb equation with arbitrary initial conditions at . 
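the green's-function representation invoked here is the gaussian (heat-kernel) convolution; this display is reconstructed to match the integrals written out explicitly in appendix [a1]:

```latex
w(u,\tau) = \frac{1}{\sqrt{2\pi\tau}}
\int_{-\infty}^{\infty} e^{-\frac{(u-u')^{2}}{2\tau}}\; w(u',0)\, du' .
```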
for our specific initial conditions eq.([init_cond_quand2 ] ) , the solution is readily obtained in closed form in terms of the error ( or normal cdf ) function ( see appendix [ a1 ] ) .however , for numerical efficiency it might be better to use another method which is based on a simple observations that the expression in square brackets in the eq.([phi_0_sol ] ) is just a gauss transform of the payoff .this transform can be efficiently computed using a fast gauss transform algorithm which in our case is with being the number of grid points in space . for the first correction in eq.([phi_vv_2 ] ) we obtain the following pde with .this is an inhomogeneous _ linear _ pde with variable coefficients .as long as our zero - order solution of eq.([zeroeq ] ) already satisfies the initial condition , this equation has to be solved subject to the zero initial condition .this considerably simplifies the further construction .we look for a solution to eq .( [ first2 ] ) in the form ^{\bar{\rho}_{xz}^2 } h(u , v , \tau)\ ] ] this gives rise to an inhomogeneous heat equation for function subject to zero initial condition thus , using the duhamel s principle ( ) we obtain there exists a closed form approximation of the internal integral ( see appendix [ b2 ] ) .the second order equation has the same form as the eq.([first2 ] ) where \ ] ] as this equation has to be solved also subject to zero initial conditions , the solution is obtained in the same way as above : this shows that in higher order approximations in both the type of the equation and boundary conditions stay the same .therefore , the solution to the -th order approximation reads where can be expressed via already found solutions of order and their derivatives on and .the exact representation for follows combinatorial rules and reads where is a coefficient at in the expansion of , \\\beta_1 & = \sum_{i=1}^\infty \mu^i \phi_{i , u}/\phi_{0,u } , \qquad \beta_2 = \sum_{i=1}^\infty \mu^i \phi_{i}/\phi_{0}\end{aligned}\ ] ] in series on .this could be easily determined using any symbolic software , e.g. mathematica .for instance reads the explicit representation of the solutions of an arbitrary order in quadratures is important because , per our definition of , convergence of eq.([expan ] ) is expected to be relatively slow .indeed , if one wants the final precision to be about at ( ) , the number of important terms in expansion eq.([expan1 ] ) could be rawly calculated as , which gives , while at precision this yields .note that all integrals with do not admit a closed form representation and have to be computed numerically .again , this could be done in an efficient manner using the fast gauss transform .to verify quality of our asymptotic method we compare two sets of results .one is obtained using zero and first order approximations ( being computed via a series representation and functions given in appendix [ a1 ] in eq.([jdef ] ) , eq.([omega ] ) and eq.([omega1 ] ) , or using the fast gaussian transform ) .our tests showed that the number of terms in the double sum that should be kept is small , namely truncating the upper limit in from infinity to produces nearly identical results .therefore , the total complexity of calculation is about 45 computations of and functions which is very fast . a typical time required for this at a standard pc withthe cpu frequency 2.3 ghz ranges from 0.68 sec ( test 1 ) to 0.35 sec ( test 2 ) ( see below ) .the other test is performed using a numerical solution of eq.([phi_vv_2 ] ) . 
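a minimal numerical sketch of this gauss transform (brute force, O(N²) in the number of grid points; the fast gauss transform mentioned above brings this to near-linear cost — the grid names and the uniform-grid assumption are ours):

```python
import numpy as np

def gauss_transform(u, f0, tau):
    """(G_tau * f0)(u): convolution of the transformed payoff f0 with the
    heat kernel of variance tau, evaluated on the uniform grid u."""
    du = u[1] - u[0]
    diff = u[:, None] - u[None, :]
    kernel = np.exp(-diff**2 / (2.0 * tau)) / np.sqrt(2.0 * np.pi * tau)
    return kernel @ f0 * du

# zero-order value function, up to the outer exponent 1/(1 - rho_xz**2):
# phi0 = gauss_transform(u, f0, tau) ** (1.0 / (1.0 - rho_xz**2))
```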
in doing so, we use an implicit finite difference scheme built in the spirit of . after the original non-linear equation is discretized to obtain the value function at the next time level, we need to solve a 2d algebraic system of equations, each of which contains a non-linear term. this can be done, e.g., by applying a fixed point iterative method ( ); a schematic sketch is given below. in other words, at the first iteration, as an initial guess we plug into the non-linear term the solution obtained at the previous time level. this reduces the equation to a linear one, since the non-linear term is approximated explicitly at this iteration. next, we solve the resulting 2d system of equations with a block-band matrix using a 2d lu factorization. at the second iteration, the solution obtained in this way is substituted into the non-linear term again, so again it is approximated explicitly. then the new system of linear equations is solved, and a new approximation of the solution of the original non-linear equation is obtained. we continue this process until it converges. the number of iterations needed for numerical convergence depends on the gradients of the value function, which are considerably influenced by the value of . for small values of (about 0.03) we need about 1-2 iterations, while for , 5-7 iterations might be necessary. for higher values of , the fixed point iteration scheme could even diverge, so another method has to be used instead. it is also important to note that we solve the non-linear equation using the dependent variable , rather than , to reduce the relative gradients of the solution. note that for _linear_ 2d parabolic equations with mixed derivatives, more efficient splitting schemes exist, see e.g. . in principle, such schemes could be adapted to solve eq.([phi_vv_2]). though eq.([phi_vv_2]) is a _non-linear_ equation, the presence of the non-linear term does not change its type (it is still a parabolic equation), and, second, does not affect the stability of the scheme if we approximate it implicitly and solve the resulting non-linear algebraic equations. a more detailed description of this modification of the splitting scheme of hout and welfert will be presented elsewhere. implementation of the numerical algorithm is done similarly to . in typical tests we use a non-uniform finite difference grid in and of size 50x50 nodes, where . the number of time steps depends on maturity, because we use a fixed time step of 0.1 yrs. the typical computational time for yrs on the same pc at = 0.03 is 17 sec. in fig [testuv3d] a 3d plot of the value function is presented for the initial parameters marked in table [tab] as test 1. we also use . it is seen that quickly goes to a constant outside of a narrow region around and .

table [tab] (parameters of tests 1 and 2):
test 1: 0.04, 0.25, 0.02, 0.8, 110, 0.05, 0.2, 0.4, 100, 90, 0.03, 0.3, 0.3, 100
test 2: 0.04, 0.25, 0.02, 0.8, 110, 0.05, 0.3, 0.3, 50, 90, 0.03, 0.3, 0.2, 100

in fig [comp1] the same quantities are computed for = 1.22 and yrs. here the first plot presents a comparison of the numerical solution with the zero and 0+1 approximations. the second plot compares the zero and first order approximations. it is seen that the first approximation makes a small correction to the zero-order one in the region close to .
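the fixed point (picard) iteration referenced above, in schematic form; `solve_linear` (standing for the prefactored block-band lu solve) and `nonlinear_rhs` (the non-linear term evaluated at the lagged iterate) are hypothetical names, not part of the actual implementation:

```python
import numpy as np

def implicit_step_picard(phi_prev, solve_linear, nonlinear_rhs,
                         tol=1e-8, max_iter=50):
    """one implicit time step of the discretized non-linear pde.
    solve_linear(rhs): solves the linear block-band system (2d lu).
    nonlinear_rhs(phi): the non-linear term at the current iterate."""
    phi = phi_prev.copy()            # initial guess: previous time level
    for _ in range(max_iter):
        phi_next = solve_linear(phi_prev + nonlinear_rhs(phi))
        if np.max(np.abs(phi_next - phi)) < tol:
            return phi_next
        phi = phi_next
    raise RuntimeError("picard iteration did not converge; as noted in the "
                       "text, this can happen for large risk aversion")
```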
also both 0 " and 0 + 1 approximations fit the exact numerical solution relatively well .this proves that our asymptotic closed form solution is robust .results obtained with a second set of parameters ( test 2 in the table [ tab ] ) are shown in fig [ comp2 ] . in fig [ price ]we present price computed in tests 1,2 as a function of , where is black - scholes put price with parameters of the corresponding tests .note that for values of parameters used above the nonlinear term is small , therefore the solution is closed to the solution of the linear 2d heat equation obtained from the eq.([phi_uv_2 ] ) by omitting the nonlinear term . that is because is small in our test 1 . to make testing more interesting , we changed to .results obtained for = 3 yrs are presented in fig [ compgamma3 ] for test 1 . and in fig [ compgamma4 ] for test 2 .it is seen that in this case the 0 + 1 analytical approximation still fits the numerical solution .the computational time in these tests is higher because the matrix root solver converges slower .the typical time at =3yrs and = 1 is 24 secs .computation of the analytical approximation requires the same time as before which is about 1 sec .if the exponent of the payoff function is positive , e.g. at = 2 and other parameters as in test 1 , then the solution looks like a delta function . under such conditions any numerical method experiences a problem being unable to resolve very high gradients within just few nodes .therefore , in this case one has to exploit a non - uniform grid saturated close to the peak of the value function .this brings extra complexity to the numerical scheme while our analytical approximation is free of such problems . on the other handit is natural to statically hedge option with the option with same or close moneyness .this significantly reduces the exponent and allows one to use the same mesh even at high values of the risk aversion parameter . as the numerical performance of the model depends on the value of , a few comments are due at this point . though we do not address calibration of our model in the present paper ( leaving it for future work ) , the value of should be found by calibrating the model to market data .it is not entirely obvious how to do this since we use an illiquid asset , and moreover the price of our complex option is not an additive sum of its components for incomplete markets .it therefore makes sense to calibrate the model to another set of instruments that are both liquid and strongly correlated with the original instruments . in the context of equity options , such calibration of done in , giving rise to values of .while it is not exactly clear how found in this setting is related to of our original problem , we expect the latter to be of the same order of magnitude .note that the results in fig.[compgamma4 ] clearly demonstrate that the proposed method is just asymptotic .indeed , the first correction to the solution in fig.[compgamma4 ] is a few times larger then the zero - order solution .as usual , there exists an optimal number of terms in the asymptotic series that fit the exact solution best .we do not pursue such analysis in this paper , however due to our definition of via this is not an interesting case . ] . to characterize sharpness of the peak in the value functionit is further convenient to introduce a new parameter .if is high , e.g. 
the value function is almost a delta function , so the asymptotic solution as well as its numerical counterpart are not expected to produce correct results unless they are further modified . based on the results of one can see that the value of calibrated to the market varies from 0.01 to 15 .therefore , in our test 2 we used = 20 .both our numerical and asymptotic methods work with no problem for these values of . in fig [ alphastar ] the optimal hedge computed based on eq.([indif_price_2 ] ) which was solved using brent s method ( ) .the initial parameters correspond to test 1 in table [ tab ] in the first plot , and test 2 in the second plot .note that for ( the first plot ) and ( the second plot ) , eq.([indif_price_2 ] ) does not have a minimum , so the maximum is obtained at the edge of the chosen interval of .the latter could be defined based on some other preferences of the trader , for instance , the total capital she wants to invest into this strategy etc .finally , in fig [ pricealphagamma ] price is presented as a function of for various .it is seen that this function is convex which was first showed in in a different setting .note that these results were obtained using a new numerical method mentioned above .it combines strang s splitting with the fast gaussian transform , and accelerates calculations approximately by factor 40 as compared with a non - linear version of the 2d crank - nicholson scheme .a detailed description of the method will be given elsewhere .we thank peter carr and attendees of the `` global derivatives usa 2011 '' conference for useful comments . i.h .would like to thank andrew abrahams and julia chislenko for support and interest in this work .we assume full responsibility for any remaining errors .benth , f.e . ,groth , m. , & lindberg , c. 2010 .the implied risk aversion from utility indifference option pricing in a stochastic volatility model ., * 16*(m10 ). dash , j. 2004 . .world scientific .davis , m. 1997 .option pricing in incomplete markets .dempster , m. , & pliaka , stanley ( eds ) , _ mathematics of derivative securities_. cambridge : cambridge university press .davis , m. 1999 .option valuation and basis risk .djaferis , t.e . , & shick , l.c .( eds ) , _ system theory : modelling , analysis and control_. academic publishers .henderson , v. , & hobson , d. 2002 . substitute hedging ., * 15*(5 ) , 7175 .henderson , v. , & hobson , d. 2009 .utility indifference pricing : an overview .carmona , r. ( ed ) , _ indifference pricing_. princeton university press .hodges , s.d ., & neuberger , a. 1989 . optimal replication of contingent claims under transaction costs . , * 8 * , 222239 .ilhan , a. , & sircar ., r. 2006 .optimal static - dynamic hedges for barrier options ., * 16*(2 ) , 359385 ., k. j. , & welfert , b. d. 2007 .stability of adi schemes applied to convection - diffusion equations with mixed derivative terms ., * 57 * , 1935 .itkin , a. , & carr , p. 2011 .jumps without tears : a new splitting technology for barrier options . , *8*(4 ) , 667704 .jaimungal , s. , & sigloch , g. 2009 . .working paper .khiari , n. , & omrani , k. 2011 .finite difference discretization of the extended fisher - kolmogorov equation in two dimensions . , * 62 * , 41514160 .landau , l.d . , lifshitz , e.m .pergamon press , new york .merton , r. 1974 . on the pricing of corporate debt : the risk structure of interest rates ., * 29 * , 449470 .musiela , m. , & zariphopoulou , t. 
2004 .an example of indifference prices under exponential preferences ., * 8 * , 229239 .polyanin , a.d .chapman & hall / crc .salleh , s. , zomaya , a. , & bakar , s. 2007 . .wiley - interscience .t. leung , r. sircar , & zariphopoulou , t. 2008 .credit derivatives and risk aversion . , * 22 * , 275291 .trolle , a.b . , & schwartz , e.s . 2009 .unspanned stochastic volatility and the pricing of commodity derivatives . , * 22*(11 ) , 44234461 .here we describe in more detail the solutions for the zero and first order approximation in , and outline a generalization of our approach to an arbitrary order in . in the zero order approximation the eq.([phi_uv_2 ] ) does not contain derivatives wrt : the solution of this equation proceeds along similar lines to sect .[ sect : zeroorder ] using a change of dependent variable ^{1/(1-{\rho}_{xz}^2)}\ ] ] which reduces eq.([zeroeq2 ] ) to the heat equation subject to the initial condition .the explicit form for the latter coincides with ( [ init_cond_quand2 ] ) provided we substitute , , and .the explicit zero - order solution thus reads ( compare with eq.([phi_0_sol ] ) ) ^{1/(1-{\rho}_{xz}^2)}\ ] ] note that eq.([phi_0_sol2 ] ) provides the general zero - order solution for the hjb equation with arbitrary initial conditions at . for our specific initial conditions eq.([init_cond_quand2 ] ) , the solution is readily obtained in closed form in terms of the error ( or normal cdf ) function ( see appendix [ a1 ] ) . for the first correction in the eq.([phi_uv_2 ] ) we obtain the following pde where .this equation coincides with eq.([first2 ] ) except that the free term is different , and the correlation parameter is rather than .the solution proceeds as in sect .[ sect : firstorder ] , resulting in the following expression : the double integral that enters this expression can be split out in two parts .one of them could be found in closed form , while the other one requires numerical computation ( see appendix [ a2 ] ) .our numerical tests show that for many sets of the initial parameters the first integral is much higher than the second one , so the latter could be neglected .however , we were not able to identify in advance at which particular values of the parameters this could be done . 
moreover , for some other initial parameters we observe an opposite situation .the second order equation has the same form as eq.([first ] ) where as this equation has to be solved also subject to zero initial conditions , the solution is obtained in the same way as above : this shows that in higher order approximations in the type of the equation to solve does nt change as well as the initial conditions .therefore , the solution to the -th order approximation reads where can be expressed via already found solutions of order and their derivatives on and .the exact representation for follows combinatorial rules and reads where is a coefficient at in the following expansion ^ 2 , \qquad \beta_3 = \sum_{i=1}^\infty \varepsilon^i \phi_i/\phi_0 \nonumber\end{aligned}\ ] ] in particular , reads the explicit representation of the solutions of an arbitrary order in quadratures is important because per our definition of the convergence of the eq.([expan ] ) is expected to be slow .indeed , if one wants the final precision to be about 0.1 at , the number of important terms in the expansion eq.([expan ] ) could be rawly calculated as , which gives .all the integrals with do not admit a closed form representation and have to be computed numerically .since our payoff is a piece - wise function , the integral in eq.([phi_0_sol ] ) can be represented as a sum of three integrals .we denote and represent the zero - order solution as follows : ^{\frac{1}{1-\rho_{xz}^2 } } , \qquad \zeta = { \rm sign}(v ) \\\ ] ] where sign means that , and sign - that , and \\ & = \omega\left(-\omega , - \bar{\gamma } k_z e^{\sigma_z\frac{\rho_{xz}}{\rho_{xy}}\omega } , \sigma_z \frac{\rho_{xz}}{\rho_{xy } } , \bar{\gamma } \alpha k_y , \sigma_y\right ) , \nonumber \\j_2^{(+ ) } & = \frac{1}{\sqrt{2 \pi \tau}}\int_{- \omega}^{0}d u ' e^{-\frac{(u ' - u)^2}{2 \tau } } \exp\left[ - \bar{\gamma } \left(k_z - \alpha k_y e^{\sigma_y u'}\right ) \right ] , \nonumber \\ & = \omega\left(0 , - \bar{\gamma } k_z , 0 , \bar{\gamma } \alpha k_y , \sigma_y\right ) - \omega\left(-\omega , - \bar{\gamma } k_z , 0 , \bar{\gamma } \alpha k_y , \sigma_y\right ) , \nonumber \\j_3^{(+ ) } & = \frac{e^{- \bar{\gamma } \left(k_z - \alpha k_y\right)}}{\sqrt{2 \pi \tau}}\int_{0}^{\infty}d u ' e^{-\frac{(u ' - u)^2}{2 \tau } } = e^{- \bar{\gamma } \left(k_z - \alpha k_y\right ) } \left[1 -\frac{1}{2 } { \rm erfc}\left(\frac{u}{\sqrt{2\tau}}\right)\right ] , \nonumber \\ j_1^{(- ) } & = \frac{1}{\sqrt{2 \pi \tau}}\int_{- \infty}^{0}d u ' e^{-\frac{(u ' - u)^2}{2 \tau } } \exp \left [ - \bar{\gamma } \left(k_z e^{\sigma_z \frac{\rho_{xz}}{\rho_{xy } } ( \omega + u ' ) } - \alpha k_y e^{\sigma_y u ' } \right ) \right ] \nonumber \\ & = \omega\left(0 , - \bar{\gamma } k_z e^{\sigma_z\frac{\rho_{xz}}{\rho_{xy}}\omega } , \sigma_z \frac{\rho_{xz}}{\rho_{xy } } , \bar{\gamma } \alpha k_y , \sigma_y\right ) , \nonumber \\j_2^{(- ) } & = \frac{1}{\sqrt{2 \pi \tau}}\int_{0}^{-\omega}d u ' e^{-\frac{(u ' - u)^2}{2 \tau } } \exp\left [ - \bar{\gamma } \left ( k_z e^{\sigma_z \frac{\rho_{xz}}{\rho_{xy}}(\omega + u ' ) } - \alpha k_y \right ) \right ] \nonumber \\ & = \omega\left(-\omega , - \bar{\gamma } k_z e^{\sigma_z\frac{\rho_{xz}}{\rho_{xy}}\omega } , \sigma_z \frac{\rho_{xz}}{\rho_{xy } } , \bar{\gamma } \alpha k_y , 0\right ) - \omega\left(0 , - \bar{\gamma } k_z e^{\sigma_z\frac{\rho_{xz}}{\rho_{xy}}\omega } , \sigma_z \frac{\rho_{xz}}{\rho_{xy } } , \bar{\gamma } \alpha k_y , 0\right ) , \nonumber \\j_3^{(- ) } & = \frac{e^{- \bar{\gamma } \left(k_z - 
\alpha k_y \right)}}{\sqrt{2 \pi \tau}}\int_{-\omega}^{\infty}d u ' e^{-\frac{(u ' - u)^2}{2 \tau } } = \frac{1}{2 } e^{- \bar{\gamma } \left(k_z - \alpha k_y \right ) } { \rm erfc}\left(-\frac{u+\omega}{\sqrt{2\tau}}\right ) .\nonumber\end{aligned}\ ] ] here is the complementary error function , and u ' } e^{-\frac{(u ' - u)^2}{2 \tau } } du ' \nonumber \\ & = \frac{1}{2}\sum _ { i=0}^\infty \sum _ { j=0}^i \frac{\delta^{i - j } \beta^j}{j ! ( i - j ) ! } e^{a\left(u+a\frac{\tau}{2}\right)}{\rm erfc}\left(\frac{u - a + a \tau}{\sqrt{2\tau } } \right ) , \nonumber \\ a & = i p + j ( q - p ) .\nonumber\end{aligned}\ ] ] since the complementary error function quickly approaches zero with , or 2 with , the number of terms one has to keep in the above sums should not be high . we also need the following function when computing the first order approximation , using a general form of the payoff . it can be represented in the form where and , \\\lambda(b ) & \equiv e^{a(b)\left[u+a(b ) \frac{\tau}{2}\right]}{\rm erfc}\left(\frac{u - a + a(b ) \tau}{\sqrt{2\tau } } \right ) , \qquad a(b ) \equiv i p + j ( q - p ) + b. \nonumber\end{aligned}\ ] ] the first order approximation is given by eq.([firstordersol ] ) , where the zero - order solution has already been computed in appendix [ a1 ] . we plug this solution into eq.([firstordersol ] ) to obtain \nonumber\end{aligned}\ ] ] where the integrals are defined in eq.([jdef ] ) . the internal integral can be simplified . indeed , since the internal integral in eq.([int1app ] ) can be represented as a sum of two integrals , where , as we already mentioned , the second integral could be either smaller or larger than the first one depending on the parameters . this is illustrated in fig . [ figj2 ] . the initial parameters are taken from test 1 in table [ tab ] with = 1 and = 3 yrs . in fig . [ figj1 ] the same calculation is shown for the parameters corresponding to test 2 in table [ tab ] . the first integral can be rewritten using the integral representation of in eq.([phi_0_sol ] ) . substituting this into eq.([int1app ] ) and integrating over , we find . thus , the first correction to the solution obtained in the zero order approximation on is approximately , and the full solution in the `` 0 + 1 '' approximation reads . the first order approximation is given by eq.([firstordersol ] ) . it is assumed that the zero - order solution is already computed using either the method presented in appendix [ a1 ] or the fast gauss transform . we plug this solution into eq.([firstordersol ] ) to obtain \nonumber\end{aligned}\ ] ] where the integrals are defined in eq.([jdef ] ) . it can be seen that eq.([int1app2 ] ) is similar to eq.([int1app ] ) if one replaces with and with . therefore , we can use the same idea as in appendix [ a2 ] to further simplify this integral . the first integral can be modified using the integral representation of in eq.([phi_0_sol ] ) . substituting this into eq.([int1app2 ] ) and integrating over , we find
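all of the quantities above ultimately reduce to evaluations of the truncated double erfc series defining the omega - type function . as a purely numerical illustration , a minimal python sketch of that truncation is given below ; the parameter names delta , beta , p , q and a_shift are hypothetical stand - ins for the symbols elided in the extracted series , and the stopping rule simply exploits the rapid saturation of the complementary error function noted above .

```python
import math

def omega_series(u, tau, delta, beta, p, q, a_shift, tol=1e-12, max_rows=60):
    """truncated evaluation of the double erfc series for the omega-type
    function above.

    hypothetical names: delta and beta are the two expansion coefficients,
    p and q the exponents entering a = i*p + j*(q - p), and a_shift the
    constant appearing inside erfc (elided in the extracted text). rows
    are accumulated until their contribution is negligible, which is
    justified because erfc saturates quickly (to 0 or 2) as its argument
    grows in magnitude.
    """
    total = 0.0
    for i in range(max_rows):
        row = 0.0
        for j in range(i + 1):
            a = i * p + j * (q - p)
            coeff = (delta ** (i - j)) * (beta ** j) \
                / (math.factorial(j) * math.factorial(i - j))
            # note: for large a the exponential may overflow; a careful
            # version would combine exp and erfc in log space
            row += coeff * math.exp(a * (u + a * tau / 2.0)) \
                * math.erfc((u - a_shift + a * tau) / math.sqrt(2.0 * tau))
        total += row
        if i > 2 and abs(row) < tol * max(1.0, abs(total)):
            break  # remaining rows are numerically negligible
    return 0.5 * total
```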
this work addresses the problem of optimal pricing and hedging of a european option on an illiquid asset z using two proxies : a liquid asset s and a liquid european option on another liquid asset y. we assume that the s - hedge is dynamic while the y - hedge is static . using the indifference pricing approach we derive a hjb equation for the value function , and solve it analytically ( in quadratures ) using an asymptotic expansion around the limit of perfect correlation between assets y and z. while in this paper we apply our framework to an incomplete market version of the credit - equity merton's model , the same approach can be used for other asset classes ( equity , commodity , fx , etc . ) , e.g. for pricing and hedging options with illiquid strikes or illiquid exotic options .
hensel lifting techniques are at the basis of several polynomial factoring algorithms that are fast in practice . the classical algorithms are designed for generic bivariate polynomials over finite fields , without reference to sparsity ( e.g. ) . the polytope method of is intended to factor sparse polynomials more efficiently , by exploiting the structure of their newton polygon . it promises to be significantly fast when the polygon has a few decompositions , and can help factor families of polynomials which possess the same newton polytope . while the pre - processing stages of the polytope method benefit from the sparsity of the input in reference to its newton polygon , the hensel lifting phase that pursues the boundary factorisations does not do so . our chain of work in reveals that the inner workings of hensel lifting remain oblivious to the sparsity of the input as well as to fluctuations in the sparsity of intermediary output , so long as one is designing the hensel lifting phase using the dense model for polynomial representation . in contrast , the sparse distributed representation considers the problem size to be a function of the number of non - zero terms of the polynomials treated , which captures the fluctuation in sparsity throughout the factorisation process . in , we revised the analysis of the hensel lifting phase when polynomials are in sparse distributed representation . we derived that the asymptotic performance in work , space , and cache complexity is critically affected not only by the degree of the input polynomial , but also by the following factors : ( i ) the sparsity of each polynomial multiplication , and ( ii ) the sparsity of the resulting polynomial products to be merged into a final summand . we further showed that even with advanced additive ( merging ) data structures like the cache aware tournament tree or the cache oblivious -merger , the asymptotic performance of the serialised version in all three metrics is still poor . this was a result of the straightforward implementation which performed polynomial products first off , to be followed by sums of those products , a process that we dubbed _ serialised _ . we remedied this by re - engineering the hensel lifting phase such that sums of polynomial products are computed simultaneously using a max priority queue . this generalises the approach of for a single polynomial multiplication . we derived orders of magnitude reduction in work , space , and cache complexity even against a serialised version that employs many possible enhancements , and succeeded in evading expression swell . hereafter , we label the serialised and the priority queue versions of hensel lifting as ser - hl and pq - hl respectively . more specifically , and with regard to the latter algorithm , we will denote by pq - hl the version that uses binary heap as a priority queue , and by pq - hl the version that uses funnel heap instead . our experiments in demonstrate that the polytope method is now able to adapt significantly more efficiently to sparse input when its newton polygon consists of a few edges , something that was not observed when employing ser - hl . in , we shifted to enhancing the overlapping algorithm pq - hl . the motivation lies in the fact that binary heap is not scalable , which , on a serial machine , means that its performance will deteriorate once data no longer fits in in - core memory , thus restricting the number of non - zero terms that input and intermediary output polynomials are permitted to possess .
by performing priority queue operations using optimal cache complexity and in a cache oblivious fashion , funnel heap beats binary heap at large scale . the fact that funnel heap assumes no knowledge of the underlying parameters , such as memory level , memory level size , or word length , makes it ideal for applications where polynomial arithmetic is susceptible to fluctuations in sparsity . however , all of those features can also be observed when adopting an alternate cache oblivious priority queue ( see for example , ) . as such , we pursued funnel heap for further attributes that can improve on its asymptotic performance , as well as exploit it at small scale , specifically for hensel lifting . in , we addressed the chaining optimisation , and how funnel heap can be tailored to implement it in a highly efficient manner . we exploited the fact that funnel heap is able to identify equal order monomials `` for free '' as part of its inner workings whilst it re - organises itself over sufficiently many updates during one of its special operations known as the `` sweep '' . by this we were able to eliminate entirely the requirement for searching from the chaining process . we designed a batched mode for chaining that gets overlapped with funnel heap's mechanism for emptying its in - core components . in addition to also managing expression swell and irregularity in sparsity , batched chaining is sensitive to the number of distinct monomials residing in funnel heap , as opposed to the number of replicas chained . this allows the overhead due to batched chaining to decrease with increasing replicas . for sufficiently large input size with respect to the cache - line length , and also sufficiently sparse input and intermediary polynomials , batched chaining that is `` search free '' leads to an implementation of hensel lifting that exhibits optimal cache complexity in the number of replicas found in funnel heap , and one that achieves an order of magnitude reduction in space , as well as a reduction in the logarithmic factor in work and cache complexity , when comparing against pq - hl of . we label as fh - hl the enhancement of hensel lifting using funnel heap and batched chaining . this paper extends all of the above work in garnering the prowess of funnel heap . to this end , we incorporate analytical as well as experimental algorithmics techniques as follows : * in section [ proofs ] , we provide proofs of results introduced in pertaining to properties of funnel heap , several of which are of independent worth , extending beyond hensel lifting . for example , we provide complete proofs for the following : * * we establish where the replicas will reside immediately after each insertion into funnel heap . * * we determine the number of times one is expected to call sweep on each link of funnel heap throughout a given sequence of insertions . * * given an upper bound on the maximum constituency of funnel heap at any one point in time across a sequence of operations , we compute the total number of links required by funnel heap . * * we establish that the cache complexity by which one performs batched chaining within fh - hl is optimal . * in section [ rank ] , we exploit the fact that the work and cache complexity of an insert operation using funnel heap can be refined to depend on the rank of the inserted monomial product , where rank corresponds to its lifetime in funnel heap .
by optimising on the pattern by which insertions and extractions occur during the hensel lifting phase of the polytope method , we are able to obtain an adaptive funnel heap that minimises all of the work , cache , and space complexity of this phase . this , in turn , maximises the chances of having all polynomial arithmetic performed in the innermost levels of the memory hierarchy , and observes _ nearly optimal _ spatial locality . we show that the asymptotic costs of such preprocessing can be embedded in the overall costs to perform hensel lifting with batched chaining ( fh - hl ) , independently of the amount of minimisation taking place . we call the resulting algorithm fh - rank . * in section [ experimental ] , we develop the experimental algorithmics component of our work , addressing various facets : * * we conduct a detailed empirical study confirming the scalability of funnel heap over the generic binary heap . by simulating out - of - core behaviour , we find that funnel heap is superior once swaps to external memory begin to take place , despite performing considerably more work than binary heap . this supports the notion that funnel heap should be employed even when performing a single polynomial multiplication or division once data grows out of core . * * we support the theoretical analysis of the cache and space complexity in using accounts of cache misses and memory consumption of fh - hl . this can be seen as an extension of , as the performance measures presented there capture only the real execution time . * * we benchmark fh - rank against several other variants of hensel lifting , which include pq - hl , pq - hl with the chaining method akin to , pq - hl , and fh - hl . our empirical account of the time , space , and cache complexity of fh - rank confirms the predicted asymptotic analysis in all three metrics . * * we demonstrate that funnel heap is a more efficient merger than the cache oblivious -merger , which fails to achieve its optimal ( and amortised ) cache complexity when used for performing sums of products . we attribute this to the fact that the polynomial streams to be merged during hensel lifting cannot be guaranteed to be of equal size ( as a result of fluctuating sparsity ) . this provides an empirical proof of concept that the overlapping approach for performing sums of products using one global funnel heap is more suited than the serialised approach , even when the latter uses the best merging structures available . we now begin with the following section on background literature and results . in the remainder of this paper , we will consider that in - core memory is of size . it is organised using cache lines ( disk blocks ) , respectively , each consisting of consecutive words . all words in a single line are transferred together between in - core and out - of - core memory in one round ( i / o operation ) referred to as a cache miss ( disk block transfer ) . funnel heap implements insert and extract - max operations in a cache oblivious fashion . for elements , funnel heap can perform these operations using amortised ( and optimal ) cache misses .
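the bound elided at the end of the paragraph above is , presumably , the standard amortised guarantee for funnel heap from the cache - oblivious priority queue literature . we restate it here as an assumption , writing n for the number of elements , m for the in - core memory size , and b for the cache - line length of the model just described :

```latex
% assumed standard funnel heap bound (symbols elided in the text above):
% each insert and extract-max costs
\[
  O\!\left(\frac{1}{B}\,\log_{M/B}\frac{N}{B}\right)
  \quad \text{amortised cache misses,}
\]
% which is optimal under the usual tall-cache assumption.
```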
at the innermost level , funnel heap is first constructed using simple binary mergers . each binary merger processes two input sorted streams and produces their final merge . the heads of the input streams and the tail of the output stream reside in buffers of a limited size . a binary merger is _ invoked _ using a fill function , whereby merge steps are repetitively performed until its output buffer is full or both its input streams are exhausted . one can construct binary merge trees by letting the output buffer of one merger be an input buffer of another merger . now let for . a -merger is a binary merge tree with exactly input streams . the size of the output buffer is , and the sizes of the remaining buffers are defined recursively in a van emde boas fashion ( see ) . funnel heap consists of a sequence of -mergers , where increases doubly exponentially across the sequence . the s are linked together in a list , with the help of extra binary mergers and buffers at each juncture of the list . in fig . [ funnelheap ] , the circles are binary mergers , rectangles are buffers , and triangles are -mergers . link in the linked list consists of a binary merger , two buffers and , and a merger with input buffers labeled as . link has an associated counter for which . initially , . it will be an invariant that are empty . the first structure in funnel heap is a buffer of extremely small size , dedicated to insertion . this buffer occupies in - core memory at all times . funnel heap is now laid out in memory in the order , link , link , etc . within link the layout order is , , , , , , , . the linked list of buffers and mergers constitutes one binary tree with root and with sorted sequences of elements on the edges . this tree is heap - ordered : when traversing any path towards the root , elements will be passed in increasing order . if buffer is non - empty , the maximum element will reside in or in . the smaller mergers in funnel heap are meant to occupy primary memory , and can process sufficiently many insertions and extractions in - core before an expensive operation is encountered . in contrast , the larger mergers tend to be out of core , and contain elements that are least likely to be accessed in the near future . to perform an extract - max , we call fill on if buffer is empty . we return the largest element residing in both and . to insert into funnel heap , an element has to be inserted into . if is full , a sweep function is called . its purpose is to free the insertion buffer together with all the heavily occupied links in funnel heap which are closer to in - core memory . during a sweep , all elements residing in those dense links are extracted and then merged into one single stream . this stream is then copied sufficiently far down in funnel heap , towards the first link which has at least one empty input buffer .
as a result of sweep , the dense links are now free and funnel heap operations are resumed within in - core memory . the sweep kernel is considerably expensive ; yet , sufficiently many insertions and all the extractions can be accounted for between any two sweeps . let denote a finite field of characteristic , and consider a polynomial . let denote the newton polygon of , defined as the convex hull of the support of . one identifies suitable subsets of edges belonging to , such that all lattice points can be accounted for by a proper translation of this set of edges . one then specialises terms of along each edge . those specialisations are derived from the nonzero terms of whose exponents make up integral points on each , and we label them as . these can be transformed into laurent polynomials in one variable . for at least one , the associated edge polynomials ought to be squarefree , for all . one then begins lifting using the boundary factorisations given by , for all . for each boundary factorisation , we determine the associated s and s that satisfy the hensel lifting equation for . in we revised the analysis associated with the bottleneck in computation arising in eq . ( [ maineq2 ] ) , using the sparse distributed representation . in this model of representation , a polynomial is exclusively represented as the sum of its non - zero terms , sorted upon some decreasing monomial ordering . eq . ( [ maineq2 ] ) can be modeled using the input and output requirements shown in alg . 1 : [ local - iter ] input : an integer designating one iterative step in the hensel lifting process , and two sets of univariate polynomials over , , , in sparse distributed monomial order representation . output : the polynomial , where . the algorithm proceeds in two steps : compute , then compute . we distinguish between the serialised approach ( ser - hl ) and the overlapping approach ( pq - hl ) for performing the required arithmetic . in the serialised version , one performs all polynomial multiplications first , and then merges all the resulting polynomial products . in the overlapping approach , one handles all arithmetic simultaneously using a single max priority queue . in , we analysed the work , space , and cache complexity when polynomials are in sparse distributed representation . we derived that the performance of the serialised version in all three metrics is critically affected not only by the degree of the input polynomial , but also by the following factors : ( i ) the sparsity of each polynomial multiplication , and ( ii ) the sparsity of the resulting polynomial products to be merged into a final summand . we further showed that this remains the case even with advanced additive ( merging ) data structures , like the cache aware tournament tree or the cache oblivious -merger , for performing the sums of resulting polynomial products , and that the serialised approach is not able to fully exploit the cache efficiency of these structures . in the overlapping approach , the priority queue is initialised using the highest order monomial products generated from each product . then , terms of are produced in decreasing order of degree , via successive invocations of extract - max upon the priority queue .
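to make the overlapping approach concrete , the following is a minimal python sketch ( not the authors' implementation ) of computing a sum of products f = sum_i g_i * h_i over the sparse distributed representation with a single max priority queue , seeded exactly as described above . the function names and the use of python's heapq module are our own illustrative choices , and the successor rule is the standard one that generalises single - multiplication heap methods in the style of monagan and pearce :

```python
import heapq

def sum_of_products(pairs):
    """overlapping computation of f = sum_i g_i * h_i.

    each polynomial is a list of (exponent, coefficient) tuples sorted by
    strictly decreasing exponent (sparse distributed representation).
    a single max priority queue is seeded with the highest order monomial
    product of every g_i * h_i; terms of f are then produced in decreasing
    degree via successive extract-max calls. illustrative sketch only.
    """
    heap = []  # entries: (-exponent, product index i, position s in g_i, position t in h_i)
    for i, (g, h) in enumerate(pairs):
        if g and h:
            heapq.heappush(heap, (-(g[0][0] + h[0][0]), i, 0, 0))
    result = []
    while heap:
        neg_e, i, s, t = heapq.heappop(heap)
        g, h = pairs[i]
        coeff = g[s][1] * h[t][1]
        # drain every queued monomial product of the same degree before
        # emitting a term; this is the in-queue analogue of chaining
        while heap and heap[0][0] == neg_e:
            _, j, u, v = heapq.heappop(heap)
            gj, hj = pairs[j]
            coeff += gj[u][1] * hj[v][1]
            _push_successors(heap, pairs, j, u, v)
        _push_successors(heap, pairs, i, s, t)
        if coeff != 0:  # cancellation may shrink the output
            result.append((-neg_e, coeff))
    return result

def _push_successors(heap, pairs, i, s, t):
    # enumerate the product grid of g_i * h_i exactly once:
    # always advance t; advance s only from the first column (t == 0)
    g, h = pairs[i]
    if t + 1 < len(h):
        heapq.heappush(heap, (-(g[s][0] + h[t + 1][0]), i, s, t + 1))
    if t == 0 and s + 1 < len(g):
        heapq.heappush(heap, (-(g[s + 1][0] + h[0][0]), i, s + 1, 0))
```

for instance , with g_1 = x^3 + 2 , h_1 = 5x^2 , g_2 = x and h_2 = x^4 + 1 , the call sum_of_products([([(3, 1), (0, 2)], [(2, 5)]), ([(1, 1)], [(4, 1), (0, 1)])]) returns [(5, 6), (2, 10), (1, 1)] , i.e. 6x^5 + 10x^2 + x . the equal - degree draining loop combines like terms inside the queue , which is precisely the work that batched chaining , discussed next , moves into funnel heap's sweep .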
in , we pursued funnel heap as an alternative to the generic binary heap for implementing the overlapping approach . beyond its cache oblivious nature and optimal cache complexity , we showed that funnel heap allows for a mechanism of chaining that significantly improves its overall performance . chaining replicas outside the priority queue following insertions is a well - known technique ( e.g. see , for the case of single polynomial multiplication using binary heap ) . it helps reduce several parameters tied to performance , such as the total number of extractions required to perform a single polynomial multiplication and the size of the priority queue . in turn , the latter results in reducing the number of monomial comparisons as well as the cache complexity required to perform each priority queue operation . in the straightforward implementation , one has to search for a replica immediately after an insertion and then chain the newly inserted element to the end of a linked list tied to that replica in the priority queue . when using binary heap , chaining hinders performance critically . each insertion into the linked list denoting the chain incurs a random miss , whereas a single search query may require traversing the entire heap . it follows that the work and cache complexity of a single insertion amount to those of a traversal of elements for a heap of size . when employed in the priority queue that is implementing sums of products arising in hensel lifting , chaining becomes daunting , as the size of the queue and the amount of replication change irregularly from one iteration to the next . in we showed how to exploit the expensive sweep kernel of funnel heap in order to develop a cache - friendly batched chaining mechanism ( batched - chain ) that gets intertwined with the sweep's internal operations . the crux behind our approach lies in delaying chaining and performing it in batches , at the `` right time '' . in the interim , a prescribed amount of replication is tolerated , whose effect is shown to be insignificant at scale . here , we restrict chaining to only two specific phases in funnel heap's operations . if one is inserting a monomial product into the ( sorted ) insertion buffer , a replica that resides in is immediately identified and chaining can take place . one does not attempt to find a replica outside of . if such a replica exists , chaining will be deferred until is full . that is when sweep is invoked upon some link as well as one of its input buffers . for the duration of sweep , one is forming the stream which contains the merged output of all elements in the buffers leading from to together with all elements in links .
during the merge , the replicas residing in those specified regions of funnel heap will be aligned consecutively and thus identified . one can then chain them all at once outside of funnel heap . batched - chain eliminates entirely the need for searching for replicas , and fewer links get allocated to funnel heap , which reduces garbage collection . batched - chain is further sensitive to the number of distinct monomials in funnel heap , and not to the number of replicas chained . this can be understood to mean that the overhead due to chaining decreases with increasing replicas , which is intuitively appealing , since chaining is likely to be disabled once the number of replicas is lower than an acceptable threshold . when incorporating funnel heap and batched - chain into the priority queue algorithm for sums of products , alg . fh - hl was shown to be significantly fast . the timings reported in correspond to overall run - time , with the following percentages of improvement recorded , attained with increasing input size : about 90%-98% ( fh - hl to magma 2.18 - 7 ) , about 90%-99% ( fh - hl to ser - hl ) , about 10%-60% ( fh - hl to pq - hl ) . the dramatic reduction in run - time over ser - hl is largely attributed to substantial expression swell , and that over pq - hl is attributed to batched - chain . in this section we revisit several claims made in and provide their complete proofs . those results pertain to the behaviour of funnel heap in general , and not necessarily only in relation to hensel lifting , and thus are of independent worth . unless otherwise stated , all lemmas and corollaries in this specific section are stated in . we begin with the following invariant , which identifies where the replicas will reside immediately after each insert into funnel heap : let denote the index of the last link in funnel heap . using batched - chain , and immediately following each insertion , there will be no replication within the constituency of any buffer . as a result , a given element in some buffer can be replicated at most once in each of the preceding buffers in its own link , or in each of the buffers in the larger links . [ noreplicas ] consider the case when one is inserting immediately into the insertion buffer . batched - chain ensures that chaining happens immediately , and so there will be no replicas in this particular buffer . now consider a random for . we know that one can only write elements to upon a call to . this call produces the stream which merges the content of all links together with the content of the path leading from down to . since batched - chain employs chaining during the formation of , buffer will not contain any replicas . now , by the first claim above , each buffer in funnel heap contains distinct elements .
when , it is straightforward to see that , since has no buffers which precede it , each of its elements is replicated at most once in each of the following buffers . now take . we know that once sweep is called onto , each buffer in the link must be empty . also , as we form , the end of which is written to , we exclude the elements residing in each buffer that is in the same link as but which precedes it in that link . it follows that the only possible replicas of each element in will be in each of the buffers preceding it in its own link , as well as in each of the buffers in the larger links . the following result captures the number of times one is expected to call sweep on each link of funnel heap throughout a given sequence of insertions and extractions : let denote the index of the last link in funnel heap and let denote the total number of times sweep is called across a given sequence of insertions and extractions . then [ totalsweeps ] we proceed by backward induction on . take . link has input buffers . since this is the last link , not all of its input buffers may be written onto using sweep . in fact , exactly of them will be so . we thus have . we now show that assuming the property holds for . observe that before any sweep on link has occurred , there must have preceded it exactly sweeps , in order to fill each of the input buffers in link . also , by the inductive hypothesis , the total number of sweeps on link is given by . combining , we get that there are sweeps on link . given an upper bound on the maximum constituency of funnel heap at any one point in time across a sequence of operations , we now determine the total number of links the heap requires : let denote the index of the last link in funnel heap . then where designates the maximum number of elements residing in funnel heap at any point in time . [ last - versus - total ] from we invoke the following proven results , which we require for our proof : 1 . the space usage of each input buffer in link satisfies , where is the number of input buffers in link . 2 . the space usage of link is , i.e. it is dominated by the space usage of all of its input buffers . since link is the last link required by funnel heap to host all of its elements , those elements will consume at least one path leading to the first input buffer of link , and at most all such possible paths . by ( 2 ) above , the space usage of each such path is dominated by the size of the input buffer itself , and we thus have and . by we have : where the last equality follows by ( 3 ) above and by unrolling the recursive relation down to the base case . using and applying the logarithm in the two bases 2 and 4/3 respectively , we get . taking , one can proceed analogously to the above and obtain . this concludes the proof . as in , reasoning in the sparse distributed representation distinguishes worst - case from best - case polynomial multiplication , depending on the structure of the output . in the worst case , a given multiplication is sparse , as it yields a product with non - zero terms , an instance of a memory - bound computation . at best , the multiplication is dense , as it yields a product with terms . when the product has significantly fewer terms due to cancellation of terms , the operation is said to suffer from expression swell . we now establish that the cache complexity by which one performs batched - chain within fh - hl is optimal . for this , we require a few notations from that will be helpful in the forthcoming sections as well .
let and denote the maximum number of non - zero monomials comprising each and , respectively . let denote the fraction of reduction in the size of the heap during chaining , such that the largest size the priority queue attains during the lifting step is . let denote the fraction of replication in the total number of monomial products , such that the total number of replicas chained during the hensel lifting step is . the two parameters and reflect , in an asymptotic sense , the changes in the size of the queue as a function of the number of replicas . in particular , the bounds on and are as follows . when no replicas are encountered at all during any one lifting step , we have that and . in contrast , when each polynomial in the pair is totally dense and all resulting products in one lifting step are of the same degree , the heap will contain only one element , leading to and . we now have the following : assume the sparse distributed representation for polynomials . assume further that . in the worst case analysis , when each polynomial multiplication is sparse , the cache complexity by which one performs batched - chain within fh - hl is optimal . [ cor1 ] following the analysis in prop . 3.6 of , the cache complexity of fh - hl is split into two major parts . the first part accounts for all the insertions into funnel heap , using cache misses . the second part accounts for the cost to perform batched - chain , using cache misses . when , we get that the second summand in the cache complexity incurred by batched - chain is dominated by the cost to perform all the insertions into funnel heap , or that the cost for batched - chain is dominated by , where denotes the total number of replicas chained . it follows that the cache complexity of batched - chain corresponds to that of a traversal , and hence is optimal . in the following , we provide a detailed proof that fh - hl , thanks to batched - chain , outperforms pq - hl ( and thus , by transitivity , also pq - hl ) . in other words , performing sums of products using funnel heap with batched - chain is provably more efficient in work , space , and cache complexity than resorting to a standalone funnel heap implementation . assume the sparse distributed representation for polynomials , and assume further the conditions in cor . [ cor1 ] .
in the worst case analysis , when each polynomial multiplication is sparse , fh - hl achieves an order of magnitude reduction in space , as well as a reduction in the logarithmic factor in work and cache complexity , over pq - hl . [ cor2 ] from , alg . pq - hl requires the costs summarised in table [ k - ksq ] . in this paper we presented a comprehensive design and analysis that extends the work in and . fh - rank exploits all the features of funnel heap for implementing sums of products arising in hensel lifting of the polytope method , when polynomials are in sparse distributed representation . those features involve a batched mechanism for chaining replicas , as well as optimising on the sequence of insertions and extractions in order to minimise the size of the priority queue as well as the work and cache complexity . the competitive asymptotics are validated by empirical results which , in addition to asserting the high efficiency of fh - rank whether or not data fits in in - core memory , help us derive two other main conclusions . firstly , we confirm that at a large scale , all polynomial arithmetic employing a priority queue will benefit substantially from using funnel heap over binary heap , even without the proposed mechanisms for chaining and/or optimising the sequence of insertions / extractions . secondly , funnel heap is confirmed to be superior in practice as a merger when tested against the provably optimal -merger structure , despite having a higher work complexity . this is attributed to its ability to adapt to merging input streams of fluctuating density , which , in turn , makes funnel heap ideal for performing polynomial arithmetic in the sparse distributed representation , where such fluctuation affects overall performance . this supports our argument that one should resort to the overlapping approach using a single priority queue , as opposed to handling each of the local multiplications separately using a local priority queue , to be followed by additive merging of all polynomial streams . this conclusion remains valid whether or not expression swell is taking place . we thank the lebanese national council for scientific research and the university research board , american university of beirut , for supporting this work . abu salem , f. k. , el - harake , k. , gemayel , k. , 2015 . cache oblivious sparse polynomial factoring using the funnel heap . in : proc . pasco '15 . vol . 8660 of lecture notes in computer science . springer , pp . 7 - 15 . brodal , g. s. , fagerberg , r. , meyer , u. , zeh , n. , 2004 . cache - oblivious data structures and algorithms for undirected breadth - first search and shortest paths . in : proc . swat '04 . vol . 3111 of lecture notes in computer science . springer , pp . 480 - 492 . monagan , m. , pearce , r. , 2007 . polynomial division using dynamic arrays , heaps , and packed exponent vectors . in : proc . casc '07 . vol . 4770 of lecture notes in computer science . springer , pp .
this work is a comprehensive extension of that investigates the prowess of the funnel heap for implementing sums of products in the polytope method for factoring polynomials , when the polynomials are in sparse distributed representation . we exploit that the work and cache complexity of an insert operation using funnel heap can be refined to depend on the rank of the inserted monomial product , where rank corresponds to its lifetime in funnel heap . by optimising on the pattern by which insertions and extractions occur during the hensel lifting phase of the polytope method , we are able to obtain an adaptive funnel heap that minimises all of the work , cache , and space complexity of this phase . this , in turn , maximises the chances of having all polynomial arithmetic performed in the innermost levels of the memory hierarchy , and observes _ nearly optimal _ spatial locality . we provide proofs of results introduced in pertaining to properties of funnel heap , several of which are of independent worth extending beyond hensel lifting . additionally , we conduct a detailed empirical study confirming the superiority of funnel heap over the generic binary heap once swaps to external memory begin to take place . we support the theoretical analysis of the cache and space complexity in using accounts of cache misses and memory consumption , and compare the run - time results appearing there against adaptive funnel heap . we further demonstrate that funnel heap is a more efficient merger than the cache oblivious -merger , which fails to achieve its optimal ( and amortised ) cache complexity when used for performing sums of products . this provides an empirical proof of concept that the overlapping approach for performing sums of products using one global funnel heap is more suited than the serialised approach , even when the latter uses the best merging structures available . our main conclusion is that funnel heap will outperform binary heap for performing sums of products , whether data fits in in - core memory or not . hensel lifting , newton polytopes , polynomial factorisation , cache oblivious algorithms and data structures , cache complexity , priority queues , funnel heap