the reaction rate of the (,) reaction provides a route from hot cno - cycles to the nena and mgal cycles and finally to the -process at typical temperatures of e.g. about ( ) in x - ray bursters ( xrb ) .it is expected that this reaction is the dominating route in the low temperature range .an alternative route from hot cno - cycles to the -process may be the (,) reaction .the relatively high temperatures correspond to most effective energies of about 1.3 to 2.1mev for the (,) reaction which are experimentally well accessible .however , experiments remain very difficult because of the short - living nucleus ( ) and the limited intensity of radioactive beam facilities .thus , besides the direct approach of measuring the (,) reaction cross section , the reverse (,) reaction has been studied very recently and in an earlier unpublished experiment , and the resonance energies have been determined from various transfer experiments populating states in the compound mg nucleus .the focus of the present paper is the comparison of the latest experiments by groombridge _et al . _( hereafter : gro ) , salter _ et al . _ ( sal ) , chae _ et al . _ ( cha ) , and matic _et al . _ ( mat ) .the earlier direct data of have been improved and extended by the same group leading to the gro data .the sal data are the only published data for the inverse reaction ; a brief comparison to the unpublished data measured at argonne national laboratory ( anl ) is also provided .the mat transfer data have by far the best energy resolution which is essential for a precise determination of the resonance energies . additional measurements of angular distributions in cha lead to a new spin assignment only in few cases ( see table iii of cha ) .the reaction rate factor for the (,) reaction is given by the sum over the contributing resonances : with the reduced mass in units of amu , the resonance energies in mev , and the resonance strengths in mev .in general , resonance energies are given as in the center - of - mass ( c.m.)system without index ; excitation energies are given as in this paper .the resonance strength for the (,) reaction is given by with the resonance spin , the partial widths and , and the total width .in most cases it can be expected that , and thus .the application of the simple formula for narrow resonances in eq .( [ eq : rate ] ) is justified because the resonance widths are much smaller than the resonance energies . in the followingwe first briefly review the various experimental approaches and discuss the resulting uncertainties in the determination of the reaction rate factor .next we check whether the experimental results of gro , mat , cha , sal , and anl are compatible with each other .finally , the reaction rate factors of the different studies are compared .note that of different studies may differ not only from discrepant resonance energies and resonance strengths or cross sections , but also from a different number of considered resonances in eq .( [ eq : rate ] ) .various transfer experiments have been performed in the last decade to study properties of the compound nucleus mg .a detailed comparison of the results is provided in mat and is not repeated . herewe briefly summarize the mat results and some modifications resulting from the cha data .transfer data are able to provide excitation energies and spin and parity of states in mg . 
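once the resonance energies and strengths are fixed, the evaluation of eq. ( [ eq : rate ] ) is straightforward. a minimal numerical sketch is given below; it assumes the standard narrow-resonance expression N_A<sigma v> = 1.5399e11 (mu T_9)^(-3/2) sum_i (omega gamma)_i exp(-11.605 E_i / T_9) cm^3 s^-1 mol^-1, with the reduced mass mu in amu, the resonance energies and strengths in MeV, and the temperature T_9 in GK; the resonance list in the example is purely illustrative and is not taken from any of the data sets discussed here.

    import math

    def narrow_resonance_rate(T9, mu, resonances):
        """N_A<sigma v> in cm^3 s^-1 mol^-1 for a set of narrow resonances.

        T9         -- temperature in GK
        mu         -- reduced mass in amu
        resonances -- iterable of (E_r, omega_gamma) pairs, both in MeV
        """
        prefactor = 1.5399e11 / (mu * T9) ** 1.5
        return prefactor * sum(wg * math.exp(-11.605 * Er / T9)
                               for Er, wg in resonances)

    # illustrative numbers only (mu of roughly 3.3 amu, two fictitious resonances)
    print(narrow_resonance_rate(1.0, 3.3, [(1.5, 5.0e-9), (2.0, 2.0e-8)]))

because each resonance enters through the Boltzmann factor exp(-E_i/kT) with kT of about 0.0862 T_9 MeV, a shift of a resonance energy by only 13 keV changes an individual term by roughly 16% at T_9 = 1, which is why precise excitation energies from the transfer data are so valuable.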
however , from the transfer data it is not possible to determine resonance strengths which are the second ingredient for the calculation of the rate factor in eq .( [ eq : rate ] ) . in the matapproach the (,) mg reaction is used to populate excited states in the compound nucleus mg at proton energies of slightly below 100mev .the experiment has been performed using the grand raiden spectrometer at rcnp , osaka .the excellent energy resolution of about 13kev allows a precise determination of excitation energies which enter exponentially into in eq .( [ eq : rate ] ) via ( with the separation energy of the particle in mg of ) and are thus the main source of uncertainties .( is taken from the new audi and meng compilation ; the small difference to the earlier result of does practically not affect the rate factor in the relevant temperature range around . )in addition to the excitation energies , the total widths can be determined from these data by fitting the observed peak widths .most of the observed states are much broader than the experimental resolution , and thus the required unfolding procedure leads only to minor additional uncertainties for the derived width .the results are listed in table [ tab : width ] . as can be seen from table [ tab : width ] , practically all resonances fulfill the criterion of which is often used as definition for narrow resonances ( although also more stringent definitions for narrow resonances can be found in literature ) . as we will show in sect .[ sec : cha ] , the simple formula for narrow resonances in eq .( [ eq : rate ] ) provides the reaction rate factor for the (,) reaction with sufficient accuracy . in this sense the resonances in table [ tab : width ] can be considered generally as narrow resonances ..[tab : width ] excitation energy , resonance energy , spin and parity , total width , and resonance strength for excited states in mg from the (,) mg experiment in .later revisions for individual states are marked by `` '' and `` '' ; these revisions are based on the replacement of the experimental resonance strengths of gro by calculated resonance strengths and on revised ( but still tentative ) spin assignments by cha ( see also table [ tab : mod ] ) .the finally recommended strengths will be slightly lower by a factor of 0.55 ( see discussion in sect .[ sec : matsal ] and [ sec : rate ] ) .[ cols="^,^ , > , > , < , > , < " , ]the present knowledge of the reaction rate factor of the (,) reaction has been summarized . 
for this purpose experimental results from different experimental techniques are combined .transfer reactions provide the best determination of the excitation energy and spin and parity of states in mg which appear as resonances in the (,) reaction ; however , transfer reactions can not provide the required resonance strengths .these strengths have to be taken from theory or from direct experiments which are however extremely difficult and require the combination of a radioactive beam and a helium gas target .complementary information has been derived from the experimental study of the reverse (,) reaction using a radioactive beam and a solid ch target .a basic prerequisite for the comparison of results from various experimental techniques is the availability of total widths for the resonances under study .the total widths were determined from a reanalysis of the peak widths in the mat experiment .a compatibility test between the results from various experimental techniques shows that there is no contradiction between the various experimental data except the disagreement between the direct gro data and the reverse reaction data from sal and anl .this leads to the conclusion that the most likely explanation is a problem in the normalization of the gro data .consequently , resonance strengths from gro have been replaced by theoretical resonance strengths in the calculation of the rate factor .the calculation of for the (,) reaction from transfer data requires theoretical resonance strengths , and the calculation of from the reverse (,) reaction data requires a theoretical estimate of the ( , ) ground - state branching .both calculations are based on simple but reasonable arguments , and the corresponding uncertainties should not exceed a factor of two .this leads to a relatively narrow overlap region between the higher calculated from transfer and the lower calculated from the reverse reaction data .this narrow overlap region is considered as the new recommended reaction rate factor .the uncertainty of the recommended rate factor is about a factor of 1.8 ( uncertainty ) .for a theoretical prediction lies within this error band , but the theoretical temperature dependence of the rate factor is somewhat steeper than the new recommendation .the new recommended rate factor is slightly lower than the mat rate factor at low temperatures and significantly smaller at higher temperatures , and the new rate factor exceeds the sal result by about a factor of 5 .the strong conclusion of sal ( based on their lower limit for the rate factor ) that `` the breakout from the hcno cycle via the (,) reaction is delayed and occurs at higher temperatures than previously predicted '' can not be supported . instead , because of the only minor deviations of from the mat result at low temperatures around , the earlier conclusions of mat should remain valid in general . further astrophysical network calculations with the new recommended rate factor are required to study the relevance of the modified temperature dependence of the rate factor in detail .99 h. schatz and k. e. rehm , nucl .* a777 * , 601 ( 2006 ) . m. wiescher , j. grres , h. schatz , j. phys .g * 25 * , r133 ( 1999 ) .w. bradfield - smith _ et al ._ , * 59 * , 3402 ( 1999 ). d. groombridge _ et al ._ , * 66 * , 055802 ( 2002 ) .p. j. c. salter _et al . _ , * 108 * , 242701 ( 2012 ) .s. sinha _ et al ._ , anl annual report 2005 , p.6 - 7 .a. a. chen , r. lewis , k. b. swartz , d. w. visser , p. d. parker , * 63 * , 065807 ( 2001 ) .j. a. 
caggiano _ et al ._ , * 66 * , 015804 ( 2002 ) . g. p. a. berg _ et al ._ , nucl .phys . * a718 * , 608 ( 2003 ) . k. y. chae _ et al . _ , * 79 * , 055804 ( 2009 ) .a. matic _ et al ._ , * 80 * , 055804 ( 2009 ) .a. matic , phd thesis , rijksuniversiteit groningen , 2007 ; available online at _ http://dissertations.ub.rug.nl / faculties / science/2007/_. ; g. audi , f. g. kondev , m. wang , b. pfeiffer , x. sun , j. blachot , m. maccormick , chin .c * 36 * , 1157 ( 2012 ) ; g. audi , m. wang , a. h. wapstra , f. g. kondev , m. maccormick , x. xu , b. pfeiffer , chin . phys .c * 36 * , 1287 ( 2012 ) ; m. wang , g. audi , a. h. wapstra , f. g. kondev , m. maccormick , x.xu , b. pfeiffer , chin . phys .c * 36 * , 1603 ( 2012 ) .g. audi , a. h. wapstra , c. thibault , nucl .phys . * a729 * , 337 ( 2003 ) . b. a. brown and b. h. wildenthal , ann. rev .nucl . part .sci . * 38 * , 29 ( 1988 ) ; b. a. brown , _ http://www.nscl.msu.edu/~brown / resources / resources.html_. t. rauscher , v2.1 ( _ http://nucastro.org/codes.html_ ) . t. rauscher and f .- k .thielemann , at .data nucl .data tables * 75 * , 1 ( 2000 ) .h. abele and g. staudt , * 47 * , 742 ( 1993 ) .s. wilmes , v. wilmes , g. staudt , p. mohr , j. w. hammer , * 66 * , 065802 ( 2002 ) .
The 18Ne(alpha,p)21Na reaction is one key for the breakout from the hot CNO cycles to the rp-process. Recent papers have provided reaction rate factors which are discrepant by at least one order of magnitude. The compatibility of the latest experimental results is tested, and a partial explanation for the discrepant rate factors is given. A new rate factor is derived from the combined analysis of all available data. The new rate factor lies slightly below the higher rate factor by Matic et al. at low temperatures and significantly below it at higher temperatures, whereas it is about a factor of five higher than the lower rate factor recently published by Salter et al.
we consider an lsv model with stochastic interest rates and jumps by introducing stochastic dynamics for variables .we assume that it could include both diffusion and jumps components , as follows : dt + \xi_v v_t^a w_v + v_t dl_{v_t , t } , \nonumber \\ d r_t & = \kappa_r(t)(\theta_r(t ) - r_t ) dt + \xi_r r_t^b w_r + r_t dl_{r_t , t}. \nonumber\end{aligned}\ ] ] here is the continuous dividend , is the time , is the local volatility function , are correlated brownian motions , such that ] .also define a one - sided _ backward _ discretization of , denoted as /h ] , and _ forward _ approximation /(2 h) ] . since in our experiments , . ] at every step in we run this scheme for and then use linear interpolation to . at obvious solution is . at a 3d parabolic equation that can be solved using our implicit version of the hv scheme .indeed , it can be re - written in the form = v(\tau ) , \qquad k_n = \dfrac{a^2}{4 \pi^2(n-1/2)^2 ( \dtau)^2}.\ ] ] as usually is small , e.g. , in , so even for .now using the pde approximation theory we can re - write this equation as therefore , if we omit the last term , the total second order approximation of the scheme in time is preserved .this latter equation is equivalent to ,\ ] ] which has to be solved at the time horizon ( maturity ) .since is small and usually less than we may solve it in one step in time . andwhen increases , this conclusion remains to be true as well .once this solution is obtained we proceed to the next .thus , this scheme runs in a loop starting with and ending at some .similar to how we did it for the idiosyncratic jumps we choose based on the argument of , namely : i ) the high order derivatives of the option price drop down pretty fast in value , and ii ) first 10 terms of the sum approximate the whole sum with the accuracy of 1% .the solution obtained after steps is the final solution .overall , the whole splitting algorithm contains 11 steps .the complexity of each step is linear in since at every step we solve some parabolic equation with a tridiagonal or pentadiagonal matrix .thus , the total complexity of the method is where is the number of grid nodes in the -th dimension , and is some constant coefficient , which is about 276 ( 18 systems for one diffusion step if the implicit modification of the hv scheme is used times 2 diffusion steps , so totally 36 ; 10 systems per a 1d jump step times 2 steps times 3 variables , so totally 60 ; 18 steps per a single 3d parabolic pde solution for common jumps times 10 steps , so totally 180 ) . still this could be better than a straightforward application of the fft ( in case the fft is applicable , e.g. , the whole characteristic function is known in closed form which is not the case if one takes into account local volatility , etc . ) which usually requires the number of fft nodes to be a power of 2 with a typical value of 2 .it is also better than the traditional approach which considers approximation of the linear non - local jump integral on some grid and then makes use of the fft to compute a matrix - by - vector product . 
indeed , when using fft for this purpose we need two sweeps per dimension using a slightly extended grid ( with , say , the tension coefficient ) to avoid wrap - around effects , .therefore the total complexity per time step could be at least which for the fft grid with , and is 2.5 times slower than our method .also the traditional approach experiences some other problems for jumps with infinite activity and infinite variation , see survey in and references therein . also as we have already mentioned using fast gauss transform for the common jump step could significantly reduce the time for this most time - consuming piece of the splitting scheme .due to the splitting nature of our entire algorithm represented by , each step of splitting is computed using a separate numerical scheme .all schemes provide second order approximation in both space and time , are unconditionally stable and preserve positivity of the solution . in our numerical experiments for the steps which include mixed derivatives terms we used the suggested fully implicit version of the hundsdorfer - verwer scheme .this allows one to eliminate any additional damping scheme of the lower order of approximation , e.g. , implicit euler scheme ( as this is done in the rannacher method ) , or do scheme with the parameter ( as this was suggested in ) .a non - uniform finite - difference grid is constructed similar to in and domains , and as described in in the domain . in case of barrier optionswe extended the grid by adding 2 - 3 ghost points either above the upper barrier or below the lower barrier , or both with the same boundary conditions as at the barrier ( rebate or nothing ) .construction of the jump grid , which is a superset of the finite - difference grid used at the first ( diffusion ) step is also described in detail in .normally the diffusion grid contained 61 nodes in each space direction .the extended jump grid contained extra 20 - 30 nodes . if a typical spot value at time is =100 , the full grid ended up at .we computed our results in matlab at a standard pc with intel xeon e5620 2.4 ghz cpu .a typical elapsed time for computing one time step for the pure diffusion model with no jumps is given in the table [ elapsed ] : .elapsed time in secs for 1 step in time to compute the advection - diffusion problem . [ cols="^,^,^,^,^,^ " , ] plane at various values of .,scaledwidth=80.0%,height=288 ] plane at various values of .,scaledwidth=80.0%,height=288 ] plane at various values of .,scaledwidth=80.0%,height=288 ]in this paper we apply the approach of for pricing credit derivatives to various option pricing problems ( vanilla and exotic ) where as an underlying model we use local stochastic volatility model with stochastic interest rates and jumps in every stochastic driver .it is important that all jumps as well as the brownian motions are correlated .here we solve just the backward problem ( solving the backward kolmogorov equation , e.g. , for pricing derivatives ) , while the forward problem ( solving the forward kolmogorov equation to find the density of the underlying process ) can be treated in a similar way , see . in test exampleswere given for the kou and merton models , while the approach is in no way limited by these models .therefore , in this paper we demonstrate how a similar approach can be used together with the meixner model . 
again, this model is chosen only as an example , because , in general , the approach in use is rather universal .we provide an algorithm and results of numerical experiments .the second contribution of the paper is a new fully implicit modification of the popular hundsdorfer and verwer and modified craig - sneyd finite - difference schemes which provides second order approximation in space and time , is unconditionally stable and preserves positivity of the solution , while still keeps a linear complexity in the number of grid nodes .this scheme has extended damping properties , and , therefore , allows to eliminate any additional damping scheme of a lower order of approximation , e.g. , implicit euler scheme ( as this is done in the rannacher method ) , or do scheme with the parameter ( as this was proposed in ) .we prove unconditional stability of the scheme , second order of approximation in space and time and positivity of the solution .the results of our numerical experiments demonstrate the above conclusions . to the best of authors knowledge both approaches have not been considered yet in the literature , so the main results of the paper are new .the model in use is rather general , in a sense that if considers two ( or even three ) cev processes for all the diffusion components and a wide class of the lvy processes for the jump components .therefore , a stable , accurate and sufficiently fast finite - difference approach for pricing derivatives using this model , which is proposed in this paper , could be beneficial for practitioners .we thank peter carr , alex lipton and alex veygman for their useful comments and discussions . also comments and suggestions of two anonymous refereesare highly appreciated .we assume full responsibility for any remaining errors .ballotta , l. and bonfiglioli , e. ( 2014 ) .multivariate asset models using lvy processes and applications . , ( doi:10.1080/1351847x.2013.870917 ) .bates , d. ( 1996 ) . jumps and stochastic volatility - exchange - rate processes implicit in deutschemark options . , 9:69107 .boyarchenkoa , s. and levendorskii , s. ( 2013 ) .american options in the heston model with stochastic interest rate and its generalizations . , 20(1):2649 .carr , p. and wu , l. ( 2004 ) .time - changed lvy processes and option pricing . , 71:113141 .chen , r. and scott , l. ( 2004 ) .stochastic volatility and jumps in interest rates : an international analysis .ssrn , 686985 .chiarella , c. and kang , b. ( 2013 ) .the evaluation of american compound option prices under stochastic volatility and stochastic interest rates ., 17(1):7192 .chiarella , c. , kang , b. , mayer , g. , and ziogas , a. ( 2008 ) .the evaluation of american option prices under stochastic volatility and jump - diffusion dynamics using the method of lines . technical report research paper 219 , quantitative finance research centre , university of technology , sydney .cont , r. and tankov , p. ( 2004 ) . .financial matematics series , chapman & hall /crcl .das , s. r. ( 2002 ) .the surprise element : jumps in interest rates ., 106(1):2765 .dash , j. ( 2004 ) . .world scientific ., y. , forsyth , p. a. , and vetzal , k. r. ( 2005 ) .robust numerical methods for contingent claims under jump diffusion processes . , 25:87112 .doffou , a. and hillard , j. ( 2001 ) . pricing currency options under stochastic interest rates and jump - diffusion processes ., 25(4):565585 .durhama , g. and park , y. ( 2013 ) . 
beyond stochastic volatility and jumps in returns and volatility ., 31(1):107121 .gatheral , j. ( 2008 ) .consistent modeling of spx and vix options . in _fifth world congress of the bachelier finance society_. giese , a. ( 2006 ) . on the pricing of auto - callable equity structures in the presence of stochastic volatility and stochastic interest rates , . in _mathfinance workshop _ ,available at http://www.mathfinance.com/workshop/2006/papers/giese/slides.pdf .grzelak , l. a. and oosterlee , c. w. ( 2011 ) . on the heston model with stochastic interest rates , . , 2:255286 .haentjens , t. and int hout , k. j. ( 2012 ) . alternating direction implicit finite difference schemes for the heston hull white partial differential equation ., 16:83110 .halperin , i. and itkin , a. ( 2013 ) .: unspanned stochastic local volatility model .available at http://arxiv.org/abs/1301.4442 .hilpisch , y. ( 2011 ) .fast monte carlo valuation of american options under stochastic volatility and interest rates .available at http://www.google.com/url?sa=t&rct=j&q=&esrc=s&frm=1&source=web&cd=1&cad=rja&uact=8&ved=0cb4qfjaa&url=http%3a%2f%2farchive.euroscipy.org%2ffile%2f4145%2fraw%2fesp11-fast_montecarlo_paper.pdf&ei=8pr8vicxbsqbnpiygvgb&usg=afqjcnglwa5_casosfotjkruduod5v8jow .homescu , c. ( 2014 ) .local stochastic volatility models : calibration and pricing .ssrn , 2448098 .ikonen , s. and toivanen , j. ( 2007 ) .componentwise splitting methods for pricing american options under stochastic volatility . , 10:331361 .ikonen , s. and toivanen , j. ( 2008 ) .efficient numerical methods for pricing american options under stochastic volatility . , 24:104126 . , k. j. and foulon , s. ( 2010 ) .finite difference schemes for option pricing in the heston model with correlation . , 7(2):303320 . , k. j. and welfert , b. d. ( 2007 ) .stability of adi schemes applied to convection - diffusion equations with mixed derivative terms ., 57:1935 .itkin , a. ( 2013 ) .new solvable stochastic volatility models for pricing volatility derivatives . , 16(2):111134 .itkin , a. ( 2014a ) .efficient solution of backward jump - diffusion pides with splitting and matrix exponentials ., forthcoming .electronic version is available at http://arxiv.org/abs/1304.3159 .itkin , a. ( 2014b ) .splitting and matrix exponential approach for jump - diffusion models with inverse normal gaussian , hyperbolic and meixner jumps ., 3:233250 .itkin , a. ( 2015 ) . ., 18(5):15500311 155003124 .itkin , a. and carr , p. ( 2011 ) .jumps without tears : a new splitting technology for barrier options ., 8(4):667704 .itkin , a. and lipton , a. ( 2015 ) .efficient solution of structural default models with correlated jumps and mutual obligations ., doi : 10.1080/00207160.2015.1071360 .johannes , m. ( 2004 ) . the statistical and economic role of jumps in continuous - time interest rate models . ,lix(1):227260 .lipton , a. ( 2002 ) .the vol smile problem ., pages 6165 .mcdonough , j. m. ( 2008 ) . .university of kentucky .available at http://www.engr.uky.edu/~acfd/me690-lctr-nts.pdf .medvedev , a. and scaillet , o. ( 2010 ) .pricing american options under stochastic volatility and stochastic interest rates ., 98:145159 .pagliarani , s. and pascucci , a. ( 2012 ) .approximation formulas for local stochastic volatility with jumps .ssrn , 2077394 .rannacher , r. ( 1984 ) .finite element solution of diffusion equation with irregular data , ., 43:309327 .salmi , s. , toivanen , j. , and von sydow , l. 
( 2014 ) .an imex - scheme for pricing options under stochastic volatility models with jumps ., 36(4):b817b834 .schoutens , w. ( 2001 ) .meixner processes in finance .technical report , k.u.leuveneurandom .schoutens , w. and teugels , j. ( 1998 ) .processes , polynomials and martingales . , 14(1,2):335349 .sepp , a. ( 2011a ) .efficient numerical pde methods to solve calibration and pricing problems in local stochastic volatility models . in _global derivatives_. sepp , a. ( 2011b ) .parametric and non - parametric local volatility models : achieving consistent modeling of vix and equities derivatives . in _quant congress europe_. sepp , a. ( 2014 ) .log - normal stochastic volatility model : new insight and closed - form solution for vanilla options .technical report , baml .shirava , k. and takahashi , a. ( 2013 ) .pricing basket options under local stochastic volatility with jumps .ssrn , 2372460 .toivanen , j. ( 2010 ) . a componentwise splitting method for pricing american options under the bates model . in _ computational methods in applied sciences _ ,pages 213227 .springer .wade , b. , a.q.m.khaliq , m.siddique , and m.yousuf ( 2005 ) . smoothing with positivity - preserving pde schemes for parabolic problems with nonsmooth data ., 21(3):553573 .recall that given the call option and positive correlation we want to prove that the finite - difference scheme : is unconditionally stable in time step , approximates with and preserves positivity of the vector if , where are the grid space steps correspondingly in and directions , and the coefficient must be chosen to obey the condition : .\ ] ] first , let us show how to transform to .observe , that can be re - written in the form , \\\alpha & = pq - q\sdt \rho_{s , v } w(s ) \triangledown_s + p\sdt w(v ) \triangledown_v . \nonumber\end{aligned}\ ] ] according to , . also based on the proposition statement , , therefore = o\left((\delta \tau)^2\right) ] . now due to introduce a coefficient such that .\ ] ] from we have thus, it is always possible to provide the condition by an appropriate choice of .accordingly , this gives rise to the condition $ ] .the latter means that the spectral norm , and , thus , the map is contractual .this is the sufficient condition for the picard iterations in to converge .unconditional stability follows .other details about em - matrices and necessary lemmas again can be found in .for the first line of we claim the same statement , i.e. , that the matrix is an em - matrix .the main diagonal elements of are also positive , namely the remaining proof again can be done based on definitions and lemma a.2 in . since both steps on converge in the spectral norm , and are unconditionally stable , the unconditional stability and convergence of the whole scheme follows .it also follows that the whole scheme preserves non - negativity of the solution . in the lhsis approximated with the second order in , while the first line in the rhs part uses the first order approximation of the first derivative .as , and in the first line of the rhs of we have a product , the order of the ignored terms is .so , rigorously speaking , the whole scheme provides this order of approximation . as compared with the proposition [ proppos ] , this scheme has the only modification .namely , instead of the first step in we now use ^{-1 } , \qquad \alpha^{+ } = ( pq+1 ) i_v & - q\sdt \rho_{s , v } w(s ) a_{2,s}^b + p\sdt w(v ) a_{2,v}^f . 
\nonumber\end{aligned}\ ] ] this system can be solved by using the following fractional steps : by construction , each matrix and is an em - matrix , see appendix a in .therefore , steps 1,3,4 in provide unconditional stability , and positivity of the solution . at step 2 ,the positivity can be provided by taking the time step sufficiently small .also , the second order of approximation follows from the definition of and .moreover , in this case the approximation of is , so the total approximation is determined by matrix .the whole fd scheme in the proposition [ proppos2 ] in addition to also includes the second step which is same as in the proposition [ proppos ] .since this step has the second order approximation in spatial variables , the whole scheme also provides the second order .the prove of convergence of the whole scheme is similar to that in proposition [ proppos ] .
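the one-dimensional solves that dominate the cost of the splitting algorithm are banded (tridiagonal or pentadiagonal) systems along a single grid direction. a generic sketch of the O(N) tridiagonal (thomas) solve on which such steps rely is shown below; it is a textbook routine given only to make the linear-complexity claim concrete, not the implementation used for the experiments.

    import numpy as np

    def thomas_solve(a, b, c, d):
        """Solve a tridiagonal system in O(n).

        a -- sub-diagonal    (length n-1)
        b -- main diagonal   (length n)
        c -- super-diagonal  (length n-1)
        d -- right-hand side (length n)
        """
        n = len(b)
        cp = np.empty(n - 1)
        dp = np.empty(n)
        cp[0] = c[0] / b[0]
        dp[0] = d[0] / b[0]
        for i in range(1, n):
            denom = b[i] - a[i - 1] * cp[i - 1]
            if i < n - 1:
                cp[i] = c[i] / denom
            dp[i] = (d[i] - a[i - 1] * dp[i - 1]) / denom
        x = np.empty(n)
        x[-1] = dp[-1]
        for i in range(n - 2, -1, -1):
            x[i] = dp[i] - cp[i] * x[i + 1]
        return x

sweeping such a solver over every grid line in one direction costs a number of operations proportional to the total number of grid nodes, which is the per-step complexity quoted in the main text.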
pricing and hedging exotic options using local stochastic volatility models drew a serious attention within the last decade , and nowadays became almost a standard approach to this problem . in this paper we show how this framework could be extended by adding to the model stochastic interest rates and correlated jumps in all three components . we also propose a new fully implicit modification of the popular hundsdorfer and verwer and modified craig - sneyd finite - difference schemes which provides second order approximation in space and time , is unconditionally stable and preserves positivity of the solution , while still has a linear complexity in the number of grid nodes . pricing and hedging exotic options using local stochastic volatility ( lsv ) models drew a serious attention within the last decade , and nowadays became almost a standard approach to this problem . for the detailed introduction into the lsv among multiple available references we mention a recent comprehensive literature overview in . note , that the same model or its flavors appear in the literature under different names , such as stochastic local volatility model , universal volatility model of , unspanned stochastic local volatility model ( uslv ) of , etc . despite lsv has a lot of attractive features allowing simultaneous pricing and calibration of both vanilla and exotic options , it was observed that in many situations , e.g. , for short maturities , jumps in both the spot price and the instantaneous variance need to be taken into account to get a better replication of the market data on equity or fx derivatives . this approach was pioneered by who extended the heston model by introducing jumps with finite activity into the spot price ( a jump - diffusion model ) . then further extended this approach by considering local stochastic volatility to be incorporated into the jump - diffusion model ( for the extension to an arbitrary lvy model , see , e.g. , ) . later investigated exponential and discrete jumps in both the underlying spot price and the instantaneous variance , and concluded that infrequent negative jumps in the latter are necessary to fit the market data on equity options . more flexible models which consider the vol - of - vol power to be parameter of calibration , , might not need jumps in . see also and the discussion therein . ] . in a similar approach was proposed to use general jump - diffusion equations for modeling both and . note , that in the literature jump - diffusion models for both and are also known under the name svcj ( stochastic volatility with contemporaneous jumps ) . these models as applied to pricing american options were intensively studied in , for basket options in . another way to extend the lsv model is to assume that the short interest rates could be stochastic . under this approach jumps are ignored , but instead a system of three stochastic differential equations ( sde ) with drifts and correlated diffusions is considered , see and references therein . as we have already mentioned , accounting for jumps could be important to calibrate the lsv model to the market data . and making the interest rate stochastic does nt violate this conclusion . moreover , jumps in the interest rate itself could be important . for instance , in a stochastic volatility model with jumps in both rates and volatility was calibrated to the daily data for futures interest rates in four major currencies which provided a better fit for the empirical distributions . 
also the results in obtained using treasury bill rates find evidence for the presence of jumps which play an important statistical role . also in was found that jumps generally have a minor impact on yields , but they are important for pricing interest rate options . in fx world there exist some variations of the discussed models . for instance , in foreign and domestic interest rates are stochastic with no jumps while the exchange rate is modeled by jump - diffusion . in both domestic and foreign rates were represented as a lvy process with the diffusion component using a time - change approach . the diffusion components could be correlated in contrast to the jump components . in the bond market , as shown in , the information surprises result in discontinuous interest rates . in that paper a class of poisson gaussian models of the fed funds rate was developed to capture the surprise effects . it was shown that these models offer a good statistical description of a short rate behavior , and are useful in understanding many empirical phenomena . jump ( poisson ) processes capture empirical features of the data which would not be captured by gaussian models . also there is strong evidence that the existing gaussian models would be well - enhanced by jump and arch - type processes . overall , it would be desirable to have a model where the lsv framework could be combined with stochastic rates and jumps in all three stochastic drivers . we also want to treat these jumps as general lvy processes , so not limiting us by only the jump - diffusion models . in addition , we consider brownian components to be correlated as well as the jumps in all stochastic drivers to be correlated , while the diffusion and jumps remain uncorrelated . finally , since such a model is hardly analytically tractable when parameters of the model are time - dependent ( which is usually helpful to better calibrate the model to a set of instruments with different maturities , or to a term - structure of some instrument ) , we need an efficient numerical method for pricing and calibration . for this purpose in this paper we propose to exploit our approach first elaborated on in for modeling credit derivatives . in particular , in the former paper we considered a set of banks with mutual interbank liabilities whose assets are driven by correlated lvy processes . for every asset , the jumps were represented as a weighted sum of the common and idiosyncratic parts . both parts could be simulated by an arbitrary lvy model which is an extension of the previous approaches where either the discrete or exponential jumps were considered , or a lvy copula approach was utilized . we provided a novel efficient ( linear complexity in each dimension ) numerical ( splitting ) algorithm for solving the corresponding 2d and 3d jump - diffusion equations , and proved its convergence and second order of accuracy in both space and time . test examples were given for the kou model , while the approach is in no way limited by this model . in this paper we demonstrate how a similar approach can be used together with the metzler model introduced by . it is built based on the meixner distribution which belongs to the class of the infinitely divisible distributions . therefore , it gives rise to a lvy process - the meixner process . the meixner process is flexible and analytically tractable , i.e. its pdf and cf are known in closed form ( in more detail see , e.g. , and references therein ) . 
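for reference, in one common parameterization (that of schoutens) the meixner(alpha, beta, delta, m) distribution has the closed-form characteristic function

\phi(u) = \left( \frac{\cos(\beta/2)}{\cosh\bigl((\alpha u - i\beta)/2\bigr)} \right)^{2\delta} e^{\, i u m}, \qquad \alpha > 0, \ -\pi < \beta < \pi, \ \delta > 0 ,

which is infinitely divisible and therefore generates the corresponding lévy (meixner) process; the parameter names and any drift convention used later in the paper may differ from this sketch.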
the meixner model is known to be rich and capable to be calibrated to the market data . again , this model is chosen only as an example , because , in general , the approach in use is rather universal . we also propose a new fully implicit modification of the popular hundsdorfer and verwer and modified craig - sneyd finite - difference schemes which provides second order approximation in space and time , is unconditionally stable and preserves positivity of the solution , while still has a linear complexity in the number of grid nodes . this modification allows elimination of first few rannacher steps as this is usually done in the literature to provide a better stability ( see survey , e.g. , in ) , and provides much better stability of the whole scheme which is important when solving multidimensional problems . the rest of the paper is organized as follows . in the next section we describe the model . section [ solsect ] consists of two subsections . the first one introduces the new splitting method , which treats mixed derivatives terms implicitly , thus providing a much better stability . the second subsection describes how to deal with jumps if one uses the meixner model . however , by no means this approach is restricted just by this model as , e.g. , in we used the kou jump models using the same treatment of the jump terms . so here the meixner model is taken as another example . section [ numexamp ] presents the results of some numerical experiments where prices of european vanilla and barrier options were computed using these model and numerical method . the final section concludes .
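as a preview of the model section, a hedged sketch of the three-factor dynamics (the drift of the spot written here, and the time dependence of the variance drift, are assumptions of this sketch; the CEV exponents a and b and the jump drivers follow the notation used below) reads

\begin{aligned}
dS_t &= (r_t - q)\, S_t\, dt + \sigma(S_t,t)\sqrt{v_t}\, S_t\, dW_S + S_t\, dL_{S_t,t} ,\\
dv_t &= \kappa_v(t)\bigl(\theta_v(t) - v_t\bigr)\, dt + \xi_v\, v_t^{a}\, dW_v + v_t\, dL_{v_t,t} ,\\
dr_t &= \kappa_r(t)\bigl(\theta_r(t) - r_t\bigr)\, dt + \xi_r\, r_t^{b}\, dW_r + r_t\, dL_{r_t,t} ,
\end{aligned}

with correlated brownian motions W_S, W_v, W_r and correlated lévy jump parts dL, while diffusions and jumps remain mutually uncorrelated.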
the research on the next generation of wireless networks is proceeding at an intense pace , both in industry and in academia . focusing on the physical layer, there is wide agreement that fifth - generation ( 5 g ) wireless networks will be based , among the others , on three main innovations with respect to legacy fourth - generation systems , and in particular ( a ) the use of large scale antenna arrays , a.k.a .massive mimo ; ( b ) the use of small - size cells in areas with very large data request ; and ( c ) the use of carrier frequencies larger than 10ghz . indeed , focusing on ( c ) , the use of the so - called millimeter wave ( mmwave ) frequencies has been proposed as a strong candidate approach to achieve the spectral efficiency growth required by 5 g wireless networks , resorting to the use of currently unused frequency bands in the range between and . in particular , the e - band between and provides of free spectrum which could be exploited to operate 5 g networks .it is worth underlining that mmwave are not intended to replace the use of lower carrier frequencies traditionally used for cellular communications , but rather as additional frequencies that can be used in densely crowded areas for short - range communications . until now, the use of mmwave for cellular communications has been neglected due to the higher atmospheric absorption that they suffer compared to other frequency bands and to the larger values of the free - space path - loss .however , recent measurements suggest that mmwave attenuation is only slightly worse than in other bands , as far as propagation in dense urban environments and over short distances ( up to about 100 meters ) is concerned . additionally , since antennas at these wavelengths are very small , arrays with several elements can be packed in small volumes , in principle also on mobile devices , thus removing the traditional constraint that only few antennas can be placed on a smartphone and benefiting of an array gain at both edges of the communication link with respect to traditional cellular links .another peculiar feature of cellular communications at mmwave that has been found is that these are mainly noise - limited and not interference - limited systems , and this will simplify the implementation of interference - management and resource - scheduling policies .based on this encouraging premises , a large body of work has been recently carried out on the use of mmwave for cellular communications .one of the key questions about the use of mmwave is about the type of modulation that will be used at these frequencies . 
indeed , while it is not even sure that 5 g systems will use orthogonal frequency division multiplexing ( ofdm ) modulation at classical cellular frequencies , there are several reasons that push for 5 g networks operating a single - carrier modulation ( scm ) at mmwave .first of all , the propagation attenuation of mmwave make them a viable technology only for small - cell , dense networks , where few users will be associated to any given base station , thus implying that the efficient frequency - multiplexing features of ofdm may not be really needed .additionally , the large bandwidth would cause low ofdm symbol duration , which , coupled with small propagation delays , means that the users may be multiplexed in the time domain as efficiently as in the frequency domain .finally , mmwave will be operated together with massive antenna arrays to overcome propagation attenuation .this makes digital beamforming unfeasible , since the energy required for digital - to - analog and analog - to - digital conversion would be huge .thus , each user will have an own radio - frequency beamforming , which requires users to be separated in time rather than frequency .in light of these considerations , scm formats are being seriously considered for mmwave systems . for efficient removal of the intersymbol interference induced by the frequency - selective nature of the channel , the use of scm coupled witha cyclic prefix has been proposed , so that fft - based processing might be performed at the receiver in , the null cyclic prefix single carrier ( ncp - sc ) scheme has been proposed for mmwave .the concept is to transmit a single - carrier signal , in which the usual cyclic prefix used by ofdm is replaced by nulls appended at the end of each transmit symbol .the block scheme is reported in fig .[ fig : cp - scm ] .this paper is concerned with the evaluation of the achievable spectral efficiency ( ase ) of scm schemes operating over mimo links at mmwave frequencies .we consider two possible transceiver architectures : ( a ) scm with linear minimum mean square error ( lmmse ) equalization in the time domain for intersymbol interference removal and symbol - by - symbol detection ; and ( b ) scm with cyclic prefix and fft - based processing and lmmse equalization in the frequency domain at the receiver . 
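the frequency-domain receiver in architecture (b) reduces, per antenna stream, to per-bin MMSE weighting between an FFT and an IFFT. a minimal single-antenna sketch is given below; it assumes unit-energy symbols and that the cyclic prefix (or the null prefix acting as one) has already been stripped, whereas the MIMO receiver considered in the paper applies a matrix MMSE filter on every bin instead of the scalar weight used here.

    import numpy as np

    def sc_fde_lmmse(y, h, noise_var):
        """Single-carrier frequency-domain LMMSE equalization (SISO sketch).

        y         -- received block of length N, prefix already removed
        h         -- channel impulse response (length <= N)
        noise_var -- noise variance per sample (unit-energy symbols assumed)
        Returns the time-domain soft symbol estimates.
        """
        N = len(y)
        H = np.fft.fft(h, N)                              # channel frequency response
        W = np.conj(H) / (np.abs(H) ** 2 + noise_var)     # per-bin MMSE weights
        return np.fft.ifft(W * np.fft.fft(y))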
by adopting , inspired by , a modified statistical mimo channel model for mmwave frequencies , and using the simulation - based technique for computing information - rates reported in , we thus provide a preliminary assessment of the achievable spectral efficiency ( ase ) that can be reasonably expected in a scenario representative of a 5 g environment .our results show that , for distances less than 100 meters , and with a transmit power around 0dbw , mmwave links exhibit good performance and may provide good spectral efficiency ; for larger distances instead , either large values of the transmit power or a large number of antennas must be employed to overcome the distance - dependent increased attenuation .the rest of this paper is organized as follows .next section contains the system model , with details on the two considered transceiver architectures and on the pulse shapes considered in the paper .section iii explains the used technique for the evaluation of the ase , while extensive numerical results are illustrated and discussed in section iv .finally , section v contains concluding remarks .we consider a transmitter - receiver pair that may be representative of either the uplink or the downlink of a cellular system .we denote by and the number of transmit and receive antennas , respectively .denote by a column vector containing the data - symbols ( drawn from a qam constellation with average energy ) to be transmitted : ^t \ ; , \ ] ] with denoting transpose .we assume that , where is an integer and is the number of information symbols that are simultaneously transmitted by the transmit antennas in each symbol interval .the propagation channel is modeled in discrete - time as a matrix - valued finite - impulse - response ( fir ) filter ; in particular , we denote by the sequence , of length , of the -dimensional matrices describing the channel .the discrete - time versions of the impulse response of the transmit and receive shaping filters are denoted as and , respectively ; these filters are assumed to be both of length .we focus on two different transceiver architectures , one that operates equalization in the time - domain and one that works in the frequency domain through the use of a cyclic prefix .we refer to the discrete - time block - scheme reported in fig .[ fig : scenario1 ] .the qam symbols in the vector are fed to a serial - to - parallel conversion block that splits them in distinct -dimensional vectors .these vectors are pre - coded using the the -dimensional precoding matrix ; we thus obtain the -dimensional vectors the vectors are fed to a bank of identical shaping filters , converted to rf and transmitted . at the receiver , after baseband - conversion , the received signals are passed through a bank of filters matched to the ones used for transmission and sampled at symbol - rate .we thus obtain the -dimensional vectors , which are passed through a post - coding matrix , that we denote by , of dimension . denoting by the matrix - valued fir filter representing the composite channel impulse response ( i.e. , the convolution of the transmit filter , actual matrix - valued channel and receive filter ) , it is easy to show that the generic -dimensional vector at the output of the post - coding matrix , say , is written as with denoting conjugate transpose . in ( [ eq : received_signal_1 ] ), is the length of the matrix - valued composite channel impulse response , while is the additive gaussian - distributed thermal noise at the output of the reception filter . 
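written with generic placeholder symbols for the quantities just defined, the post-coded observable of eq. ( [ eq : received_signal_1 ] ) has the structure

\mathbf{r}_k = \mathbf{D}^{H}\Bigl( \sum_{l=0}^{P-1} \widetilde{\mathbf{H}}_l\, \mathbf{Q}\, \mathbf{a}_{k-l} + \mathbf{w}_k \Bigr) ,

where a_k is the k-th data-symbol vector, Q and D are the pre- and post-coding matrices, \widetilde{H}_l are the taps of the composite (transmit filter, channel, receive filter) impulse response of length P, and w_k is the filtered thermal noise; the letters are illustrative and need not coincide with the notation of the original equation.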
regarding the choice of the pre - coding and post - coding matrices and , letting , with denoting the frobenius norm , we assume here that contains on its columns the left eigenvectors of the matrix corresponding to the largest eigenvalues , and that the matrix contains on its columns the corresponding right eigenvectors . in order to combat the intersymbol interference ,an lmmse equalizer is used . in particular , to obtain a soft estimate of the data vector , the observables are stacked into a single -dimensional vector , that we denote by , and processed as follows : where is a ] .the reported results are to be considered as an ideal benchmark for the ase since we are neglecting the interference , and we are considering digital pre - coding and post - coding , whereas due to hardware constraints mmwave systems will likely operate with hybrid analog / digital beamforming strategies .we focus here on the performance of the tde transceiver , since our tests showed that the fde structure is worse than the tde scheme . fig.s[ fig : fig5 ] , [ fig : fig7 ] and [ fig : fig9 ] report the asemhz . ] versus the distance between the transmitter and the receiver , assuming that the transmit power is , while fig .[ fig : fig6 ] reports the ase versus the transmit power ( varying in the range $]dbw ) , assuming a link length m. inspecting the figures , the following remarks are in order : * results , in general , improve for increasing transmit power , for decreasing distance between transmitter and receiver and for increasing values of the number of transmit and receive antennas . *in particular , good performance can be attained for distances up to 100 - 200 m , whereas for we have a steep degradation of the ase . in this region , all the advantages given by increasing the modulation cardinality or the number of antennas are essentially lost or reduced at very small values .of course , this performance degradation may be compensated by increasing the transmit power . * regarding the multiplexing index , it is interesting to note from fig .[ fig : fig7 ] that for short distances the system benefits from a large multiplexing order , while , for large distances ( which essentialy corresponds to low signal - to - noise ratio ) , the ase corresponding to is larger than that corresponding to the choise .* for a reference distance of 30 m ( which will be a typical one in small - cell 5 g deployments for densely crowded areas ) , a trasnmit power around 0dbw is enough to grant good performance and to benefit from the advantages of increased modulation cardinality , size of the antenna array , and multiplexing order .dbw ; ; verying . ]this paper has provided a preliminary assessment of the ase for a mimo link operating at mmwave frequencies with scm .two different transceiver architectures have been considered , one with time - domain equalization and one with cyclic prefix plus frequency domain equalization .results have been shown with reference to the tde structure , which was found to outperform the fde structure . 
for distances up to 100 m and for values of the transmit power around 0dbwa good performance level can be attained , with ase values up to 1.8 bit / s / hz , which , for a bandwidth of 500mhz , leads to a bit - rate of up to almost 1gbit / s .the present study can be generalized and strengthened in many directions .first of all , the impact of hybrid analog / digital beamforming should be evaluated ; moreover , the considered analysis might be applied to a point - to - multipoint link , wherein the presence of multiple antennas at the transmitter is used for simultaneous communication with distinct users ( the so - called multiuser mimo technique ) . additionally , since ,as already discussed , the reduced wavelength of mmwave permits installing arrays with many antennas in small volumes , an analysis , possibly through asymptotic analytic considerations , of the very large number of antennas regime could also be made .last , but not least , energy - efficiency considerations should also be made : both the ase and the transceiver power consumption increase for increasing transmit power and increasing size of the antenna arrays ; if we focus on the ratio between the ase and the transceiver power consumption , namely on the system energy efficiency , optimal trade - off values for the transmit power and size of the antenna arrays should be found .these topics are certainly worth future investigation .v. jungnickel , k. manolakis , w. zirwas , b. panzner , v. braun , m. lossow , m. sternad , r. apelfrojd , and t. svensson , `` the role of small cells , coordinated multipoint , and massive mimo in 5 g , '' _ ieee commun . mag ._ , vol .52 , no . 5 , pp . 4451 , may 2014 .t. s. rappaport , s. sun , r. mayzus , h. zhao , y. azar , k. wang , g. n. wong , j. k. schulz , m. samimi , and f. gutierrez , `` millimeter wave mobile communications for 5 g cellular : it will work ! '' _ieee access _ ,vol . 1 , pp . 335349 , may 2013 .a. ghosh , t. a. thomas , m. cudak , r. ratasuk , p. moorut , f. w. vook , t. rappaport , j. g. r maccartney , s. sun , and s. nie , `` millimeter wave enhanced local area systems : a high data rate approach for future wireless networks , '' _ieee j. select .areas commun ._ , vol . 32 , no . 6 , pp .1152 1163 , jun . 2014 .t. s. rappaport , f. gutierrez , e. ben - dor , j. murdock , y. qiao , and j. i. tamir , `` broadband millimeter - wave propagation measurements and models using adaptive - beam antennas for outdoor urban cellular communications , '' _ ieee trans .antennas and prop ._ , vol .61 , no . 4 , pp . 18501859 , apr . 2013 .p. banelli , s. buzzi , g. colavolpe , a. modenini , f. rusek , and a. ugolini , `` modulation formats and waveforms for 5 g networks : who will be the heir of ofdm ? '' _ ieee signal processing mag ._ , vol . 31 , no . 6 , pp .8093 , nov .2014 .o. el ayach , s. rajagopal , s. abu - surra , z. pi , and r. heath , `` spatially sparse precoding in millimeter wave mimo systems , '' _ ieee trans . on wireless commun ._ , vol .13 , no . 3 , pp .14991513 , march 2014 .a. alkhateeb and r. w. heath , jr , `` frequency selective hybrid precoding for limited feedback millimeter wave systems , '' _ arxiv e - prints _ , oct . 2015 .[ online ] .available : http://arxiv.org/abs/1510.00609 d. m. arnold , h .- a .loeliger , p. o. vontobel , a. kavi , and w. zeng , `` simulation - based computation of information rates for channels with memory , '' _ ieee trans .theory _ , vol .52 , no . 8 , pp . 34983508 , aug .2006 .a. viholainen , m. bellanger , and m. 
huchard , `` prototype filter and structure optimization , '' d3.1 of physical layer for dynamic access and cognitive radio ( phydyas ) , fp7-ict future networks , tech .jan . 2008 .s. singh , m. kulkarni , a. ghosh , and j. andrews , `` tractable model for rate in self - backhauled millimeter wave cellular networks , '' _ieee j. select .areas commun . _ ,33 , no .21962211 , oct 2015 .
Future wireless networks will extensively rely upon bandwidths centered on carrier frequencies larger than 10 GHz. Indeed, recent research has shown that, despite the large path loss, millimeter wave (mmWave) frequencies can be successfully exploited to transmit very large data rates over short distances to slowly moving users. Due to hardware complexity and cost constraints, single-carrier modulation schemes, as opposed to the popular multi-carrier schemes, are being considered for use at mmWave frequencies. This paper presents preliminary studies on the achievable spectral efficiency of a wireless MIMO link operating at mmWave in a typical 5G scenario. Two different single-carrier modem schemes are considered, i.e., a traditional modulation scheme with linear equalization at the receiver, and a single-carrier modulation with cyclic prefix, frequency-domain equalization, and FFT-based processing at the receiver. Our results show that the former achieves a larger spectral efficiency than the latter. Results also confirm that the spectral efficiency increases with the dimension of the antenna array, and that performance degrades severely when the link length exceeds 100 meters and the transmit power falls below 0 dBW. Nonetheless, mmWave frequencies appear to be very well suited for providing very large data rates over short distances.
processing large - scale real - world graphs has become significantly important for mining valuable information and learning knowledge in many areas , such as data analytics , web search , and recommendation systems .the most frequently used algorithmic kernels , including path exploration ( e.g. traversal , shortest paths computation ) and topology - based iteration ( e.g. page rank , clustering ) , are driven by graph structures .parallelization of these algorithms is intrinsically different from traditional scientific computation that appeals to a data - parallel model , and has emerged as so - called _ graph - parallel _ problems . [cols="^,^,^,^",options="header " , ] [ tab_comp_all ] in recent years we have witnessed an explosive growth of graph data . for example , the world wide web graph currently has at least 15 billion pages and one trillion urls . also , the social network of facebook has over 700 million users and 140 billion social links . even to store only the topology of such a graph ,the volume is beyond terabytes ( tb ) , let alone rich metadata on vertices and edges .the efficient processing of these graphs , even with linear algorithmic complexity , has scaled out capacity of any single commodity machine .thus , it is not surprising that distributed computing has been a popular solution to graph - parallel problems .however , since the scale - free nature of real - world graphs , we are facing the following two major challenges to develop high performance graph - parallel algorithms on distributed memory systems .* parallelism expressing . * _ graph - parallel algorithms often exhibit random data access , very little work per vertex and a changing degree of parallelism over the course of execution , making it hard to express parallelism efficiently_. note the facts that graph - parallel computation is data - driven or dictated by the graph topology , and real - world graphs are unstructured and highly irregular ( known as scale - free , e.g. low - diameter , power - law degree distribution ) .thus , on one hand , from the view of programming , graph - parallel computation ca nt fit well in traditional parallelization methods based on decomposing either computational structure or data structure .for example , mapreduce , the widely - used data - parallel model , ca nt efficiently process graphs due to the lack of support to random data access and iterative execution . on the other hand , from the view of performance ,the lack of locality makes graph - parallel procedures memory - bound on the shared memory system and network - bound on the distributed system .meanwhile , current computer architecture is evolving to deeper memory hierarchy and more processing cores , seeking more locality and parallelism in programs . considering both ease of programming and affinity to the system architecture, it is challenging to express parallelism efficiently for graph - parallel algorithms .* graph data layout . * _ real - world graphs are hard to partition and represent in a distributed data model . 
_the difficulty is primarily from graph s scale - free nature , especially the power - law degree distribution .the power - law property implies that a small fraction of vertices connect most of edges .for example , in a typical power - law graph with degree distribution where , 20% of vertices connect more than 80% of edges .as identified by previous work , this skew of edge distribution makes a balanced partitioning with low edge - cut difficult and often impossible for large - scale graphs in practice .reference thoroughly investigated streaming partitioning methods with all popular heuristics on various datasets .however , their released results show that for scale - free graphs the edge - cut rate is very high , only slightly lower than a random hashing method . as a consequence, graph distribution leads to high communication overhead and memory consumption . to address the above challenges ,several distributed graph - parallel frameworks have been developed .basically , a framework provides a specific compution abstraction with a functional api to express graph - parallel algorithms , leaving details of parallelization and data transmission to the underlying runtime system .representative frameworks include pregel , graphlab , powergraph and matrix - based packages .these efforts target three aspects of graph - parallel applications : _ graph data model , computation model _ and _ communication model_. table [ tab_comp_all ] summarizes key technical features of the three frameworks ( pregel , graphlab , powergraph ) , as well as gre to be presented in this paper .pregel acts as a milestone of graph - parallel computing .it firstly introduced the _ vertex - centric _ approach that has been commonly adopted by most of later counterparts including ours .this idea is so - called _ think like a vertex _ philosophy , where all active vertices perform computation independently and interact with their neighbors typically along edges .pregel organizes high - level computation in bulk synchronous parallel ( bsp ) super - steps and adopts message passing model for inter - vertex communication .graphlab supports asynchronous execution any more , but uses distributed shared memory such that the vertex can directly operates its edges and neighbors . for both pregel and graphlab, the vertex computation follows a common pattern where an active vertex 1 ) collects data from its in - edges , 2 ) updates its states , 3 ) puts data on its out - edges and signals the downwind neighbors .moreover , powergraph , the descent of graphlab , summarized the above vertex procedure as a gather - apply - scatter ( gas ) paradigm , and explicitly decomposes it into three split phases , which exposes potential edge - level parallelism .however , the gas abstraction inherently handles each edge in a two - sided way that requires two vertices involvement ( scatter and gather respectively ) , which leads to intermediate data storage and extra operations . instead , gre proposes a new scatter - combine computation model , where the previous two - sided edge operations are reduced to one active message .besides , compared to pure message passing or distributed shared memory model , active message has better affinity to both local multi - core processors and remote network communication .with respect to distributed graph data model , most of previous frameworks including pregel and graphlab use a simple hash - mapping strategy where each vertex and its edges are evenly assigned to a random partition . 
while being fast to load and distribute graph data , this method leads to a significantly high edge - cut rate .recently , powergraph introduces _ vertex - cut _ in which vertex rather than edge spans multiple partitions . _vertex - cut _ can partition and represent scale - free graphs with significantly less resulting communication .however , it requires to maintain strict data consistency between master vertex and its mirrors . instead , gre proposes a new agent - graph model , where data on agent is temporal so that the consistency issue is avoided . in a summary, gre inherits the _ vertex - centric _ programming model , and specifically makes the following major contributions : scatter - combine , a new graph - parallel computation model .it is based on active message and fine - grained edge - level parallelism .sec : comp_model ] ) agent - graph , a novel distributed directed graph model .it extends the original graph with _agent _ vertices . specifically , it has no more and typically much less communication than powergraph s _ vertex - cut_. ( sec .[ sec : graph_model ] ) implementation of an efficient runtime system for gre abstractions .it incorporates several key components of data storage , one - sided communication and fine - grained data synchronization .( sec . [ sec : runtime ] ) a comprehensive evaluation of three benchmark programs ( pagerank , sssp , cc ) and graph partitioning on real - world graphs , demonstrating gre s excellent performance and scalability . compared to powergraph , gre s performance is 2.5.0 times better on 8 machines .specifically , gre s pagerank takes seconds per iteration on 192 cores ( 16 machines ) while powergraph reported seconds on 512 cores ( 64 machines) .gre can process a large graph of one billion vertices on our machine with 768 gb memory while powergraph can not make it .[ sec : evaluation ] )in this section , we formalize the procedure of vertex - centric graph computation and then present motivation of this work . graph topology can be represented as , where is the set of vertices and the set of edges .associating with metadata on vertices and edges , we have a property graph , where and are properties of vertices and edges respectively .property graph is powerful enough to support all known graph algorithms . in this paper ,all edges are considered as directed ( one undirected edge can be transformed into two directed edges ) . for simplicity , we define the following operations : : return the set of a vertex s out - edges ; : return the set of a vertex s in - edges ; : return the source vertex of an edge ; : return the target vertex of an edge ; : filter a set of vertices or edges with rule , and return a subset . as an abstraction , the computation on some vertex can be described as : for example , an instance of pagerank computation on can be described as : in the vertex - centric approach , calculation of equation .[ e1 ] is encoded as so - called _ vertex - program_. each vertex executes its _ vertex - program _ and can communicate with its neighbors . depending on pre - defined synchronization policies ,active vertices are scheduled to run in parallel .a common pattern followed by _ vertex - program _ is gather - apply - scatter ( gas) ( or signal / collect ) .it translates equation .[ e1 ] to the following three conceptual steps : , as gas requires that the vertex computation can be factored into a generalized sum of products . 
] * g*ather .vertex collects information from in - edges and upwind neighbor vertices by a generalized sum , resulting in : * a*pply .vertex recomputes and updates its state : * s*catter .vertex uses its newest state to update the out - edges states and signals its downwind neighbors : in steps of gather and scatter , the vertex communicates with its upwind and downwind neighbors respectively . conventionally , there are two methods to do the communication , that message passing ( pregel and other similar systems like spark , trinity ) and distributed shared memory ( graphlab and its descendant powergraph ) . in distributed shared memory , remote vertices are replicated in local machine as _ ghost_s or _mirror_s , and data consistency among multiple replications is maintained implicitly .although gas model provides a clear abstraction to reason about parallel program semantics , it may sacrifice storage and performance . note that gas conceptually split the information transferring on an edge into two phasesexecuted by two vertices , i.e. the source vertex changes the edge state in its scatter phase and then the target vertex reads the changed state in its gather phase .the above asynchrony of operations on the edge requires storage of intermediate edge states and leads to extra operations that hurt performance . in pregelthis happens across two super - steps and leads to the large storage of intermediate messages , while in graphlab it requires not only out - edge storage but also redundant in - edge storage and polling on all in - edges .however , we identify that in a message model the separation of gather and scatter is not necessary , as long as the operator in gather is commutative . to illustrate this point , again we take pagerank as an example and rewrite its vertex computation in equation . [ pr-1 ] as follow : equation .[ pr-2]a is the message sent to by vertex . in equation .[ pr-2]b , the operation is commutative , which means the order of computing does no matter .once a message comes , it can be immediately computed . in practice ,given the fact that _ graph - parallel computation is essentially driven by data flow on edges _ , the is naturally commutative . -6pt [ gre - arch ] from the above analysis , we can induce a dataflow execution model that leverages active message approach .an active message is a message containing both data and encoded operations .now the gather phase then can be broken into a series of discrete asynchronous active messages .an active message can be scheduled to run once it reaches the destination vertex .specifically , for a message on the edge , when and are owned by the same machine , the message can be computed in - place during s scatter phase .note that multiple active messages may operate on the same destination vertex simultaneously , leading to data race .as detailed in later sections , we shall handle this issue with a vertex - grained lock mechanism . basically , the dataflow model has two significant advantages .first , it transforms the two - sided communication to one - sided , bypassing the intermediate message storage and signaling .this optimization dramatically reduces the overhead in both shared memory and distributed environment .second , it enables fine - grained edge - level parallelism that takes advantage of multi - core architecture .therefore , we propose scatter - combine ( sec.[sec : comp_model ] ) , an alternative graph - parallel abstraction that is more performance - aware . 
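to make the one-sided dataflow idea concrete, here is a minimal c++ sketch of an active message whose commutative combine is applied immediately on arrival, so no per-edge intermediate value has to be stored for a later gather phase. the type and function names below are illustrative assumptions for exposition, not gre's actual interfaces.

```cpp
// minimal sketch (not gre's real api): a commutative combine applied as soon as
// an active message reaches its destination, instead of buffering a per-edge value.
#include <cstdint>
#include <vector>

struct Vertex {
    double sum = 0.0;              // running generalized sum (combine target)
    double state = 1.0;            // e.g. current pagerank value
    std::uint32_t out_degree = 1;
};

// an active message: destination vertex id plus the data to combine there
struct ActiveMessage {
    std::uint32_t dst;
    double data;
};

// combine is commutative (and here associative): the order of arrival does not
// matter, so each message can be applied in place the moment it is delivered.
inline void combine(Vertex& v, double data) { v.sum += data; }

// scatter: an active vertex emits one message per out-edge; when the target is
// owned by the same machine the message could be combined in place immediately.
void scatter(const Vertex& u, const std::vector<std::uint32_t>& out_edges,
             std::vector<ActiveMessage>& outbox) {
    const double contrib = u.state / u.out_degree;  // pagerank-style edge message
    for (std::uint32_t dst : out_edges) outbox.push_back({dst, contrib});
}
```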
besides, we notice that in equation .[ pr-2]b , the is also associative .in fact , for most graph - parallel problems , not only commutativity but associativity is also satisfied by the generalized sum in equation .[ gather ] .this fact has been widely realized as the basis of pregel s message _ combiner _ and powergraph s _ vertex - cut _ mechanism . based on the associativity and commutativity of , we develop agent - graph ( sec.[sec : graph_model ] ) , a novel distributed graph data model that can effectively partition and represent scale - free graphs in the distributed environment .agent - graph is closely coupled with active message approach .this evolution motivates the development of * g*raph * r*untime * e*ngine ( gre ) .in this section , we give an overview of gre system , as shown in fig .[ gre - arch ] .gre is implemented in c++ templates .it consists of graph loader , abstraction layer and underlying runtime layer .the essentials of gre are the scatter - combine computation model and distributed agent - graph data model .as noted in the previous sections , gre inherits the _ vertex - centric _ programming model .the programming interface is a set of simple but powerful functional primitives . to define a graph - parallel algorithm, users only need to define the vertex computation with these primitives . in fig .[ gre - arch ] , user - defined pagerank program is presented as an example , which is as simple as its mathematic form in equation .[ pr-2 ] . the user - defined program , as template parameters , is then integrated into gre framework and linked with runtime layer .internally , gre adopts distributed agent - graph to represent graph topology , and column - oriented storage to store vertex / edge property .the graph loader component loads and partitions graphs into internal representation .the runtime layer provides infrastructure supporting gre s abstractions . with thread pool ( or thread groups ) and fine - grained data synchronization , gre can effectively exploit the massive edge - level parallelism expressed in scatter - combine model . besides , gre adopts one - sided communication to support active message and override the network communication overhead with useful computation .gre models the graph - parallel computation in scatter- combine abstraction . as noted in section [ sub : motivation ] , it realizes the fact that for a broad set of graph algorithms , vertex - centric computation can be factored into independent edge - level functions , and thus transforms the bulk of vertex computation into a series of active messages . in scatter - combine , each active vertex is scheduled to compute independently and interacts with others by active messages .like in pregel and graphlab , the major work of gre programming is to define vertex - computation .scatter - combine provides four primitives : _ scatter _ , _ combine _ , _ apply _ and _ assert_to_halt_. their abstract semantics are specified in alg .[ alg : primitives ] , and run in the context of alg .[ alg : logics ] . to implement a specific algorithm, users only need to instantiate these primitives .each vertex alternately carries computation following the logics in alg .[ alg : logics ] , where the procedure is divided into two phases , scatter - combine and apply .each vertex implicitly maintains two state variables , one for scatter and the other for apply , to decide whether to participate in the computation of the relevant phase . 
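the following c++ sketch shows how a user-defined vertex program might instantiate the four primitives for pagerank; the member names and signatures are assumptions chosen for readability and are not gre's real template api. the essential property is that combine is commutative, so active messages may be applied in any order of arrival.

```cpp
// illustrative sketch only: a vertex program built from the four primitives.
#include <cstdint>

struct PageRankVertex {
    double rank = 1.0;             // vertex_data
    double sum  = 0.0;             // combine_data: accumulated generalized sum
    std::uint32_t out_degree = 1;

    // scatter: the value carried by an active message sent along each out-edge
    double scatter() const { return rank / out_degree; }

    // combine: how an incoming active message operates on the destination vertex
    void combine(double msg) { sum += msg; }

    // apply: recompute the vertex state from the accumulated sum (damping 0.85)
    void apply() { rank = 0.15 + 0.85 * sum; sum = 0.0; }

    // assert_to_halt: iterative algorithms such as pagerank keep the vertex active,
    // traversal-based algorithms would deactivate it here
    bool assert_to_halt() const { return false; }
};
```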
during scatter - combine phase , a vertex , if being active to scatter , modifies its outgoing edges s states and scatters data to its downwind neighbors by active messages . as emphasized before, active message is the essential of scatter - combine computation .it is edge - grained and defined by primitives _ scatter _ and _ combine_. as shown in alg .[ alg : primitives ] , the _scatter _ primitive generates an active message that carries a _ combine _ operation , while the _ combine _ primitive defines how the message operates on its destination vertex . besides , _ combine _ is able to activate the destination vertex for a future _ apply_. note that unlike previous message passing in pregel , active message is one - sided .that is , when the vertex sends a message to , it directly operates on without s involvement .conceptually , the _ scatter _does nt necessarily wait its paired _ combine _ to return .an active vertex may execute _ scatter _ on all or a subset of its out - edges .after finishes all desired _ scatter _operations , the vertex calls user - defined _ assert_to_halt _ to deactivates itself optionally . generally , in traversal - based algorithms _assert_to_halt _ is defined to deactivate for scatter , and for iterative algorithms such as pagerank it is defined to keep the vertex active . during apply phase , a vertex , if being active to apply , executes an _ apply _ and then sets its apply - phase state inactive . inthe _ apply _ procedure , the vertex recomputes its state ( _ v.state _ ) with intermediate results ( _ v.sum _ ) accumulated in the previous phase , and optionally activates itself to participate in the scatter - combine phase of next round .gre adopts bulk synchronous parallel execution .like pregel , gre divides the whole computation into a sequence of conceptual super - steps . in each super - step, gre executes the above two phases in order . during each phase , all active vertices run in parallel .the computation is launched by initializing any subset of vertices as source that are activated for scatter . during the course of whole execution , more verticesare either activated to compute or deactivated . at the end of a super - step ,if no vertex is active for further scatter - combine , the whole computation terminates .note that there are essential differences between gas and scatter - combine .[ fig : compare - models ] illustrates their execution flow on an vertex in bulk synchronous parallel execution .we assume s state is computed in super step .in gas model , at super - step - , the upwind neighbors ( , and ) have executed _ scatter _ and put data on s in - edges ( , and ) . at super - step when is active , it _gather_s data by polling its in - edges and accumulates them in a local variable ( here ) .we can see that processing an edge needs a pair of _ scatter _ and _ gather _ executed by two vertices respectively , which crosses two super - steps and requires storage of all intermediate data on edges . as a significant progress , in scatter - combine ,the operation of _ gather _ is encoded in an active message that can automatically execute without the target vertex s involvement , namely _combine_. 
in this example , during scatter - combine phase of super - step , s upwind neighbors _ scatter _ active messages that directly execute _ combine _ on , and finally simply updates itself by an _ apply _ during the apply phase .programming with scatter - combine model is very convenient .for instance , to implement pagerank , we directly translate the formulas [ pr-2]-a , [ pr-2]-b and [ pr-2]-c in equation .[ pr-2 ] into primitives _ scatter _ , _combine _ , and _ apply _ , as shown in fig.[subfig : pr - code ] .besides , this figure presents implementations of other two algorithms that we use as benchmark in later evaluation .gre s sssp implementation , given in fig .[ subfig : sssp - code ] , is a variant of bellman ford label correcting algorithm .it is a procedure of traversal , starting from a given source vertex , visiting its neighbors and then neighbors s neighbors in a breadth first style , and continuing until no vertices change their states .when a vertex is visited , if its stored distance to source is larger than that of the new path , its path information is updated .gre implements connected components as an example of label propagation .[ subfig : cc - code ] shows the connected components on undirected graphs . for each connected component ,it is labeled by the smallest i d of its vertices . in the beginning , each vertex is initialized as a component labeled with its own vertex i d .the procedure then iteratively traverses the graph and combines new found connected components .its algorithmic procedure is similar to sssp , but initiates all vertices as sources and typically converges in fewer number of iterations .+ + [ fig : applications ] scatter - combine model can express most graph - parallel computation efficiently , including traversal - based path exploration and most iterative algorithms .in fact , since graph - parallel computation is internally driven by data flow on edges , edge - parameterized vertex factorization is widely satisfied .we found that all examples of pregel in satisfy vertex factorization , such as bipartite matching and semi - clustering , thus can be directly implemented in gre . for some graph algorithmsthat contain other computation except for graph - parallel procedure , extension to basic scatter - combine model is required .for example , with simple extension of backward traversal on transposed graphs , gre implements multi - staged algorithms like betweenness centrality and strong connected components . besides , with technologies that were proposed in reference to complement vertex - centric parallelism in pregel - like systems , gre is able to efficiently implement algorithms including minimum spanning forests , graph coloring and approximate maximum weight matching . however , like pregel , gre only supports bsp execution , and thus ca nt express some asynchronous algorithms in powergraph .in this section we propose the distributed agent - graph model that extends original directed graphs with vertices .agent - graph is coupled with the message model , and is able to efficiently partition and represent scale - free graphs .there is a consensus that the difficulty of partitioning a real - world graph comes from its scale - free property . for big - vertex whose either in - degree or out - degree is high ,amounts of vertices in remote machines send messages to it or it sends messages to amounts of remote vertices . 
lack of low - cut graph partition , these messages lead to heavy network communication and degrades performance significantly .to crack the big - vertex problem , gre introduces a new strategy _ agent_. the basic idea is demonstrated in fig .[ fig : agent ] .there are two kinds of agents , i.e. _ combiner _ and _ scatter_. in the given examples , there are two machines ( or graph partitions ) where is the big - vertex and owned by machine 2 . in fig .[ subfig_combiner ] , is a high in - degree vertex , and many vertices in machine 1 send messages to it . by introducing a _ combiner _ , now messages previously sent to are first _ combined _ on and later sends a message to . in this example, the _ combiner _ reduces network communication cost from three messages to one .similarly , in fig .[ subfig_scatter ] , no longer directly sends messages to remote vertices in machine 1 but only one message to a _ scatter _agent who then delivers messages to vertices in machine 1 .also , the _ scatter _ agent reduces messages on network from three to one .based on the idea of _ agent _ , we propose agent - graph , simply denoted as . treats _ agent_s as special vertices and extends the original graph topology . for simplicity , we call the vertex in original graph as _ master _ vertex . each _ master _ vertexis uniquely owned by one partition but can have arbitrary _agent_s in any other partitions .agent _ has an directed edge connected with its _master_. one thing to note is that the term of _ agent _ is completely transparent to programmers , and only makes sense to gre s runtime system .now we give the formal description of .we assume graph has been divided into parts , say . in , for any set of directed edges pointing to , if is not owned by , we set an agent for and do the following transformation : let the edges redirect to , and add a directed edge .then is a combiner of .a combiner may have arbitrary in - edges but only one out - edge that points to its _ master _vertex . in , for any set of directed edges that start from and point to a set of vertices owned by another partition , we set an agent on remote , and do the following transformation : move these edges to , and add a directed edge on .then is a scatter of .scatter _ may have arbitrary out - edges but only one in - edge that comes from its _ master _ vertex .let be the set of _ scatter_s and the set of _combiner_s , an agent - graph , where and .note that according to the definitions of _ scatter _ and _ combiner _ , an edge from to is allowed , but there never exist an edge from to .note that vertex - cut model in powergraph is another way to address distributed placement of scale - free graphs . in vertex - cut, a vertex can be cut into multiple replicas distributed in different machines , where the remote replica of vertex is called _mirror_. both _ agent _ and _ mirror _ are based on vertex factorization .conceptually , agent - graph can be built from vertex - cut partitions by simply splitting one _ mirror _ into one _ scatter _ and one _ combiner_. however , their mechanisms are fundamentally different .fig.[fig : compare - graph - models ] shows an example to illustrate difference of agent - graph and vertex - cut .we argue that _ agent _ has obvious advantage over _ mirror _ for expressing message model on directed graphs .first , _ agent _ has no overhead on maintaining its data consistency with _ master _ while _ mirror _ has to periodically synchronize data with _master_. 
this is because _ mirror _ holds an integrated copy of its _ master _s runtime states , while _ agent _ , as comparison , is a message proxy that can only temporally cache and forward messages in single direction .thus , for traversal - based algorithms on directed graphs , agent - graph has much less communication than vertex - cut .second , the communication cost of _ agent _ is lower than _ mirror _ s in most of cases . in vertex - cut , each _ master _ first accumulates all its _ mirror_s data , and then sends the new result to all its _ mirror_s .thus , the communication is ( r is the total number of all vertices replicas ) . in agent - graph ,one _ agent _ only involves in one direction message delivery , either receive ( _ scatter _ ) or send ( _ combine _ ) .thus , it has less communication cost since .take the example in fig .[ fig : compare - graph - models ] . in vertex - cut the _ master _vertex has to collect all changes of its _ mirror_s and then spread the newest value to them , while in agent - graph only receives messages from its agents ( combiners here ) and has no need to update agents .as shown by the number of dashed lines , agent - graph requires just half communication of that in vertex - cut . on agent - graph model, we thoroughly investigated various streaming and 2-pass semi - streaming partitioning methods in . with the agent - extension ,both traditional edge - cut and vertex - cut partitioning methods perform much better for scale - free graphs . in this paper, we only give the pure vertex - cut approach and a streaming partitioning method adapted from powergraph s greedy vertex - cut heuristics . in a pure vertex - cut model, none of edges in original graph is cut .all cut edges are extended edges , i.e. \{ } or \{ } .the extended edges represent communication overhead while original edges represent computational load .we construct agent - graph by loading edges from the original graph .assuming that the goal is partitions , we formalize the objective of k - way balanced partition objective as follow : where is an imbalance factor , and are sets of _ scatter_s and _ combiner_s of vertex respectively . the loader reads edge list in a stream way and greedily places an edge to the partition which minimizes number of new added _agents _ and keeps edge load balance .the current best of partition is calculated by the following heuristic : , and in default , each machines independently loads a subset of edges , partitions them into parts , and finally sends remote partitions to their owner machines . during the procedure, machines do nt exchange information of heuristic computing .this is the same with the _ oblivious _ mode in powergraph .also , gre supports the _ coordinated _ mode of powergraph , where partitioning information are periodically synchronized among all machines .gre s abstractions , scatter - combine computation model and distributed agent - graph model , are built on the runtime layer .the runtime system is designed for contemporary distributed systems in which each single machine has multiple multi - core processors sharing memory and is connected to other machines with high performance network .gre follows an owner - compute rule , that launches one single process on each machine and assigns a graph partition to it . 
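before turning to the runtime internals, the greedy streaming placement described earlier in this section can be illustrated with the following c++ sketch. the concrete scoring formula is not reproduced in the text, so the heuristic used here (favoring partitions that already hold either endpoint of the edge, breaking ties by edge load) is an assumption modeled on powergraph-style greedy vertex-cut rather than gre's exact rule.

```cpp
// hedged sketch of streaming greedy edge placement for building an agent-graph.
#include <cstddef>
#include <cstdint>
#include <unordered_set>
#include <vector>

struct Partitioner {
    explicit Partitioner(std::size_t k) : edge_load(k, 0), placed(k) {}

    // place one edge (u, v) read from the stream; returns the chosen partition
    std::size_t place(std::uint64_t u, std::uint64_t v) {
        std::size_t best = 0;
        long best_score = -1;
        for (std::size_t p = 0; p < edge_load.size(); ++p) {
            // +1 for each endpoint already present: no new agent is needed there
            long score = (placed[p].count(u) ? 1 : 0) + (placed[p].count(v) ? 1 : 0);
            if (score > best_score ||
                (score == best_score && edge_load[p] < edge_load[best])) {
                best = p;
                best_score = score;
            }
        }
        placed[best].insert(u);
        placed[best].insert(v);
        ++edge_load[best];          // keeps edge load roughly balanced
        return best;
    }

    std::vector<long> edge_load;                            // balance term
    std::vector<std::unordered_set<std::uint64_t>> placed;  // vertices seen per partition
};
```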
within each machine , the process has multiple threads : one master thread in charge of inter - process communication and multiple worker threads that are forked and scheduled by the master thread to do actual computation .now , we describe the runtime design from three aspects , with an emphasis on how it bridges gre abstractions and the underlying platform . from the view of users , graph - parallel applications run on a directed property graph where each vertex is identified by an unique 64-bit integer .internally , however , gre stores runtime graph data in a distributed way .gre manages three types of in - memory data : graph topology , graph property and runtime states .each machine storers a partition of agent - graph .the local topology storage is compact and highly optimized for fast retrieve .it includes three parts : graph structure , vertex - id index and agent - extended edges .first , the graph structure stores all assigned ordinary edges in the csr ( compressed sparse row ) format , where all vertices are renumbered with local 32-bit integer ids .local vertex ids are assigned by the following rule .assuming that there are local vertices ( i.e. _ master _ ) , _master_s are numbered from 0 to - in order . both _scatter_s and _ combiner_s are then continuously numbered from .second , the vertex - id index provides bidirectional translation between local i d and global i d for all vertices .third , gre stores agent - extended edges implicitly . for any type edge, the induces it by retrieving a data structure recording all machines that hold its _ scatter_s . for any type edge ,the _ combiner _ induces it by local - to - global vertex - id index .graph property ( i.e. meta data associated with vertices and edges ) is decoupled with graph topology .it is separately stored in the column - oriented storage ( cos ) approach . in cos , each type of graph property is stored as a flat array of data items .the local vertex ( edge ) i d serves as primary key and can directly index the array .for example , in a social network , given one person s _ local _ i d , say , to retrieve his / her _ name _ we directly locate the information by _name_[ ] .cos provides fast data load / store between disk and memory , as well as optimizations like streaming data compression . with cos ,gre can load or store arbitrary types of graph property in need , and run multiple ad - hoc graph analysis continuously .vertex runtime states play a crucial role in implementing gre s scatter - combine computation model .like graph property , runtime states are stored in flat arrays and indexed by local vertex i d . conceptually , there are three types of runtime vertex states : _ vertex_data _ is the computing results only owned by _ master _vertex , and updated by _apply_. _scatter_data _ is the data that one vertex wants to _ scatter _ by messages , owned by _ master _ and _scatter_. _ combine_data _ is the data on which an active message executes _ combine _ , owned by _master _ and _ combiner_. + tabel .[ tab : runtime - states ] is the runtime state setting of algorithms in fig .[ fig : applications ] .note that for performance optimization , gre allows the _ vertex_data _ vector refer to _ scatter_data _ or _combine_data_. however , for data consistency , gre requires _ scatter_data _ and _ combine_data _ be physically different . 
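a compact c++ sketch of the local storage layout just described (csr topology over local 32-bit ids, a bidirectional id index, and column-oriented property and runtime-state arrays) is given below; the field names are illustrative assumptions, not gre's actual data structures.

```cpp
// sketch of one machine's partition: csr topology plus column-oriented storage.
#include <cstdint>
#include <unordered_map>
#include <vector>

struct LocalPartition {
    // csr topology: out-edges of local vertex v are targets[offset[v] .. offset[v+1])
    std::vector<std::uint32_t> offset;     // size = number of local vertices + 1
    std::vector<std::uint32_t> targets;    // local ids of edge targets

    // bidirectional vertex-id index between local 32-bit and global 64-bit ids
    std::vector<std::uint64_t> local_to_global;
    std::unordered_map<std::uint64_t, std::uint32_t> global_to_local;

    // column-oriented property storage: one flat array per property,
    // indexed directly by local vertex id
    std::vector<double> rank;              // vertex_data, e.g. for pagerank
    std::vector<double> scatter_data;      // data the vertex scatters by messages
    std::vector<double> combine_data;      // data that active messages combine into
};
```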
[ tab : runtime - states ] data consistency of vertex states is automatically maintained by the specification of scatter - combine primitives .( 1 ) for an ordinary vertex(_master _ ) , its _scatter_data _ can only be updated by initialization or _apply _ , while for a _scatter - agent _ its _ scatter_data _ can only be updated by the message from its _master_. during scatter - combine phase , the _ scatter_data _ does nt change , and is valid only when the vertex is active for _ scatter_. ( 2 ) for a vertex , either _ master _ or _ combiner - agent _ , _ combine _ operation may change and can only change its _combine_data_. if the vertex is a , _ combine _ on it incurs a future _ apply_. if the vertex is a _ combiner - agent _ , in the future it will send an active message to its remote _ master _ and then reset its _ combine_data_. ( 3 ) during apply phase , each active _ master _ executes an _ apply _ in which it updates the _ vertex_data _, optionally recomputes _scatter_data _ , and resets the _ combine_data_. active message hides underlying details and difference of intra - machine shared - memory and inter - machine distributed memory . with one - sided communication and fine - grained data synchronization ,gre s runtime layer provides efficient support to active messages . a daemon thread ( master of local process ) keeps monitoring the network and receiving incoming data , meanwhile sending data prepared by _gre supports asynchronous communication of multiple message formats simultaneously .the communication unit is a memory block whose format is shown in fig.[fig : comm_protocol ] .it consists of two parts , header and messages .the header is a 64-bit structure that implements a protocol to support user - defined communication patterns .messages with the same format are combined into one buffer and the format registration information is encoded in the header . the fields _op_(8 bits ) and _ flag_(8 bits ) decide what actions to take when receive the message .the filed _count_(32 bits ) is the number of messages in the buffer .message is vertex - grain , containing destiny vertex and message data . besides , a buffer with zero message is legal , where the header - only buffer is used to negotiate among processes .each process maintains a buffer pool where the buffer size is predefined globally .one buffer block can be filled with arbitrary messages not exceeding the capacity . as the block has encoded all information in its header , all interactions either between local _master _ and _ worker_s or among distributed processes are one - sided and asynchronous . in gres master - workers multi - threading mode , all computation ( _ scatter _ and _ combine _ ) are carried by worker threads in parallel .note that multiple active messages may do _ combine _ operation on the same vertex simultaneously , which leads to data race .according to our previous statistics , given the sparse and irregular connection nature of real graphs the probability of real conflicts on any vertex is very low . in default , gre uses the proven high performance virtual lock mechanism ( _ vlock _ ) for vertex - grained synchronization .gre provides two methods of worker thread organization , i.e. thread pool and thread groups . in thread pool mode ,the process maintains a traditional thread pool and a set of virtual locks .all _ combine _ operations are implicitly synchronized by the _ vlock_. 
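the vertex-grained synchronization of concurrent combine operations can be pictured with the sketch below. gre's vlock is a separate published mechanism; a plain per-vertex spinlock is used here only to illustrate locking at vertex granularity, and is an assumption rather than the actual implementation.

```cpp
// hedged sketch: per-vertex locking so that concurrent combines do not race.
#include <atomic>
#include <cstddef>
#include <vector>

struct VertexLocks {
    explicit VertexLocks(std::size_t n) : flags(n) {
        for (auto& f : flags) f.store(false, std::memory_order_relaxed);
    }

    template <typename CombineFn>
    void combine_locked(std::size_t v, CombineFn&& fn) {
        bool expected = false;
        // spin until the lock bit of vertex v is acquired
        while (!flags[v].compare_exchange_weak(expected, true,
                                               std::memory_order_acquire)) {
            expected = false;
        }
        fn();                                            // apply the message's combine
        flags[v].store(false, std::memory_order_release);
    }

    std::vector<std::atomic<bool>> flags;                // one lock bit per local vertex
};
```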
thread groups is an alternative method , addressing the issue that for multi - socket machine , frequent atomic operations and data consistency across sockets may lead to high overhead . in thread groups mode ,worker threads are further divided into groups .each thread group runs on one socket , computes on one of local vertices disjoint subsets , and communicates with other groups by fastforward channels .besides , each thread group has an independent set of _ vlock _ privately .fault tolerance in gre is achieved by checkpointing , i.e. saving snapshot of runtime states periodically in a given interval of super steps .the process is similar to that in pregel but much simpler . during checkpointing, gre only needs to backup for native vertex runtime states and active vertex bitmap , abandoning all agent data and temporal messages . besides , thanks to the column - oriented - storage , gre can dump and recover runtime data image very fast .for gre , since it keeps all runtime data in - memory , failures rarely happen and typically incurred by message loss over network which can be caught by a communication component at the end of a super - step .the experimental platform is a 16-node cluster .each node has two six - core intel xeon x5670 processors , coupled with 48 gb ddr-1333 ram .all nodes are connected with mlx4 infiniband network of 40gb / s bandwidth .the operating system is suse server linux 10.0 .all applications of both gre and powergraph ( graphlab 2.2 ) are compiled with gcc 4.3.4 and openmpi 1.7.2 .we choose three representative algorithms pagerank , single source shortest path(sssp ) and connected components(cc ) .we use 9 real - world and a set of synthetic graph datasets .the real - world datasets we used are summarized in table .[ datasets ] . to the best of our knowledge, this set includes all available largest graphs in public . the synthetic graphs are r - mat graphs generated using graph500 benchmark with parameters a=0.57 , b = c=0.19 and d=0.05 .they have fixed out - degree 16 , and varying numbers of vertices from 64 million to 1 billion .[ datasets ] since graph partitioning strategies are closely related to parallel performance , we implemented two graph partition settings on gre when comparing to powergraph . table .[ tab : partition ] gives the name notation of different partition strategies .the graph partitioning is evaluated in terms of both _equivalent edge - cut rate _ and _ cut - factor_. the _ edge - cut rate _ is defined as the rate of communication edges count to total number of edges , while the _ cut - factor _ is the rate of communication edge count over total number of vertices .[ tab : partition ] our evaluation focuses on gre s performance and scalability on different machine scales and problem sizes .the results of three benchmark programs and graph partitioning are summarized as following .gre achieves good performance on all three benchmark programs , 2.5.6 ( 6.6.0 ) times better than powergraph when running on 8 ( 16 ) machines ( fig .[ fig : runtime ] ) . 
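to close the description of the runtime layer, the following sketch restates the communication block layout mentioned above: a 64-bit header carrying op, flag and count fields, followed by homogeneous vertex-grained messages. the exact bit positions, the reserved bits and the payload type are assumptions, since the text only names the field widths.

```cpp
// hedged sketch of one communication block: 64-bit header + vertex-grained messages.
#include <cstdint>
#include <vector>

struct BlockHeader {
    std::uint64_t op       : 8;   // action to take on receipt
    std::uint64_t flag     : 8;   // protocol flags, e.g. negotiation or end-of-step
    std::uint64_t count    : 32;  // number of messages in this buffer
    std::uint64_t reserved : 16;  // unused bits (assumed)
};
static_assert(sizeof(BlockHeader) == 8, "header should stay one 64-bit word");

struct VertexMessage {            // vertex-grained message: destination + payload
    std::uint32_t dst;
    float data;
};

struct Block {                    // one buffer from the pool; zero messages is legal
    BlockHeader header{};
    std::vector<VertexMessage> messages;
};
```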
specifically , compared to other systems , gre achieves the best performance for pagerank on twitter graph ( table .[ tab : pr - relative - perf ] ) .gre can efficiently scale to either hundreds of cpu cores ( fig .[ fig : runtime ] ) or billions of vertices ( fig .[ fig : scalability - problem - sizes ] ) .powergraph , however , can scale to neither 16 machines due to communication overhead , nor the synthetic graph with 512 million vertices and 8 billion edges due to its high memory cost .gre shows significant advantage on graph partitioning .compared to random hash method , gre s agent - graph can partition 9 real - world graphs with 2 times improvement on equivalent edge - cut ( fig .[ subfig : graphs - par - rate ] ) . compared to vertex - cut method in powergraph - s(p ) , when partitioning twitter and sk-2005 into 4 parts , with the same greedy heuristics gre - s(p ) shows 12%% ( 29% 58% ) improvement on equivalent edge - cut ( fig .[ subfig : twitter - par ] , [ subfig : sk - par ] ) .we evaluate gre and compare it with other frameworks in terms of _ strong scalability _ and _ weak scalability_. in strong scalability test , the problem size ( graph size ) is constant while the number of machines is increased . in weak scalability test, the problem size increases with a given number of machines . for powergraph, we adopt the reference implementation in the latest package , with minor modification to ensure its vertex computation same with that in gre . in powergraph, the pagerank is implemented in a traditional graphlab way , while sssp and cc are implemented by emulating pregel s combiner of messages .we use two types of real graphs , twitter social network and sk-2005 web graph , as input graphs .results of one iteration runtime are shown in fig .[ subfig : twitter - pr - runtime ] and fig .[ subfig : sk - pr - runtime ] respectively .first , for both graphs , gre overwhelmingly outperforms powergraph with 1.6.5 times better performance on 2 machines .second , gre shows nearly linear scalability over increasing machines . in fig .[ subfig : twitter - pr - runtime ] , gre - s and gre - p on 16 machines show the speedup of 5.82 and 5.10 over 2 machines respectively .similarly in fig .[ subfig : sk - pr - runtime ] , gre - s and gre - p on 16 machines show a speedup of 6.68 and 6.15 over 2 machines . we evaluate it on the twitter graph whose edge weights are generated by randomly sampling integers from ] . 
due to the limit of memory capacity, the sssp program only records the distance of each vertex to source .run - time of three benchmark programs are shown in fig .[ subfig : graphs - pr][subfig : graphs - cc ] .we can see that for all three programs , gre shows excellent scalability on problem sizes , with close to or lower than linear increasing runtime .specifically , for the largest graph with 1 billion vertices and 17 billion edges , gre can compute one pagerank iteration in 40s , sssp in 255s , and cc in 139s .an important phenomena we observed is that powergraph can not scale to such a large graph size on the 16 machines because the memory consumption exceeds the physical memory capacity ( 768 gb ) .compared to gre , powergraph requires at least 2 times more memory space as it needs to store redundant in - edges and lots of intermediate data .+ the performance advantage of gre is also reflected by the quality of graph partitioning .we first investigate agent - graph partitioning on a broad set of real graphs summarized in table.[datasets ] .[ subfig : graphs - par ] shows the average count of agents per vertex , which is translated into _ equivalent edge - cut rates _ in fig.[subfig : graphs - par - rate ] by dividing average vertex degree . as shown in fig.[subfig : graphs - par - rate ] , compared to the traditional random vertex sharding by hashing ( red dashed line ) , agent - graph in both gre - p and gre - s fundamentally reduces cutting edges by 50%% .now , we evaluate scalability of agent - graph in terms of the number of partitions , with a comparison to powergraph s vertex - cut . the partitioning quality metrics _ cut - factor _ is computed according to communication measure : for agent -graph the _ cut - factor _ is number of agents ( both scatters and combiners ) per vertex , while for vertex - cut it is / .higher _ cut - factor _ implies more communication . without loss of generality , we choose twitter social network and sk-2005 web graph for detailed analysis . in real world , social network and web graph are two representative types of graphs . generally , social networks have comparatively balanced out- and in - degree distribution , while web graphs are typically fan - in . results of 2 partitions are given in fig .[ fig : twitter - par ] and fig .[ fig : sk - par ] . for both twitter and sk-2005 , gre - s performs best , followed by powergraph - s ,gre - p and powergraph - p in order .except for powergraph - p , all partitioning methods show good scalability over increasing machines ( partitions ) . to investigate why gre - s / p have better partitioning results than counterparts of powergraph - s / p , we analyze the percentage distribution of two agent types , i.e. scatter and combiner . as shown in fig .[ subfig : twitter - par - skew ] and fig .[ subfig : sk - par - skew ] , for both gre - s and gre - p , rates of scatters to combiners have an obvious skew .as explained in section [ sub : agent - graph - model ] , powergraph s data model fails to realize this phenomena , while agent - graph model can differentiate .thus , with respect to communication measure , gre has an advantage over powergraph .gre adopts the well - known _ vertex - centric _ programming model .essentially , it is reminiscent of the classic actor model .previously , the most representative vertex - centric abstractions are pregel and graphlab , whose comparison with gre was summarized in table .[ tab_comp_all ] . 
herewe describe how gre evolves .pregel is the first bulk synchronous distributed message passing system .it has been widely cloned in giraph , gps , goldenorb , trinity and mizan . besides , frameworks extending hadoop ( mapreduce ) with in - memory iterative execution , such like spark , twister and haloop , also adopt a pregel way to do graph analysis .meanwhile , graphlab uses distributed shared memory model and supports both synchronous and asynchronous vertex computation .vertex computation in both pregel and graphlab internally follows the common gas ( gather - apply - scatter) pattern . besides , powergraph adopts a phased gas , and can emulate both graphlab and pregel .however , gas model handles each edge in a two - sided way that requires two vertices involvement , leading to amounts of intermediate data storage and extra operations . to address this problem , message combiner in pregel and delta - caching in powergraphare proposed as complement to basic gas . instead of gas , gre proposes a new scatter - combine model , which explicitly transforms gas s two - sided edge operation into one - sided active message . in the worst case, active message can degrade to message passing of pregel .gre s agent - graph model is derived from optimizing message transmission on directed graphs .previously , pregel has introduced _ combiner _ for combining messages to the same destination vertex .gre further introduces _ scatter _ to reduce messages from the same source vertex .motivated by _ ghost _ in pbgl and graphlab , we finally develop ideas of message _ agent _ into a distributed directed graph model .note that gps , an optimized implementation of pregel , supports large adjacency - list partitioning where the _ subvertex _ is similar to scatter on reducing messages but not well - defined for vertex or edge computation . the closest match to agent - graph is powergraph s vertex - cut which however is used only in undirected graphs and coupled with different computation and data consistency models . besides the vertex - centric model , generalized spmv ( sparse matrix - vector ) computation is another popular graph - parallel abstraction , which is used by pegasus and knowledge discovery toolbox .note that since spmv computation is naturally bulk synchronous and factorized over edges , their applications can be described in gre . however , unlike gre , the matrix approach is not suitable for handling abundant vertex / edge metadata . for shared memory environment , there are also numerous graph - parallel frameworks .ligra proposes an abstraction of edgemap and vertexmap which is simple but efficient to describe traversal - based algorithms .graphchi uses a sliding window to process large graphs from disks in just a pc .x - stream proposes a novel edge - centric scatter - gather programming abstraction for both in - memory and out - of - core graph topology , which essentially , like gre and powergraph , leverages vertex factorization over edges .gre s computation on local machine is highly optimized for massive edge - grained parallelism , based on technologies such as vlock fine - grained data synchronization and fastforward thread - level communication .emerging _ graph - parallel _ applications have drawn great interest for its importance and difficulty .we identify that the performance - related difficulty lies on two aspects , i.e. 
expressing irregular parallelism and partitioning the graph. we propose gre to address these two problems at the level of both the computation model and the distributed graph model. first, the scatter-combine model retains the classic vertex-centric programming model and exposes the irregular parallelism by factorizing vertex computation into a series of active messages executed in parallel. second, following the idea of vertex factorization, we develop the distributed agent-graph model, which can be constructed in a vertex-cut fashion as in powergraph. compared to traditional edge-cut partitioning methods and even powergraph's vertex-cut approach, agent-graph significantly reduces communication. finally, we develop an efficient runtime system implementing gre's abstractions and experimentally evaluate it with three applications on both real-world and synthetic graphs. experiments on our 16-node cluster demonstrate gre's advantage in performance and scalability over counterpart systems.
m. stonebraker, d. j. abadi, a. batkin, x. chen, m. cherniack, m. ferreira, e. lau, a. lin, s. madden, e. o'neil, p. o'neil, a. rasin, n. tran, and s. zdonik. c-store: a column-oriented dbms. in vldb, 2005.
large-scale distributed _graph-parallel_ computing is challenging. on one hand, due to the irregular computation pattern and lack of locality, it is hard to express parallelism efficiently. on the other hand, due to their scale-free nature, real-world graphs are hard to partition in a balanced way with a low cut. to address these challenges, several graph-parallel frameworks, including pregel and graphlab (powergraph), have been developed recently. in this paper, we present an alternative framework, graph runtime engine (gre). while retaining the vertex-centric programming model, gre proposes two new abstractions: 1) a scatter-combine computation model based on active messages to exploit massive fine-grained edge-level parallelism, and 2) an agent-graph data model based on vertex factorization to partition and represent directed graphs. gre is implemented on a commercial off-the-shelf multi-core cluster. we experimentally evaluate gre with three benchmark programs (pagerank, single source shortest path and connected components) on real-world and synthetic graphs of millions of vertices. compared to powergraph, gre shows 2.5 times better performance on 8 machines (192 cores). specifically, the pagerank in gre is the fastest when compared to its counterparts in other frameworks (powergraph, spark, twister) reported in the public literature. besides, gre significantly optimizes memory usage so that it can process a large graph of 1 billion vertices and 17 billion edges on our cluster with 768 gb of memory in total, while powergraph can only process less than half of this graph scale.
the time plays a special role in quantum mechanics . unlike other observables ,time remains a classical variable .it can not be simply quantized because , as it is well known , the self - adjoint operator of time does not exist for the bounded hamiltonians .the problems related to time also arise from the fact that in quantum mechanics many quantities can not have definite values simultaneously .the absence of the time operator makes this problem even more complicated .however , in practice the time is often important for an experimenter .if quantum mechanics can correctly describe the outcomes of the experiments , it must also give the method for the calculation of the time the particle spends in some region .the most - known problem of time in quantum mechanics is the so - called tunneling time problem .tunneling phenomena are inherent in numerous quantum systems , ranging from an atom and condensed matter to quantum fields .there have been many attempts to define a physical time for tunneling processes , since this question has been raised by maccoll in 1932 .this question is still the subject of much controversy , since numerous theories contradict each other in their predictions for `` the tunneling time '' .some of these theories predict that the tunneling process is faster than light , whereas the others state that it should be subluminal .this subject has been covered in a number of reviews ( hauge and stvneng , 1989 ; olkholovsky and recami , 1992 ; landauer and martin , 1994 and chiao and steinberg , 1997 ) .the fact that there is a time related to the tunneling process has been observed experimentally .however , the results of the experiments are ambiguous .many problems with time in quantum mechanics arise from the noncommutativity of the operators .the noncommutativity of the operators in quantum mechanics can be circumvented by using the concept of weak measurements .the concept of weak measurement was proposed by aharonov , albert and vaidman .such an approach has several advantages .it gives , in principle , the procedure for measuring the physical quantity .second , since in the classical mechanics all quantities can have definite values simultaneously , weak measurements give the correct classical limit .the concept of weak measurements has been already applied to the time problem in quantum mechanics .the time in classical mechanics describes not a single state of the system but the process of the evolution .this property is an essential concept of the time .we speak about the time belonging to a certain evolution of the system . if the measurement of the time disturbs the evolution we can not attribute this measured duration to the undisturbed evolution .therefore , we should require that the measurement of the time does not disturb the motion of the system .this means that the interaction of the system with the measuring device must be asymptotically weak . in quantum mechanicsthis means that we can not use the strong measurements described by the von - neumann s projection postulate .we have to use the weak measurements of aharonov , albert and vaidman , instead .we proceed as follows : in sec .[ sec : concept ] we present the model of the weak measurements .[ sec : the - time - on ] presents the time on condition that the system is in the given final state . in sec .[ sec : tunneling - time ] , our formalism is applied to the tunneling time problem . 
in sec .[ sec : weak - measurement - of ] the weak measurement of the quantum arrival time distribution is presented .section [ sec : concl ] summarizes our findings .in this section we present the concept of weak measurement , proposed by aharonov , albert and vaidman .we measure quantity represented by the operator .we have the detector in the initial state .for a weak measurement to provide the meaningful information the measurements must be performed on an ensemble of identical systems .it is supposed that each system with its own detector is prepared in the same initial state .after measurement the readings of the detectors are collected and averaged .our model consists of the system * s * under consideration and of the detector * d*. the total hamiltonian is where and are the hamiltonians of the system and detector , respectively .we take the operator describing the interaction between the particle and the detector of the form where characterizes the strength of the interaction between the system and detector .the small parameter ensures the undisturbance of the system s evolution .the measurement duration is . in this sectionwe assume that the interaction strength and the time are small .the operator acts in the hilbert space of the detector .we require the spectrum of the operator to be continuous . for simplicity , we can consider this operator to be the coordinate of the detector .the momentum which conjugate to is .the interaction operator ( [ eq : sec2:11a ] ) only slightly differs from the one used by aharonov , albert and vaidman .the similar interaction operator has been considered by von neumann and has been widely used in the strong measurement models ( e.g. , and many others ) .hamiltonian ( [ eq : sec2:11a ] ) represents a constant force acting on the detector .this force results in the change of momentum of the detector . from the classical point of view, the change of the momentum is proportional to the force acting on the detector .since interaction strength and the duration of the measurement are small , the average should not change significantly during the measurement .the action of the hamiltonian ( [ eq : sec2:11a ] ) results in the small change of the mean detector momentum , where is the mean momentum of the detector at the beginning of the measurement and is the mean momentum of the detector after the measurement .therefore , in analogy to ref . , we define the `` weak value '' of the average , at the moment the density matrix of the whole system is , where is the density matrix of the system and is the density matrix of the detector . after the interaction the density matrix of the detector is where is the evolution operator . later , for simplicity we shall neglect the hamiltonian of the detector .then , the evolution operator in the first - order approximation is where is the evolution operator of the unperturbed system and . from eq .( [ eq : sec2:defin ] ) we obtain that the weak value coincides with the usual average .the influence of the weak measurement on the evolution of the measured system can be made arbitrary small using the small parameter .therefore , after the interaction of the measured system with the detector we can try to measure the second observable using , as usual , the strong measurement . 
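since most of the mathematical symbols in this passage were lost in extraction, the display below restates the standard aharonov-albert-vaidman relations that the text appears to paraphrase. the notation (λ for the coupling strength, q and p_q for the detector coordinate and its conjugate momentum, τ for the measurement duration) and the overall sign convention are assumptions, not necessarily the paper's own.

```latex
% hedged reconstruction of the standard weak-measurement relations; notation assumed
\begin{align}
  \hat{H} &= \hat{H}_S + \hat{H}_D + \lambda\,\hat{q}\,\hat{A}, \\
  \langle A\rangle_W &\equiv
    \frac{\langle p_q\rangle_{t=0} - \langle p_q\rangle_{t=\tau}}{\lambda\,\tau}
    = \operatorname{Tr}\!\bigl[\hat{A}\,\hat{\rho}_S\bigr] + O(\lambda).
\end{align}
```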
as far as our model gives the correct result for the value of averaged over the entire ensemble of the systems , now we can try to take the average only over the subensemble of the systems with the given value of the quantity .we measure the momenta of each measuring device after the interaction with the system .subsequently , we perform the final , postselection measurement of on the systems of our ensemble . then we collect the outcomes only of the systems which have a given value of . the joint probability that the system has the given value of _ and _ the detector has the momentum at the time moment is , where is the eigenfunction of the momentum operator . in quantum mechanicsthe probability that two quantities simultaneously have definite values does not always exist . if the joint probability does not exist then the concept of the conditional probability is meaningless. however , in our case operators and act in different spaces and commute , therefore , the probability exists .let us define the conditional probability , i.e. , the probability that the momentum of the detector is provided that the system has the given value of .this probability is given according to bayes theorem as where is the probability that the system has the given value of .the average momentum of the detector on condition that the system has the given value of is > from eqs .( [ eq : sec2:defin ] ) and ( [ condave0 ] ) , in the first - order approximation we obtain the mean value of on condition that the system has the given value of ( see for analogy ref . ) \right\rangle .\label{eq : sec2:x}\end{aligned}\ ] ] if the commutator ] in eq .( [ condtime ] ) is not zero then , even in the limit of very weak measurement , the measured value depends on the particular detector used .this means that in such a case we can not obtain a _ definite _value for the conditional time .moreover , the coefficient may be zero for the specific initial state of the detector , e.g. , for the gaussian distribution of the coordinate and momentum . the conditions to determine the time uniquely in a case when the final state of the system is known takes , thus , the form =0\label{eq : poscond}\ ] ] which can be understood from on general principles of the quantum mechanics , too .now , we ask _ how long the values of belong to a certain subset when the system evolves to the given final state _ under assumption that the final state of the system is known with certainty . in addition , we want to have some information about the values of the quantity .however , if the final state is known with certainty , we may not know the values of in the past and , vice versa , if we know something about , we may not definitely determine the final state .therefore , in such a case the question about the time when the system evolves to the given final state can not be answered definitely and the conditional time has no reasonable meaning .the quantity according to eqs .( [ condtime ] ) and ( [ timere ] ) has many properties of the classical time .so , if the final states constitute the full set , then the corresponding projection operators obey the equality of completeness .then , from eq .( [ condtime ] ) we obtain the expression the quantity is the probability that the system at the time is in the state .( [ clasprop ] ) shows that the full duration equals the average over all possible final states , as it is a case in the classical physics . 
from eq .( [ clasprop ] ) and eqs .( [ timere ] ) , ( [ timeim ] ) it follows we suppose that the quantities and can be useful even in the case when the time has no definite value , since in the tunneling time problem the quantities ( [ timere ] ) and ( [ timeim ] ) correspond to real and imaginary parts of the complex time , respectively . the eigenfunctions of the operator constitute the full set , where the integral must be replaced by the sum for the discrete spectrum of the operator .from eqs .( [ delta1 ] ) , ( [ opf1 ] ) , ( [ condtime ] ) we obtain the equality which shows that the time during which the quantity has any value equals to , as it is in the classical physics . the obtained formalism can be applied to the tunneling time problem . in this section , however , we will consider a simpler system than the tunneling particle , i.e. , a two - level system .the system is forced by the perturbation that causes the jumps from one state to another .the time the system is in a given state will be calculated .the hamiltonian of this system is where is the hamiltonian of the unperturbed system and is the perturbation . here are pauli matrices and .the hamiltonian has two eigenfunctions and with the eigenvalues and , respectively .the initial state of the system assumed to be .> from eq .( [ fulltime ] ) we obtain the times the system spends in the energy levels and , respectively , where . from eqs .( [ timere ] ) and ( [ timeim ] ) we can obtain the conditional time . the components ( [ timere ] ) and ( [ timeim ] ) of the time the system spends in the level under condition that the final state after measurement is are when , where , the quantity tends to infinity .this happens because at these time moments the system is in the state with the probability , and one can not consider the interaction with the detector as very weak . , ( dashed line ) , and level , ( dotted line ) , according to eqs .( [ eq:31 ] ) and ( [ eq:32 ] ) , respectively .the quantity , eq .( [ eq:34 ] ) , is shown as solid straight line .the quantities and , shown by curves 1 and 2 , were calculated according to eqs .( [ eq:28 ] ) and ( [ eq:30 ] ) , respectively .the parameters are , .,scaledwidth=60.0% ] on the other hand , the components of the time the system spends in level under condition that the final state is are , eq . ( [ eq:29 ] ) .the parameters are the same as in fig .[ dwt].,scaledwidth=60.0% ] the time the system spends in level under condition that the final state is may be expressed as the quantities , , , and are shown in fig .the quantity is shown in fig . [ ti00 ] . note that the partial durations at the given final state are not necessarily monotonic as it is with the full duration , because the final state at different time moments can be reached by different paths .we can interpret the quantity as the time the system spends in the level on condition that the final state is , but at certain time moments this quantity is greater than . 
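Returning to the unconditional dwell times discussed above, a rough numerical check is easy to set up: integrating the occupation probabilities of the two levels over the interval of interest should give durations that sum to the full elapsed time. The sketch below uses its own Hamiltonian parameters, not those used for the figures:

```python
import numpy as np
from scipy.linalg import expm

# Rough check (with illustrative parameters) that the unconditional time spent
# in each level is the time integral of the occupation probability, and that
# the two durations add up to the full elapsed time.
E, v = 1.0, 0.3                                   # assumed splitting and coupling
H = np.array([[E, v],
              [v, -E]], dtype=complex)
psi = np.array([1.0, 0.0], dtype=complex)         # start in level 1

T, N = 20.0, 4000
dt = T / N
U_dt = expm(-1j * H * dt)                         # one-step propagator (hbar = 1)

t_level = np.zeros(2)
for _ in range(N):
    t_level += np.abs(psi)**2 * dt                # accumulate occupation probabilities
    psi = U_dt @ psi

print("t(level 1) =", t_level[0], "  t(level 2) =", t_level[1])
print("sum =", t_level.sum(), " (equals the elapsed time T =", T, ")")
```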
in such casesthe quantity becomes negative at certain times .this is a consequence of the fact that for the system under consideration the condition ( [ eq : poscond ] ) is not fulfilled .the peculiarities of the behavior of the conditional times show that it is impossible to decompose the unconditional time into two components having all classical properties of the time .the best - known problem of time in quantum mechanics is the so - called tunneling time problem .this problem is still the subject of much controversy , since numerous theories contradict each other in their predictions for `` the tunneling time '' .many of the theoretical approaches can be divided into three categories .first , one can study the evolution of the wave packets through the barrier and get the phase time .however , the correctness of the definition of this time is highly questionable .another approach is based on the determination of a set of dynamic paths , i.e. , the calculation of the time the different paths spend in the barrier and averaging over the set of the paths .the paths can be found from the feynman path integral formalism , from the bohm approach , or from the wigner distribution .the third class uses a physical clock which is used for determination of the time elapsed during the tunneling ( bttiker and landauer used an oscillatory barrier , baz suggested the larmor time ) .one more approach is based on a model for tunneling based on stochastic interpretation of quantum mechanics .the problems rise also from the fact that the arrival time of a particle to a definite spatial point is a classical concept .its quantum counterpart is problematic even for the free particle case . in classical mechanics , for the determination of the time the particle spends moving along a certain trajectory , one has to measure the position of the particle at two different moments of time . in quantum mechanicsthis procedure does not work . from heisenberg s uncertainty principleit follows that we can not measure the position of a particle without alteration of its momentum . to determine exactly the arrival time of a particle, one has to measure the position of the particle with great precision .because of the measurement , the momentum of the particle will have a big uncertainty and the second measurement will be indefinite . if we want to ask about the time in quantum mechanics , we need to define the procedure of measurement .we can measure the position of the particle only with a finite precision and get a distribution of the possible positions .applying such a measurement , we can expect to obtain not a single value of the traversal time but a distribution of times . in paper the tunneling time distribution for photon tunneling is analysed theoretically as a space - time correlation phenomenon between the emission and absorption of a photon on the two sides of a barrier .the analysis is based on an appropriate counting rate formula derived at first order in the photon - detector interaction and used in treating space - time correlations between photons .there are two different but related questions connected with the tunneling time problem : 1 .how much time does the tunneling particle spend under the barrier ? 2 .at what time does the particle arrive at the point behind the barrier ? 
there have been many attempts to answer these questions .however , there are several papers showing that according to quantum mechanics the question ( i ) makes no sense .our goal is to investigate the possibility to determine the tunneling time using weak measurements . to answer the question of _how much time does the tunneling particle spends under the barrier , _ we need a criterion of the tunneling .the following criterion is accepted : the particle had tunneled in the case when it was in front of the barrier at first and later it was found behind the barrier .we shall require that the mean energy of the particle and the energy uncertainty should be less than the height of the barrier .following this criterion , the operator corresponding to the `` tunneling - flag '' observable is introduced where is the heaviside unit step function and is a point behind the barrier .this operator projects the wave function onto the subspace of functions localized behind the barrier .the operator has two eigenvalues : and . corresponds to the fact that the particle has not tunneled out , while the eigenvalue corresponds to the appearance of particle behind the barrier .we will work with the heisenberg representation . in this representation ,the tunneling flag operator becomes to take into account all the tunneled particles , the limit must be taken .so , the `` tunneling - flag '' observable in the heisenberg picture is represented by the operator .one can obtain the explicit expression for this operator .the operator obeys the standard equation .\label{eq : ft}\ ] ] the commutator in eq .( [ eq : ft ] ) may be expressed as =\exp\left(\frac{i}{\hbar}\hat { h}t\right)\left[\hat{f}_t(x),\hat{h}\right]\exp\left(-\frac{i}{\hbar}\hat { h}t\right).\ ] ] if the hamiltonian has the form , then the commutator becomes =i\hbar\hat{j}(x),\ ] ] where is the probability flux operator , therefore , the following equation for the commutator can be written =i\hbar\tilde{j}(x , t).\label{eq : com}\ ] ] the initial condition for the function may be defined as from eqs .( [ eq : ft ] ) and ( [ eq : com ] ) we obtain the equation for the evolution of the tunneling - flag operator from eq .( [ eq : evol ] ) and the initial condition , an explicit expression for the tunneling - flag operator follows in the already mentioned question of _ how much time does the tunneling particle spend under the barrier _ , we shall be interested in those particles , which we know with certainty have tunneled out .in addition , we want to have some information about the location of the particle . however , one may ask whether the quantum mechanics allows one to have the information about the tunneling and location simultaneously ?the projection operator represents the probability for the particle to be in the region . here is the eigenfunction of the coordinate operator . 
in the heisenberg representationthis operator takes the form from eqs .( [ probflux ] ) , ( [ tunflagflux ] ) , and ( [ posopheis ] ) we see that the operators and in general do not commute .this means that we can not simultaneously have the information about the tunneling and location of the particle .if we know with certainty that the particle has tunneled out then we can say nothing about its location in the past , and if we know something about the location of the particle , we can not determine definitely whether the particle has tunnel out .therefore , the question of _ how much time does the tunneling particle spend under the barrier _ can not have definite answer , if the question is so posed that its precise definition requires the existence of the joint probability that the particle is found in at time and whether or not it is found on the right side of the barrier at a sufficiently later time .a similar analysis has been performed in ref .it has been shown that , due to noncommutability of operators , there exist no unique decomposition of the dwell time .this conclusion is , however , not negative altogether . we know that and =0 ] is zero , the time has a well - defined value .if the commutator is not zero , only the integral of this expression over a large region has meaning of an asymptotic time related to the large region as we will see in sec .[ secasympt ] . equation ( [ tuntime ] ) can be rewritten as a sum of two terms , the first term being independent and the second dependent on the detector , i.e. , where \right\rangle .\label{tuntimeim}\end{aligned}\ ] ] the quantities and are independent of the detector . in order to separate the tunneled and reflected particles the limit should be taken .otherwise , the particles that tunneled after the time will not contribute . if we introduce the operators then from eq .( [ tunflagflux ] ) follows that the operator is .if the particle before the barrier is initially , then in the limit tunneling times become \right\rangle .\label{tuntimeiminf}\end{aligned}\ ] ] let us define an `` asymptotic time '' as the integral of over a wide region containing the barrier . since the integral of is very small compared to that of as we shall see later , the asymptotic time is effectively the integral of only .this allows us to identify as `` the density of the tunneling time '' . in many cases for the simplification of mathematicsit is common to write the integrals over time as the integrals from to . in our modelwe can not , without additional assumptions , integrate eqs .( [ opefinf ] ) , ( [ opn ] ) from because the negative time means the motion of the particle to the initial position .if some particle in the initial wave packet had negative momentum then in the limit it was behind the barrier and contributed to the tunneling time .as stated , the question of _ how much time does a tunneling particle spend under the barrier _ has no exact answer .we can determine only the time the tunneling particle spends in a large region containing the barrier .in our model this time is expressed as an integral of quantity ( [ tuntimereinf ] ) over this region . in order to determine the properties of this integralit is useful to determine the properties of the integrand . 
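Before turning to the energy representation, here is a small dense-matrix illustration of the noncommutativity invoked at the start of this passage: the tunnelling-flag operator propagated to a later time does not commute with a projector onto a spatial region occupied earlier, so no joint probability exists. The discretisation, the free-particle Hamiltonian, and all parameters are choices made here for illustration:

```python
import numpy as np
from scipy.linalg import expm

# Dense-matrix illustration (free particle, illustrative discretisation) that
# the Heisenberg-picture tunnelling-flag operator does not commute with a
# projector onto a spatial region, so no joint probability of "was in Gamma"
# and "has tunnelled" exists.
N, L = 200, 40.0
x = np.linspace(-L/2, L/2, N)
dx = x[1] - x[0]
m = hbar = 1.0

# Kinetic energy from a second-order finite difference (hard walls at the ends).
lap = (np.diag(-2.0*np.ones(N)) + np.diag(np.ones(N-1), 1)
       + np.diag(np.ones(N-1), -1)) / dx**2
H = -(hbar**2/(2*m)) * lap

x_B = 5.0                                          # point "behind the barrier"
F = np.diag((x > x_B).astype(float))               # tunnelling flag at t = 0
P_Gamma = np.diag(((x > -10.0) & (x < -5.0)).astype(float))   # region before x_B

T = 8.0
U = expm(-1j * H * T / hbar)
F_T = U.conj().T @ F @ U                           # flag operator at time T (Heisenberg)

comm = F_T @ P_Gamma - P_Gamma @ F_T
print("||[F_T, P_Gamma]|| =", np.linalg.norm(comm))   # clearly nonzero
```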
to be able to expand the range of integration over time to ,it is necessary to have the initial wave packet far to the left from the points under the investigation and this wave packet must consist only of the waves moving in the positive direction .it is convenient to perform calculations in the energy representation .eigenfunctions of the hamiltonian are , where .the sign or corresponds to the positive or negative initial direction of the wave , respectively . outside the barrier these eigenfunctions are [ eigenfunct ] where and are transmission and reflection amplitudes respectively , and is the mass of the particle .the barrier is in the region between and .these eigenfunctions are orthonormal , i.e. , the evolution operator is then the operator assumes the form where the integral over the time yields and , therefore , similarly , we find if the initial wave packet consisting only of the waves moving in the positive direction is assumed , then one has from the condition it follows that for we obtain the following expressions for the quantities and [ tuntimebefore ] for these expressions take the form [ tuntimeafter ] to illustrate the obtained formulae , the -function barrier and the rectangular barrier will be used .the gaussian incident wave packet initially is far to the left of the barrier .-function barrier with the parameter .the barrier is located at the point .the units are such that and and the average momentum of the gaussian wave packet . in these unitslength and time are dimensionless .the width of the wave packet in the momentum space is .,scaledwidth=60.0% ] and and the height of the barrier is .the used units and parameters of the initial wave packet are the same as in fig .[ tundelta].,scaledwidth=60.0% ] in fig .[ tundelta ] and [ tunrect ] , we see interferencelike oscillations near the barrier .oscillations are present not only in the front of the barrier but also behind the barrier . when is far from the barrier the `` time density '' tends to a value close to .this is in agreement with classical mechanics because in the chosen units the mean velocity of the particle is .[ tunrect ] shows additional property of `` tunneling time density '' : it is almost zero in the barrier region .this explains the hartmann and fletcher effect : for opaque barriers the effective tunneling velocity is very large .we can easily adapt our model for the reflection too . in doing this, one should replace the tunneling - flag operator by the reflection - flag operator replacement of by in eqs .( [ tuntimereinf ] ) and ( [ tuntimeiminf ] ) gives we see that in our model the important condition where and are the transmission and reflection probabilities is satisfied automatically .if the wave packet consists of waves moving in the positive direction , the density of dwell time becomes for we have and for the reflection time we obtain the `` time density '' for the density of the dwell time is and the `` density of the reflection time '' may be expressed as .,scaledwidth=60.0% ] .,scaledwidth=60.0% ] , scaledwidth=60.0% ] we will illustrate the properties of the reflection time for the same barriers and gaussian incident wave packet initially localized far to the left from the barrier . in figs .[ refldelt ] and [ reflrect ] , one can see the interference - like oscillations at both sides of the barrier .since for the rectangular barrier the `` time density '' behind the barrier is very small , this part is presented in fig .[ reflrect2 ] . 
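The qualitative behaviour of the "time densities" described above can be reproduced with a standard split-operator simulation; this is a method and parameter set chosen here for illustration, not the energy-representation calculation used for the figures. The sketch accumulates the integral of |psi(x,t)|^2 over time for a Gaussian packet incident on a rectangular barrier and prints the transmission probability:

```python
import numpy as np

# Split-operator sketch (illustrative parameters): a Gaussian packet hits a
# rectangular barrier; the dwell-time density, the time integral of
# |psi(x,t)|^2, is accumulated and the transmission probability is printed.
hbar = m = 1.0
N, L = 2048, 400.0
x = np.linspace(-L/2, L/2, N, endpoint=False)
dx = L / N
k = 2*np.pi*np.fft.fftfreq(N, d=dx)

V0, d = 2.0, 1.0                                   # barrier height and width
V = np.where((x >= 0) & (x <= d), V0, 0.0)

x0, p0, sigma = -60.0, 1.0, 10.0                   # packet energy below the barrier top
psi = (2*np.pi*sigma**2)**-0.25 * np.exp(-(x - x0)**2/(4*sigma**2) + 1j*p0*x/hbar)

dt, steps = 0.05, 4000
expV = np.exp(-0.5j*V*dt/hbar)                     # half-step potential factor
expK = np.exp(-1j*hbar*k**2*dt/(2*m))              # full-step kinetic factor
dwell = np.zeros(N)
for _ in range(steps):
    psi = expV * np.fft.ifft(expK * np.fft.fft(expV * psi))
    dwell += np.abs(psi)**2 * dt                   # accumulate the dwell-time density

T_prob = np.sum(np.abs(psi[x > d])**2) * dx
print("transmission probability             :", T_prob)
print("mean dwell density inside the barrier:", dwell[(x > 0) & (x < d)].mean())
print("mean dwell density well in front     :", dwell[(x > -40) & (x < -35)].mean())
```

Well in front of the barrier the accumulated density is roughly (1 + R)/v, because the reflected part crosses that region a second time, while inside an opaque barrier it is strongly suppressed, in line with the Hartman–Fletcher behaviour noted above.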
behind the barrier ,the `` time density '' at certain points becomes negative .this is because the quantity is not positive definite .nonpositivity is the direct consequence of noncommutativity of the operators in eqs .( [ tuntimereinf ] ) and ( [ tuntimeiminf ] ) .there is nothing strange in the negativity of because this quantity has no physical meaning . only the integral over the large regionhas the meaning of time .when is far to the left from the barrier the `` time density '' tends to a value close to and when is far to the right from the barrier the `` time density '' tends to .this is in agreement with classical mechanics because in the chosen units , the velocity of the particle is and the reflected particle crosses the area before the barrier two times . as mentioned above, we can determine only the time that the tunneling particle spends in a large region containing the barrier , i.e. , the asymptotic time . in our modelthis time is expressed as an integral of quantity ( [ tuntimereinf ] ) over this region .we can do the integration explicitly .the continuity equation yields the integration can be performed by parts if the density matrix represents localized particle then .therefore we can write an effective equality we introduce the operator we consider the asymptotic time , i.e. , the time the particle spends between points and when , , after the integration we have where if we assume that the initial wave packet is far to the left from the points under the investigation and consists only of the waves moving in the positive direction , then eq . ( [ tuntimeasympt ] ) may be simplified . in the energy representationthe operator ( [ opt ] ) is the integration over time yields and we obtain substituting expressions for the matrix elements of the probability flux operator we obtain equation when , the last term vanishes and we have this expression is equal to , when the point with coordinate is in front of the barrier , expression ( [ positivinfnty ] ) becomes when is large the second term vanishes and we have the imaginary part of expression ( [ negativinfnty ] ) is not zero .this means that for determination of the asymptotic time it is insufficient to integrate only in the region containing the barrier . for quasimonochromatic wave packets from eqs .( [ opt ] ) , ( [ tuntimeasympt ] ) , ( [ asympttimeterm ] ) , ( [ positivinfnty ] ) and ( [ negativinfnty ] ) we obtain the limits [ asymptmonochrom ] where is the phase time and is the imaginary part of the complex time . in order to take the limit we have to perform more accurate calculations .the range of integration over time to can not be extended because such extension corresponds to the initial wave packet being infinitely far from the barrier .we can extend the range of the integration over the time to only in .for we obtain the following equation where is equal to the wave function at the point and the time moment , when the propagation is in the free space and the initial wave function in the energy representation is .when and , then .that is why the initial wave packet contains only the waves moving in the positive direction .therefore when . 
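The saturation behind the Hartman–Fletcher effect can also be checked directly from the phase time mentioned above, using the textbook transmission amplitude of a rectangular barrier. The sketch below differentiates the transmission phase numerically; the barrier height, energy, and widths are choices made here for illustration:

```python
import numpy as np

# Hartman-Fletcher saturation of the phase time, from the textbook transmission
# amplitude of a rectangular barrier (illustrative parameters; hbar = m = 1).
hbar = m = 1.0
V0, E, dE = 2.0, 0.5, 1e-6

def barrier_phase(E, d):
    """Phase accumulated across a rectangular barrier of width d (E < V0)."""
    k = np.sqrt(2*m*E)/hbar
    kappa = np.sqrt(2*m*(V0 - E))/hbar
    eps = (kappa**2 - k**2)/(2*k*kappa)
    t = np.exp(-1j*k*d) / (np.cosh(kappa*d) + 1j*eps*np.sinh(kappa*d))
    return np.angle(t * np.exp(1j*k*d))

for d in [1.0, 2.0, 4.0, 8.0, 16.0]:
    tau_phase = hbar*(barrier_phase(E + dE, d) - barrier_phase(E - dE, d))/(2*dE)
    t_free = d/np.sqrt(2*E/m)
    print(f"width {d:5.1f}:  phase time {tau_phase:.4f}   free flight {t_free:.2f}")
```

The phase time quickly becomes independent of the width, while the free-flight time grows linearly with it, which is the effect referred to above.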
from this analysisit follows that the region in which the asymptotic time is well determined has to include not only the barrier but also the initial wave packet region .in such a case from eqs .( [ tuntimeasympt ] ) and ( [ asympttimeterm ] ) we obtain expression for the asymptotic time from eq .( [ eq : lim ] ) it follows that where is defined as the probability flux integral ( [ opt ] ) .equations ( [ mainresult ] ) and ( [ eq : fin ] ) give the same value for tunneling time as does an approach in refs. the integral of quantity over a large region is zero .we have seen that it is not enough to choose the region around the barrier this region has to include also the initial wave packet location .this fact will be illustrated by numerical calculations . for function barrier with the parameters and initial conditions as in fig .[ tundelta ] .the initial packet is shown by dashed line.,scaledwidth=60.0% ] .,scaledwidth=60.0% ] the quantity for -function barrier is represented in fig .[ deltabarjkomutat ] .we see that is not equal to zero not only in the region around the barrier but also it is not zero in the location of the initial wave packet . for comparison ,the quantity for the same conditions is represented in fig .[ deltabarjtikslus ] .the detection of the particles in time - of - flight and coincidence experiments are common , and quantum mechanics should give a method for the calculation of the arrival time . the arrival time distribution may be useful in solving the tunneling time problem , as well . therefore , the quantum description of arrival time has attracted much attention .aharonov and bohm introduced the arrival time operator by imposing several conditions ( normalization , positivity , minimum variance , and symmetry with respect to the arrival point ) a quantum arrival time distribution for a free particle was obtained by kijowski .kijowski s distribution may be associated with the positive operator valued measure generated by the eigenstates of .however , kijowski s set of conditions can not be applied in a general case . nevertheless , arrival time operators can be constructed even if the particle is not free . since the mean arrival time even in classical mechanics can be infinite or the particle may not arrive at all , it is convenient to deal not with the mean arrival time and corresponding operator , but with the probability distribution of the arrival times .the probability distribution of the arrival times can be obtained from a suitable classical definition .the noncommutativity of the operators in quantum mechanics is circumvented by using the concept of weak measurements . in classical mechanicsthe particle moves along the trajectory as increases . this allows us to work out the time of arrival at the point , by identifying the point of the phase space where the particle is at , and then following the trajectory that passes by this point , up to arrival at the point . if multiple crossings are possible , one may define a distribution of arrival times with contributions from all crossings , when no distinction is made between first , second and arrivals . in this articlewe will consider such a distribution .we can ask whether there is a definition of the arrival time that is valid in both classical and quantum mechanics . 
in our opinion ,the words `` the particle arrives from the left at the point at the time '' mean that : ( i ) at time the particle was in the region and ( ii ) at time ( ) the particle is found in the region .now we apply the definition given by ( i ) and ( ii ) to the time of arrival in the classical case . since quantum mechanics deals with probabilities , it is convenient to use probabilistic description of the classical mechanics , as well .therefore , we will consider an ensemble of noninteracting classical particles .the probability density in the phase space is .let us denote the region as and the region as .the probability that the particle arrives from region to region at a time between and is proportional to the probability that the particle is in region at time and in region at time .this probability is where is the constant of normalization and the region of phase space has the following properties : ( i ) the coordinates of the points in are in the space region and ( ii ) if the phase trajectory goes through a point of the region at time then the particle at time is in the space region . since is infinitesimal , the change of coordinate during the time interval is equal to . therefore , the particle arrives from region to region only if the momentum of the particle at the point is positive .the phase space region consists of the points with positive momentum and with coordinates between and .then from eq .( [ eq : p1 ] ) we have the probability of arrival time since is infinitesimal and the momentum of every particle is finite , we can replace in eq .( [ eq : p2 ] ) by and obtain the equality the obtained arrival time distribution is well known and has appeared quite often in the literature ( see , e.g. , the review and references therein ) .the probability current in classical mechanics is from eqs .( [ eq : toaprob ] ) and ( [ eq : flux ] ) it is clear that the time of arrival is related to the probability current .this relation , however , is not straightforward .we can introduce the `` positive probability current '' and rewrite eq .( [ eq : toaprob ] ) as the proposed various quantum versions of even in the case of the free particle can be negative ( the so - called backflow effect ) .therefore , the classical expression ( [ eq:7a ] ) for the time of arrival becomes problematic in quantum mechanics .similarly , for arrival from the right we obtain the probability density where the negative probability current is we see that our definition given at the beginning of this section leads to the proper result in classical mechanics . the conditions ( i ) and ( ii ) does not involve the concept of the trajectories .we can try to use this definition also in quantum mechanics .the proposed definition of the arrival time probability distribution can be used in quantum mechanics only if the determination of the region in which the particle is does not disturb the motion of the particle .this can be achieved using the weak measurements of aharonov , albert and vaidman .we use the weak measurement , described in sec .[ sec : concept ] .the detector interacts with the particle only in region .as regards the operator we take the projection operator which projects into region . in analogy to ref . , we define the `` weak value '' of the probability of finding the particle in the region , in order to obtain the arrival time probability using the definition from sec .[ sec : classic ] , we measure the momenta of each detector after the interaction with the particle . 
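Before continuing with the quantum version, the classical statement derived above — that the arrival-time density at a point X is the normalised positive flux J+(X,t) — is easy to verify numerically. The following sketch samples a Gaussian phase-space ensemble of free particles (all parameters are illustrative assumptions), histograms the crossing times, and compares the result with a Monte-Carlo estimate of the positive flux:

```python
import numpy as np

# Classical check (illustrative parameters): the arrival-time density at X for
# an ensemble of free particles equals the normalised positive flux J+(X, t).
rng = np.random.default_rng(0)
m, X = 1.0, 0.0
n = 200_000
x0 = rng.normal(-10.0, 1.0, n)            # initial positions, all left of X
p0 = rng.normal(2.0, 0.3, n)              # initial momenta, essentially all positive

sel = p0 > 0                              # only right-movers arrive from the left
t_arr = (X - x0[sel]) * m / p0[sel]       # free motion: x(t) = x0 + p0*t/m
t_arr = t_arr[t_arr > 0]

bins = np.linspace(0.0, 15.0, 76)
hist, edges = np.histogram(t_arr, bins=bins, density=True)
tc = 0.5*(edges[1:] + edges[:-1])

# Monte-Carlo estimate of the positive flux through a thin window around X.
eps = 0.05
flux = np.array([
    np.mean(np.where(sel & (np.abs(x0 + p0*t/m - X) < eps), p0/m, 0.0)) / (2*eps)
    for t in tc
])
flux /= np.sum(flux) * (tc[1] - tc[0])    # normalise like the histogram

i = np.argmax(hist)
print("near the peak:  arrival histogram =", round(hist[i], 3),
      "   positive flux =", round(float(flux[i]), 3))
```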
after time perform the final , postselection measurement on the particles of our ensemble and measure if the particle is found in region .then we collect the outcomes only for the particles found in region .the projection operator projecting into the region is . in the heisenberg representationthis operator is where is the evolution operator of the free particle . taking the operator from sec .[ sec : concept ] as and using eq .( [ eq : defin ] ) we can introduce a weak value of probability to find the particle in the region on condition that the particle after time is in the region .the probability that the particle is in region and after time it is in region then equals when the measurement time is sufficiently small , the influence of the hamiltonian of the particle can be neglected .using eq .( [ eq : sec2:x ] ) from sec .[ sec : concept ] we obtain \rangle.\label{eq : w12}\ ] ] the probability is constructed using conditions ( i ) and ( ii ) from sec .[ sec : classic ] : the weak measurement is performed to determine if the particle is in the region and after time the strong measurement determines if the particle is in the region .therefore , according to sec .[ sec : classic ] , the quantity after normalization can be considered as the weak value of the arrival time probability distribution .equation ( [ eq : w12 ] ) consists of two terms and we accordingly can introduce two quantities and \rangle.\ ] ] then if the commutator $ ] in eqs .( [ eq:20])([eq : prob12 ] ) is not zero , then , even in the limit of the very weak measurement , the measured value depends on the particular detector .this fact means that in such a case we can not obtain a definite value for the arrival time probability .moreover , the coefficient may be zero for a specific initial state of the detector , e.g. , for a gaussian distribution of the coordinate and momentum .the quantities , and are real .however , it is convenient to consider the complex quantity we call it the `` complex arrival probability '' . we can introduce the corresponding operator by analogy , the operator corresponds to arrival from the right .the introduced operator has some of the properties of the classical positive probability current . from the conditions and we have in the limit we obtain the probability current , as in classical mechanics .however , the quantity is complex and the real part can be negative , in contrast to the classical quantity .the reason for this is the noncommutativity of the operators and .when the imaginary part is small , the quantity after normalization can be considered as the approximate probability distribution of the arrival time .the operator was obtained without specification of the hamiltonian of the particle and is suitable for free particles and for particles subjected to an external potential as well . in this sectionwe consider the arrival time of the free particle .the calculation of the `` weak arrival time distribution '' involves the average value .therefore , it is useful to have the matrix elements of the operator .it should be noted that the matrix elements of the operator , as well as the operator itself , are only auxiliary quantities and do not have an independent meaning . in the basis of momentum eigenstates , normalized according to the condition , the matrix elements of the operator are after performing the integration one obtains where . 
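The remark above that the real part of the "complex arrival probability" can be negative has a well-known counterpart for the ordinary probability current: even a superposition of two plane waves, both with positive momenta, can have negative current at some points (the backflow effect mentioned earlier). A short check, with amplitudes chosen here purely for illustration:

```python
import numpy as np

# Backflow: a superposition of two plane waves, both with positive momentum,
# whose probability current is negative at some points (amplitudes chosen
# purely for illustration; hbar = m = 1).
hbar = m = 1.0
p1, p2 = 1.0, 4.0
c1, c2 = 1.0, -0.6

x = np.linspace(0.0, 20.0, 2001)
psi  = c1*np.exp(1j*p1*x/hbar) + c2*np.exp(1j*p2*x/hbar)
dpsi = (1j*p1/hbar)*c1*np.exp(1j*p1*x/hbar) + (1j*p2/hbar)*c2*np.exp(1j*p2*x/hbar)
J = (hbar/m)*np.imag(np.conj(psi)*dpsi)

print("both momentum components positive, yet min J =", J.min())
```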
when the matrix elements of the operator are this equation coincides with the expression for the matrix elements of the probability current operator .> from eq .( [ eq : p1p2 ] ) we obtain the diagonal matrix elements of the operator , the real part of the quantity is shown in fig . [ fig:1 ] and the imaginary part in fig .[ fig:2 ] . , according to eq .( [ eq : diagonal ] ) .the corresponding classical positive probability current is shown with the dashed line .the parameters used are , , and . in this system of units ,the momentum is dimensionless.,scaledwidth=60.0% ] .the parameters used are the same as in fig .[ fig:1],scaledwidth=60.0% ] using the asymptotic expressions for the error function we obtain from eq .( [ eq : diagonal ] ) that and , when , i.e. , the imaginary part tends to zero and the real part approaches the corresponding classical value as the modulus of the momentum increases .such behaviour is evident from figs .[ fig:1 ] and [ fig:2 ] also .the asymptotic expressions for function are valid when the argument of the is large , i.e. , or here is the kinetic energy of the particle . according to eq .( [ eq : diagonal ] ) on the resolution time .the corresponding classical positive probability current is shown with the dashed line .the parameters used are , , and . in these units ,the time is dimensionless.,scaledwidth=60.0% ] the dependence of the quantity on is shown in fig . [ fig:3 ] . for small quantity is proportional to .therefore , unlike in classical mechanics , in quantum mechanics can not be zero .equation ( [ eq : cond ] ) imposes the lower bound on the resolution time .it follows that our model does not permit determination of the arrival time with resolution greater than . a relation similar to eq .( [ eq : cond ] ) based on measurement models was obtained by aharonov _the time - energy uncertainty relations associated with the time of arrival distribution are also discussed in refs .the review and generalization of the theoretical analysis of the time problem in quantum mechanics and weak measurements are presented .the tunneling time problem is a part of this more general problem .the problem of time is solved adapting the weak measurement theory to the measurement of time . in this model the expression ( [ fulltime ] ) for the duration , when the arbitrary observable has the certain value , is obtained .this result is in agreement with the known results for the dwell time in the tunneling time problem .further we consider the problem of the duration when the observable has a certain value on condition that the system is in the given final state .our model of measurement allows us to obtain the expression ( [ condtime ] ) of this duration as well .this expression has many properties of the corresponding classical time .however , such a duration not always has the reasonable meaning .it is possible to obtain the duration the quantity has the certain value on condition that the system is in a given final state only when the condition ( [ eq : poscond ] ) is fulfilled . in the opposite case , there is a dependence in the outcome of the measurements on particular detector even in an ideal case and , therefore , it is impossible to obtain the definite value of the duration . when the condition ( [ eq : poscond ] ) is not fulfilled , we introduce two quantities ( [ timere ] ) and ( [ timeim ] ) , characterizing the conditional time .these quantities are useful in the case of tunneling and we suppose that they can be useful also for other problems . 
In order to investigate the tunneling time problem, we consider a procedure of time measurement proposed by Steinberg. This procedure shows clearly the consequences of the noncommutativity of the operators and the possibility of determining the asymptotic time. Our model also reveals the Hartman and Fletcher effect: for opaque barriers the effective velocity is very large because the contribution of the barrier region to the time is almost zero. We cannot determine whether this velocity can exceed the speed of light, because for that purpose one would have to use a relativistic equation (e.g., the Dirac equation). A definition of the density of one-sided arrivals is proposed. This definition is extended to quantum mechanics using the concept of weak measurements introduced by Aharonov et al. The proposed procedure is suitable for free particles as well as for particles subjected to an external potential. It gives not only a mathematical expression for the arrival-time probability distribution but also a way of measuring the quantity obtained. However, this procedure does not give a unique expression for the arrival-time probability distribution. In analogy with the complex tunneling time, the complex arrival-time "probability distribution" is introduced (eq. ([eq:complex])). It is shown that the proposed approach imposes an inherent limitation, eq. ([eq:cond]), on the resolution time of the arrival-time determination.
The model of weak measurements is applied to various problems related to the time problem in quantum mechanics. A review and generalization of the theoretical analysis of the time problem in quantum mechanics, based on the concept of weak measurements, is presented. The question of the time interval the system spends in a specified state, when the final state of the system is given, is raised. Using the concept of weak measurements, an expression for this time is obtained. The results are applied to the tunneling problem, and a procedure for calculating the asymptotic tunneling and reflection times is proposed; examples for δ-function and rectangular barriers illustrate the results. Using the concept of weak measurements, the arrival-time probability distribution is defined by analogy with classical mechanics. The proposed procedure is suitable for free particles as well as for particles subjected to an external potential. It is shown that such an approach imposes an inherent limitation on the accuracy of the arrival-time definition.
"Fascinating idea! All that mental work I've done over the years, and what have I got to show for it? A goddamned zipfile! Well, why not, after all?" (John Winston Bush, 1996).

This paper describes a range of observations and arguments in support of the idea that much of artificial intelligence, human perception and cognition, mainstream computing, and mathematics may be understood as compression of information via the matching and unification of patterns. These observations and arguments provide the foundation for the _SP theory of intelligence_ and its realisation in the _SP computer model_, outlined below and described more fully elsewhere, in which information compression is centre stage. The aim here is to review, update, and extend the earlier discussion, itself the basis for chapter 2 of a later publication. Related ideas have been around from at least as far back as the 14th century, when William of Ockham suggested that if something can be explained by two or more rival theories, we should choose the simplest. Later, Isaac Newton wrote that "nature is pleased with simplicity"; Ernst Mach and Karl Pearson suggested independently that scientific laws promote "economy of thought"; Albert Einstein wrote that "a theory is more impressive the greater the simplicity of its premises, the more different things it relates, and the more expanded its area of application"; cosmologist John Barrow has written that "science is, at root, just the search for compression in the world"; and George Kingsley Zipf developed the idea that human behaviour is governed by a "principle of least effort". Partly inspired by the publication of Claude Shannon's "theory of communication" (now called "information theory"), Fred Attneave, Horace Barlow, and others examined the role of information compression (IC) in the workings of brains and nervous systems. The close connection between information compression and several other inter-related topics has been demonstrated by several researchers, including Ray Solomonoff (inductive inference and 'algorithmic probability theory'), Chris Wallace (classification and inference), Jorma Rissanen (modelling by shortest description and 'stochastic complexity'), Andrey Kolmogorov and Gregory Chaitin ('algorithmic information theory'), and Satosi Watanabe (pattern recognition).
andray solomonoff has argued that the great majority of problems in science and mathematics may be seen as either ` machine inversion ' problems or ` time limited optimization ' problems , and that both kinds of problem can be solved by inductive inference using the principle of ` minimum length encoding ' .in later research , information compression has featured as a guiding principle for artificial neural networks ( see , for example , , section 4.4 ) and in research on grammatical inference ( see , for example , ) .the ideas described in this paper provide a perspective on artificial intelligence , human perception and cognition , mainstream computing , and mathematics , which is not widely recognised .the main features distinguishing it from other research are : * the scope is very much broader than it is , for example , in the previously - mentioned research on artificial neural networks or grammatical inference .the thrust of the paper is evidence pointing to information compression via the matching and unification of patterns as an organising principle across diverse aspects of artificial intelligence , human perception and cognition , mainstream computing , and mathematics . *most research relating to information compression and its applications makes extensive use of mathematics .by contrast , information compression in this paper and in the sp theory focusses on the simple primitive idea , dubbed `` icmup '' and described in section [ preliminaries_section ] , that redundancy in information may be reduced by finding patterns that match each other and merging or unifying patterns that are the same .far from using mathematics as a basis for understanding information compression , the paper argues , in section [ computing_mathematics_section ] , that icmup may provide a basis for mathematics . *although this is not the main focus of the paper , it is pertinent to mention that icmup provides the foundation for the distinctive and powerful concept of _ multiple alignment _ , a central part of the sp theory of intelligence and , on evidence to date , a key to versatility and adaptability in intelligent systems .i believe this perspective is important for the field of artificial intelligence for three main reasons : * it has things to say directly about the nature of perception , learning , and other aspects of artificial intelligence . *it provides a foundation for the sp theory of intelligence which , via the sp computer model , has demonstrable capabilities in several aspects of artificial intelligence , as outlined in section [ outline_sp_theory_section ] , and it has a range of potential benefits and applications ( _ ibid . _ ) .* it suggests how artificial intelligence may be developed within an encompassing theoretical framework that includes human perception and cognition , mainstream computing , and mathematics .given that large amounts of information can be produced by people , by computers , and via mathematics , and given that ` redundancy ' or repetition in information is often useful in the storage and processing of information , it may seem perverse to suggest that ic is fundamental in our thinking , or in computing and mathematics .but for reasons outlined in section [ resolving_apparent_contractions_section ] , these apparent contradictions can be resolved . 
as an introduction to what follows, the next section describes some basic principles of ic .after that , the sp theory is described in outline , with pointers to further sources of information , and a summary of empirical support for the theory .this last is itself evidence for the importance of ic in computing and cognition .the sections that follow describe several other strands of evidence that point in the same direction .to cut through some of the complexities in this area , i have found it useful to focus on a rather simple idea : that we may identify repetition or ` redundancy ' in information by searching for patterns that match each other , and that we may reduce that redundancy and thus compress information by merging or ` unifying ' two or more copies to make one .for the sake of brevity , this idea may be shortened to `` information compression via the matching and unification of patterns '' or `` icmup '' .as just described , icmup loses information about the positions of all but one of the original patterns . but this can be remedied with any of the three variants of the idea , described below .icmup may seem too trivial to deserve comment .but because it is the foundation on which the rest of the sp system is built , there are implications that may seem strange and may at first sight look like major shortcomings in the theory : * the first of these is that the sp system , in itself , has no concepts of number and has no procedures for processing numbers . unlike an ordinary operating system or programming language, there is no provision for integers or reals and no functions such as addition , subtraction , square roots , or the like . , the sp system does use a concept of frequency and it does calculate probabilities . but these are part of the workings of the system and not available to users . in any case , they may be modelled via analogue signals , without using conventional concepts of number or arithmetic . ] * secondly , because the system has no concepts of number , it does not use any of the compression techniques that depend on numbers , such as arithmetic coding , wavelet compression , huffman codes , or the like .although the core of the sp system lacks any concept of number , there is potential for the system to represent numbers and process them , provided that it is supplied with knowledge about peano s axioms and related information about the structure and functioning of numbers , as outlined in ( * ? ? ?* chapter 10 ) .the potential advantage of starting with a clean slate , focussing on the simple ` primitive ' concept of icmup , is that it can help us avoid old tramlines , and open doors to new ways of thinking . with the first variant of icmup a technique called _chunking - with - codes_the unified pattern is given a relatively short name , identifier , or ` code ' which is used as a shorthand for the pattern or ` chunk ' .if , for example , the words `` treaty on the functioning of the european union '' appear in several different places in a document , we may save space by writing the expression once , giving it a short name such as `` tfeu '' , and then using that name as a code or shorthand for the expression wherever it occurs . likewise for the abbreviations in this paper , `` ic '' and `` icmup '' . 
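The chunking-with-codes idea can be expressed in a few lines of code. The sketch below is a deliberately naive illustration of ICMUP, not the SP system's own mechanism: a repeated chunk is stored once in a small dictionary, replaced by a short code wherever it occurs, and the reduction in the number of characters is printed.

```python
# Naive illustration of chunking-with-codes (not the SP system itself): a
# repeated chunk is stored once and replaced by a short code wherever it occurs.
text = ("the treaty on the functioning of the european union says X. "
        "under the treaty on the functioning of the european union, Y holds. "
        "the treaty on the functioning of the european union also covers Z.")

chunk = "treaty on the functioning of the european union"
code = "TFEU"

dictionary = {code: chunk}                        # the unified copy, stored once
encoded = text.replace(chunk, code)

original_size = len(text)
compressed_size = len(encoded) + sum(len(k) + len(v) for k, v in dictionary.items())
print("characters before:", original_size, "  after:", compressed_size)

decoded = encoded
for k, v in dictionary.items():                   # decoding restores the original
    decoded = decoded.replace(k, v)
assert decoded == text
```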
another variant , _ schema - plus - correction _, is like chunking - with - codes but the unified chunk of information may have variations or ` corrections ' on different occasions .for example , a six - course menu in a restaurant may have the general form `` menu1 : appetiser ( s ) sorbet ( m ) ( p ) coffee - and - mints ` ' , with choices at the points marked `` s ` ' ( starter ) , `` m ` ' ( main course ) , and `` p ` ' ( pudding ) . thena particular meal may be encoded economically as something like `` menu1:(3)(5)(1 ) ` ' , where the digits determine the choices of starter , main course , and pudding . a third variant , _ run - length coding _ ,may be used where there is a sequence two or more copies of a pattern , each one except the first following immediately after its predecessor . in this case, the multiple copies may be reduced to one , as before , with something to say how many copies there are , or when the sequence begins and ends , or , more vaguely , that the pattern is repeated .for example , a sports coach might specify exercises as something like `` touch toes ( ) , push - ups ( ) , skipping ( ) , ... '' or `` start running on the spot when i say ` start ' and keep going until i say ` stop ' '' .the _ sp theory of intelligence _ , described most fully in and more briefly in , aims to simplify and integrate observations and concepts across artificial intelligence , human perception and cognition , mainstream computing , and mathematics , with icmup as a unifying theme .the theory , as it stands now is the product of an extended programme of development and testing via the sp computer model .it is envisaged that that model will be the basis for a high - parallel , open - source version of the _ sp machine _ , hosted on an existing high - performance computer , and accessible via the web .this will be a means for researchers everywhere to explore what can be done with the system and to create new versions of it ( * ? ? ?* section 3.2 ) , . the sp theory , via the sp computer model ,has demonstrable capabilities in areas that include the representation of diverse forms of knowledge ( including class hierarchies , part - whole hierarchies , and their seamless integration ) , unsupervised learning , natural language processing , fuzzy pattern recognition and recognition at multiple levels of abstraction , best - match and semantic forms of information retrieval , several kinds of reasoning ( one - step ` deductive reasoning ' , abductive reasoning , probabilistic networks and trees , reasoning with ` rules ' , nonmonotonic reasoning , explaining away , causal reasoning , and reasoning that is not supported by evidence ) , planning , problem solving , and information compression .it also has useful things to say about aspects of neuroscience and of human perception and cognition ( _ ibid . _ ) .several potential benefits and applications of the sp theory are described in , with more detail in ( understanding natural vision and the development of articial vision ) , ( how the sp theory may help to solve nine problems associated with big data ) , ( the development of computational and energy efficiency , of versatility , and of adaptability in autonomous robots ) , ( the sp system as an intelligent database ) , and ( application of the sp system to medical diagnosis ) .an introduction to the theory may be seen in . 
in broad terms , the sp theory has three main elements : * all kinds of knowledge are represented with _ patterns _ : arrays of atomic symbols in one or two dimensions .* at the heart of the system is compression of information via the matching and unification ( merging ) of patterns , and the building of _ multiple alignments _ like the two shown in figure [ fruit_flies_figure ] . here , the concept of multiple alignment has been borrowed and adapted from bioinformatics . *the system learns by compressing _patterns to create _old _ patterns like those shown in rows 1 to 8 in each of the two multiple alignments in the figure .because information compression is intimately related to concepts of prediction and probability , the sp system is fundamentally probabilistic .each sp pattern has an associated frequency of occurrence , and probabilities may be calculated for multiple alignments and for inferences drawn from multiple alignments ( * ? ? ?* section 4.4 ) , ( * ? ? ?* section 3.7 ) .although the system is fundamentally probabilistic , it may be constrained to deliver all - or - nothing results in the manner of conventional computing systems . since ic is central in the sp theory ,the descriptive and explanatory range of the theory is itself evidence in support of the proposition that ic is a central principle in human perception and thinking , in computing and in mathematics .this section and those that follow describe other evidence for the importance of ic in computing and cognition .first , let s take a bird s eye view of why ic might be important in people and other animals , and in computing . in terms of biology: * ic can confer a selective advantage to any creature by allowing it to store more information in a given storage space or use less space for a given amount of information , and by speeding up transmission of information along nerve fibres thus speeding up reactions or reducing the bandwidth needed for any given volume of information .* perhaps more important than any of these things is the close connection , already mentioned , between ic and inductive inference .compression of information provides a means of predicting the future from the past and estimating probabilities so that , for example , an animal may get to know where food may be found or where there may be dangers .+ incidentally , the connection between ic and inductive prediction makes sense in terms of the matching and unification of patterns : any repeating pattern such as the association between black clouds and rain provides a basis for prediction black clouds suggest that rain may be on the way and probabilities may be derived from the number of repetitions . *being able to make predictions and estimate probabilities can mean large savings in the use of energy with consequent benefits in terms of survival . as with living things ,ic can be beneficial in computing in terms of the storage and transmission of information and what is arguably the fundamental purpose of computers : to make predictions .it may also have a considerable impact in increasing the energy efficiency of computers ( * ? ? ?* section ix ) , ( * ? ? ?* section iii ) . as we shall see, ic is more widespread in ordinary computers than may superficially appear .compression of information is so much embedded in our thinking , and seems so natural and obvious , that it is easily overlooked . here are some examples . 
in the same way that `` tfeu '' is a convenient code or shorthand for `` treaty on the functioning of the european union '' ,a name like `` new york '' is a compact way of referring to the many things of which that renowned city is composed .likewise for the many other names that we use : `` nelson mandela '' , `` george washington '' , `` mount everest '' , and so on . more generally , most words in our everyday language stand for _ classes _ of things and , as such , are powerful aids to economical description .imagine how cumbersome things would be if , on each occasion that we wanted to refer to a `` table '' , we had to say something like `` a horizontal platform , often made of wood , used as a support for things like food , normally with four legs but sometimes three , ... '' , like the slow language of the ents in tolkien s _ the lord of the rings_. likewise for verbs like `` speak '' or `` dance '' , adjectives like `` artistic '' or `` exuberant '' , and adverbs like `` quickly '' or `` carefully . '' hereis another example . if , when we are looking at something , we close our eyes for a moment and open them again , what do we see ?normally , it is the same as what we saw before . butrecognising that the before and after views are the same , means unifying the two patterns to make one and thus compressing the information , as shown schematically in figure [ swiss_landscape_figure ] . is from wallpapers buzz ( http://www.wallpapersbuzz.com/[www.wallpapersbuzz.com ] ) , reproduced with permission.,scaledwidth=90.0% ] it seems so simple and obvious that if we are looking at a landscape like the one in the figure , there is just one landscape even though we may look at it two , three , or more times .but if we did not unify successive views we would be like an old - style cine camera that simply records a sequence of frames , without any kind of analysis of understanding that , very often , successive frames are identical or nearly so .of course , we can recognise something that we have seen before even if the interval between one view and the next is hours , months , or years . in cases like that , it is more obvious that we are relying on memory , as shown schematically in figure [ recognition_figure ] . 
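The point about successive views being unified rather than stored afresh can be caricatured in code: incoming patterns are matched against stored ones, identical patterns are merged into a single copy with a frequency count, and storage stays compact however many times the same thing is seen. This is a toy illustration only, not a model of visual memory:

```python
from collections import Counter

# Toy illustration of matching and unification: identical incoming "views" are
# merged into one stored copy with a frequency count (not a model of memory).
views = ["landscape"]*5 + ["cup"]*3 + ["tree"]*2   # repeated sensory input

memory = Counter()
for v in views:
    memory[v] += 1                                 # match and unify: one copy each

print("distinct patterns stored:", dict(memory))
print("characters without unification:", sum(len(v) for v in views))
print("characters with unification   :", sum(len(v) for v in memory))
```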
notwithstanding the undoubted complexities and subtleties in how we recognise things ,the process may be seen in broad terms as one of matching incoming information with stored knowledge , merging or unifying patterns that are the same , and thus compressing the information .if we did not compress information in that way , our brains would quickly become cluttered with millions of copies of things that we see around us people , furniture , cups , trees , and so on and likewise for sounds and other sensory inputs .ic may also be seen at work in binocular vision : _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ `` in an animal in which the visual fields of the two eyes overlap extensively , as in the cat , monkey , and man , one obvious type of redundancy in the messages reaching the brain is the very nearly exact reduplication of one eye s message by the other eye . ''_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ in viewing a scene with two eyes , we normally see one view and not two .this suggests that there is a matching and unification of patterns , with a corresponding compression of information .evidence in support of that conclusion comes from a demonstration with ` random - dot stereograms ' , as described in ( * ? ? ?* section 5.1 ) . in brief ,each of the two images shown in figure [ stereogram_1_figure ] is a random array of black and white pixels , with no discernable structure , but they are related to each other as shown in figure [ stereogram_2_figure ] : both images are the same except that a square area near the middle of the left image is further to the left in the right image . .reproduced from ( * ? ? 
When the images in figure [stereogram_1_figure] are viewed with a stereoscope, projecting the left image to the left eye and the right image to the right eye, the central square appears gradually as a discrete object suspended above the background. Although this illustrates depth perception in stereoscopic vision (a subject of some interest in its own right), the main interest here is in how we see the central square as a discrete object. There is no such object in either of the two images individually. It exists purely in the _relationship_ between the two images, and seeing it means matching one image with the other and unifying the parts which are the same. This example shows that, although the matching and unification of patterns is a usefully simple idea, there are interesting subtleties and complexities that arise when two patterns are similar but not identical. Seeing the central object means finding a 'good' match between relevant pixels in the central area of the left and right images, and likewise for the background. Here, a good match is one that yields a relatively high level of IC. Since there is normally an astronomically large number of alternative ways in which combinations of pixels in one image may be aligned with combinations of pixels in the other image, it is not normally feasible to search through all the possibilities exhaustively. As with many such problems in artificial intelligence, the best is the enemy of the good. Instead of looking for the perfect solution, we can do better by looking for solutions that are good enough for practical purposes. With this kind of problem, acceptably good solutions can often be found in a reasonable time with heuristic search: doing the search in stages and, at each stage, concentrating the search in the most promising areas and cutting out the rest, perhaps with backtracking or something equivalent to improve the robustness of the search. One such method for the analysis of random-dot stereograms has been described by Marr and Poggio. It seems likely that the kinds of processes that enable us to see a hidden object in a random-dot stereogram also apply to how we see discrete objects in the world. The contrast between the relatively stable configuration of features in an object such as a car, compared with the variety of its surroundings as it travels around, seems to be an important part of what leads us to conceptualise the object as an object (section 5.2 of the reference).
Any creature that depends on camouflage for protection by blending with its background must normally stay still. As soon as it moves relative to its surroundings, it is likely to stand out as a discrete object. The idea that IC may provide a means of discovering 'natural' structures in the world has been dubbed the 'DONSVIC' principle: _the discovery of natural structures via information compression_ (section 5.2 of the reference). IC may also be seen at a low level in the workings of vision. Figure [limulus_figure] shows a recording from a single sensory cell (_ommatidium_) in the eye of a horseshoe crab (_Limulus polyphemus_) as a light is switched on, kept on for a while, and then switched off, as shown by the step function at the bottom of the figure. Contrary to what one might expect, the ommatidium fires at a 'background' rate of about 20 impulses per second even when it is in the dark (shown at the left of the figure). When the light is switched on, the rate of firing increases sharply but, instead of staying high while the light is on (as one might expect), it drops back almost immediately to the background rate. The rate of firing remains at that level until the light is switched off, at which point it drops sharply and then returns to the background level, a mirror image of what happened when the light was switched on. This pattern of responding, adaptation to constant stimulation, can be explained via the action of inhibitory nerve fibres that bring the rate of firing back to the background rate when there is little or no variation in the sensory input. But for the present discussion, the point of interest is that the positive spike when the light is switched on, and the negative spike when the light is switched off, have the effect of marking boundaries, first between dark and light, and later between light and dark. In effect, this is a form of run-length coding. At the first boundary, the positive spike marks the fact of the light coming on. As long as the light stays on, there is no need for that information to be constantly repeated, so there is no need for the rate of firing to remain at a high level. Likewise, when the light is switched off, the negative spike marks the transition to darkness and, as before, there is no need for constant repetition of information about the new low level of illumination.
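The following is a minimal sketch of run-length coding in that spirit; the sample data and the program itself are invented for illustration and are not part of the original account. It reports each level of illumination once, together with how long it persists, rather than repeating the level for every sample, much as the ommatidium signals changes but not continuations.
....
#include <stdio.h>

int main(void) {
    /* A stream of illumination samples: 0 = dark, 1 = light (illustrative data). */
    int samples[] = {0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0};
    int n = sizeof(samples) / sizeof(samples[0]);

    /* Run-length coding: output each level once, together with its duration,
       instead of repeating the level for every sample. */
    int level = samples[0], run = 1;
    for (int i = 1; i < n; i++) {
        if (samples[i] == level) {
            run++;                              /* no change: just extend the run   */
        } else {
            printf("level %d for %d samples\n", level, run);
            level = samples[i];                 /* a change: the only event worth   */
            run = 1;                            /* signalling, like the spikes above */
        }
    }
    printf("level %d for %d samples\n", level, run);
    return 0;
}
....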
It is recognised that this kind of adaptation in eyes is a likely reason for small eye movements when we are looking at something, including sudden small shifts in position ('microsaccades'), drift in the direction of gaze, and tremor. Without those movements, there would be an unvarying image on the retina so that, via adaptation, what we are looking at would soon disappear. Adaptation is also evident at the level of conscious awareness. If, for example, a fan starts working nearby, we may notice the hum at first but then adapt to the sound and cease to be aware of it. But when the fan stops, we are likely to notice the new quietness at first but adapt again and stop noticing it. Another example is the contrast between how we become aware if something or someone touches us, but are mostly unaware of how our clothes touch us in many places all day long. We are sensitive to something new and different, and we are relatively insensitive to things that are repeated. As can be seen in figure [speech_waveform_figure], people normally speak in 'ribbons' of sound, without gaps between words or other consistent markers of the boundaries between words. The figure shows the waveform for a recording of the spoken phrase "on our website". It is not obvious where the word "on" ends and the word "our" begins, and likewise for the words "our" and "website". Just to confuse matters, there are three places within the word "website" that look as if they might be word boundaries. [Figure [speech_waveform_figure]: acknowledgement for the figure and for permission to reproduce it.] Given that words are not clearly marked in the speech that young children hear, how do they get to know that language is composed of words? As before, it seems that IC and, more specifically, the DONSVIC principle provide an answer. It has been shown that, via the matching and unification of patterns, the beginnings and ends of words can be discovered in an English-language text from which all spaces and punctuation have been removed, and this without the aid of any kind of dictionary or other information about the structure of English (section 5.2 of the reference). It is true that there are added complications with speech, but it seems likely that similar principles apply. The DONSVIC principle may also be applied to the process of learning the grammar of a language. In addition to the learning of words, the process of grammar discovery or induction includes processes for learning grammatical classes of words (such as nouns, verbs and adjectives) and also syntactic forms such as phrases, clauses and sentences. Ultimately, grammar discovery should also include the learning of meanings and the association of meanings with syntax. In connection with language learning, IC provides an elegant solution to two problems: _generalisation_ (how we generalise our knowledge of language without over-generalising) and _dirty data_ (how we can learn a language despite errors in the examples we hear), with evidence that both these things can be achieved without the correction of errors by parents or teachers. In brief, a grammar that is good in terms of information compression is one that generalises without over-generalising; and such a grammar is also one that weeds out errors in the data. These things are described more fully in section 5.3 of the reference. It has long been recognised that our perceptions are governed by _constancies_:
* _Size constancy_. To a large extent, we judge the size of an object to be constant despite wide variations in the size of its image on the retina.
* _Lightness constancy_. We judge the lightness of an object to be constant despite wide variations in the intensity of its illumination.
* _Colour constancy_.
We judge the colour of an object to be constant despite wide variations in the colour of its illumination. These kinds of constancy, and others such as shape constancy and location constancy, may each be seen as a means of encoding information economically. It is simpler to remember that a particular person is "about my height" than many different judgements of size, depending on how far away that person is. In a similar way, it is simpler to remember that a particular object is "black" or "red" than all the complexity of how its lightness or its colour changes in different lighting conditions. If, as seems to be the case, IC is fundamental in our thinking, then it should not be surprising to find that IC is also fundamental in things that we use to aid our thinking: computing in the modern sense, where the work is done by machines, and mathematics, done by people or machines. Similar things can be said about logic, but the main focus here will be on computing and mathematics, starting with the latter. Roger Penrose writes: "It is remarkable that _all_ the superb theories of nature have proved to be extraordinarily fertile as sources of mathematical ideas. There is a deep and beautiful mystery in this fact: that these superbly accurate theories are also extraordinarily fruitful simply as _mathematics_."
(pp. 225-226, emphasis as in the original). In a similar vein, John Barrow writes: "For some mysterious reason mathematics has proved itself a reliable guide to the world in which we live and of which we are a part. Mathematics works: as a result we have been tempted to equate understanding of the world with its mathematical encapsulization. ... Why is the world found to be so unerringly mathematical?" (preface, p. vii). These writings about the "mysterious" nature of mathematics, others such as Wigner's "The unreasonable effectiveness of mathematics in the natural sciences", and schools of thought in the philosophy of mathematics (foundationism, logicism, intuitionism, formalism, platonism, neo-Fregeanism, and more) have apparently overlooked an obvious point: _mathematics can be a very effective means of compressing information_. This apparent oversight is surprising since mathematics is indeed a useful tool in science and, as already mentioned, it is recognised that "science is, at root, just the search for compression in the world." Here is an example of how ordinary mathematics (not some specialist algorithm for IC) can yield high levels of IC. Newton's equation for his second law of motion, $s = g t^2 / 2$, is a very compact means of representing any realistically large table of the distance travelled by a falling object ($s$) in a given time since it started to fall ($t$). (Here, $g$ is the acceleration due to gravity, about $9.8\,\mathrm{m/s^2}$.)
As illustrated in table [distance_time_table], that small equation would represent the table even if it were 1000 times bigger, or more. Likewise for other equations of the same general kind. In the subsections that follow, we shall dig a little deeper, looking at both mathematics and computing in terms of the ideas outlined earlier (section [preliminaries_section]): IC via the matching and unification of patterns, chunking-with-codes, schema-plus-correction, and run-length coding. In mathematics, the matching and unification of patterns can be seen mainly in the matching and unification of names. If, for example, we want to calculate the value of a variable from a set of equations, where one equation defines it in terms of a second variable and other equations supply that second variable's value, we need to match the name of the second variable where it appears in the different equations, and to unify those occurrences so that the correct value is used in the calculation. The sixth of Peano's axioms for natural numbers (for every natural number $n$, $S(n)$ is a natural number) provides the basis for a succession of numbers $S(0)$, $S(S(0))$, $S(S(S(0)))$, and so on, itself equivalent to unary numbers in which each number is represented by that many repetitions of a single symbol. A numbering system like that is good enough for counting a few things, but it is quite unmanageably cumbersome for large numbers. To be practical with numbers of all sizes, the obvious redundancies in those repetitions need to be reduced or eliminated. This can be done via the use of higher bases for numbers (binary, octal, decimal, and the like) (section 10.3.2.2 of the reference). Emil Post's "canonical system", which is recognised as a definition of 'computing' that is equivalent to a universal Turing machine, may be seen to work largely via the matching and unification of patterns. Much the same is true of the 'transition function' in a universal Turing machine. The matching and unification of patterns may be seen in the way computers retrieve information from computer memory. This means finding a match between the address in the CPU and the address in memory, with implicit unification of the two. It is true that logic gates provide the mechanism for finding an address in computer memory, but the process may also be seen as one of searching for a match between the address held in the CPU and the corresponding address in computer memory. A system like Prolog (a computer-based version of logic) may be seen to function largely via the matching and unification of patterns. Much the same can be said about query-by-example, a popular technique for retrieving information from databases. Other examples will be seen in the subsections that follow. If a set of statements is repeated in two or more parts of a computer program, then it is natural to declare them once as a 'function', 'procedure' or 'sub-routine' within the program and to replace each occurrence of the sequence with a 'call' to the function from each part of the program where the sequence occurred. This may be seen as an example of the chunking-with-codes technique for IC: the function may be regarded as a chunk, with the name of the function as its code or identifier.
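As a small illustration of chunking-with-codes (the function and its name are invented for the example), the repeated sequence of statements below is declared once as a function, the 'chunk', and each place where it is needed contains only a call to its short name, the 'code':
....
#include <stdio.h>

/* The 'chunk': a fixed sequence of statements that would otherwise be written
   out in full at several places in the program. The function's short name
   serves as its code. */
void print_banner(void) {
    printf("================================\n");
    printf(" report \n");
    printf("================================\n");
}

int main(void) {
    print_banner();          /* a short code ...                                  */
    printf("section one\n");
    print_banner();          /* ... standing for the whole chunk wherever needed  */
    printf("section two\n");
    return 0;
}
....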
In many cases, but not all, a name or identifier in computing or in mathematics may be seen to achieve compression of information by serving as a relatively short code for a relatively large chunk of information. Sometimes the identifier can be larger than what it identifies but, normally, this can be seen to make sense in terms of IC via schema-plus-correction, described next. The schema-plus-correction idea may be seen in two main areas: functions with parameters, and object-oriented programming. Normally, a function in a computer program, or a mathematical function, has one or more parameters, e.g., `sqrt(number)` (to calculate a square root), `bin2dec(number)` (to convert a binary number into its decimal equivalent), and `combin(count_1, count_2)` (to calculate the number of combinations of `count_1` things, taken `count_2` at a time). Any such function may be seen as an example of schema-plus-correction: the function itself may be seen as a chunk of information that may be needed in many different places; the name of the function serves as a relatively short code; and the parameters provide for variations or 'corrections' for any given instance. Imagine how inconvenient it would be if we were not able to specify functions in this way. Every time we wanted to calculate a square root, we would have to write out the entire procedure, and likewise for `bin2dec()`, `combin()` and the many other functions that people use. Here we can see why IC may be served, even if an identifier is bigger than what it identifies. Something that is small in terms of numbers of characters, such as a single-digit number, may be assigned to the relatively large identifier `number` in `sqrt(number)`, but that imbalance does little to offset the relatively large savings that arise from being able to call the function on many different occasions without having to write it out on each occasion. In any case, the processes of compiling or interpreting a computer program will normally convert long, human-friendly identifiers into short ones that can be processed more efficiently by computers. Apart from functions with parameters, the schema-plus-correction idea is prominent in object-oriented programming. From Simula, through Smalltalk to C++ and beyond, object-oriented languages allow programmers to create software 'objects', each one modelled on a 'class' or hierarchy of classes. Each such class, which normally represents some real-world category like 'person', 'vehicle', or 'item for delivery', may be seen as a schema.
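Here is a minimal sketch of the schema-plus-correction idea as it appears in functions with parameters; the function is invented for illustration. The body of the function is the schema, and the argument supplied at each call is the 'correction' that adapts the schema to a particular case:
....
#include <stdio.h>

/* The 'schema': a general procedure for reporting the area of a circle.
   (Invented for illustration; any parameterised function would do.) */
void report_circle_area(double radius) {
    const double pi = 3.14159265358979;
    /* 'radius' is the 'correction': it adapts the general schema
       to one particular circle. */
    printf("radius %.2f -> area %.2f\n", radius, pi * radius * radius);
}

int main(void) {
    report_circle_area(1.0);    /* schema + correction '1.0'  */
    report_circle_area(2.5);    /* schema + correction '2.5'  */
    report_circle_area(10.0);   /* schema + correction '10.0' */
    return 0;
}
....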
Like a function, each class normally has one or more parameters which may be seen as a means of applying 'corrections' to the schema. For example, when a 'person' object is created from the 'person' class, his or her gender and job title may be specified via parameters. Classes in object-oriented languages are powerful aids to IC. If, for example, we have defined a class for 'vehicle', perhaps including information about the care and maintenance of vehicles, procedures to be followed if there is a breakdown outside the depot, and variables for things like engine size and registration number, we avoid the need to repeat that information for each individual vehicle. Attributes of high-level classes are 'inherited' by lower-level classes, saving the need to repeat the information in each lower-level class. Run-length coding appears in various forms in mathematics, normally combined with other things. Here are some examples:
* Multiplication (e.g., $3 \times 4 = 4 + 4 + 4$) is repeated addition.
* Division of a larger number by a smaller one (e.g., $12 / 4$) is repeated subtraction.
* The power notation (e.g., $2^4 = 2 \times 2 \times 2 \times 2$) is repeated multiplication.
* A factorial (e.g., $4! = 4 \times 3 \times 2 \times 1$) is repeated multiplication and subtraction.
* The bounded summation notation (e.g., $\sum_{i=1}^{n}$) and the bounded power notation (e.g., $\prod_{i=1}^{n}$) are shorthands for repeated addition and repeated multiplication, respectively. In both cases, there is normally a change in the value of a variable on each iteration, so these notations may be seen as a combination of run-length coding and schema-plus-correction.
* In matrix multiplication, the product of two matrices is a shorthand for the repeated operation of multiplying entries in one matrix with the corresponding entries in the other.
Of course, things like multiplication and division are also provided in programming languages. In addition, there is more direct support for run-length coding with iteration statements like _repeat ... until_, _while ..._, and _for ..._. For example,
....
s = 0;
for (i = 1; i <= 100; i++)
    s += i;
....
specifies 100 repetitions of adding `i` to `s`, with the addition of 1 to `i` on each iteration, without the need to write out each of the 100 repetitions explicitly. Most programming languages also provide for run-length coding in the form of recursive functions like this:
....
int factorial(int x) {
    if (x == 1)
        return 1;
    return x * factorial(x - 1);
}
....
Here, the repeated multiplication and subtraction of the factorial function is achieved economically by calling the function from within itself. As noted in the introduction, the idea that IC is fundamental in artificial intelligence, human perception and cognition, and in mainstream computing and mathematics, seems to be contradicted by the productivity of the human brain and the ways in which computers and mathematics may be used to create information as well as to compress it; and it seems to be contradicted by the fact that redundancy in information is often useful in both the storage and processing of information. These apparent contradictions, and how they may be resolved, are discussed briefly here. An example of how computers may be used to create information is how the "hello, world" message of C-language fame may be printed 1000 times, with a correspondingly high level of redundancy, by a call to `hello_world(1000)`, defined as:
....
void hello_world(int x) {
    printf("hello, world\n");
    if (x > 1)
        hello_world(x - 1);
}
....
Here, the instruction `printf("hello, world\n");` prints a copy of "hello, world". Then, when the variable `x` has the value 1000, the next line ensures that the whole process is repeated another 999 times. The way in which IC may achieve this kind of productivity may be seen via the workings of the SP computer model. When that model (sections 3.9, 3.10, and 9.2 of the reference) is used to parse a sentence into its constituent parts and sub-parts, as shown in parts (a) and (b) of figure [fruit_flies_figure], the model creates a relatively small code as a compressed representation of the sentence (section 3.5 of the reference). But exactly the same computer model, using exactly the same processes of IC via the matching and unification of patterns, may reverse the process, reconstructing the original sentence from the code (section 3.8 of the reference). This is similar to the way that a suitably constructed Prolog program may not only be run 'forwards' to create 'results' from 'data' but may also be run 'backwards' to create 'data' from 'results'. A very rough analogy is the way that a car can be driven backwards as well as forwards, but the engine is working in exactly the same way in both cases. Reduced to its essentials, the way that the SP model can be run 'backwards' works like this. Using our earlier example, a relatively large pattern like `treaty on the functioning of the european union` is first assigned a relatively short code like `tfeu` to create the pattern `tfeu treaty on the functioning of the european union`, which combines the short code with the thing it represents. Then a copy of the short code, `tfeu`, may be used to retrieve the original pattern via matching and unification with `tfeu` within the combined pattern. The remainder of the combined pattern, `treaty on the functioning of the european union`, may be regarded as the 'output' of the retrieval process. As such, it is a decompressed version of the short code. And that decompression has been achieved via a process of IC by the matching and unification of two copies of the short code. Superficially, using one mechanism to run the model 'forwards' and 'backwards' has the flavour of a perpetual motion machine: something that looks promising but conflicts with fundamental principles. The critical issue is the size of the short code. It needs to be at least slightly bigger than the theoretical minimum for the process to work as described (section 3.8.1 of the reference). If there is some residual redundancy in the code, the SP model has something to work on. With that proviso, "decompression by compression" is not as illogical as it may sound.
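As a minimal sketch of that retrieval process (it is not the SP model itself, and the data structure and function names are invented for illustration), a combined pattern stores the short code together with the thing it represents; presenting a fresh copy of the short code, and matching and unifying it with the stored copy, yields the remainder of the pattern as the decompressed 'output':
....
#include <stdio.h>
#include <string.h>

/* A combined pattern: a short code stored together with the thing it represents. */
struct pattern {
    const char *code;      /* e.g. "tfeu" */
    const char *content;   /* e.g. "treaty on the functioning of the european union" */
};

/* 'Decompression by compression': match a copy of the short code against the
   stored copy; unification of the two copies exposes the rest of the pattern,
   which is returned as the decompressed output. */
const char *retrieve(const struct pattern *store, int n, const char *code) {
    for (int i = 0; i < n; i++)
        if (strcmp(store[i].code, code) == 0)   /* matching the two copies of the code */
            return store[i].content;            /* the remainder is the 'output'       */
    return NULL;
}

int main(void) {
    struct pattern store[] = {
        { "tfeu", "treaty on the functioning of the european union" },
    };
    const char *result = retrieve(store, 1, "tfeu");
    if (result != NULL)
        printf("%s\n", result);
    return 0;
}
....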
There is no doubt that informational redundancy (repetition of information) is often useful. For example:
* With any kind of database, it is normal practice to maintain one or more backup copies as a safeguard against catastrophic loss of the data.
* With information on the internet, it is common practice to maintain two or more 'mirror' copies in different places to minimise transmission times and to reduce the chance of overload at any one site.
* The redundancy in natural language can be a very useful aid to comprehension of speech in noisy conditions.
These kinds of uses of redundancy may seem to conflict with the idea that IC (which means reducing redundancy) is fundamental in computing and cognition. However, the two things may be independent, or the usefulness of redundancy may actually be understood in terms of the SP theory itself. An example of how the two things may be independent is the above-mentioned use of backup copies of databases: "... it is entirely possible for a database to be designed to minimise internal redundancies and, at the same time, for redundancies to be used in backup copies or mirror copies of the database ... paradoxical as it may sound, knowledge can be compressed and redundant at the same time." (section 2.3.7 of the reference). An example of how the usefulness of redundancy may be understood in terms of the SP theory is how, in the retrieval of information from a database or other body of knowledge, there needs to be some redundancy between the search pattern and each matching pattern in the knowledge base (section [decompression_by_compression_section]). Again, redundancy provides the key to how, in applications such as parsing natural language or pattern recognition, the SP system may achieve good results despite errors of omission, commission or substitution, and thus, in effect, suggest interpolations for errors of omission and corrections for errors of commission or substitution (sections 8 and 9, and section 6.2, of the references). This paper presents evidence for the idea that much of artificial intelligence and of human perception and thinking, and much of computing and mathematics, may be understood as compression of information via the matching and unification of patterns. This is the foundation for the SP theory of intelligence, outlined in section [outline_sp_theory_section], with pointers to where further information may be found. The explanatory range of the theory (in perception, reasoning, planning, problem solving, and more) provides indirect support for the idea that IC is an important principle in computing and cognition. Information compression can mean advantages for creatures: in efficient storage and transmission of information; in being able to make predictions about sources of food, where there may be dangers, and so on; and in corresponding savings in energy. Likewise for artificial systems. Some aspects of IC and its benefits are so much embedded in our everyday thinking that they are easily overlooked. Most nouns, verbs and adjectives may be seen as short codes for relatively complex concepts, and we frequently create shorthands for relatively long expressions. If we blink or otherwise close our eyes for a moment, we normally merge the before and after views into a single percept. In recognising something after a longer period, we are, in effect, merging the new perception with something that we remember. If we are viewing something with two eyes, we normally merge the two retinal images into a single percept. IC via the matching and unification of patterns may be seen in both computing and mathematics. An equation can be a powerful aid to IC.
In the processing of computer programs or mathematical equations, IC may be seen in the matching and unification of names. It may also be seen: in the reduction or removal of redundancy from unary numbers to create numbers with bases of 2 or more; in the workings of Post's canonical system and the transition function in the universal Turing machine; in the way computers retrieve information from memory; in systems like Prolog; and in the query-by-example technique for information retrieval. The chunking-with-codes technique for IC may be seen in the use of named functions to avoid repetition of computer code. The schema-plus-correction technique may be seen in functions with parameters and in the use of classes in object-oriented programming. And the run-length coding technique may be seen in multiplication, in division, and in several other devices in mathematics and computing. The SP theory resolves the apparent paradox of "decompression by compression". And viewing computing and cognition as IC is compatible with the uses of redundancy in such things as backup copies to safeguard data and understanding speech in a noisy environment. This perspective can be fruitful in research into artificial intelligence, human perception and cognition (including neuroscience), and mainstream computing and its applications. It may also prove useful in mathematics and its applications.
H. B. Barlow. Sensory mechanisms, the reduction of redundancy, and intelligence. In HMSO, editor, _The Mechanisation of Thought Processes_, pages 535-559. Her Majesty's Stationery Office, London, 1959.
H. Sakamoto. Grammar compression: grammatical inference by compression and its application to real data. In _Proceedings of the 12th International Conference on Grammatical Inference_, volume 34 of _JMLR: Workshop and Conference Proceedings_, pages 3-20. 2014.
R. J. Solomonoff. The application of algorithmic probability to problems in artificial intelligence. In L. N. Kanal and J. F. Lemmer, editors, _Uncertainty in Artificial Intelligence_, pages 473-491. Elsevier Science, North-Holland, 1986.
J. G. Wolff. Learning syntax and meanings through optimization and distributional analysis. In Y. Levy, I. M. Schlesinger, and M. D. S. Braine, editors, _Categories and Processes in Language Acquisition_, pages 179-215. Lawrence Erlbaum, Hillsdale, NJ, 1988. See http://bit.ly/zigjyc.
J. G. Wolff. Medical diagnosis as pattern recognition in a framework of information compression by multiple alignment, unification and search. 42:608-625, 2006. See http://bit.ly/xe7prg.
J. G. Wolff. CognitionResearch.org, Menai Bridge, 2006. ISBN: 0-9550726-0-3 (ebook edition), 0-9550726-1-1 (print edition). Distributors, including amazon.com, are detailed on http://bit.ly/wmb1rs.
J. G. Wolff. Application of the SP theory of intelligence to the understanding of natural vision and the development of computer vision. 3(1):552-570, 2014. See http://bit.ly/1scmpv9.
J. G. Wolff. Big data and the SP theory of intelligence. 2:301-315, 2014. See http://bit.ly/1jgwxdh. This article, with minor revisions, is due to be reproduced in Fei Hu (ed.), _Big Data: Storage, Sharing, and Security (3S)_, Taylor & Francis LLC, CRC Press, 2015.
J. G. Wolff. Proposal for the creation of a research facility for the development of the SP machine. Technical report, CognitionResearch.org, 2015. Unpublished document. See http://bit.ly/1zzjjis.
This paper presents evidence for the idea that much of artificial intelligence, human perception and cognition, mainstream computing, and mathematics may be understood as compression of information via the matching and unification of patterns. This is the basis for the _SP theory of intelligence_, outlined in the paper and fully described elsewhere. Relevant evidence may be seen: in empirical support for the SP theory; in some advantages of information compression (IC) in terms of biology and engineering; in our use of shorthands and ordinary words in language; in how we merge successive views of any one thing; in visual recognition; in binocular vision; in visual adaptation; in how we learn lexical and grammatical structures in language; and in perceptual constancies. IC via the matching and unification of patterns may be seen in both computing and mathematics: in IC via equations; in the matching and unification of names; in the reduction or removal of redundancy from unary numbers; in the workings of Post's canonical system and the transition function in the universal Turing machine; in the way computers retrieve information from memory; in systems like Prolog; and in the query-by-example technique for information retrieval. The chunking-with-codes technique for IC may be seen in the use of named functions to avoid repetition of computer code. The schema-plus-correction technique may be seen in functions with parameters and in the use of classes in object-oriented programming. And the run-length coding technique may be seen in multiplication, in division, and in several other devices in mathematics and computing. The SP theory resolves the apparent paradox of "decompression by compression". And viewing computing and cognition as IC is compatible with the uses of redundancy in such things as backup copies to safeguard data and understanding speech in a noisy environment. _Keywords:_ information compression, intelligence, computing, mathematics
A consequence of the vast expressiveness of human language is that natural language understanding (NLU) cannot scale to _general language_ input unless it is willing to make some compromises for the sake of practicality and robustness. One such compromise made in most state-of-the-art natural language processing technologies (e.g., syntactic parsing) is that the computational model of language is not a complete model of human grammatical knowledge, but rather a set of soft preferences derived by statistical learning algorithms from large human-annotated datasets. Thus, a strategy for advancing the state of the art in NLU is to focus linguists' descriptive effort on _annotation schemes_ and _datasets_: annotating corpora with semantic information, for instance, so that formal cues (denotational or contextual) can be automatically associated with meaning representations. A representation will necessarily be limited in the level of detail it provides (its _granularity_) and/or the range of linguistic expressions that it is prepared to describe (its _coverage_). The principles of construction grammar can inform corpus annotations even if they fall short of full-fledged constructional parses. Mindful of this granularity-coverage tradeoff, we have sought to develop a scheme that will be of practical value for broad-coverage human annotation, and therefore domain-general NLU, for a particular set of lexicogrammatical markers: *prepositions* in English and, more generally, *adpositions* and *case markers* across languages. Forming a relatively closed class, these markers are incredibly versatile, and therefore exceptionally challenging to characterize semantically, let alone disambiguate automatically ([sec:lit]). As a first step, we describe *preposition supersenses*, which target a coarse level of granularity and support comprehensive coverage of types and tokens in English. However, in attempting to generalize this approach to other languages, we uncovered a major weakness: it does not distinguish the contribution of the preposition itself, i.e., what the adposition *codes* for, from the semantic role or relation that the adposition mediates and that a predicate or scene *calls* for; and as a result, the label that would be most appropriate is underdetermined for many tokens ([sec:problems]). In our view, the mismatch can be understood through the lens of *construal* and should be made explicit, leveraging the principles of construction grammar ([sec:construal]). [sec:applying_bipartite] surveys some of the phenomena that our new analysis addresses; [sec:challenges] discusses the tradeoffs inherent in the proposed approach. Finally, we sketch how our proposal would fit into a compositional constructional analysis of adpositional phrases ([sec:cxg_discussion]). The most frequent English prepositions are extraordinarily polysemous. For example, the preposition * * expresses different information in each of the usages in [ex:at]. NLU systems, when confronted with a new instance of * *, must determine whether it marks an entity or scene's location, time, instrument, or something else. As lexical classes go, prepositions are something of a red-headed stepchild in the linguistics literature. Most of the semantics literature on prepositions has revolved around how they categorize space and time.
However, there have been a couple of lines of work addressing preposition semantics broadly. In cognitive linguistics, studies have examined abstract as well as concrete uses of English prepositions. Notably, the polysemy of * * and other prepositions has been explained in terms of sense networks encompassing core senses and motivated extensions. The Preposition Project (TPP) broke ground in stimulating computational work on fine-grained word sense disambiguation of English prepositions. Typologists, meanwhile, have developed _semantic maps_ of functions, where the nearness of two functions reflects their tendency to fall under the same adposition or case marker in many languages. *Preposition supersenses.* Following previous work, we sought coarse-grained semantic categories of prepositions as a broader-coverage alternative to fine-grained senses. Because we want our labels to generalize across languages, we use categories similar to those appearing in semantic maps rather than lexicalized senses. We identified a set of such categories through extensive deliberation involving the use of dictionaries, corpora and pilot annotation experiments. We call these categories *supersenses* to emphasize their similarity to coarse-grained classifications of nouns and verbs that go by that name. The * * examples in [ex:at] are accompanied by the appropriate supersenses from our scheme. Most supersenses resemble thematic roles, in a well-established tradition; a few others are needed to describe preposition-marked relations between entities. There are multiple English prepositions per supersense; e.g., "* * the city" and "* * the table" would join "* * 123 main st." in receiving the same supersense label. We understand the supersenses as prototype-based categories, and in some cases use heuristics like paraphraseability (e.g., "in order to") and wh-question words (e.g., "why?") to help determine which tokens are instances of the category. The 75 supersenses are organized in a taxonomy based on that of VerbNet, with a small number of categories at the top level. The taxonomy uses multiple inheritance to account for subcategories which are considered to include properties of multiple supercategories. The full hierarchy appears in [fig:hierarchy]. Our approach to preposition annotation is _comprehensive_, i.e., every token of every preposition type is given a supersense label. We applied the supersenses to annotate a 55,000-word corpus of online reviews in English, covering all 4,250 preposition tokens. For each token, annotators chose a single label from the inventory. This is not an easy task, but with documentation of many examples in a lexical resource, *PrepWiki*, trained university students were able to achieve reasonable levels of inter-annotator agreement. Every token was initially labeled by at least two independent annotators, and differences were adjudicated by experts. While the above approach worked reasonably well for most English tokens, a few persistent issues arising in English and other languages have led us to revisit fundamental assumptions about what it means to semantically label an adposition.
In our original English annotation, a few phenomena caused us much hand-wringing, not because there was no appropriate supersense, but because _multiple_ supersenses seemed to fit. For example, we found that two supersenses could compete for the same semantic territory. [ex:about-topic] evinces related usages of * * with different governors: the first three usages could reasonably be given the same label. This is because the * *-PP indicates what is communicated [ex:book-about, ex:read-about] and known [ex:know-about]. The fourth example [ex:care-about], however, presents an overlap in its interpretation. On the one hand, traditional thematic role inventories include a category for something that prompts a perceptual or emotional experience, as in [ex:afraid-of]. Surely, _cared_ in [ex:care-about] describes an emotional state, so * * marks that role. However, much like examples [ex:book-about, ex:read-about, ex:know-about], the semantics present in the first three examples is still very much present in the use of * *, which draws attention to the aspects of the caring process involving thought or judgement. This sits in contrast to the use of * * in "I cared * * my grandmother", where the prepositional choice calls attention to the benefactive aspect of the caring act. If we are constrained to one label per argument, where should the line be drawn between the two labels in cases of overlap? In other words, should the semantic representation emphasize the semantic commonality between all of the examples in [ex:about-topic], or between [ex:care-about] and [ex:afraid-of]? Observing that annotators were inconsistent on such tokens, we drew a boundary between the two labels in an attempt to force consistency. Below, we instead argue that the idea of construal/conceptualization offers a more principled answer; in our new analysis, the label suggested by * * and the label suggested by _cared_ can coexist.
Here, Bipasha is not construed as a possessor when the postposition * * is not used. Our preliminary annotation of Hindi, Korean, and Hebrew has suggested that instances of overlap between multiple supersenses are fairly frequent. Why do "cared * * the strategy" in [ex:care-about] and "anger * * Bipasha" in [ex:bipasha] above not lend themselves to a single label? These seem to be symptoms of the fact that no English preposition prototypically marks such roles, though from the perspective of the predicates, such roles are thought to be important generalizations in characterizing events of perception and emotion. In essence, there is an apparent mismatch between the roles that the verb _care_ or the noun _anger_ calls for, and the functions that English prepositions prototypically code for. While * * prototypically codes for one function and * * prototypically codes for another, there is no preposition that "naturally" codes for these roles in the same way. Thus, if a predicate marks such a role with a preposition, the preposition will contribute something new to the conceptualization of the scene being described. With "cared * * the strategy", it is this additional nuance that the preposition brings to the table; with "anger * * Bipasha", it is the conceptualization of anger as an attribute that somebody possesses. Thus, we turn to theories in cognitive semantics to define the phenomenon of *construal* as a means of understanding the contributions that emerge from the adpositions with respect to the expressed event or situation. Then, we turn to the guiding principles of construction grammar to develop a method called *bipartite analysis* in order to handle the problem posed by construals and to resolve the apparent semantic overlap, which is pervasive across languages. The world is not neatly organized into bits of information that map directly to linguistic symbols. Rather, linguistic meaning reflects the priorities and categorizations of particular expressions in a language. Much as pictures of a scene from different viewpoints will result in different renderings, a real-world situation being described will "look" different depending on the linguistic choices made by a speaker. This includes within-language choices: e.g., the choice of "John sold Mary a book" vs. "John sold a book to Mary" vs. "Mary bought a book from John". In the process called *construal* (a.k.a. *conceptualization*), a speaker "packages" ideas for linguistic expression in a way that foregrounds certain elements of a situation while backgrounding others. We propose to incorporate this notion of construal in adposition supersense annotation. We use the term *scene* to refer to events or situations in which an adpositional phrase plays a role. (We do not formalize the full scene, but assume its roles can be characterized with supersense labels from [fig:hierarchy].) Contrast the use of the prepositions * * and * * in [ex:puccini]: while both prepositional phrases indicate works created by the operatic composer Puccini, the different choices of preposition reflect different construals: * * highlights the agency of Puccini, whereas * * construes Puccini as the source of his composition. Thus, "works * * Puccini" and "works * * Puccini" are paraphrases, but present subtly different portrayals of the relationship between Puccini and his works.
In other words, these paraphrases are not identical in meaning, because the preposition carries with it different nuances of construal. In this paper, we focus on differences in construal manifested in different adposition choices, and on the possibility that an adposition construal complements the construal of a scene and its roles (as evoked by the governing head or predicate). For instances like "I read * * the strategy" in [ex:read-about] that were generally unproblematic for annotation under the original preposition guidelines, the semantics of the adposition and the semantic role assigned by the predicate are congruent. However, for examples like "cared * * the strategy" in [ex:care-about] and "anger * * Bipasha" in [ex:bipasha], we say that the adposition construes the role as something other than what the scene specifies. Competition between different adposition construals accounts for many of the alternations that are near-paraphrases but potentially involve slightly different nuances of meaning (e.g., "talk * * someone" vs. "talk * * someone"; "angry * * someone" vs. "angry * * someone"). Thus, the notion of construal challenges the original conception that each supersense reflects the semantic role assigned by its governing predicate (i.e., a verbal or event nominal predicate), and that a single supersense label can be assigned to each adposition token. Rather than trying to ignore these construals to favor a single-label approach, or possibly creating new labels to capture the meaning distinctions that construals impose on semantic roles, we adopt an approach that gives us the flexibility to deal with both the semantics coming from the scene and the construals imposed by the adpositional choice. We address the issues of construal by proposing a *bipartite analysis* that decouples the semantics signaled by the adposition from the role expected by the scene. Essentially, we borrow from construction grammar the notion that semantic contributions can be made at various levels of syntactic structure, beginning with the semantics contributed by the lexical items. Under our original single-label analysis, the full weight of semantic assignment rested on the predicate's semantic role, with the indirect assumption that the predicate selects for adpositions relevant to the assignment. Under the bipartite analysis, we assign semantics at both the scene and adposition levels of meaning: we capture what the scene _calls_ for, henceforth the *scene role*, and what the adposition itself _codes_ for, henceforth the *function*. Both labels are drawn from the supersense hierarchy ([fig:hierarchy]). Allowing tokens to be annotated with both a role and a function accounts for the non-congruent adposition construals, as in [ex:puccini2]. Bipartite analysis recognizes that both of these sentences carry the same supersense at the scene level, but it also allows for the construal that arises from the chosen preposition: * * is assigned one function and * * is assigned another. Our bipartite annotation scheme does not require a syntactic parse. It therefore does not provide a full account of constructional compositionality. The scene that the PP elaborates may take a variety of syntactic forms; we aim to train annotators to interpret the scene without annotating its lexical/syntactic form explicitly.
In [sec:cxg_discussion], we sketch how a compositional construction grammar analysis could capture the function and the scene role at different levels of structure. In this section, we discuss some of the more productive examples of non-congruent construals in English as well as in Hindi, Korean, and Hebrew. Hereafter, we will use a role-function notation to indicate such construals. Adopting the "realization" metaphor of articulating an idea linguistically, this can be read as "the scene role is realized with an adposition that marks the function." Scenes of emotion and perception provide a compelling case for the bipartite construal analysis. Consider the sentences involving emotion in example [ex:scared]: comparing examples [ex:bear] and [ex:job], we notice that there are two different types of stimuli represented in otherwise semantically parallel sentences. The preposition * * gives the impression that the stimulus is responsible for triggering an instinctive fear reflex, while * * portrays the thing feared as the content of thought. In some languages, the experiencer can be conceptualized as a recipient of the emotion or feeling, thus licensing dative marking. In the Hebrew example [ex:hot_to_me], the experiencer of bodily perception is marked with the dative preposition * *. Similarly, in Hindi, the dative postposition * * marks an experiencer in [ex:seems_hot_to_me]. Contrast this with examples where scene role and adposition function are congruent: in [ex:easy1] and [ex:easy2], the preposition is prototypical for the given scene role and its function directly identifies the scene role. Because the semantics of the role and function are congruent, these cases do not exhibit the extra layer of construal seen in [ex:scared] and [ex:dative-experiencers]. (One might ask whether such uses should be annotated as spatial: we do not discount the possibility that such a metaphor can be cognitively active in speakers using temporal adpositions; in fact, there is considerable evidence that time-as-space metaphors are cross-linguistically pervasive and productive. However, we do not see much practical benefit to annotating temporal * * or topical * * as spatial.) In essence, the bipartite analysis helps capture the construals that characterize the less prototypical scene role and function pairings. Our online reviews corpus shows that, at least in English, professional relationships (especially employer-employee and business-client ones) are fertile ground for alternating preposition construals. Several examples in our corpus were tagged in this way; all of these construals are _motivated_ in that they highlight an aspect of prototypical professional relationships: e.g.
, an employee's work prototypically takes place at the business location (hence "work * *"), though this is not a strict condition for using "work * *": the meaning of * * has been extended from the prototype. Likewise, the pattern "_person_ {* *, * *, * *} _organization_" has been conventionalized to signify employment or similar institutional-belonging relationships. Bipartite analysis equips us with the ability to use existing labels to deal with the overloading of a single label, instead of forcing a difficult decision or creating several additional categories. This analysis also accounts for similar construals presented by adpositions in other languages. For example, the overlap seen in the English example [ex:associate_from] occurs in Hindi and Korean as well. Another source of difficulty in the original annotation came from caused-motion verbs like _put_, which take a PP indicating part of a path. Sometimes the preposition lexically marks a source or goal, e.g., * *, * *, or * * [ex:put_into]. Often, however, the preposition is prototypically locative, e.g., * * or * * [ex:put_on], though the object of the preposition is interpreted as a destination, equivalent to the use of * * or * *, respectively. This locative-as-destination construal is highly productive, so analyzing * * as polysemous between a locative sense and a destination sense does not capture the regularity. The PP is sometimes analyzed as a resultative phrase. In our terms, we simply say that the scene calls for a destination, but the preposition codes for a location: thus, we avoid listing the preposition with multiple lexical functions for this regular phenomenon. The opposite problem occurs with fictive motion: a path PP, and sometimes a motion verb, construe a static scene as dynamic. Rather than forcing annotators to side with the dynamic construal effected by the language, versus the static nature of the actual scene, we represent both: the scene role is static and the preposition function is dynamic.
a similar phenomenon can be found in hindi : this suggests that , apart from spatiotemporal relations and semantic roles , adpositions can mark * information structural * properties for which we would need a separate inventory of labels .in some idiomatic predicate argument combinations , the semantic motivation for the preposition may not be clear [ ex : idiomatic ] . while the scene role in [ ex : listen_to ] and [ ex : proud_of ] is clearly , the function is less clear . is the object of attention construed ( metaphorically ) as a in [ ex : listen_to ] , and the cause for pride as a in [ ex : proud_of ] ? or are * * and * * semantically empty argument - markers for these predicates ( cf .the `` case prepositions '' of ) ?we do not treat either combination as an unanalyzable multiword expression because the ordinary meaning of the predicate is very much present .[ ex : unhappy_with ] and [ ex : interested_in ] are similarly fraught . but as we look at more data , we will entertain the possibility that the function can be null to indicate a marker which contributes no lexical semantics .annotators are generally capable of interpreting meaning in a given context .however , it might be difficult to train annotators to develop intuitions about adposition functions , which reflect prototypical meanings contributed by the lexical item that may not be literally applicable .these distinctions may be too subtle to annotate reliably .as we are approaching this project with the goal of producing annotated datasets for training and evaluating natural language understanding systems , it is an important concern .we are currently planning pilot annotation studies to ascertain ( i ) the prevalence of the role vs. function mismatches , and ( ii ) annotator agreement on such instances . enshrining role function pairs in the lexicon may facilitate inter - annotator consistency : our experience thus far is that annotators benefit greatly from examples illustrating the possible supersenses that can be assigned to a preposition .if initial pilots are successful , we would then need to decide whether to annotate the role and function together or in separate stages . because the function reflects one of the adposition s prototypical senses ,it may often be deterministic given the adposition and scene role , in which case we could focus annotators efforts on the scene roles .existing annotations for lexical resources such as propbank , verbnet , and framenet might go a long way toward disambiguating the scene role , limiting the effort required from annotators . assuming the above theoretical and practical concerns are surmountable, annotated corpora would facilitate empirical studies of the nature and limits of adposition / case construal within and across languages .for example : is it the case that some of the supersense labels can only serve as scene roles , or only as functions ? (a hypothesis is that subtypes tend to be limited to scene roles , but this needs to be examined empirically . )which role function pairs are attested in particular languages , and are any universal ? thus far we have seen that certain scene roles , such as , , and , invite many different adposition construals is this universally true ? 
as adpositions are notoriously difficult for second language learners , would it help to explain which construals do and do not transfer from the first language to the second language ?the bipartite analysis may allow us to trade more complexity at the token level for less complexity in the label set . as discussed in [ sec : construal ] , separating the scene role and function levels of annotation will more adequately capture construal phenomena without forcing an arbitrary choice between two labels or introducing further complexity into the hierarchy .in fact , we hope to _ simplify _ our current supersense hierarchy , especially by removing labels with multiple inheritance for usages that can be accounted for with the bipartite analysis instead .candidates include ( inheriting from and ; e.g. , `` the fly flew * * zig - zags '' ) and ( inheriting from and ; e.g. , `` we traveled * * bus '' ) .we may also collapse the pairs / , / , and /.a simpler hierarchy of supersenses will serve to reduce the number of labels for annotators to consider during the annotation process and also help improve automatic methods by reducing sparsity of labels in the data .we have focused on developing a _ broad - coverage _ annotation scheme for adpositional semantics , and our proposal requires no more than two categorical labels per adposition token ( but see below ) .although our current approach falls short of a full constructional derivation of the form meaning correspondences that comprise a sentence and the interpretation that results , we believe our approach could inform such an analysis .construction grammar formalisms that support full - sentence analyses include embodied construction grammar , fluid construction grammar , and sign - based construction grammar . without tying ourselves to any one of these ,we observe at a high level that the lexical semantic contribution of the adposition ( the function ) can be distinguished from the role of a governing predicate or scene by assigning these meanings to different stages of the derivation .e.g. , in `` care * * the strategy , '' the adposition and pp could express a figure - ground relation whose ground is the meaning of `` the strategy '' ; `` care '' could evoke a semantic frame with a role ; and an argument structure construction could link the ground of the figure - ground relation with the role . if the construal with * * is sufficiently productive , the generalization could be formalized via an argument - structure construction with a verb slot limited to verbs of ( say ) emotion and a pp headed by topical * * .there are complications which we are not yet prepared to fully address .first , if the pp is not governed by a predicate which provides the roles such as a verb or eventive / relational noun the preposition may need to evoke a meaning more specific than our labels .e.g. , for `` children * * pajamas '' and `` woman * * black , '' * * may be taken to evoke the semantics of wearing clothing .the label set we use for broad - coverage annotation is , of course , vaguer , and would simply specify for the clothing sense of * * .copular constructions raise similar issues .consider `` it is * * you to decide , '' meaning that deciding is the addressee s responsibility : this idiomatic sense of * * is closer to a semantic predicate than to a semantic role or figure - ground relation . in rare instances ,we are tempted to annotate a chain of extensions from a prototypical function of a preposition , which we term * multiple construal*. 
for instance : `` yelled * * '' in [ ex : yell_at ] is a communicative action whose addressee ( ) is also a target of the negative emotion ( : compare the use of * * in `` shoot * * the target '' ) .[ ex : angry_at ] is similar , except `` angry '' focuses on the emotion itself , which bob is understood to have evoked in his boss .with regard to [ ex : involved_in ] , the item `` involved * * '' has become fossilized , with * * marking an underspecified noncausal participant ( hence , as the scene role ) . at the same time, one can understand the * * here as motivated by the member - of - set sense ( cf .`` i am * * the group '' ) , which would be labeled because it conceptualizes membership in terms of containment .a similar logic would apply to `` people * * the company '' : .effectively , the multiple construal analysis claims that multiple steps of extending a preposition s prototypical meaning remain conceptually available when understanding an instance of its use .that said , we are not convinced that this logic could be applied reliably by annotators , and thus may simplify the usages in [ ex : dbl ] to just the first and second or the first and third labels .finally , metaphoric scenes raise a whole host of issues . in [ ex : put_metaphor ] , the locative - as - destination construal ( [ sec : motion ] ) is layered with the states - are - locations metaphor . in bipartite analysis , we annotate the scene in terms of the governing predicate s * target domain * , and the adposition function in terms of the * source domain * : a constructional analysis could capture both source domains and both target domains i.e ., , , , and perhaps by assigning source domain meanings to lexical constructions and target domain meanings to their mother phrases .we have considered the semantics of adpositions and case markers in english and a few other languages with the goal of revising a broad - coverage annotation scheme used in previous work .we pointed out situations where a single supersense did not fully characterize the interaction between the adposition and the scene elaborated by the pp . in an attempt to tease apart the semantics contributed specifically by the adposition from the semantics coming from elsewhere , we proposed a bipartite construal analysis .though many details remain to be worked out , we are optimistic that our bipartite analysis will ultimately improve broad - coverage annotations as well as constructional analyses of adposition behavior .we thank the rest of our carmls team martha palmer , ken litkowski , katie conger , and meredith green for participating in weekly discussions of adposition semantics ; michael ellsworth for an insightful perspective on construal , paul portner for a helpful clarification regarding approaches to conceptualization in the literature , and anonymous reviewers for their thoughtful comments .
we consider the semantics of prepositions , revisiting a broad - coverage annotation scheme used for annotating all 4,250 preposition tokens in a 55,000 word corpus of english . attempts to apply the scheme to adpositions and case markers in other languages , as well as some problematic cases in english , have led us to reconsider the assumption that a preposition's lexical contribution is equivalent to the role / relation that it mediates . our proposal is to embrace the potential for * construal * in adposition use , expressing such phenomena directly at the token level to manage complexity and avoid sense proliferation . we suggest a framework to represent both the scene role and the adposition's lexical function so they can be annotated at scale , supporting automatic , statistical processing of domain - general language , and we sketch how this representation would inform a constructional analysis .
there is considerable interest in developing phased arrays for radio astronomy . projects include the square kilometer array ( ska ) , the low frequency array ( lofar ) , the electronic multibeam radio astronomy concept ( embrace ) , and the karoo array telescope ( kat ) . all of these projects are aimed at constructing phased arrays for microwave astronomy , but as technological capability improves , phased arrays will eventually be constructed for submillimetre - wave and far - infrared astronomy . two types of phased array are of interest : ( i ) imaging phased arrays , where an array of coherent receivers is connected to a beam - forming network such that synthesised beams can be created and swept across the sky ; ( ii ) interferometric phased arrays , where the individual antennas of an aperture synthesis interferometer are equipped with phased arrays such that fringes are formed within the synthesised beams . in this way it is possible to extend the field of view , to observe completely different regions of the sky simultaneously , to steer the field of view electronically , and to observe spatial frequencies that are not available to an interferometer because the baselines can not be made smaller than the diameters of the individual antennas . it is important to recognise that the synthesised beams of a phased array need not be orthogonal , and may even be linearly dependent . non - orthogonality may be built into a system intentionally as a way of increasing the fidelity with which an image can be reconstructed , or it may arise inadvertently as a consequence of rf coupling and post - processing cross - talk . in some situations , say in the case of interacting planar antennas , it may not even be clear how to distinguish one basis antenna from another , even before the beam - forming network has been connected . in this paper , we show that the only information that is needed to determine the average powers , the correlations between the complex travelling wave amplitudes , the fluctuations in power , and the correlations between the fluctuations in power at the output ports of a phased array , or between the output ports of phased arrays on different antennas , is the set of synthesised beams . it is not necessary to know anything about the internal construction of the arrays or the beam - forming networks . beam patterns may be taken from electromagnetic simulations or experimental data . in the case of interferometric phased arrays , the arrays on the individual antennas do not have to be the same . the ability to assess the behaviour of a system simply from the synthesised beam patterns separates the process of choosing the best beams for a given application from the process of understanding how to realise the beams in practice . it also suggests important techniques for simulating phased arrays , and for analysing experimental data . in practice , an imaging phased array comprises a sequence of optical components , an array of single - mode antennas , and an electrical beam - forming network such that each output port corresponds to a synthesised reception pattern on the input reference surface , usually the sky . in some cases , the synthesised reception patterns may be static and designed to give optimum sampling on a given class of object , whereas in other cases , the beam - forming network may be controlled electrically to generate a set of synthesised beams that can be swept across the field of view .
in the case of microwave astronomy , the optical system would be a telescope , the single - mode antennas would be the horns or planar antennas of an array of hemt amplifiers or sis mixers , and the beam - forming network would be a system of microwave or digital electronics .horns and ports . ]our analysis is based on the generic system shown in fig . 1 . denotes the input reference surface , the output ports of the horns , and the output ports of the beam - forming network .we shall assume that an array of horns is connected to a beam - forming network having output ports .each of the ports is thus associated with a reception pattern on the input reference surface . for simplicity, we shall assume paraxial optics throughput .when a pseudomonochromatic field , , is incident on the system , a set of travelling waves will appear at : we shall denote their complex amplitudes by .we shall use the notion of complex analytic signals throughout , which for most practical purposes means that one can integrate the final result over some bandwidth to calculate general behaviour .likewise , a set of travelling waves will appear at : we shall denote their complex amplitudes by . when and are finite , the complex amplitudes can be assembled into column vectors and , respectively . in what follows, it will sometimes be beneficial to represent the primary variables as abstract vectors .because the incoming field , , is square integrable over the input reference surface , , it can be represented by a vector in hilbert space .the input surface may extend to infinity , or it may be bounded by an aperture , and therefore of finite extent .regions having different shapes and sizes correspond to different hilbert spaces . and can also be represented by abstract vectors , and respectively , where is the space of square - summable sequences .these definitions lead to two operators , one of which , , maps the incoming optical field onto the outputs of the horns , and the other maps the outputs of the horns onto the outputs of the beam - forming network .these individual operators can be combined into a single composite operator , which describes the system as a whole .it can be shown , appendix a , using only the concepts of _ inner product _ , _ operators _ , and _adjoints _ in hilbert space , that the complex travelling - wave amplitude appearing at port , when a field , , is incident on a system is given by where is the functional form of the synthesised reception pattern . corresponds to the surface and region over which the hilbert space is defined . in expressions such as ( [ 1_2 ] )we shall show the the complex conjugate explicitly , even though some notation includes it in the dot product , as an inner product , implicitly .the reason for the formality in stating , and indeed deriving ( [ 1_2 ] ) , is that ( [ 1_2 ] ) can be shown to be true even when the beam patterns are not orthogonal .the synthesised reception patterns are central to what follows because , according to ( [ 1_2 ] ) , the complex travelling wave appearing at port is given by calculating the inner product , over the input reference surface , between the synthesised reception pattern and the incoming field .it would be naive to assume , however , that when a system is illuminated by a field having the form , a travelling wave only appears at . 
in the case of phased arrays ,the synthesised reception patterns do not have to be orthogonal , and can even be linearly dependent .thus , although the output at a given port is given by the inner product between a field and a reception pattern , as for orinary antennas , one can not assume that there is a one - to - one mapping between the antenna patterns and the ports .for example , in the case of fig . 1 ,the beam patterns of the horns , , are orthogonal , and the outputs of the horns , , are given by but the beam - forming network is described by a linear operator , and therefore substituting ( [ 1_4 ] ) in ( [ 1_3 ] ) we find which can be cast into the form of ( [ 1_2 ] ) by defining as expected , the synthesised reception patterns are merely weighted linear combinations of the horn patterns .the orthogonality of the synthesised reception patterns can now be tested through where ( [ 1_6 ] ) has been used , together with the orthonormality of the horn patterns . in the case where the numbers of horns and ports are finite ,( [ 1_7 ] ) takes the form of a matrix equation : because is mapping between and , is under complete if , and is singular ; contrariwise , is over complete if , and is not singular . in both cases , except trivially when certain ports are not connected , the synthesised reception patterns are not orthogonal , because is not diagonal . in the case where is unitary , , where is the identity operator of dimension , the synthesised reception patterns are orthogonal .butler beam forming networks are used in practice to realise this situation . in summary ,the complex travelling - wave amplitudes appearing at the output ports of a phased array are found by calculating the inner products of the incoming field with respect to a set of synthesised reception patterns , but the synthesised reception patterns do not have to be orthogonal . even if a system is designed to have orthogonal beams , practical issues relating to coupling and cross talk will cause the beam patterns to be non - orthogonal at some level .one would , therefore , like to derive an analysis procedure based on the beam patterns alone , where it is not necessary to know anything about the internal construction of the array . for our purposes , we shall assume that the behaviour of all phased arrays is described by ( [ 1_2 ] ) regardless of whether it is known how the arrays are constructed or not .in many cases we are interested in using phased arrays to image incoherent or partially coherent fields in the context of astronomy , although the field on the sky is usually incoherent , the input reference plane , as far as the phased array is concerned , may be internal to the optics of the telescope . to this end , it is convenient to introduce correlation dyadics .we shall define the correlation dyadic of the incident field according to where denotes the ensemble average , and is interpreted as a complex analytic signal .the tensor contains complete information about the correlation between the fields at any two points and in any two polarisations .once the correlation dyadic is known , all classical measures of coherence follow . the correlation between the travelling wave amplitudes at any two ports can be written , or in matrix form where is a correlation matrix .the matrix elements of can be found by using ( [ 1_2 ] ) : now illuminate the system with an unpolarised , spatially fully incoherent source where is the dyadic identity operator . 
substituting ( [ 1_12 ] ) in ( [ 1_11 ] ) , we find which shows that , because the synthesised reception patterns are generally not orthogonal , the travelling waves at the output ports are correlated . ultimately , it is these correlations that prevent one from extracting more and more information from a source , using a finite number of horns , by synthesising more and more beams . in what follows , we shall need to make use of the mathematical theory of frames . suppose for the moment we have some general monochromatic field , and that we determine the inner products with respect to a set of basis vectors : . can extend to infinity , and we do not make any assumptions about the orthonormality or linear independence of . under what circumstances can the original vector , which represents a continuous function , be recovered unambiguously from a discrete set of complex coefficients , possibly countable , and how can this be achieved ? in the context of phased arrays , we are essentially asking under what circumstances can the form of an incident field be recovered from the outputs of an array , when the synthesised beams are possibly non - orthogonal and linearly dependent . evaluate the square moduli of the inner products between and any general vector , , and sum the results . if there are two constants and such that and , and which can also be written , then the basis set is called a frame with respect to . notice the use of strict inequalities in the allowable values of and . in the case where , the frame is called a ` tight frame ' because the inner products for all lie within some small range , and the dynamic range needed for inversion is small . when the original basis is orthonormal , the frame bounds , and , are equal , as can be appreciated by inserting in ( [ 2_2 ] ) . if the frame is over complete , but normalised , is a measure of the redundancy in the frame . if a basis set constitutes a frame , then it can be shown , through ( [ 2_1 ] ) alone , that is injective , one - to - one , but not surjective , onto : maps onto a subspace of , or when is finite , a subspace of . consequently , has a left inverse , , such that . maps the image of ; we shall not develop this idea here . the behaviour of the optical system can be described by where , and are the fields on the input and output sides , respectively , and again , paraxial optics is assumed .
now we can use the dual frame of , say , to generate a set of expansion coefficients on the input side : and then ( [ 3_5 ] ) becomes we can also express the output field in terms of a set of coefficients substituting ( [ 3_7 ] ) in ( [ 3_8 ] ) we find where the matrix elements are given by ( [ 3_10 ] ) is an operator , which is a matrix for finite dimensional spaces , that maps the field coefficients on the input side onto the field coefficients on the output side .the operator describes the process of reconstructing the field in the space domain , scattering in the space domain , and then projecting the scattered field onto the output basis set .if we assume finite dimensionality for all surfaces , and that the output frame of one optical component is used as the input frame of the next optical component , then we can cascade a number of components , , according to where the are the scattering matrices of the individual components .the last component , , could be the phased array itself , in ( [ 3_3 ] ) , giving a description of the system as a whole .earlier , we showed that it is possible to describe the behaviour of a phased array in terms of the duals of the reception patterns , rather than the reception patterns themselves .equally , we can use either frames or dual frames on the input and output reference surfaces of an optical component to generate a variety of scattering matrices , each of which describes the behaviour of the component equally well .moreover , we can choose whether to use frames or dual frames , or a mixture , in the definition of the correlation dyadics , thereby generating a variety of equally good ways of describing correlations .when representing the process of scattering a partially coherent field through an optical component , the correlation dyadics should be chosen to match the bases used for the scattering matrices themselves .it is not possible to construct a phased array that forms a frame with respect to any undefined complex function , even over a finite - sized region , because an infinite number of individual horns would be needed . in reality , however , optical fields have finite dimensionality , and frames become feasible . 
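because everything above depends only on the synthesised reception patterns , the central quantities are easy to explore numerically once the patterns are discretised on the input reference surface . the sketch below is purely illustrative : the gaussian ` beams ' , the one - dimensional grid and the spacings are invented for the example and are not taken from any real array . it evaluates the port amplitudes of ( [ 1_2 ] ) as discrete inner products , forms the overlap matrix of ( [ 1_7 ] ) / ( [ 1_8 ] ) to expose non - orthogonality , and reads off the frame bounds of ( [ 2_2 ] ) , with respect to the span of the beams , from the eigenvalues of that matrix .

```python
import numpy as np

# one-dimensional input reference surface, discretised so that inner products
# become weighted sums (grid and beam shapes are invented for illustration)
x = np.linspace(-5.0, 5.0, 2001)
dx = x[1] - x[0]

def reception_pattern(x0, w=1.0):
    """a displaced gaussian standing in for a synthesised reception pattern"""
    u = np.exp(-((x - x0) / w) ** 2)
    return u / np.sqrt(np.sum(np.abs(u) ** 2) * dx)   # unit norm

# deliberately overlapping, hence non-orthogonal, beams
beams = np.array([reception_pattern(x0) for x0 in (-1.5, 0.0, 1.5)])

def port_amplitudes(field):
    """discrete version of ( [ 1_2 ] ): output n is the inner product of
    reception pattern n with the incident field over the input surface"""
    return np.sum(np.conj(beams) * field, axis=1) * dx

# overlap matrix of ( [ 1_7 ] ) / ( [ 1_8 ] ): diagonal only for orthonormal beams
overlaps = np.conj(beams) @ beams.T * dx
print("overlap matrix:\n", np.round(overlaps.real, 3))

# frame bounds of ( [ 2_2 ] ) with respect to the span of the beams: the
# non-zero eigenvalues of the frame operator coincide with those of the
# overlap matrix, so the bounds are its extreme non-zero eigenvalues
ev = np.linalg.eigvalsh(overlaps)
ev = ev[ev > 1e-12 * ev.max()]
print(f"frame bounds: a = {ev.min():.3f}, b = {ev.max():.3f} (tight if a = b)")

# example: outputs produced by a plane-wave-like incident field
field = np.exp(1j * 0.8 * x)
print("port amplitudes:", np.round(port_amplitudes(field), 3))
```

the same recipe carries over directly to two - dimensional reference surfaces by flattening the grid ; only the discretised inner product changes .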
often , a phased array will be placed on the back of an optical system , and the role of the phased array is to collect as much of the information that appears at the output of the optical system as possible .we now consider whether the outputs of a given phased array form a frame with respect to any field that can pass through a preceding optical system .we have shown previously that the behaviour of paraxial optical systems is best described using the hilbert - schmidt decomposition of the operator that projects the field at the input reference surface onto the output reference surface : in ( [ 3_5 ] ) .a hilbert - schmidt decomposition is needed because optical systems generally map fields between different hilbert spaces , and therefore eigenfunctions are not suitable for describing behaviour .thus , the dyadic green s function in ( [ 3_5 ] ) becomes after substituting ( [ 4_1 ] ) into ( [ 3_5 ] ) it becomes clear that the process of scattering a field through an optical system consists of projecting the incoming field onto the input eigenfields , scaling by the singular values , and reconstructing the outgoing field through the outgoing eigenfields .it is also clear , and an intrinsic feature of the hilbert schmidt decomposition , that the field , possibly partially coherent , at the output reference surface has only a limited number of degrees of freedom . in the context of ( [ 4_1 ] ) , the hilbert schmidt decomposition has only a finite number of singular values that are significantly different from zero .what we require is for the synthesised reception patterns of our phased array to create a frame with respect to the vector space spanned by the having singular values significantly different from zero : say hilbert subspace . in this casethe frame is finite , and could , in principle at least , be realised by a finite number of horns .how do we determine whether the synthesised reception patterns constitute a frame with respect to ?suppose that is some general vector in the hilbert space at the output reference surface of an optical system . corresponds to that subspace of spanned by the output eigenfields having singular values greater than some threshold value , say . 
in other words , contains , for all practical purposes , any information that could have been transmitted through the optical system . the set of output eigenfields having singular values greater than is , where , because the throughput of the optical system is finite . now suppose that we have some other set of vectors , and wish to determine whether constitutes a frame with respect to . that is to say , if we determine the complex coupling coefficients between the and any vector in , can we recover the vector in without ambiguity ? if is some general vector in , then the frame condition reads or , assuming that has been normalised for a given set of vectors , the inner products can be written explicitly , such that ( [ 4_3 ] ) takes the form we can , however , describe completely in terms of the output eigenfields substituting ( [ 4_5 ] ) into ( [ 4_4 ] ) gives where expanding ( [ 4_6 ] ) , we get where or , because the number of basis functions is finite , can be written as , where the elements of correspond to the overlap integrals between the output eigenfields and the synthesised reception patterns : ( [ 4_7 ] ) . although the final relationship expresses a mapping of a finite dimensional space onto itself , the mapping passes through a space having infinite dimensions and therefore the integral in ( [ 4_7 ] ) should be evaluated analytically if at all possible . the frame condition ( [ 4_8 ] ) then becomes in order to establish whether constitutes a frame with respect to the output eigenfields having non - zero singular values , we need to determine the limits and by rotating throughout . another way of thinking about the same problem is that we have some general , and we wish to determine whether it can always be described in terms of the vector space spanned by the set of vectors , corresponding to the set of all possible measurements , given the mapping . the operator is hermitian , and can be diagonalised : the frame condition then becomes the middle term of ( [ 4_12 ] ) takes on its maximum value when the vector corresponds to the eigenvector of having the largest eigenvalue : remembering that must have unit length and therefore can only be rotated . if is degenerate in the largest eigenvalue , there is a range of vectors that lead to a maximum , but the outcome is still that . likewise , ( [ 4_12 ] ) takes on its minimum value when the vector corresponds to the eigenvector having the smallest singular value , . if the smallest singular value is zero , is singular , and does not constitute a frame with respect to . the operator simply maps the eigenfield coefficients of the optical system onto the output ports of the array and then back again onto the eigenfield coefficients . if the set of basis vectors does not span all possible vectors in , either because there are too few of them , or because they do not span the same space , information is lost when the frame coefficients are calculated , and the frame is incomplete . it is not possible , therefore , to recover complete information about the output field of the optical system from the outputs of the phased array . in this case , recovering with the dual vectors , and then reconstructing the field using the eigenfields , will give the best least - squares approximation to the field . in reality , because of the presence of noise , a bayesian method would probably be used to reconstruct the field . a numerical sketch of this eigenvalue test is given below .
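the sketch is hedged and schematic : the ` output eigenfields ' are played by a few hermite - gauss modes and the reception patterns by displaced gaussians , both invented for illustration and carrying no claim about any particular optical system . the overlap matrix plays the role of the quantity in ( [ 4_7 ] ) , the extreme eigenvalues of its hermitian square give the bounds discussed above , and the moore - penrose pseudo - inverse stands in for the dual vectors when the eigenfield coefficients are recovered and the field is rebuilt from the eigenfields .

```python
import numpy as np
from numpy.polynomial.hermite import hermval

x = np.linspace(-6.0, 6.0, 3001)
dx = x[1] - x[0]

def eigenfield(k):
    """hermite-gauss mode standing in for an output eigenfield of the optics"""
    c = np.zeros(k + 1)
    c[k] = 1.0
    v = hermval(x, c) * np.exp(-x ** 2 / 2.0)
    return v / np.sqrt(np.sum(np.abs(v) ** 2) * dx)

def reception_pattern(x0, w=1.2):
    """illustrative synthesised reception pattern of the phased array"""
    u = np.exp(-((x - x0) / w) ** 2)
    return u / np.sqrt(np.sum(np.abs(u) ** 2) * dx)

# eigenfields with 'significant' singular values, and the array's beams
eigenfields = np.array([eigenfield(k) for k in range(4)])
beams = np.array([reception_pattern(x0) for x0 in (-2.0, -0.7, 0.0, 0.7, 2.0)])

# overlap matrix between beams and eigenfields, cf. ( [ 4_7 ] )
w_nk = np.conj(beams) @ eigenfields.T * dx

# frame bounds with respect to the eigenfield subspace: extreme eigenvalues of
# the hermitian square of the overlap matrix; a (numerically) zero eigenvalue
# means the beams do not form a frame and information is lost
ev = np.linalg.eigvalsh(np.conj(w_nk).T @ w_nk)
print("eigenvalues:", np.round(ev, 4))
print("frame with respect to the subspace:", bool(ev.min() > 1e-10))

# dual-vector recovery: estimate the eigenfield coefficients of a field lying
# in the subspace from the measured port amplitudes, then rebuild the field
# from the eigenfields themselves (exact when the frame condition holds)
coeffs_true = np.array([0.8, 0.0, 0.3 + 0.2j, 0.0])
field = coeffs_true @ eigenfields
amplitudes = np.conj(beams) @ field * dx          # discrete form of ( [ 1_2 ] )
coeffs_hat = np.linalg.pinv(w_nk) @ amplitudes
print("coefficient recovery error:", float(np.max(np.abs(coeffs_hat - coeffs_true))))
```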
for infinite - dimensional frames , , we can use the same procedure , but now we must calculate the eigenvalues of the matrix , where where and the sum over extends to infinity . again , these integrals should be evaluated analytically .clearly , in the case where the frame is complete and orthonormal , giving , supporting the validity of the result .we now have a measure of how effectively a phased array can image a complex field ; it is easy to show that when a phased array forms a frame with respect to a fully coherent field at a surface , then it is also possible to recover completely the spatial correlations of a partially coherent field at the same surface : essentially because the natural modes of the partially coherent field lie within the same hilbert subspace . according to ( [ 2_5 ] ) , in the infinite - dimensional case , which describes the recovery of a coherent field .incidently , ( [ 4_16 ] ) also shows that for an over complete or perfectly complete frame . forming the correlation matrix and the correlation dyadic , using ( [ 4_16 ] ) , we get which describes the recovery of the spatial correlations of a field from measurements of the cross correlations between the outputs of a phased array , using the dual beams .( [ 4_17 ] ) confirms that the correlations of a field can also be recovered , if the reception patterns constitute a frame .the previous section describes a calculation that can be performed to find out whether the synthesised reception patterns of a phased array form a frame with respect to the output eigenfields of an optical system .this procedure must be used when one is interested in recovering phase information from the field .often , however , in the case of simple imaging , one is only interested in being able to recover the intensity distribution of a fully incoherent source . in this case , certain of the beam patterns needed to form a frame may be created by scanning the array physically across the source .it seems , however , that different frames are needed depending on whether one is trying to preserve phase or whether one is just interested in measuring intensity : we should distinguish between ` field frames ' and ` intensity frames ' . to this end , assume that the source is fully incoherent and unpolarised , but that the intensity varies from position to position .the correlation dyadic of the source then becomes where is the intensity as a function of position . substituting ( [ 5_1 ] ) into ( [ 1_11 ] )gives but say that we only measure the diagonal elements of through the use of total power detectors , then where .thus , for an incoherent source , the output powers of the individual ports of a phased array are related to the intensity distribution of the source through a set of inner products with the functions .if the goal is to reconstruct the intensity distribution of a source , one could ask whether the basis forms a frame with respect to the hilbert space defining the range of possible intensity distributions . there is a problem , however ,because in assuming that the source field is spatially incoherent , we assumed that the intensity is a member of an infinite dimensional space . 
to answer the question as to whether the phased array is suitable for recovering intensity , we must define more clearly the vector space of intensity distributions that is of interest .one possible approach is to describe the intensity distribution as a weighted linear combination of basis functions , .these functions could , for example , be radial basis functions , wavelets , or delta functions at sample points .if chosen carefully , these functions need not correspond to a single region , but could represent a number of spatially separated regions that one wishes to image simultaneously .if we characterise the space of intensity distributions according to then the powers recorded at the output of the phased array become where frame condition then reads where and , and hence the tightness of the frame , can be determined by finding the eigenvalues of , or the svd of . in the case where the basis functions correspond to sample points , we have and .clearly , the original intensity distribution can be found , to within the degrees of freedom , by using the dual vectors of , namely , defined in the space of .usually , however , for stochastic sources , and when noise is included , a bayesian method would be used to recover images .it is instructive to see how this form of analysis compares with , and is applicable to , multimode bolometric imaging arrays .it has been shown that the expectation value , ] , recorded by a detector combination is given by = \int \mbox{tr } { \bf z}{\bf w } \ , d \omega \mbox{.}\ ] ] likewise , assuming gaussian statistics , the fluctuations in the output and the correlations between the fluctuations of two outputs are given by where all quantities are allowed to be a function of frequency .the only restriction on ( [ 8_2 ] ) is that the post - detection integration time where for frequencies outside of , which is necessary for all astronomical instruments .these expressions can be extended to describe the quantum mechanical behaviour of phased arrays , as has been done for bolometric interferometers .the bolometric interferometer model did not , however , include the poisson limit for low photon occupancies . in this volume , we describe how one can add a single term to ( [ 8_2 ] ) to create a statistical mixture that includes the poisson noise of photon counting . 
once the additional term is included , it is possible to take into account the transition from fully poisson to fully bunched behaviour , as the photon occupancies of the incoming modes increase , as one moves from infrared to submillimetre wavelengths . it is entirely possible , therefore , to model the quantum - statistical behaviour of phased arrays at all wavelengths . expressions ( [ 8_1 ] ) and ( [ 8_2 ] ) offer a further possibility of considerable importance . clearly , we have a numerical procedure for determining the expectation values and the covariances of the powers that arrive at the output ports of an imaging , or interferometric , phased array when a source is present . a discretised version of the model has already been published . the model takes into account noise , and can be extended easily to include quantum effects . also , the beam patterns do not have to be orthogonal . ( [ 8_1 ] ) and ( [ 8_2 ] ) therefore make it straightforward to set up a likelihood function for the outputs that would be recorded when some class of source is observed . obviously the likelihood function would contain the signal , its fluctuations , and any instrumental noise , including quantum effects , as well as the hanbury brown - twiss correlations between pixels . the source may be as simple as a single incoherent gaussian on the sky , or if one is trying to design a phased array that can observe two different regions of the sky simultaneously , it could be two highly separated gaussians . any other parameterised source distribution could be used ; for example , cauchy functions are often used in astronomy to parameterise sunyaev - zeldovich emission from clusters of galaxies , and in ( [ 8_2 ] ) , we used a gauss - schell source as a convenient way of parameterising general partially coherent fields . on the basis of the likelihood functions , one could then derive numerically the fisher information matrix , from which the covariance matrix of the source parameter estimators could be found . we have already started to apply this technique , in a completely different context , for understanding the design of bolometric imaging arrays : see saklatvala , in this volume . in other words , one could determine the minimum errors , and the confidence contours , that could be achieved when determining the parameters of sources . exploring how these errors change as the design of a phased array changes , say by packing more and more overlapping beams into a finite region , would be of considerable interest , and the result should be related , in some way , to the effectiveness with which the array forms a frame with respect to the incoming field distributions , or intensity distributions , of interest . we have studied the functional behaviour of imaging phased arrays and interferometric phased arrays , and shown that their operation is closely related to the mathematical theory of frames . in order to calculate the behaviour of an imaging phased array , or an interferometric phased array , it is only necessary to know the synthesised reception patterns , which may be non - orthogonal and linearly dependent . it is not necessary to know anything about the internal construction of the array itself .
as a consequence, data can be taken from experimental measurements or from electromagnetic simulations .the theory of frames allows one to assess , in a straightforward manner , whether the outputs of a phased array contain sufficient information to allow a field or intensity distribution to be reconstructed in an unambiguous way .our model also allows straightforward calculation of quantities such as the correlations in the fluctuations at the output ports of phased arrays .the theory of interferometric phased arrays is almost identical to the theory of multimode bolometric interferometers , and therefore , recently developed techniques for modelling bolometric interferometers can be applied to phased arrays also : including quantum statistics .the work opens up the important possibility of constructing likelihood functions that enable the covariance matrices of source - parameter estimators to be determined .thus , for example , one could explore the possibility of enhancing source reconstruction by packing in more and more overlapping synthesised beams into a region , or widely separated regions , of finite size .any enhancement of the accuracy with which source parameters can be recovered , will be related , and to some extent determined , by the degree to which the beam patterns of the array form a frame with respect to all possible incoming field distributions . in a later paper, we shall use the concepts described here , and the numerical techniques reported previously , to simulate and assess the behaviour of interferometric phased arrays when different optical systems and beam - forming networks are used .suppose that some field , , is incident on a phased array , we can represent a measurement of the amplitude and phase of the travelling wave at , relative to a normalised reference signal , by the inner product , where is a vector corresponding to a measurement at port alone .for example , the measurement could be carried out by homodyne mixing the travelling wave with a reference oscillator at , and then low - frequency filtering the result .introducing the linear operator , leads to a measurement of , which by definition of the adjoint , can be written .in other words the inner product between and the field distribution represented by gives the same result as the measurement , but now the inner product is evaluated at the input reference surface .we shall call the synthesised reception pattern of port .the canonical inner product in takes the form where is the functional form of the synthesised reception pattern , because the result must be equal to , and therefore conjugate linear in . in ( [ a1_1 ] ), the integral over corresponds to the input reference surface , and extends over the region associated with hilbert space . finally ,because ( [ a1_1 ] ) must be equal to , we have an expression that relates the complex amplitude of the travelling wave at to the incident field : it is clear from ( [ a1_2 ] ) that the synthesised reception pattern is the complex conjugate of what would be measured in an experiment where a point source is swept over the input surface .the key point is that ( [ a1_2 ] ) is valid even when the beams are not orthogonal .a. van ardenne , a. smolders , and g. hampson,``active adaptive antennas for radio astronomy : results of the r & d program towards the square kilometer array '' in _ astronomical telescopes and instrumentation 2000 - radio telescopes _ ,spie * 4015 * , ( 2000 ) .r. 
braun , `` the concept of the square kilometer array interferometer , '' in _ proc .high sensitivity radio astronomy _ ,n. jackson and r.j .davies , eds ., ( cambridge univ . press , cambridge , uk , 1997 ) , 260 - 268 .s. withington , g. saklatvala , and m.p .hobson , `` partially coherent analysis of imaging and interferometric phased arrays : noise , correlations , and fluctuations , '' j. opt .a , accepted dec . 2005 .s. withington , m.p .hobson , and e.s .campbell , `` modal foundations of close - packed optical arrays with particular application to infrared and millimeter - wave astronomical interferometry , '' j. appl. phys . * 96 * , 1794 - 1802 ( 2004 ) .s. withington , m.p .hobson , and g. saklatvala , `` quantum statistical analysis of multimode far - infrared and submillimetre - wave astronomical interferometers , '' j. opt .a * 22 * , 1937 - 1946 ( 2005 ) .
microwave , submillimetre - wave , and far - infrared phased arrays are of considerable importance for astronomy . we consider the behaviour of imaging phased arrays and interferometric phased arrays from a functional perspective . it is shown that the average powers , field correlations , power fluctuations , and correlations between power fluctuations at the output ports of an imaging or interferometric phased array can be found once the synthesised reception patterns are known . the reception patterns do not have to be orthogonal or even linearly independent . it is shown that the operation of phased arrays is intimately related to the mathematical theory of frames , and that the theory of frames can be used to determine the degree to which any class of intensity or field distribution can be reconstructed unambiguously from the complex amplitudes of the travelling waves at the output ports . the theory can be used to set up a likelihood function that can , through fisher information , be used to determine the degree to which a phased array can recover the parameters of a parameterised source . for example , it would be possible to explore the way in which a system , perhaps interferometric , might observe two widely separated regions of the sky simultaneously .
in cluster mode , syclist has ( among others ) the following capabilities : it accounts for a salpeter imf , includes several initial distributions for the angular velocity of the stars , and accounts for the effect of a random distribution of the angle of view on the gravity darkening . it can mimic in a simplified way the presence of a fraction of binary stars in the population . the effect of fast rotation on the shape of the star is included in the roche model approximation . the variation of the effective temperature and of the luminous flux over the surface ( the so - called gravity darkening ) is implemented using either the or relations . in this framework , the equator of a fast rotating star is cooler and dimmer than the average temperature and luminosity , and the poles are hotter and brighter . as a consequence , an observer who does not know the orientation of the rotation axis of a star will deduce an erroneous temperature and luminosity depending on the angle of view . for example , an observer looking at a fast rotating star pole - on will see principally the hot and bright polar regions and thus deduce that the star is bluer and more luminous than the average surface values . the latitudinal variation of the gravity also has observational consequences . assuming that the gravity deduced from a spectral fitting corresponds to the flux - averaged gravity of the visible hemisphere , a random distribution of the angle of view will produce a scatter in the plane ( fig . [ fig1 ] ) . [ fig . 1 caption : as a function of ` observed ' for a cluster at 25 myr at . the colour code shows the value . the black line is the corresponding isochrone for a non - rotating population . no instrumental noise has been considered here . ] an extended main - sequence turn - off in the colour - magnitude diagram is a common feature of star clusters . one explanation is that these clusters , instead of having an instantaneous burst of star formation , have a spread . recently , we have shown that rotation could be a plausible alternative explanation , providing a natural framework for the observed relation between the duration of the spread in the star formation rate and the age of the cluster . in this framework , the extension of the turn - off is no longer produced by a spread , but by the distribution of initial rotation rates of the stars .
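as an aside on the viewing - angle effect described above : the scatter arises simply because , for randomly oriented rotation axes , the inclination is distributed uniformly in cos i , so equator - on views dominate and pole - on views are rare . the sketch below only illustrates that sampling ; the ` apparent brightening ' factor is a crude , invented stand - in and is not the gravity - darkening relation implemented in the real tool .

```python
import numpy as np

rng = np.random.default_rng(42)

# random orientation of the rotation axis: cos(i) uniform on [0, 1]
cos_i = rng.uniform(0.0, 1.0, size=10_000)
inclination_deg = np.degrees(np.arccos(cos_i))

# purely illustrative stand-in for the pole-on brightening: NOT the
# gravity-darkening relations used in the real code
apparent_boost = 1.0 + 0.2 * cos_i ** 2

print("median inclination [deg]:", round(float(np.median(inclination_deg)), 1))
print("fraction seen within 30 deg of pole-on:", float(np.mean(inclination_deg < 30.0)))
print("spread of the illustrative brightening factor:",
      round(float(apparent_boost.min()), 3), "-", round(float(apparent_boost.max()), 3))
```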
during the last few years , the geneva stellar evolution group has released new grids of stellar models , including the effect of rotation and with updated physical inputs . to ease the comparison between the outputs of the stellar evolution computations and the observations , a dedicated tool was developed : the syclist toolbox . it allows to compute interpolated stellar models , isochrones , synthetic clusters , and to simulate the time - evolution of stellar populations .
by the present time a large number of close binary systems containing a component with an accretion disk have been detected . in such systemsa secondary nondegenerate star fills its critical roche lobe and transfers matter to the primary star through the inner lagrangian point mostly as a gas stream . due to the high angular momentum the outflowing gas forms the accretion disk around the primary `` peculiar '' component . at the place the gas stream strikes the outer rim of the disk an area of enhanced temperature and luminosity , named bright spot , is formed . in algol type systems the disk accretion occurs on a normal b a main sequence star ( plavec , 1980 ) , and in cataclysmic variables and x ray binaries it occurs on a degenerate star ( kraft , 1965 ) . in the last case the accretion disk may give an appreciable contribution to the optical continuum ( pringle and rees , 1972 ; shakura and sunyaev , 1973 ) .cataclysmic variables are the most suitable objects for the study of accretion disks .these close binary systems consist of a white dwarf ( primary ) and a main sequence star of a late spectral class ( g m ) .the choice of cataclysmic variables for research into accretion disks is defined by the following factors : 1 .the main part of energy radiation from the disk is emitted at optical and ultraviolet wavelengths ; 2 . the contribution of the secondary component ( a star of the late spectral type g k ) to the system total luminosity is relatively small in comparison with that of the accretion disk ; 3 .cataclysmic variables are usually more amenable to observations than low mass x ray binaries .this makes it definitely easier to get observational data of high quality ; 4 .the number of cataclysmic variables is much greater than the number of representatives of other types of binaries with accretion disks .typical optical spectra of cataclysmic variables contain emission lines of hydrogen , neutral helium and singly ionized calcium all superposed onto a blue continuum .heii 4686 may also be present . in the spectra of systems with high inclinations the emission lines of h and hei are usually double peaked and have profiles with base widths over 20003000 km / s ( williams , 1983 ; honeycutt et al . , 1987 ) .the double peaked profile is a result of doppler shift of emission from the accretion disk ( smak , 1969 ; horne and marsh , 1986 ) .the profiles are often observed to be asymmetric and the intensities of the red and blue peaks are variable with the orbital period phase ( greenstein and kraft , 1959 ) .the trailed spectra show strong double peaked symmetrical line profiles and a weak narrow component which forms a s wave due to sinusoidal variations of its radial velocity .the `` s - wave '' component is usually attributed to the bright spot the point of interaction of the gas stream and the accretion disk ( kraft , 1961 ; smak , 1976 ) , moreover their physical parameters define the nature of the processes involved ( livio , 1992 ) .this makes it important to study the observational properties in the investigation of accretion in close binary systems . in this paperwe consider a method of modelling emission line profiles which are formed in non - uniform accretion disks .we study the line profile variation depending on the phase turn of the disk with the bright spot on the surface . in section 2 the model and the technique of calculationsare described . 
in section 3we test this method for the determination of the line profile from the model parameters .evaluation of the accuracy of determination of parameters is described in section 4 . finally , in section 5 the results of calculations are given .a powerful tool in the investigation of the orbital variations of emission lines in spectra of close binaries , which is widely used now , is doppler tomography .this is an indirect imaging technique which can be used to determine the two dimensional velocity - field distribution of line emission in binary systems ( marsh and horne , 1988 ) .this method provides very accurate reconstructions even when analyzing low s / n ratio spectra .however , such a powerful tool in studying the structure of accretion disks is unfortunately not free from demerits .we point out only the basic of them . *the computation of a doppler map requires a large number of high resolution spectra covering an orbital period .this is a problem when studying weak and short period close binary systems .* the variation in flux of the disk details during observations may distort the map ( marsh and horne , 1988 ) .this occured , for example , in the study of the cataclysmic variable u gem ( marsh et al . , 1990 ) . in the doppler map the ring shaped component from the disk is weakened near the bright spot .this weakening is actually non existent . *since all the observational data are used for computation of the map , we lose the possibility of studying variations of fluxes from the bright spot and other emission regions over the orbital period .the enumerated shortcomings of the method of doppler tomography restrict possibilities of its use .the modelling of the line profiles obtained at different phases of the orbital period is another possible method of analysis of variations of the accretion disk and spot parameters with time , without mapping the disk in the velocity field .thus , at a sacrifice the high spatial resolution we obtain a possibility of studying the temporal variability . for accurate calculation of line profiles formed in the accretion disk ,it is necessary to know the velocity field of radiating gas , its temperature and density , and , first of all , to calculate the radiative transfer equations in lines and the balance equations .unfortunately , this complicated problem has not been solved until now and it is still not possible to reach an acceptable consistency between calculations and observations .nevertheless , even the simplified models allow one to define some important parameters of the accretion disk . in close binary systems it is possible to note five basic emission regions : an accretion disk , a gas stream , a bright spot , a primary and a secondary components .however , in low mass systems the accretion disk and the bright spot only make the main contribution to the radiation of emission lines ( see , for example , marsh et al ., 1990 ; marsh and horne , 1990 ) .therefore in our calculations we applied a two component model which included a flat keplerian geometrically thin accretion disk and a bright spot whose position is constant with respect to the components of a binary system ( fig .[ fig1 ] ) .we began the modelling of line profiles with calculation of a symmetrical double peaked profile formed in a uniform axisymmetrical disk , then we added a distorting component formed in the bright spot .the flat balmer line decrement usually observed in spectra of cataclysmic variables shows that the hydrogen emission lines are optically thick . 
in this case the local emissivity of the lines becomes strongly anisotropic , because the photons tend to emerge easily in the directions of high velocity gradients . therefore for the calculation of the line profiles we have used the method of horne and marsh ( 1986 ) , taking into account the keplerian velocity gradient across the finite thickness of the disk . to calculate the emission line profile we divide the disk surface into a grid of elements , and assign the velocity vector , line strength and other parameters for each element . the computation of the profiles proceeds by summing the local line profiles weighted by the areas of the surface elements . for details see horne and marsh ( 1986 ) and horne ( 1995 ) . we have assumed a power law function for the distribution of the local line emissivity over the disk's surface , where r is the radial distance from the disk's centre and ( smak , 1981 ; horne and saar , 1991 ) . the free parameters of our model are : 1 . the parameter ; 2 . /r the ratio of the inner and the outer radii of the disk ; 3 . the radial velocity of the outer rim of the accretion disk . unfortunately , the theoretical modelling of the stream disk interaction is in its infancy . it is known from photometric and spectral studies that over the orbital period the bright spot is eclipsed by the outer edge of the accretion disk ( see , for example , livio et al . , 1986 ) . however , it is still unclear if the spot is optically thick . by an optically thick spot we mean one for which the anisotropy of the local line emissivity should be taken into account . we consider this problem in detail in the next section . in our model we consider the spot on the accretion disk to have a keplerian velocity and to be described by four geometric parameters ( fig . 1 ) : 4 . the azimuthal angle of the spot centre relative to the line of sight , ( fig . 1 ) ; 5 . the spot azimuthal extent ; 6 . the radial position of the spot centre in fractions of the outer radius ( =1 ) ; 7 . the radial extent . for simplicity we assume that the brightness ratio of the spot and disk is constant and the spot brightness does not depend on azimuth , and its dependence on radius is described by the function 8 . , where the free parameter is the spot brightness . for the further analysis , instead of it is preferable to use the relative dimensionless luminosity , which is determined as

l = \int\limits_{r_{s}-\delta r_{s}/2}^{r_{s}+\delta r_{s}/2} s \cdot b \cdot f(r) \cdot dr = \frac{\pi}{180}\,\frac{\psi\,\delta r_{s}\,b}{2-\alpha}\left[\left(r_{s}+\frac{\delta r_{s}}{2}\right)^{2-\alpha}-\left(r_{s}-\frac{\delta r_{s}}{2}\right)^{2-\alpha}\right]

where is the spot area and the spot brightness . so , our accretion disk model has 8 parameters . such multiparametric reverse problems raise a question on the uniqueness and stability of the solution ; a simplified sketch of the forward computation is given below .
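the sketch below is a simplified , illustrative re - implementation of the recipe just described ( flat keplerian disk on a polar grid , power - law emissivity , gaussian local profiles , and a bright spot confined in radius and azimuth ) ; it is not the authors' code , all numerical values are arbitrary , and it ignores the anisotropy and finite - thickness velocity - gradient effects of the full horne and marsh ( 1986 ) treatment .

```python
import numpy as np

def disk_line_profile(v_grid, v_out=700.0, r_in=0.05, alpha=1.5,
                      sin_i=0.7, v_local=50.0, spot=None,
                      n_r=100, n_phi=180):
    """emission-line profile of a flat keplerian disk with an optional bright spot

    v_out : keplerian velocity of the outer rim (km/s); radii are in units of
            the outer radius, so r runs from r_in to 1
    spot  : (azimuth_deg, azim_extent_deg, r_centre, r_extent, brightness)
            or None for a uniform, axisymmetric disk
    """
    r = np.linspace(r_in, 1.0, n_r)
    phi = np.radians(np.linspace(0.0, 360.0, n_phi, endpoint=False))
    rr, pp = np.meshgrid(r, phi, indexing="ij")

    v_kep = v_out / np.sqrt(rr)                 # keplerian rotation law
    v_los = v_kep * np.sin(pp) * sin_i          # line-of-sight projection

    weight = rr ** (-alpha) * rr                # f(r) ~ r^-alpha times area ~ r dr dphi
    if spot is not None:
        az0, daz, r_s, dr_s, b = spot
        dphi_deg = (np.degrees(pp) - az0 + 180.0) % 360.0 - 180.0
        in_spot = (np.abs(dphi_deg) < daz / 2.0) & (np.abs(rr - r_s) < dr_s / 2.0)
        weight = weight * np.where(in_spot, 1.0 + b, 1.0)

    # sum gaussian local profiles from every surface element
    local = np.exp(-0.5 * ((v_grid - v_los[..., None]) / v_local) ** 2)
    profile = np.sum(weight[..., None] * local, axis=(0, 1))
    return profile / profile.max()

v = np.linspace(-1200.0, 1200.0, 241)
symmetric = disk_line_profile(v)                               # double-peaked profile
spotted = disk_line_profile(v, spot=(60.0, 30.0, 0.9, 0.10, 5.0))
```

sweeping the spot azimuth through a full turn in the call above reproduces , qualitatively , the phase - dependent asymmetry of the two peaks that is analysed in the next section .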
in order to answer it ,we analyze how the various parameters of the spot and disk affect a line profile .the dependence of the line profile on the parameters of the uniform accretion disk is considered in details by smak ( 1981 ) and horne and marsh ( 1986 ) .they have shown that the accretion disk parameters basically affect different parts of line profiles on the whole , and therefore they can be determined unambiguously .really , the velocity of the outer rim of the accretion disk defines the distance between the peaks in the lines ( fig .2a ) , the shape of the line profile depends on the parameter ( fig .2b ) , and the extent of the wings is determined by ( fig .2c ) . when we studied the dependence of the emission line profiles on the parameters of the bright spot it was important to find out whether it was possible to make their unambiguous estimates .the results of testing presented below are based on the modelling of a series of line profiles of the accretion disk with the bright spots on different azimuths and having different parameters .some of them were fixed here , but others were changed so that the relative spot luminosity was constant .the calculations have shown that correct determination of the spot parameters depends on its azimuth , which is necessary to find in an independent way .this is possible from the analysis of the phase variations of asymmetry degree of emission lines ( for example , v / r ratio ) .the azimuth of the spot is , where the phase angle of the spot , the orbital phase , and the phase of the moment when the v / r - ratio is equal to 1 .it corresponds to the moment when the radial velocity of the s wave components is zero .note that in practice to improve the accuracy of measuring the asymmetry degree , instead of the ratio of intensities of the line peaks it is preferable to use the quantity : where s the degree of asymmetry , and the integrals of the line intensity of the violet and the red `` humps '' ( or their parts in equal ranges of wavelengths ) , respectively ( fig .[ fig3 ] ) .moreover , such a quantity is more sensitive to changing the spot parameters . in an optically thick diskthe local line emissivity is strongly anisotropic .thus the line surface brightness of the accretion disk must be enhanced with non axisymmetrical pattern and proportional to ( horne and march , 1986 ; horne , 1995 ) .this happens because the velocity gradient on spot azimuths and is the greatest and the probability of the line photon tending to emerge is also the highest .so , the observed brightness of an optically thick spot will vary with its azimuth ( due to its limited size ). it will be maximum on azimuths and , fall on the azimuth and minimum on azimuths and .we have calculated a set of models with different parameters of the spot at phases covering the full orbital period , and based on the obtained profiles we have plotted a grid of s waves ( fig .[ fig4 ] ) . for the optically thick spot the s wave curves are seen to have a depression at spot phase .the depth of the depression increases with decreasing azimuth extent of the spot .such a depression on the s wave curve may suggest that the spot is optically thick , since in the case of the optically thin spot there is no depression . to find out if the shape of the spot affects the shapes of emission lines , we used the models shown in fig.[fig5 ] . 
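The spot-shape test based on the models of Fig. 5 is described next; first, a small sketch of the asymmetry measure introduced above. The exact expression is not legible in the source, so the normalised difference of the violet and red hump integrals used here, the integration windows, and all numerical values are our assumptions.

```python
import numpy as np

# one possible reading of the asymmetry degree described above: integrate the line
# intensity over equal velocity windows centred on the violet and red humps and form
# a normalised difference. the window, hump positions and toy profile are illustrative.
def asymmetry_degree(v, profile, hump=900.0, window=400.0):
    blue = (v > -hump - window / 2) & (v < -hump + window / 2)
    red = (v > hump - window / 2) & (v < hump + window / 2)
    w_v = np.trapz(profile[blue], v[blue])
    w_r = np.trapz(profile[red], v[red])
    return (w_v - w_r) / (w_v + w_r)

# toy profile: symmetric double peak plus an excess on the violet hump
v = np.linspace(-2000.0, 2000.0, 801)
base = np.exp(-0.5 * ((np.abs(v) - 900.0) / 250.0) ** 2)
spot = 0.4 * np.exp(-0.5 * ((v + 900.0) / 150.0) ** 2)
print("s =", round(asymmetry_degree(v, base + spot), 3))   # > 0: violet hump brighter

# spot phase angle from the orbital phase at which v/r = 1 (s-wave crossing);
# the 360*(phi - phi_0) form and its sign convention are assumptions of ours.
phi, phi0 = 0.30, 0.05
print("spot phase angle ~ %.0f deg (sign convention assumed)" % (360.0 * (phi - phi0)))
```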
in spots of equal relative luminosity and equal area the azimuth extent varied from 10 to 70 degrees .it is seen that the line profile does not depend on the spot shape for azimuth and strongly depends on it at phase ( fig .[ fig6 ] ) .this is true both for optically thick and optically thin spots. actually the radial extent of the spot does not affect the shape of the line profile .variation of this parameter over a wide range , from 0.02 to 0.30 , affects slightly the profile at all phases and any azimuth extent of the spot .this is explained by the fact that the interval of the radial velocities inside the spot only slightly depends on .therefore it is possible to compensate for the change in this parameter by changing the spot contrast , i.e. the invariant is the product .so , to calculate the line profile , it is necessary to specify the value of in some way ( for example , the typical radial extent of the bright spot ) .it is known from photometric observations of cataclysmic variables that for different systems it lies in the range ( rozyczka , 1988 ) .this parameter is determined with high confidence for any optically thick and optically thin spot , which have an azimuth over ( fig .[ fig7 ] , [ fig8 ] ) .calculations have shown that the type of the spot brightness distribution does not practically influence the line profile .as an example we adduce the response of the line profile to various types of the azimuth dependence .we have calculated the models with asymmetric distribution of the spot brightness ( it is decreasing linearly from to zero ) and with the uniform distribution for comparison .the relative luminosity of the spot was constant .it is seen that the minor modifications of the profile appeared for the spots extended very much in azimuth ( ) , at phases close to zero ( fig .[ fig9],[fig10 ] ) .the calculation of the line component which is formed in the gas stream has no principal differences from the modelling of the component formed in the bright spot .because the translational velocity of the stream much exceeds the velocity of its expansion and is highly supersonic , the width of the local line profile formed in the stream must be much smaller than the full width of the line formed in the accretion disk ( lubow and shu , 1975 ) .much of the kinetic energy of the stream is released by radiation probably at the moment of its collision with the disk .it is known from observations that usual radii of accretion disks in cataclysmic variables are , where is the roche lobe dimension .the velocity of the stream at such distances from the accreting star approximately corresponds to keplerian velocities of the accretion disk on these radii .for this reason it is complicated to determine the area of origin of the s wave component of the observed emission line profile .however this is still possible to do from analysis of the s wave , for example .research into some dwarf nova ( ip peg , u gem ) using the doppler tomography technique has shown that secondary components in such systems also contribute to emission in lines ( march and horne , 1990 ; march et al . 
,however the contribution of their emission to the total flux is very small and may be ignored in calculations .for the further analysis of the obtained results it is important to know the accuracy of determination of the model parameters .it depends on many factors , for example , on the method of determination , on the spectral resolution , and also on the values of the parameters .because it is nearly impossible to take into account all these factors analytically , we have decided to use the following statistical method .a line profile calculated with the known values of the model parameters was normalized so that the relative intensity of the line was equal to 2 ( a value typical of cataclysmic variables ) .then the profile was distorted by the poisson noise ( the level of the continuum was varied to change the s / n ratio ) .after this parameters were fitted to the minimum of residual deviation of a `` new '' model profile from the `` old '' noisy one .this procedure was repeated several hundred times , then the average values and the errors of determination of the parameters were estimated .the results of these calculations are shown in fig .[ fig11 ] . as it can be seen from the plots presented , and the relative luminosity of the spotare estimated with the highest accuracy .for instance , the accuracy of determination of under is about 20 km / s .it is better than 5 percent for the typical value of of about 700 km / s .the parameter is determined quite confidently .the accuracy of determination is essentially lower .we have presented a technique for calculation of profiles of emission lines formed in a non uniform accretion disk .the results of calculations have shown that the analysis of spectra obtained at different phases of the orbital period allows basic parameters of the spot ( such as geometric sizes and luminosity ) to be estimated and the structure of the accretion disk to be investigated .we have determined that change in the shape of the emission line profiles with the variation of different parameters of the spot strongly depends on the azimuth of the spot .therefore , the necessary condition for the accurate determination of the parameters of the spot is a knowledge of its phase angle . by separating all spectra according to phases of `` the greatest influence '' of appropriate parameterswe can sequentially determine them .1 . analysis of the s wave allows us to determine the phase angle of the spot and its optical depth .the azimuth extent of the spot is determined better on azimuths , while its radial position is determined better on azimuths .the shape of the line profile is practically insensitive to modification in radial extent of the spot .therefore , for modelling the start value of this parameter is set by default ( it is possible to take , for example , a typical radial extent of a bright spot ) .thus the number of free parameters of the model decreases by unity .we thank g.m .beskin and l.a.pustilnik for valuable advice and discussion of the work .the work was partially supported by the russian foundation of basic research ( grant 95 - 02 - 03691 ) and federal programme `` astronomy '' .99 greenstein j.l . , kraft r.p . : 1959 , , * 130 * , 99 honeycutt r.k . , kaitchuck r.h . ,schlegel e.m . : 1987 , , * 65 * , 451 horne k , marsh t.r . : 1986 , , * 218 * , 761 horne k. : 1995 , , * 297 * , 273 horne k. , saar s.h . : 1991 , , * 374 * , l55 kraft r.p . : 1961 , science , * 134 * , 1433 kraft r.p . : 1963 , advances in astron . and astrophys . 
, * 2 * , 43 livio m. : 1993 , accretion disks in compact stellar systems , ed . : wheeler j.c . , world scientific publishing co. livio m. , soker n. , dgani r. : 1986 , , * 305 * , 267 lubow s. , shu f. : 1975 , , * 198 * , 383 marsh t.r . , horne k. : 1990 , , * 349 * , 593 marsh t.r . , horne k. : 1988 , , * 235 * , 269 marsh t.r . , horne k. , schlegel e.m . , honeycutt r.k . , kaitchuck r.h . : 1990 , , * 364 * , 637 plavec m.j . : 1980 , close binary stars : observations and interpretation , eds .: plavec m.j . , popper d.m . and ulrich r.k ., dordrecht , reidel , 155 pringle j. , rees m. : 1972 , , * 21 * , 1 rozyczka m. : 1988 , acta astron ., * 38 * , 175 shakura n.i ., sunyaev r.a . : 1973 , , * 24 * , 337 smak j. : 1969 , acta astron . , * 19 * , 155 smak j. : 1976 , acta astron . , * 26 * , 277 smak j. : 1981 , acta astron . , * 31 * , 395 williams g. : 1983 , , * 53 * , 523
a technique for calculating emission line profiles formed in a non-uniform accretion disk is presented. the change of the profile shape as a function of the orbital phase of a disk with a bright spot on its surface is analysed. the possibility of an unambiguous determination of the disk and spot parameters is considered, and the accuracy of their determination is estimated. the results of the calculations show that the analysis of spectra obtained at different phases of the orbital period makes it possible to estimate the basic parameters of the spot (such as its geometric size and luminosity) and to investigate the structure of the accretion disk. november 13, 1997
quantum computation has developed as an exciting field of research in the last decade and it has generated wide interest among scientists and engineers .it offers the opportunity of creation of algorithms that are radically different and more efficient as compared to their classical counterparts .shor s prime factorization algorithm and grover s quantum search algorithm have theoretically demonstrated the power of quantum algorithms . however , the experimental implementation of the quantum algorithms is still quite challenging .nuclear magnetic resonance ( nmr ) has been the vanguard among the presently available techniques for physical implementation of quantum algorithms . till date, the algorithms have been tested on systems with a small number of qubits with a presumption that once a quantum computer with large number of qubits are made , more real world application of the algorithms can be implemented .implementation of the quantum algorithms on very large system requires the application of a large number of unitary operators .as any physical implementation involves some amount of error which accrue when the unitary operators are applied in tandem , physical implementation of an algorithm in a large system becomes difficult .the sensitivity of the algorithm to small errors can lead to it s failure .grover s quantum search algorithm , or more generally the quantum amplitude amplification algorithm , is designed to search a marked item from an unsorted database .it drives a quantum computer from a prepared initial state to a desired target state , which encodes the marked item .generally , is prepared by applying a unitary operator on a particular basis state , i.e. . the heart of the algorithm is the grover s iteration operator given by thus is the selective phase inversion of the state . if then times iteration of on yield the target state with a high probability . for searching a database of items , the initial state is chosen to be the equal superposition of all basis states each of which has a probability amplitude .it is generated by applying the walsh - hadamard transform on the basis state , i.e. . since is a unique basis state , and times iterations of on yield the target state . in this paper, we consider the case when the implementation errors cause the deviations in selective phase inversions , and .in other words , we want the apparatus to implement but due to errors , the apparatus implements where is the selective phase rotation of by angle .then the grover s operator becomes and the well - known _ phase - matching _condition demands for grover s algorithm to succeed . for large database size , and andthe above condition becomes quite stringent . from the implementation point of view , satisfying eq .[ phcon ] is tough as the phase rotations on state and are not equal in general .therefore , as the size of the database increases , there is a high risk that grover s algorithm fails even if there are very small errors in the implementation of the operators . 
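This failure is easy to see numerically in the two-dimensional subspace spanned by the target state and the remainder of the initial superposition. The sketch below uses one common convention for the operator ordering, and the value of the angle θ (playing the role of the initial target amplitude) and the phase mismatch are illustrative choices of ours.

```python
import numpy as np

# numerical check of the phase-matching condition in the 2-d subspace spanned by the
# target |t> and its orthogonal complement within |s> = U|0>. only the qualitative
# behaviour matters: success for matched phases, failure for a mismatch when
# sin(theta) (i.e. 1/sqrt(N)) is small compared to the mismatch.
def grover_step(theta, phi1, phi2):
    s = np.array([np.sin(theta), np.cos(theta)], dtype=complex)    # |s> in {|t>, |t_perp>}
    I_t = np.diag([np.exp(1j * phi2), 1.0])                        # selective phase on |t>
    I_s = np.eye(2) + (np.exp(1j * phi1) - 1.0) * np.outer(s, s.conj())
    return I_s @ I_t                                               # one (possibly faulty) iteration

def best_success(theta, phi1, phi2, n_iter=4000):
    s = np.array([np.sin(theta), np.cos(theta)], dtype=complex)
    g = grover_step(theta, phi1, phi2)
    psi, best = s.copy(), 0.0
    for _ in range(n_iter):
        psi = g @ psi
        best = max(best, abs(psi[0]) ** 2)
    return best

theta = 0.02                                   # sin(theta) ~ 1/sqrt(N): a large database
print(best_success(theta, np.pi, np.pi))           # matched phases: reaches ~1
print(best_success(theta, np.pi, np.pi + 0.17))    # ~10 degree mismatch: stays small
```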
to take into account the above mentioned problem, tulsi has modified the quantum search algorithm .the algorithm is based on the assumption that errors are ( i ) reproducible and ( ii ) reversible .the reproducibility allows us to implement the transformations at our disposal while the reversibility allows us to implement the inverse transformations at our disposal .then the collective effect of the errors can be cancelled by iterating the following operator note that for , is just two steps of grover s algorithm , i.e. .tulsi has shown that times iteration of on yield the target state with high probability . therefore ,if is small ( i.e. the database is large ) , small difference between and can cause the grover s algorithm to fail while the modified algorithm still succeeds in finding the target state ( see fig .[ simu ] for simulation results ) .however it may be pointed out that grover s algorithm is self correcting if ( fig .[ simu1 ] ) .the complexity of both the algorithms remains almost the same for .it should be noted that for the experimental demonstration of the difference between the original and the modified search algorithm for large database , it is not necessary to implement them on a very large system .we can simulate the behaviour of the algorithms for large database by preparing a small system with , i.e. initially , the target state has a low probability amplitude . as there are no other restrictions on the size of system ,a two qubit system is suitable enough for this purpose .+ to experimentally verify the algorithm of tulsi , the original and the modified search algorithms are implemented here in an nmr quantum information processor .the implementation procedure consists of ( i ) preparation of the pseudo - pure state ( pps ) , ( ii ) preparation of the superposition of all the states such that the marked state has a low probability amplitude , ( iii ) application of the original / modified iterations and finally ( iv ) measurement .the experiment has been carried out at room temperature in 11.7 tesla field in a bruker av500 spectrometer using a qxi probe .the system chosen for the implementation of the algorithm is carbon-13 labeled chloroform ( ) , where and act as the two qubits .the and resonance frequencies at this field are 500 mhz and 125 mhz respectively and the scalar coupling between the spins is j= 209 hz . the nmr hamiltonian for a 2-qubit weakly coupled spin system is , where are the larmour frequencies and the j is the scalar coupling . the equillibrium density matrix , which is the starting point of any algorithm in nmr quantum information processor , under high temperature and high field approximation is in a highly mixed state represented by , where the is 1:0.25 are the gyromagnetic ratio of the nuclei .we describe the various stages of the experimental implementation in the following paragraphs .+ for a two - qubit system , there are basis states : , , and .we choose the target state to be .if is an equal superposition of the basis states then . but to simulate the grover s algorithm for large database , we want to be small . 
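Before the implementation details, the cancellation can be illustrated numerically by extending the sketch above: one step built from the faulty phase rotations is followed by one step built from their inverses, which for phases equal to π reduces to two ordinary Grover iterations. This composite is our reading of the construction and may differ in detail from the operator defined in Tulsi's paper; it is included only to show the error-cancellation mechanism.

```python
import numpy as np

# rough illustration of the cancellation idea: a step built from the faulty selective
# phase rotations followed by a step built from their inverses (available because the
# errors are reproducible and reversible). this composite is our reading of the
# construction, not necessarily the exact operator of tulsi's paper.
def phase_ops(theta, phi1, phi2):
    s = np.array([np.sin(theta), np.cos(theta)], dtype=complex)
    I_t = np.diag([np.exp(1j * phi2), 1.0])
    I_s = np.eye(2) + (np.exp(1j * phi1) - 1.0) * np.outer(s, s.conj())
    return s, I_s, I_t

def best_success(step, s, n_iter=4000):
    psi, best = s.copy(), 0.0
    for _ in range(n_iter):
        psi = step @ psi
        best = max(best, abs(psi[0]) ** 2)
    return best

theta, phi1, phi2 = 0.02, np.pi, np.pi + 0.17            # same mismatch as before
s, I_s, I_t = phase_ops(theta, phi1, phi2)
_, I_s_inv, I_t_inv = phase_ops(theta, -phi1, -phi2)
plain = I_s @ I_t                                        # faulty grover step
composite = (I_s_inv @ I_t_inv) @ (I_s @ I_t)            # faulty step, then inverted step
print("plain:    ", round(best_success(plain, s), 3))    # stays small
print("composite:", round(best_success(composite, s), 3))  # recovers ~1
```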
that we achieve by letting to be an unequal superposition .we first create the pps by the use of spatial averaging ,[fig .[ pseq ] ] .a pps has a unit population in the state and zero population in and states .then we apply a pulse on it .we have .\end{aligned}\ ] ] thus .by choosing and , we achieve respectively .just to compare , if is an equal superposition of basis states then and these values of correspond to respectively so that we need qubits respectively to represent all basis states .however , by choosing to be an unequal superposition , a two - qubit system becomes sufficient to simulate large databases .the next step in the implementation of the algorithms is the application of the operator . in our case, we assume that there are no errors in trasnformation , i.e. . since and , we have note that in case of no errors in transformation , we have . fig .[ pseq ] contains the pulse programme for the implementation of the operator .the and operators are selective phase rotations of and states respectively i.e. therefore in nmr , the and operators are implemented by evolution under respectively .following , for , the evolution under and are implemented by composite z - rotation pulses like {y}\left[\frac{\phi}{2}\right]_{\bar{x}}\left[\frac{\pi}{2}\right]_{\bar{y}}$ ] .the evolution under the is implemented by evolving the system under the scalar coupling hamiltonian only , for a time period of .the operator is applied by ( a ) reversing the order of application of pulses and evolution , ( b ) flipping the phase of the centre pulse of the composite z rotation by and ( c ) changing the evolution time from to for .the application of involves evolution of the system under the hamiltonian given by eq .this is similar to the hamiltonian evolution for , the only difference being the negative sign before and .this implies that the phase of the centre pulse in the composite z rotation is changed by .moreover , as errors are not introduced in the operator ( i.e. ) , the central pulse in the composite z rotation has a flip angle of and the time of evolution is 1/2j in both the cases of implementation .+ after the implementation of the algorithm , the final state is measured . in this caseonly the diagonal elements of the final density matrix ( population spectra ) is required to be measured .this is done by collecting the data after applying a gradient to kill the off - diagonal elements followed by a 90 pulse .the diagonal elements of the final density matrix are reconstructed from the population spectrum .+ fourteen iterations of the original and the modified algorithms were implemented for three different values of initial probability amplitude of the marked state i.e. and .for the case , no error has been introduced in operator ( i.e. ) which implies that the phase matching condition is satisfied in this case .it can be seen that both ` original ' and ` modified ' algorithm behaves almost similarly i.e. they find the marked state with almost the same periodicity ( fig .[ f3a ] ) . in the next case ( fig .[ f3b ] ) , the value of as chosen to be to make smaller and an error of 10 was introduced in ( i.e. 
) so that the phase matching condition is violated .we see that in this case , the original search algorithms starts to fail while the ` modified ' algorithm obtains the searched state with a high probability ( 80 ) .the original algorithm can not amplify the amplitude of the marked state so as to definitely distinguish it and therefore the solution is not reached .finally , the algorithms were implemented for and 10 error in operator ( fig .[ f3c ] ) . in this case is very small ( simulating a system of about 10 qubits ) , and therefore the ` _ phase matching _ 'condition is violated even more strongly .it can be seen that in this case , the ` original ' algorithms totally fails in reaching the solution but the ` modified ' algorithm succeeds . for completeness , the diagonal elements of the tomographed density matrix for the case of fig .[ f3c ] i.e. and 10 error in are plotted in fig 4 .this confirms the success of the ` modified ' algorithm of tulsi .+ in conclusion , we have implemented the ` modified ' quantum search algorithm by tulsi and have experimentally validated his claim that his algorithm is robust to errors in operator as compared to the original search algorithm .we have shown that small errors can be fatal for searching larger databases using grover s algorithm while the ` modified ' search algorithm is robust .we have experimentally simulated the behaviour of the algorithms in large database on a 2-qubit nmr quantum information processor .quantum computers when fully operational will be dealing with real world problems requiring large systems .this experiment , besides providing a validation for an important theoretical prediction , will help in providing impetus to future work on the study of existing algorithms in large real world systems .+ : the use of av-500 nmr spectrometer funded by the department of science and technology ( dst ) , new delhi , at the nmr research centre , indian institute of science , bangalore , is gratefully acknowledged .a.k . acknowledges dae and dst for raja ramanna fellowships , and dst for a research grant on quantum computing using nmr techniques .99 p. shor , _ proceedings of the 35th annual symposium on foundations of computer science _ ( ieee computer society , los almitos , 1994 ) ,l. k. grover , phys .rev . lett . *79 * , 325 ( 1997 ) l.k .grover , phys .* 80 * , 4329 ( 1998 ) ; g. brassard , p. hoyer , m. mosca , and a. tapp , contemporary mathematics ( american mathematical society , providence ) , * 305 * , 53 ( 2002 ) [ arxiv.org:quant-ph/0005055 ] .i. l.chuang , l. m. k. vanderspyen , x. zhou , d.w .leung and s. llyod , nature * 393 * , 143 ( 1998 ) ; j. a. jones and m. mosca , , 1648 ( 1998 ) ; l.m.k .vanderspyen , matthias steffen , gregory breyta , c.s.yannoni , m.h .sherwood and i.l .chuang , nature * 414 * , 883 ( 2001 ) ; j. a. jones , m. mosca and r. h. hansen , nature * 393 * , 344 ( 1998 ) ; i. l. chuang , n. gershenfeld and m. kubinec , , 3408 ( 1998 ) ; p. hoyer , , 052304 ( 2000 ) ; g.l .long , y.s .zhang , and l. niu , phys .a * 262 * , 27 ( 1999 ) .a. tulsi , phys .a * 78 * , 022332 ( 2008 ) .( oxford university press , new york , 1994 ) . m. a. nielsen and i. c. chuang , _ quantum computation and quantum information _ , cambridge university press , 2000 .d. g. cory , a.f .fahmy and t.f .havel , proc .usa * 94 * , 1634 ( 1997 ) ; n. gershenfeld and i.l .chuang , science * 275 * , 350 ( 1997 ) .d. g. cory , m. d. price and t.f .havel , physica d * 120 * , 82 ( 1998 ) .
grover's quantum search algorithm, involving a large number of qubits, is highly sensitive to errors in the physical implementation of the unitary operators. this poses an intrinsic limitation on the size of the database that can be practically searched. the lack of robustness of grover's algorithm for a large number of qubits is due to the quite stringent 'phase-matching' condition. to overcome this limitation, tulsi suggested a modified search algorithm [pra 78, 022332] which succeeds, as long as the errors are reproducible and reversible, in regimes where grover's algorithm fails. such systematic errors often arise from imperfections in the apparatus setup, e.g. imperfect pulse calibration and offset effects in nmr systems. in this paper, we report the experimental nmr implementation of the modified search algorithm and its comparison with the original grover algorithm. we experimentally validate the theoretical predictions made by tulsi.
quantum mechanics is a probabilistic theory . it does not assign definite outcomes to certain measurements. a physicist performing identical measurements on two identically prepared systems might get different measurement outcomes .quantum mechanics postulates that the outcomes of some measurements are undetermined before the measurement .this randomness in the measurement outcomes has been used to generate random numbers . it might be argued that the randomness in the measurement outcome is not really undetermined before the measurement .it is perhaps determined by some hidden variables that provide a more complete description of the system , but they are unknown to the physicist .however , this hidden - variable description of nature was recently tested in three bell test experiments and was found to be incompatible with the observed experimental data .the observed data were consistent with quantum mechanics .in other words , we see in our experiments that nature behaves randomly , as postulated by quantum mechanics .this implies that if the experimental observations obey some relations and on the condition that the experiment was performed correctly , we can certify the measurement outcomes were undetermined before the measurement was performed .that is , their outcomes generated new random numbers .the conditions that need to be satisfied are those for a loophole - free bell experiment .remarkably , these conditions do not include that the physicist know the mechanisms of the measuring device .this observation makes the realization of a device - independent ( di ) quantum random - number generator ( qrng ) possible . in a diqrng , the user is able to certify the creation of new random numbers despite being ignorant of the device mechanisms . in certifying the generation of new random numbers , the user trusts that quantum mechanics provides a complete description of nature .based on the statistics of the measurement outcomes , he can put a bound on the correlations between his measurement outcomes and any other system that exists outside of his lab .this bound allows him to extract new random numbers from the measurement outcomes , that is , random numbers which are not correlated to any system outside of his lab .the first proof - of - concept diqrng used entangled photons generated in an atomic ion trap to certify 42 new random numbers over a period of about one month .more recently , using a more efficient entanglement source , bits of new randomness were created at a rate of bits / s .both setups used the clauser - horne - shimony - holt ( chsh ) value to certify the randomness .the chsh value is a function of the measurement statistics , and this value sets a lower bound on the di randomness that can be certified .it turns out that using different bell operators , that is , different functions of the measurement statistics , will give different equally valid lower bounds to the di randomness from the same measurement statistics . in ,several previously known as well as randomly generated bell operators were tested and shown to certify varying amount of randomness from the two - qubit werner state .these operators were chosen in an _ ad hoc _ manner , and no single operator was found to be optimal for all the werner states . 
in , the complete measurement statistics were used to obtain a bound on the di randomness instead of resorting to a specific bell operator .this gives the highest lower bound on the di randomness .a by - product of this process is the optimal bell operator that would have given the same bound .this bell operator gives the maximum di randomness for the given measurement statistics . in a bell setup for generating new random numbers, the physicist has a choice of the measurement operators to use . by optimizing these operators, he can get a better bound on the di randomness .this is the question that we address : how much randomness can the physicist certify by using the optimal measurement operator ? recently , this question was also addressed in for an experimentally relevant optical bell experiment setup and in , where the requirement for full device independence was relaxed .we consider the usual bell setup for generating di random numbers .the user inputs two random and independent measurement settings , and , and receives two measurement outcomes , . in a di setup, the user does not have any knowledge of the measurement device .the behavior of the apparatus is solely characterized by the conditional probabilities , which we view as the components of the vector .the user will use one measurement setting , , to generate his random numbers ; the other settings are only used to obtain bounds on the di randomness .following , the maximum guessing probability for an adversary , eve , who is constrained by quantum mechanics and has perfect knowledge of the measurement apparatus is = \max_{\{q_{ab},\mathbf{p}_{ab}\ } } \sum_{ab } q_{ab } p_{ab}(a , b|x^\ast , y^\ast)\label{pri}\\ & \textrm{such that \ ; } \sum_{ab } q_{ab } \mathbf{p}_{ab}=\mathbf{p}\;\label{con}\\ & \textrm{and \ ; } \mathbf{p}_{ab } \in \mathcal{q}\;.\label{conq}\end{aligned}\ ] ] the notation means that the conditional probabilities can be realized in quantum mechanics . in other words ,there exist a state and some measurement operators and such that .the constraint ( [ con ] ) ensures that the weighted sum of the particular behaviors gives the observed behavior .eve can realize the guessing probability ] .the optimization variable corresponds to a bell expression that gives rise to a guessing probability of .the optimal that achieves the minimum then corresponds to the optimal bell expression that minimizes eve s guessing probability given the behavior . 
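For orientation, a single Bell expression already gives a closed-form example of such a bound: for the CHSH expression with value S, the guessing probability of one party's outcome is bounded by 1/2 + (1/2)√(2 − S²/4), so H_min ≥ 1 − log2(1 + √(2 − S²/4)) (Pironio et al.). The snippet below simply evaluates this relation; note that it bounds a single outcome rather than the outcome pair (a, b) treated in the programs above, so it is only a point of comparison.

```python
import numpy as np

# closed-form single-outcome bound from the chsh value S:
# p_guess <= 1/2 + 1/2*sqrt(2 - S**2/4)  =>  H_min >= 1 - log2(1 + sqrt(2 - S**2/4)).
def hmin_from_chsh(S):
    S = np.atleast_1d(np.asarray(S, dtype=float))
    hmin = np.zeros_like(S)
    quantum = S > 2.0                      # no certification at or below the local bound
    hmin[quantum] = 1.0 - np.log2(1.0 + np.sqrt(2.0 - S[quantum] ** 2 / 4.0))
    return hmin

for S in (2.0, 2.5, 2.7, 2.0 * np.sqrt(2.0)):
    print("S = %.3f  ->  H_min >= %.3f bits" % (S, hmin_from_chsh(S)[0]))
```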
in general, the optimization problems ( [ pri ] ) and ( [ dual ] ) can be computationally hard to solve .however the constraints ( [ con ] ) and ( [ dualcon ] ) can be relaxed to give upper bounds to the guessing probabilities in a way that the programs can be cast as a semidefinite program ( sdp ) which can be solved efficiently .these relaxations can be progressively tightened to give bounds that are successively tighter .while the user of a dirng has no access to the workings of the device , the physicist who builds the device has a choice of the quantum state and the measurement operators that he wants to implement in the operation of the device .the vector has components which are rank - one projectors and satisfy for example , if his machine can prepare the pure entangled two - qubit state , then as shown in , by designing the measurement operators to be projectors along with the angles the device will be able to certify two bits of randomness with the measurement settings .however , if the measurement operators used were not optimal , the machine will exhibit a different behavior and may certify less randomness .so if the builder can prepare a maximally entangled two - qubit state and use the optimal measurement operator , then the device will be able to certify two bits of randomness , and all is good .however , if the builder is technologically limited to preparing some other state , then in general the measurement operators in ( [ m2 ] ) will not be optimal anymore . in this case, the builder is then interested in finding the measurement operator he should implement that would certify the maximum randomness given are more unpredictable than others , then more randomness can potentially be extracted by post - selecting a subset of the symbols . although the post - selection reduce the number of data - points available for randomness extraction , the post - selected data might be more random , which makes it harder to for eve to guess correctly .the net result can be an increase in the final randomness generation rate . ] given that he is restricted to the state .this is the task that we shall now investigate .more precisely , we want to find =\max_{\boldsymbol{\pi } } d\left[p(\boldsymbol{\pi } ) \right]\;,\end{aligned}\ ] ] where and the vector is constrained by ( [ picon ] ) .admittedly , we have not solved this problem .instead , we present and implement an iterative algorithm in algorithm [ algo1 ] that converges to a local maximum of ] and corresponding by solving the relaxed version of ( [ dual ] ) compute the minimum of and corresponding [step5 ] [ algo1 ] the tolerance sets the stopping condition for the algorithm . in step [ step5 ] ,we compute the minimum of guessing probability which corresponds to finding the measurement settings that maximizes the bell value for a given bell expression .the guessing probability is a quadratic function of and with the quadratic constraints ( [ picon ] ) .we can use the lagrange multiplier method to find the minimum .while the algorithm might not find the global maximum $ ] , it usually finds measurement settings that yield more di randomness than a randomly chosen measurement setting . in our implementation, we use several initial settings in an attempt to find the global maximum .all sdp calculations were performed using the cvx package for matlab . .* the green dotted and purple dashed lines show the di randomness obtained when constrained by the bell operators ( [ ibeta ] ) and ( [ ichsh ] ) with a fixed measurement direction . 
using both operators together gives a higher randomness , depicted by the yellow dash - dotted line .constraining eve to the complete behavior gives the most randomness from the fixed behavior generated from the measurement direction depicted by the solid orange line .the top line denotes the randomness bound for an optimized measurement direction .these curves were obtained with a third - order relaxation of the sdp hierarchy . ]we apply our algorithm to the family of states [ rho ] where and visibility gives the fraction of the state . in the noiseless limit of , arbitrarily close to two bits of di randomness can be attained in the maximally entangled case when with measurement settings .two bits of di randomness are also achievable when is arbitrarily close to zero with measurement settings .we first consider the case where and the visibility is fixed at . in fig .[ fig : hmin99 ] , we compare the di randomness from the optimized measurement setting to a bound obtained using a fixed measurement setting as reported in .we see a significant improvement in the certifiable randomness using the optimized measurement settings . for completeness, we also include the certifiable randomness constrained using two specific bell operators and also constrained by both operators together using a fixed measurement setting where , , and .the di randomness bounds using specific operators are always lower than using the complete measurement statistics .next , we plot the di randomness bound as a function of for various visibilities in fig . [fig : hmin22 ] for .we also plotted the di randomness when in the same figure . in most cases ,the improvement obtained from using four measurement settings is not very significant . in fig .[ fig : chsh ] , we plot the di randomness as a function of nonlocality as measured by the chsh value .relying on the chsh value alone gives a much lower di randomness , especially when the state has a high visibility .even with a maximally entangled two - qubit state , a chsh value of can only certify bits of randomness .for two ( solid line ) and four ( dashed line ) measurement settings on each side .the four - measurement - setting randomness bound that we report here is slightly higher than the results reported in , where there are two fixed settings for one side and four fixed settings for the other side .we computed the fixed settings using both the second- and third - order relaxations of the sdp hierarchy , but they might turn out to be identical when a tighter constraint is used . we find no improvement in the tomographic result ( dash - dotted line ) compared to the results using a fixed measurement setting reported in .the two - setting and four - setting curves were obtained using a second - order relaxation of the sdp hierarchy . ] in fig .[ fig : vis ] , we fix the input state to have and plot the di randomness as a function of visibility for and .there is only a slight increase in the di randomness bound when going to four measurement settings . the dirandomness increases monotonically with as one would expect .this is because from a high - visibility state , one can always introduce noise to get to a state with lower visibility and attain at least the same di randomness . .for a fixed visibility , the randomness rate is not a monotonic function of .it is maximum when and . ] finally , in the limit when the number of settings becomes large , the di randomness will be upper bounded by the setup where the user can perform a complete tomography . 
in this case , the constraint ( [ con ] ) is replaced by a constraint on the quantum states with .the constraints that is positive mean that programs ( [ pri ] ) and ( [ dual ] ) are already sdps .we plot the tomographic randomness rate in fig .[ fig : tomo ] . for a fixed , the tomographic randomness rates decrease with . however , the tomographic randomness rates are not monotonic in for a fixed .for the same , starting with a state with small entanglement ( low ) can still yield the same amount of randomness as a state with large entanglement ( near ) .the dip in the randomness rates when is unlikely due to the algorithm being stuck in a local maximum .we check this numerically by scanning the whole parameter space . for the case of a qubit pair input that we are considering ,the measurement directions that the user uses to generate his tomographic randomness can be parametrized by the bloch vector angles and .some typical tomographic randomness rates are shown in fig .[ fig : tomo_map ] as a function of the two bloch vectors . in fig .[ fig : vis ] , we plot the randomness from a tomographic measurement when as a function of .we find no improvement compared to the results reported in .the measurement used there , indeed attains the maximum randomness we found .when the visibility is exactly unity , the quantum state that the user has is a pure state . for this , eve sguessing probability can be calculated exactly and then maximized over the user s measurements .the final result is where characterizes the measurement direction and is given by solving the min - entropy from this guessing probability is plotted as the top line in fig .[ fig : tomo ] .we see that two bits of randomness are achievable only when the state is maximally entangled or when it is separable . as a function of the measurement settings for different input states parametrized by from to with .the axis corresponds to the angle of the bloch vector of the measurement setting for the first side , and the axis corresponds to the angle of the bloch vector of the measurement setting for the second side .the red asterisk denotes the maximum value for each . ]the amount of randomness generated from a diqrng can be improved by optimizing the measurement setting . however , for the two - qubit state considered , the additional improvement achieved by using four measurement settings on each side is in most cases not significant . there is a disadvantage in having more measurement settings : the experimental setup is more complicated and more data are needed to characterize the measurements .this is not justified by the minimal increase in the randomness generation rates .b. hensen , h. bernien , a. e. drau , a. reiserer , n. kalb , m. s. blok , j. ruitenberg , r. f. l. vermeulen , r. n. schouten , c. abelln , w. amaya , v. pruneri , m. w. mitchell , m. markham , d. j. twitchen , d. elkouss , s. wehner , t. h. taminiau , and r. hanson . ., 526(7575):682686 , october 2015 .lynden k. shalm , evan meyer - scott , bradley g. christensen , peter bierhorst , michael a. wayne , martin j. stevens , thomas gerrits , scott glancy , deny r. hamel , michael s. allman , kevin j. coakley , shellee d. dyer , carson hodge , adriana e. lita , varun b. verma , camilla lambrocco , edward tortorici , alan l. migdall , yanbao zhang , daniel r. kumor , william h. farr , francesco marsili , matthew d. shaw , jeffrey a. stern , carlos abelln , waldimar amaya , valerio pruneri , thomas jennewein , morgan w. mitchell , paul g. kwiat , joshua c. 
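A minimal sketch of this tomographic program is given below, assuming the formulation in which Eve supplies subnormalised states σ_ab that sum to ρ and guesses the outcome pair (a, b); the fixed measurement angles and state parameters are illustrative choices of ours rather than the optimised settings of the paper, and an SDP-capable solver (e.g. SCS) is required by cvxpy.

```python
import numpy as np
import cvxpy as cp

# state family rho = v |psi><psi| + (1-v) I/4 with |psi> = cos(t)|00> + sin(t)|11>
t, v = np.pi / 8, 0.95
psi = np.zeros(4)
psi[0], psi[3] = np.cos(t), np.sin(t)
rho = (v * np.outer(psi, psi) + (1.0 - v) * np.eye(4) / 4).astype(complex)

# fixed generation measurements along assumed bloch angles in the x-z plane
def projectors(angle):
    n = np.array([[np.cos(angle), np.sin(angle)], [np.sin(angle), -np.cos(angle)]])
    vals, vecs = np.linalg.eigh(n)
    return [np.outer(vecs[:, i], vecs[:, i].conj()) for i in (1, 0)]   # outcomes +1, -1

A, B = projectors(0.0), projectors(np.pi / 4)
M = {(a, b): np.kron(A[a], B[b]) for a in (0, 1) for b in (0, 1)}

# eve's guessing probability: max sum_ab tr[M_ab sigma_ab], sum_ab sigma_ab = rho, sigma_ab >= 0
sig = {ab: cp.Variable((4, 4), hermitian=True) for ab in M}
constraints = [s >> 0 for s in sig.values()] + [sum(sig.values()) == rho]
objective = cp.Maximize(cp.real(sum(cp.trace(M[ab] @ sig[ab]) for ab in M)))
p_guess = cp.Problem(objective, constraints).solve()
print("p_guess ~ %.4f  ->  H_min ~ %.3f bits" % (p_guess, -np.log2(p_guess)))
```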
bienfang , richard p. mirin , emanuel knill , and sae woo nam .strong loophole - free test of local realism ., 115:250402 , dec 2015 .marissa giustina , marijn a. m. versteegh , sren wengerowsky , johannes handsteiner , armin hochrainer , kevin phelan , fabian steinlechner , johannes kofler , jan - ke larsson , carlos abelln , waldimar amaya , valerio pruneri , morgan w. mitchell , jrn beyer , thomas gerrits , adriana e. lita , lynden k. shalm , sae woo nam , thomas scheidl , rupert ursin , bernhard wittmann , and anton zeilinger . significant - loophole - free test of bell s theorem with entangled photons ., 115:250401 , dec 2015 .s. pironio , a. acin , s. massar , a. boyer de la giroday , d. n. matsukevich , p. maunz , s. olmschenk , d. hayes , l. luo , t. a. manning , and et al .random numbers certified by bell s theorem . , 464(7291):10211024 , apr 2010 .b. g. christensen , k. t. mccusker , j. b. altepeter , b. calkins , t. gerrits , a. e. lita , a. miller , l. k. shalm , y. zhang , s. w. nam , and et al .detection - loophole - free test of quantum nonlocality , and applications . , 111(13 ) , sep 2013 .michael grant and stephen boyd .graph implementations for nonsmooth convex programs . in v.blondel , s. boyd , and h. kimura , editors , _ recent advances in learning and control _ , lecture notes in control and information sciences , pages 95110 .springer - verlag limited , 2008 .
the rates at which a user can generate device - independent quantum random numbers from a bell - type experiment depend on the measurements that he performs . by numerically optimising over these measurements , we present lower bounds on the randomness generation rates for a family of two - qubit states composed from a mixture of partially entangled states and the completely mixed state . we also report on the randomness generation rates from a tomographic measurement . interestingly in this case , the randomness generation rates are not monotonic functions of entanglement .
a quantum computer would possess the fascinating ability to perform certain computational tasks exponentially faster than classical computers , by nontrivially using the exponentially large size of a many - body quantum hilbert space . semiconductor quantum dot spin systems are one of the leading candidates for building a quantum computer because of their prospective scalability, their long coherence times, and their capacity for fast all - electrical gate operations. there are various ways to encode quantum information in the spin states of electrons loaded into one or more quantum dots . for example , the two spin states of a single electron can form a qubit; alternatively a qubit may also be encoded in the collective spin states of two or three electrons. in this paper , we focus on the case of the singlet - triplet qubit, where the qubit is encoded in the singlet - triplet spin subspace of two electrons trapped in a double quantum dot .this encoding scheme has the advantages of fast single - qubit operations and of being immune to homogenous fluctuations of the magnetic field .arbitrary single - qubit operations are performed by combining -axis rotations around the bloch sphere , achieved by a tunable exchange interaction between the singlet and triplet states, and -axis rotations , which are generated by a local magnetic field gradient. together with an entangling two - qubit gate , which can be based on either a capacitive coupling between the two qubits or an exchange coupling, one is then able to perform universal quantum computation .the great advantage of the singlet - triplet quantum dot spin qubits , leading to substantial experimental and theoretical activities in the topic , is that the qubit operations can all be implemented by external electric fields ( i.e. suitable gate voltages ) , thus making them operationally convenient as well as compatible with existing semiconductor electronics .one of the biggest obstacles to the realization of a quantum computer is the qubit decoherence that results from the interaction between the qubits and their environment .this decoherence must be very small for successful quantum computation to work , and the central problem of the whole field has been the issue of whether it is experimentally feasible to reduce decoherence to a level low enough for fault - tolerant quantum computation to go forward in particular , the decoherence must be very small both during the idling of the gates ( i.e. when the qubits are just quantum memory ) and during the actual gate operations .there are two main noise channels for singlet - triplet qubits leading to decoherence : overhauser noise , which stems from the hyperfine - mediated spin flip - flop processes that take place between the electron spins and the nuclear spins in the surrounding substrate, and charge noise arising from environmental voltage fluctuation , which corresponds to the deformation of the quantum dot confinement potential due to nearby impurities or other sources of uncontrolled stray electric fields. 
fortunately , these types of noise are highly non - markovian : they produce stochastic errors in the qubit hamiltonian which vary on a much longer time scale ( ) than typical gate operation times ( on the scale of ns ) .dynamical decoupling has proven to be a successful method for combating this kind of noise .its underlying idea is the `` self - compensation '' of errors , best illustrated by the hahn spin echo technique introduced first in the context of nmr: when a quantum state dephases due to noise over some time span , one may apply a -pulse to flip the sign of the error in the state , effectively reversing the error s evolution so that the qubit `` refocuses '' to its original state after a second time span of equal duration to the first . hereit is very important that the noise is non - markovian since one requires the noise to remain static over the time spans before and after the -pulse .this dynamical way of reviving a quantum state has proven invaluable to coherent manipulation of quantum systems , as have several more sophisticated pulse sequences that were subsequently developed and implemented in experiments. in general , dynamical decoupling extends the coherence time from the dephasing time to a much longer timescale ( which is defined depending on the specific dynamical decoupling sequence used ) beyond which the quantum information is inevitably lost . for singlet - triplet spin qubits in gaas quantum dots , ns and ms, while for si ns and ms but is expected to be even longer in isotope - enriched samples. therefore , dynamical decoupling is a powerful way to preserve a quantum state against noise , enabling robust quantum memory . achieving robust quantum memory capabilities , however , covers only one of the requirements for a viable quantum computer .equally necessary is the ability to protect the qubit from noise _ while _ performing quantum gates on it .this necessity has motivated the development of dynamically corrected gates ( dcgs), which can roughly be thought of as an extension of dynamical decoupling to the situation where the qubit is simultaneously being purposefully rotated .in particular , dcgs also typically exploit the notion of self - canceling errors . like dynamical decoupling ,such protocols have been vastly successful in nmr and in the general theory of quantum control. however , in contrast to dynamical decoupling , most approaches to dcgs developed thus far in the literature are not applicable to the case of singlet - triplet qubits because of their unique experimental constraints .first , the tunable exchange interaction which gives rise to -axis rotations is always non - negative and bounded from above by a certain maximal value. second , in order to do arbitrary single - qubit rotations , one must set up a magnetic field gradient across the two quantum dots ; this gradient can not be varied during gate operations , meaning that the control is effectively single - axis ( along ) and that there is an always - on field rotating the qubit states into each other .either constraint by itself would already rule out many dcg schemes ; together , these constraints make noise - resistant control in singlet - triplet qubits uniquely challenging . 
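Before turning to why the standard NMR constructions fail under the constraints just listed, the echo-based self-compensation invoked above can be made explicit with a toy simulation. All numbers are illustrative; the noise is taken as strictly quasi-static apart from a small drift between the two halves of the echo sequence.

```python
import numpy as np

# toy hahn-echo illustration: a qubit dephases under a shot-to-shot constant frequency
# offset drawn from a gaussian distribution; a pi pulse at the midpoint refocuses it.
rng = np.random.default_rng(0)
sigma = 2 * np.pi * 0.05                   # rms detuning (rad/ns), i.e. T2* of tens of ns
times = np.linspace(0.0, 100.0, 201)       # total evolution time in ns
shots = 5000
delta = rng.normal(0.0, sigma, shots)              # one static offset per repetition
drift = rng.normal(0.0, 0.05 * sigma, shots)       # small change of the offset between halves

# free induction decay: the phase delta*t accumulates over the whole interval
fid = np.array([np.mean(np.cos(delta * t)) for t in times])

# hahn echo: the phase accumulated before the pi pulse is cancelled by the second half,
# up to the small drift, so most of the coherence is recovered at the echo time
echo = np.array([np.mean(np.cos((delta + drift) * (t / 2) - delta * (t / 2))) for t in times])

print("coherence at t = 100 ns:  FID %.3f   echo %.3f" % (fid[-1], echo[-1]))
```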
in particular , the spectacular pulse control techniques developed in the nmr literature over many years are useless for our purpose since nmr does not satisfy the special constraints discussed above , and we must start from scratch and develop dcg pulses for the singlet - triplet qubits obeying the special constraints of the problem .despite these challenges , it was realized recently that it is still possible to develop dcgs for singlet - triplet qubits subject to static noise . in ref ., we introduced supcode ( soft uniaxial positive control for orthogonal drift error ) , demonstrating that it is possible to design special sequences of square pulses that implement robust quantum gates while at the same time respecting all experimental constraints .supcode was originally introduced to cancel errors due to overhauser noise only . in the case of a non - zero magnetic field gradient, we showed how to cancel the leading - order effect of overhauser noise by supplementing a nave pulse with an uncorrected identity operation , designed in such a way that the errors accumulated during the identity operation exactly cancel the errors arising during the nave pulse .we further showed that by performing the identity operations as interrupted rotations around certain axes of the bloch sphere , error cancelation is always possible since one has the flexibility to include as many degrees of freedom as necessary for the cancelation simply by including more interruptions .the cost one has to pay is that the error - correcting pulse is typically substantially longer than the nave pulse . for the cases discussed in ref ., more than of rotation around the bloch sphere is required for an error - correcting pulse .a long pulse sequence is an essential price to pay for carrying out error - corrected dcg operations in quantum computation , but the pulse time can be optimized through careful calculations .this idea of correcting a nave pulse by supplementing it with an identity operation formed by nested rotations was further developed and optimized in ref . .there , we showed that arbitrary single - qubit rotations can be made resistant to both overhauser and charge noise simultaneously .furthermore , it was shown that the pulse sequence duration can be reduced by a factor of from the previous work , ref ., even though the sequences cancel both types of noise , not just overhauser noise , greatly increasing the experimental feasibility of these sequences .subsequently , alternative approaches to dcgs for canceling both types of noise in singlet - triplet qubits have appeared in the literature. in ref ., we also showed that supcode can be extended to construct robust two - qubit exchange gates based on the inter - qubit exchange - coupling , and that it is again possible to protect against both overhauser and charge noise .the design of a robust two - qubit gate is considerably more complicated because of the presence of additional errors that do not arise in the single - qubit case , including possible leakage error out of the computational subspace as well as the over - rotation error in the two - qubit ising gate caused by charge noise .nevertheless we have shown that these obstacles can be circumvented when single - qubit supcode gates are combined in a manner similar to the bb1 sequence developed in nmr. unfortunately , the resulting sequence is relatively long ( about of rotation ) and is challenging for actual implementation in the laboratory . 
the task then remains to reduce the length of the pulse sequence while maintaining its robustness against noise .the main purpose of this paper is to bridge the gap between the theory of supcode and its experimental implementation . as in the development of any theory , we have made several simplifying assumptions .first , it is generally the case that the qubit exchange coupling is controlled by the tilt , or detuning , of the double quantum dot confinement potential .this allows the experimenter to control the qubit by adjusting voltages , but it also makes the qubit vulnerable to charge noise .furthermore , the effect of charge noise on the qubit will generally depend on the precise dependence of the exchange coupling on the detuning . in our previous works, we have mostly assumed a phenomenological relation between the exchange coupling and detuning : , a form used in previous works. however , this phenomenological form is non - universal , and in practice varies from sample to sample .it is therefore an important question to ask whether supcode would still work for other charge noise models in which has a different form .second , we have assumed that the pulses are perfect square pulses which are turned on and off instantaneously . in actual experiments ,the pulses have finite rise times , and in ref ., we have shown that inclusion of the finite rise time would only amount to a shift in pulse parameters but otherwise leave our major results unchanged for the original supcode .the question remains whether the same holds for the more powerful yet shorter sequences presented in ref . .in this paper , we explicitly examine these experimental considerations and show that the power of supcode sequences is not compromised by the extra complications of real systems .we further clarify how one could slightly modify the pulse parameters of the two - qubit gate in order to accommodate different charge noise models .moreover , we show that the length of the corrected two - qubit gates can be reduced by as much as 35% from that shown in ref . , a significant step toward future experimental implementation .we believe that the optimized dcg pulse sequence proposed in the current article are ready for immediate implementation in the laboratory spin qubit experiments .most crucially , in the previous works we have assumed a static noise model .such a model captures the essence of the quasi - static noise found in actual experiments, and the basic idea is that in such realistic situations , performing a supcode sequence would echo away most , although not all , of the effect of the noise . in this paper, we test this idea by performing randomized benchmarking of the 24 single - qubit clifford gates , all found through our supcode framework , under noise , where is a parameter that depends on the physical processes causing the noise .we show that unlike for static noise , in this case there is a limit to the amount of improvement possible via supcode , but that this limit depends strongly on and substantial benefit from supcode is available for the case where .the results we present in this paper show that supcode is a powerful tool that can perform noise - resistant quantum gates despite the complications of real spin qubit systems , including different dependencies between the exchange coupling and the detuning , finite rise times and realistic noise sources . 
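A generic way to generate the 1/f^α noise traces used in the benchmarking mentioned above is spectral synthesis; the sketch below is this standard recipe (random phases with amplitudes set by the desired power spectrum), not necessarily the exact procedure followed in the paper.

```python
import numpy as np

# standard spectral-synthesis recipe for 1/f**alpha noise traces: give each frequency
# component an amplitude proportional to f**(-alpha/2) and a random phase, then
# transform back to the time domain and normalise to the requested rms value.
def one_over_f_noise(n_samples, alpha, rms=1.0, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    freqs = np.fft.rfftfreq(n_samples, d=1.0)
    amps = np.zeros_like(freqs)
    amps[1:] = freqs[1:] ** (-alpha / 2.0)          # power spectrum ~ 1/f**alpha
    phases = rng.uniform(0.0, 2.0 * np.pi, len(freqs))
    trace = np.fft.irfft(amps * np.exp(1j * phases), n=n_samples)
    return rms * trace / trace.std()

for alpha in (0.5, 1.0, 2.0):
    beta = one_over_f_noise(4096, alpha, rms=1.0, rng=np.random.default_rng(1))
    print("alpha = %.1f  first samples:" % alpha, np.round(beta[:3], 3))
```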
for these reasons, we believe that supcode will be immensely helpful to on - going experimental efforts in performing quantum gates on semiconductor quantum dot devices .this paper is organized as follows . in sec .[ sec : model ] we present the theoretical model , explain the experimental constraints and the basic assumptions that we have made . in sec .[ sec : oneqrot ] we give a very detailed and pedagogical review of how supcode sequences are constructed for an arbitrary single - qubit rotation .explicit examples of several quantum gates are also presented , including the 24 single - qubit clifford gates which are used in the randomized benchmarking in sec .[ sec : benchmarking ] .we discuss how different charge noise models and finite rise times would affect our supcode sequences in sec .[ sec : altgofj ] and sec .[ sec : finiterisetime ] , respectively . in sec .[ sec : twoq ] we show that the length of the corrected two - qubit gate presented in ref . can be significantly reduced in duration by about 35% .we also show how the pulse parameters are minimally altered for a general charge noise model . following this, we discuss the noise - resistant manipulation of a multi - qubit system using single - qubit and two - qubit corrected gates presented in this paper and the buffering identity operation required to accomplish this task .we present randomized benchmarking results in sec .[ sec : benchmarking ] .finally we conclude in sec .[ sec : conclusion ] .the model hamiltonian for a singlet - triplet qubit can be expressed in terms of the pauli operators as }{2}\sigma_z.\ ] ] the computational bases are and . here , , where creates an electron with spin at the dot .any linear combinations of the and states can be represented as a unit vector pointing towards a specific point on the bloch sphere , with and its north and south poles , respectively .being able to perform arbitrary single qubit operations then amounts to being able to rotate such a unit vector the bloch vector from any point to any other point on the bloch sphere .this capability combined with an entangling two - qubit gate , such as the cnot gate , suffices to achieve universal quantum computation .geometrically , one needs the ability to rotate around two non - parallel axes of the bloch sphere in order to complete an arbitrary rotation . in this system ,rotations around the -axis are performed with a magnetic field gradient across the double - dot system , which in energy units reads . in practice the magnetic field gradient is generated either by dynamically polarizing the nuclear spins surrounding the double dots ( the `` overhauser field '' ) , or by depositing a permanent micromagnet nearby. in principle , the magnetic field gradient can be changed , and thus also the rotation rate around the -axis .unfortunately , changing it requires times much longer than the gate operation time .therefore we assume that in performing a given computational task , the magnetic field gradient , , is held constant throughout .rotations around the -axis are done by virtue of the exchange interaction , the energy level splitting between and .a nice feature of the quantum dot system is that the magnitude of can be controlled by the detuning , namely the tilt of the effective double - well confinement potential , which in turn can be done by simply changing the gate voltages . 
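to make the discussion concrete, the following numpy sketch (arbitrary units, hbar set to 1, illustrative numbers only) encodes the single-qubit hamiltonian described above, with the exchange coupling J multiplying the sigma_z term and the field gradient h multiplying the sigma_x term, and shows how one piecewise-constant control segment produces a rotation whose axis tilts from x (J = 0) toward z (J much larger than h).

import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def evolve(J, h, t):
    """propagator for one piecewise-constant segment, H = (J/2) sz + (h/2) sx."""
    H = 0.5 * J * sz + 0.5 * h * sx
    return expm(-1j * H * t)

h = 1.0                          # fixed magnetic-field gradient (arbitrary units)
U_x = evolve(0.0, h, np.pi / h)  # J = 0: a pi rotation about the x axis
U_tilted = evolve(5.0, h, 0.3)   # J >> h: rotation axis close to z

print(np.round(U_x, 3))          # approximately -i * sx, i.e. an x-axis pi pulse

since J cannot change sign, the accessible rotation axes lie in the xz-plane between +x and +z (up to the sign convention chosen for h), and a pulse sequence in this language is just a list of (J, t) segments, with each J set by the gate-voltage-controlled detuning.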
in other words , by feeding in a series of carefully designed pulses to the control gates , one then has fast , all - electrical control of the rotation rate around the -axis .however , due to its intrinsic energy level structure, is bounded from below by zero , and from above by a certain maximal value , beyond which the tunneling between quantum dots becomes large enough to alter the charge configuration of the electrons .( in certain extreme conditions such as very high magnetic fields , is always negative .this does not change our argument since can not change its sign . )we emphasize here that it is this unique constraint , \le j_{\rm max} ] has on its right , while ] , as enclosed by the dashed frame in fig .[ fig : sk1](b ) , and expand the two gates in such a way that their end parts , the gates , are leaning toward each other. then one may simply do the contraction \cdot[r(\hat{z},-2\phi_1)\otimes i]\cdot[r(\hat{z},\pi)\otimes r(\hat{z},\pi)]\notag\\ & \quad = r(\hat{z},-2\phi_1)\otimes i\label{eq : sk1furtheroptimization}\end{aligned}\ ] ] as shown in the dotted frame of fig .[ fig : sk1](c ) . for the sequence shown in fig .[ fig : sk1 ] , seven -rotations and six gates are required .if we assume that each gate requires roughly of rotation around the bloch sphere , the total length of the gate , in terms of the angle swept , is around , a 35% reduction from the bb1 sequence .the above discussions of the bb1 and sk1 sequences have assumed , which means that the over - rotation error is proportional to the angle rotated [ cf .eq . ] . in the general case, we may revise eq . with the over - rotation error acquiring a factor dependent on : \varepsilon\}\theta}{2}\right\}.\label{eq : bb1revisedxgate}\ ] ]we shall demonstrate how our method works for the sk1 sequence , but application to bb1 sequence is conceptually the same .corresponding to eq ., in eq .we need the function for two rotation angles , and .this means that in eq .may take two possibly different values .we denote and .similarly to eq . , we define and eq . must be correspondingly revised as \varepsilon\right\}\notag\\ & \quad+{\cal o}(\varepsilon^2),\label{eq : sk1improvedrevised}\end{aligned}\ ] ] and has to satisfy .\label{eq : sk1improvedphirevised}\ ] ] to make the first - order error vanish . therefore under a general scenario with an arbitrary dependence of on detuning ,our sequence will work perfectly as long as one chooses the value as in eq . .here we made no assumption about the precise form of : the only important thing is that has to be known and the values of and can be calculated .since we now have corrected single - qubit and two - qubit gates , arbitrary multi - qubit circuits immune to noise can be performed in a similar manner as shown in fig .[ fig : bb1 ] and fig .[ fig : sk1 ] .there remains one more component essential for implementing a multi - qubit circuit : a variable - time identity operation .in fact , the identity operation plays important roles in several parts of our pulse sequence , which we explain in detail below .first , as can be seen from fig .[ fig : bb1 ] and fig .[ fig : sk1 ] , when qubit is undergoing a single - qubit operation , for example a -rotation , qubit has to undergo a corrected identity operation .one can not simply do nothing on qubit because the constant presence of the overhauser field required to access the -axis rotation would lead the qubit states to stray undesirably , and the situation is made worse by the presence of noise . 
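the first-order cancellation that underlies the sk1-type correction can be checked in a few lines for the unconstrained textbook case (two full 2*pi rotations about in-plane axes at azimuths plus and minus phi_1, with cos(phi_1) = -theta/(4*pi), appended to the target rotation). this is only a reference implementation of the standard brown-harrow-chuang construction for a generic amplitude error; the sequences in this paper realize the same cancellation with composite pulses that respect the singlet-triplet constraints, and their pulse parameters are not reproduced here.

import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)

def rot(phi, angle, err=0.0):
    """rotation by angle*(1+err) about the in-plane axis at azimuth phi."""
    n = np.cos(phi) * sx + np.sin(phi) * sy
    return expm(-0.5j * angle * (1 + err) * n)

def infidelity(U, V):
    return 1 - abs(np.trace(U.conj().T @ V) / 2) ** 2

theta = np.pi / 2
phi1 = np.arccos(-theta / (4 * np.pi))
target = rot(0, theta)

for err in (1e-1, 1e-2, 1e-3):
    naive = rot(0, theta, err)
    sk1 = rot(phi1, 2 * np.pi, err) @ rot(-phi1, 2 * np.pi, err) @ rot(0, theta, err)
    print(err, infidelity(target, naive), infidelity(target, sk1))

the printout shows the naive infidelity falling as the square of the error while the corrected infidelity falls as the fourth power, i.e. the first-order term is gone. the same bookkeeping makes the problem with the idle qubit described above explicit: with the gradient field always on, "doing nothing" is itself a rotation whose first-order error is not canceled by anything.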
therefore it is necessary for qubit to undergo a corrected identity operation which has the same time duration as the operation performed on qubit , namely they must both end at the same time and proceed to the next operation .the same holds for multi - qubit gates : when several qubits as part of a qubit array are performing certain operations , all remaining qubits must perform identity operations , and these operations should all have the same time duration .if the operations on two qubits have different time durations ( say and ) , then one must supplement those operations with identity operations with time durations and to make them end at the same time , while the remaining qubits must also end their respective operations at time .this is necessary to keep the entire system immune to noise to the leading order .for example , in the ising gate as shown in fig .[ fig : bb1](a ) , both qubits and have to perform a swap operation , .they would automatically span the same time if the overhauser fields for qubits and are identical .however when the overhauser fields are different , the two operations would end at different times and one must `` buffer '' them by identities as explained above . secondly , the identity operation is also fundamental for two - qubit gates : it is an essential ingredient for performing [ cf .eq . and fig .[ fig : bb1](a ) ] . in the definition of ,the argument is equal to , which then directly translates to the resulting ising gate . here , is a composite pulse implementing an identity operation in the subspace of dots 2 and 3 , therefore generating for a certain value of amounts to doing an identity operation with matching a predetermined value .the above discussion implies that we need a family of corrected identity operations which can generate a broad range of time durations as well as values of . to accomplish this, we employ an additional degree of freedom in the discussion of sec .[ sec : onepiece ] , the exchange interaction .note that for each value of , one can always perform a corrected rotation around with a certain time and value of .when is changed between and , and would also change , covering certain ranges .we have found such an identity as a level-6 one , defined as since the sequence is symmetric , we only need four unknowns .we then choose , and use as the `` tunable knob '' : for each given value of , we solve for physical solutions of , , , , and record the time duration and .we have found that the pulse sequence of eq . generates identity operations with time duration between and . by duplicating this identity ,one can obtain corrected identities spanning any time for .these identity operations can also be used in the construction of the two - qubit gates discussed in previous sections .this pulse sequence generates values of between and ( corresponding to ) . 
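the equations defining the level-6 identity are not reproduced here; the sketch below only illustrates the generic numerical task behind them, namely tuning the segment durations of a piecewise-constant sequence until the propagator is an identity (up to a global phase) and its first-order sensitivity to a static gradient error vanishes. the ansatz, the objective function and the use of a black-box minimizer are all assumptions for illustration, the toy only targets the gradient (overhauser) error, and the actual constructions in the text solve the corresponding conditions with the level-6 form directly.

import numpy as np
from scipy.linalg import expm
from scipy.optimize import minimize

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
h = 1.0  # fixed field gradient (arbitrary units)

def propagator(J_list, t_list, dh=0.0):
    """piecewise-constant evolution under H = (J/2) sz + ((h+dh)/2) sx."""
    U = np.eye(2, dtype=complex)
    for J, t in zip(J_list, t_list):
        U = expm(-1j * (0.5 * J * sz + 0.5 * (h + dh) * sx) * t) @ U
    return U

def objective(t_params, J_list, delta=1e-4):
    t_list = np.abs(t_params)                 # durations must be non-negative
    U0 = propagator(J_list, t_list)
    dU = (propagator(J_list, t_list, delta) - propagator(J_list, t_list, -delta)) / (2 * delta)
    miss = 1 - abs(np.trace(U0) / 2) ** 2     # distance of U0 from the identity (up to phase)
    sens = np.linalg.norm(dU - (np.trace(U0.conj().T @ dU) / 2) * U0)  # first-order dh sensitivity
    return miss + sens ** 2

J_list = [0.0, 2.0, 0.0, 2.0, 0.0]            # made-up symmetric ansatz for the exchange values
result = minimize(objective, x0=[1.0, 0.5, 2.0, 0.5, 1.0], args=(J_list,), method="Nelder-Mead")
print(result.fun, np.abs(result.x))

whether a given ansatz admits an exact zero of such an objective depends on how many free parameters it carries, which is why the symmetric level-6 form quoted above is used in the text; it is that sequence, rather than this toy search, that is referred to below.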
in ref., we have used this sequence to generate and .

[ figure: finite-bandwidth approximation of noise via a sum of rtss. (a) thin red: power spectra of the individual rtss; thick black: their sum; dashed: the ideal spectrum. (b) a specific sample of such noise drawn from this distribution. ]

[ figure: fidelity vs number of gates for (a) dc noise and (b) noise. red/dashed: naive clifford implementation; blue/solid: supcode clifford implementation. points are from the rb simulation and curves are exponential fits; note that the exponential decay model does not fully describe the data in panel (b). ]

[ figure: fidelity decay constant vs noise amplitude for (a) dc noise and (b) noise. red/dashed: naive clifford implementation; blue/solid: supcode clifford implementation. points are from the rb simulation, curves are proportional to and . for dc noise, as expected, the lowest-order contribution of the noise is canceled by supcode, leaving a residual effect; for the ac noise, the improvement from supcode saturates at approximately a 10-fold reduction. vertical lines indicate the values used in fig. [fig:figdcexp]. ]

[ figure: asymptotic improvement ratio of supcode vs naive pulses, for noise. ]

the supcode sequences cancel lowest-order effects of static (dc) noise, but they will not function in the opposite limit of completely white noise. in reality, we expect the noise spectrum to be of an intermediate form, where recent experimental work puts for nuclear spin fluctuations and for charge noise. for such "colored" noise, the slow correlations mean that it is not necessarily possible to predict the fidelity of a quantum algorithm involving a sequence of gates from looking at the performance of the individual gates within that sequence. a powerful technique for investigating the fidelity of pulse sequences exists in the form of randomized benchmarking (rb) (a nice theoretical overview is given in ref.
) .the crucial insight of rb is that instead of investigating arbitrary gates , we may restrict ourselves to a finite subset , the clifford group .this means that we need only produce a finite set of corrected gates .also , we can efficiently calculate the effect of any arbitrary sequence of ideal clifford gates acting on a state .additionally , after any arbitrary sequence of clifford gates applied to the system ground state , only 1 additional , efficiently - calculable , clifford gate is required to rotate the resulting state into the standard - measurement basis .this last property is crucial for the experimental implementation of rb , allowing errors in the clifford gates to be determined independently of any errors in state preparation and measurement ( spam ) .we have investigated the theoretical performance of our single - qubit gates using a numerical simulation of randomized benchmarking .we denote the set of 24 single - qubit cliffords as , and similarly denote a pulse implementation of the group ( which may be a nave uncorrected implementation , or one of our corrected supcode composite pulse implementations as given explicitly in tables [ tab : numericonepiece ] to [ tab : genpulsepara ] and shown graphically in fig .[ fig : fignaiv ] ) as .we can calculate the expected fidelity of an implementation of a length- sequence of cliffords as where denotes the fidelity between unitaries and , and the bracket represents averaging over both the choice of random clifford elements distributed uniformly and independently over and also averaging over realizations of the charge and magnetic field noise , parameterized by an amplitude .we generate `` '' noise realizations via a weighted sum of random telegraph signals ( rtss) , resulting in noise that approximates a desired power spectrum over a wide range in , as shown in fig .[ fig : figoverf ] .we choose the low - frequency cutoff such that the slowest rts has time constant and the high - frequency cutoff from .one interpretation of a low - frequency cutoff is that it corresponds to the experimentalist making a calibration of and on a timescale of prior to a given benchmarking run . as such, our choice of minimizes the relative improvement due to supcode , since it corresponds to calibrating out and about as quickly as is reasonable to imagine : more usually will be on the order of minutes or hours ( ) leading to a much larger dc component of the noise and correspondingly better performance of supcode compared to nave pulses .our high - frequency cutoff is on the order of the shortest pulses of our sequences , such that all higher frequencies are effectively `` white '' : extending the cutoff towards higher frequencies should be equivalent to adding a white noise background that will affect the nave and corrected pulses similarly , depending only on their total duration . bothbecause the nave and corrected pulse sequences are built from piecewise - constant pulses and because the noise realizations are also piecewise - constant , the system evolution can be efficiently calculated as a product of matrix exponentials .this gives an efficient calculation of the expected fidelity .we proceed in the standard way for rb by fitting for differing to a decaying exponential function , where unlike in the case of experimental rb we are able to avoid fitting an overall scaling factor , due to absence of spam errors for this numerical simulation . 
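a minimal generator of such noise, written in the spirit of the description above, is sketched below. the log-spaced switching rates, the amplitude weighting proportional to gamma to the power (1-alpha)/2 (a standard recipe for approximating a power-law spectrum with lorentzians, valid for exponents between 0 and 2), and all cutoff and amplitude values are assumptions for illustration; the paper's specific choice of cutoffs and weights is only mirrored qualitatively.

import numpy as np

rng = np.random.default_rng(1)

def rts_sum_noise(n_steps, dt, alpha, n_tel=40, g_min=1e-3, g_max=1e2):
    """approximate power-law noise (exponent alpha, 0 < alpha < 2) on a time grid as a
    weighted sum of random telegraph signals with log-spaced switching rates g
    (the discretization assumes g*dt << 1)."""
    gammas = np.logspace(np.log10(g_min), np.log10(g_max), n_tel)
    weights = gammas ** ((1.0 - alpha) / 2.0)      # sets the spectral exponent
    noise = np.zeros(n_steps)
    for g, w in zip(gammas, weights):
        flips = rng.random(n_steps) < g * dt       # poisson switching, discretized per step
        traj = rng.choice([-1.0, 1.0]) * (-1.0) ** np.cumsum(flips)
        noise += w * traj
    return noise / np.std(noise)                   # unit rms; rescale to the desired amplitude

# e.g. a fluctuating gradient with 1% rms amplitude on a long time grid:
delta_h = 0.01 * rts_sum_noise(n_steps=100_000, dt=1e-4, alpha=1.0)

the correlations in such noise persist across many gates, rather than refreshing from gate to gate, which is why the decay of the sequence fidelity with the number of cliffords need not be a pure exponential.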
due to the non - markovian form of the noise , is not necessarily expected to have exactly exponential form , and indeed we do observe a deviation from the exponential in fig . [fig : figdcexp ] .nevertheless we use the fitted [ which can be related to an error - per - gate ( epg) ] to summarize the performance of a particular implementation of the clifford group under a particular noise distribution .when the strength of the noise is reduced , we find that , as expected , the epg of a supcode clifford implementation falls more steeply than for a nave implementation ( see fig .[ fig : figdjdh ] ) .for static noise , the for supcode is order in the noise strength , , compared to order for the nave implementation , allowing the supcode to perform arbitrarily better than the nave sequence , if the noise can be reduced sufficiently .however , for colored noise , the ratio of the nave to the supcode saturates to a finite value , , in the limit that the noise is reduced toward zero , .thus , there is a maximum improvement that is possible for supcode .we find that this ratio is a strong function of the exponent of the noise distribution , and over the range it fits well to an exponential function ( see fig .[ fig : figratio ] , where ) .( to study much outside this range , we would need to use a different process to generate the noise . )the specific value of the base varies , , when sweeping the low- and high - frequency cutoffs of the noise spectrum over a factor of 10 , but the sensitivity to remains .based on this empirical result and the experimental estimates of , it seems that supcode should perform extremely well against magnetic field noise , but have more limited success against charge noise .this assumes the experimental estimates of for these noises turn out to hold true , and comes with the caveat that a sum of rtss can not reproduce a noise spectrum with where a spin - diffusion model is more physically realistic .a future variant of supcode might trade a fraction of the performance against field noise for improved performance against charge noise .our numerical rb technique can be extended in a straightforward , if tedious , fashion to investigate 2-qubit sequences .we have only considered the case where the magnetic field noise and charge noise are of similar magnitude , have the same , and are generated independently : it will be interesting to relax some of these constraints .in particular it could be interesting to examine the effect of correlated noises , and it may be possible to construct families of pulse sequences that sacrifice some performance on general independent noise in favor of performance on correlated noise .another open question relates to the failure of the gaussian approximation for colored noise the noise is not only characterized by the power spectrum , but also by the microscopic structure of the environment .for example , rather than our weighted sum of rtss , modeling the case where the noise is due to a collection of two - level fluctuators with random switching rates , the same noise spectrum could arise from a single fluctuator with an undetermined switching rate .due to the failure of the gaussian approximation , these different environments may cause different behavior ( see , for example ref . 
) .our numerical technique can be extended to investigate the behavior of our gates under such different environments .in conclusion , we have shown that our protocol for performing robust quantum control of semiconductor spin qubits , supcode , can be extended to incorporate the numerous complications inherent in a real quantum device without compromising any of its error - suppressing capabilities .we have shown that this is true for both the full range of single - qubit operations as well as for an entangling two - qubit gate , demonstrating that noise - resistant universal quantum control can be achieved in actual experiments . in the case of the two - qubit gate , we have also explained how the gate operation time can be substantially reduced compared to earlier work , constituting a crucial step toward experimental implementation .in addition , we have provided a randomized benchmarking for our proposed gate control operations .below , we summarize our main findings regarding each of these points .the most important message of this work is that the applicability of supcode is not in any way diminished when various experimental complications are taken into account .one such complication stems from the dependence of the exchange coupling on the detuning .this dependence varies from sample to sample and has a large impact on the effect of charge noise on the qubit , so it is therefore important that schemes to combat charge noise such as supcode are able to incorporate this dependence into their functionality . in our earlier work on supcode , as well as in other theoretical and experimental works ,a simple model in which the exchange coupling is assumed to increase exponentially with the detuning was used .while this assumption can greatly simplify the theoretical analysis , it also raises the question of whether the efficacy of supcode depends on this assumption . here, we have explicitly shown that this is not the case , and that supcode remains equally effective for other models of the exchange - coupling dependence on detuning .in fact , for a general model , we have seen that one simply needs to adjust the form of the coupled nonlinear equations and then follow the standard procedure to solve them to obtain error - suppressing pulse sequences .we demonstrated this fact explicitly for two alternative choices of the exchange coupling function and showed that numerical solutions can still be found .furthermore , we have shown that these results hold for both the single and two - qubit gates .a second complication that arises in real experiments is that pulses can not be made perfectly square ; instead they necessarily contain a finite rise time during which the exchange coupling switches between zero and non - zero values . replacing the perfect square with a trapezoidal model for the pulses, we showed that a finite rise time would merely translate to rather small shifts in the pulse parameters relative to the values obtained for square pulses .we further showed that it is generally the case that one can start with the pulse parameters found assuming perfectly square pulses , and then optimize around these values to obtain noise - resistant sequences of pulses with finite rise times .the fact that finite rise times do not lead to a substantial change in the parameters means that such a search remains local in parameter space and is relatively easy to perform . 
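as a simple numerical check of the finite-rise-time statement, the sketch below (illustrative numbers only) compares the propagator of an ideal square exchange pulse with that of a trapezoidal pulse of equal area under J(t), evolving the same two-level hamiltonian used above.

import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
h = 1.0

def segment(J, h, t):
    return expm(-1j * (0.5 * J * sz + 0.5 * h * sx) * t)

def trapezoid(J_max, t_plateau, t_rise, n=200):
    """propagator for a trapezoidal J(t): linear ramp up, plateau, linear ramp down."""
    U = np.eye(2, dtype=complex)
    dt = t_rise / n
    for k in range(n):                      # ramp up
        U = segment(J_max * (k + 0.5) / n, h, dt) @ U
    U = segment(J_max, h, t_plateau) @ U    # plateau
    for k in range(n):                      # ramp down
        U = segment(J_max * (n - k - 0.5) / n, h, dt) @ U
    return U

J_max, t_sq, t_rise = 4.0, 1.0, 0.1
U_square = segment(J_max, h, t_sq)
U_trap = trapezoid(J_max, t_sq - t_rise, t_rise)   # same area under J(t) as the square pulse
print(1 - abs(np.trace(U_square.conj().T @ U_trap) / 2) ** 2)

the residual mismatch is small but nonzero, because the rotation axis tilts during the ramps; this is consistent with the statement above that the corrected parameters for pulses with finite rise times stay close to the square-pulse values and can be found by a local search.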
a third experimental reality that our earlier works on supcode did not account for is the fact that the noise is not truly static but varies on longer time scales. to assess the importance of this effect, we presented a complete randomized benchmarking analysis showing that it does place some limits on the performance of supcode. to make supcode experimentally feasible, it is not only important to account for the issues that arise in real physical systems, but also crucial to shorten the total gate operation times as much as possible. in particular, we showed that the length of the corrected two-qubit gate presented in ref. can be reduced by about 35%. this large reduction is made possible by replacing the bb1 sequence with a generalized sk1 sequence, together with some additional optimizations of the sequence. the pulse sequence could, in principle, be shortened further through extensive numerical searches, but given that the sequences proposed in this work are already short enough for laboratory implementation, we believe the time is ripe for a serious experimental investigation of supcode to test its efficiency in producing error-resistant one- and two-qubit gates for spin qubit operations in semiconductor quantum dot systems. quantum dot spin qubits, particularly because of their scalability, are among the primary candidates for the building blocks of a quantum computer. the noise-insensitive gates generated by supcode help fill the need for precise and robust quantum control in these qubits. in this paper, we have shown how one may apply supcode to produce noise-resistant single-qubit, two-qubit and multi-qubit operations. not only do supcode sequences respect all the fundamental experimental constraints associated with singlet-triplet qubits, but they also possess remarkable robustness and flexibility when realistic, sample-dependent factors are taken into account. we therefore believe that a judicious use of supcode is capable of bringing gate errors below the quantum error correction threshold, opening the possibility of fault-tolerant quantum computation in singlet-triplet semiconductor spin qubits.
we present a comprehensive theoretical treatment of supcode , a method for generating dynamically corrected quantum gate operations , which are immune to random noise in the environment , by using carefully designed sequences of soft pulses . supcode enables dynamical error suppression even when the control field is constrained to be positive and uniaxial , making it particularly suited to counteracting the effects of noise in systems subject to these constraints such as singlet - triplet qubits . we describe and explain in detail how to generate supcode pulse sequences for arbitrary single - qubit gates and provide several explicit examples of sequences that implement commonly used gates , including the single - qubit clifford gates . we develop sequences for noise - resistant two - qubit gates for two exchanged - coupled singlet - triplet qubits by cascading robust single - qubit gates , leading to a 35% reduction in gate time compared to previous works . this cascade approach can be scaled up to produce gates for an arbitrary - length spin qubit array , and is thus relevant to scalable quantum computing architectures . to more accurately describe real spin qubit experiments , we show how to design sequences that incorporate additional features and practical constraints such as sample - specific charge noise models and finite pulse rise times . we provide a detailed analysis based on randomized benchmarking to show how supcode gates perform under realistic noise and find a strong dependence of gate fidelity on the exponent , with best performance for . our supcode sequences can therefore be used to implement robust universal quantum computation while accommodating the fundamental constraints and experimental realities of singlet - triplet qubits .
solution growth emulates a way in which nature often produces single crystals of minerals , i.e. out of a liquid with a composition that is different from the product . in our laboratory , as well as in others ,many of the materials that are produced for investigations of their physical properties , are grown as single crystals by solution growth . often , when it comes to producing single crystals of a desired phase , insufficient phase - diagram data is available , and we must estimate which composition and temperature ranges may produce the desired phase as well as what crucible to use .then , based on the products of such experiments , it is decided if and how to alter the the initial composition , temperature range and crucible material .this process can , at times , require multiple iterations that consume both time and resources .recently , we have expanded the use of differential thermal analysis ( dta ) as part of this procedure .we have found that the use of dta can greatly facilitate the optimization of the growth , and can play a role in selecting the right crucible material . in this paper ,which is focussed on solution growth , the comparison of realistic simulations with experimental differential thermal analysis ( dta ) curves is used to obtain growth parameters without detailed knowledge of phase diagrams , whereas prior use of dta for crystal growth mainly involved detailed and lengthy phase - diagram studies ( see e.g. ref . ) . in the following , after describing the experimental techniques ,we will present simulations of dta signals for a hypothetical binary system .then we will discuss the dta - assisted growth of three compounds that may serve as examples : tbal , pr , and ymn .of these phases , mm - sized single crystals have not been produced before .the descriptions of solution growth in this paper are by no means complete .rather , with its focus is on the use of dta for determining solution - growth parameters , it should be considered an extension to earlier papers , by fisk and remeika , canfield and fisk and canfield and fisher .differential thermal analysis ( dta ) was performed in a perkinelmer pyris dta 7 differential thermal analyzer . as a process gas, we used zr - gettered ultra - high - purity ar .for crucibles we used al ( manufactured by perkinelmer ) , mgo ( custom - made by ozark technical ceramics , inc . ) , and ta ( home - made from small - diameter ta tubes ) .in order to protect the pt cups and the thermocouple of our instrument from possible ta diffusion , the ta crucibles were placed inside standard ceramic crucibles .the samples , with mass - 80 mg , were made by arc melting appropriate amounts of starting materials with typical ( elemental ) purities of 99.9 - 99.99% .an experiment consisted of two or three cycles at heating and cooling rates of typically 10 - 40 / min .the data from the first heating cycle was different from data from subsequent heating cycles .this may have occurred because the sample shape did not conform to the crucible so that it was not in intimate contact with the crucible walls until it had melted , or because a reaction with the crucible changed the composition of the sample . in a dta curve , besides the events described in sec . 3, there is also a baseline , not associated with the properties of the sample ( see e.g. ref .this baseline is also influenced by the rate at which the dta - unit ramped . 
for the growth experiments we used the following procedure .appropriate amounts of starting materials with typical ( elemental ) purities of 99.9 - 99.99% were selected , pre - alloyed by arc melting , if needed , and put into a crucible .the crucible material was the same as for the dta experiment , al ( coors ) , mgo ( ozark tech . ) , or ta ( homemade ) . for separating the grown crystals from the remaining liquid, a sieve was used . for the ceramic crucibles ,an inverted crucible catches the liquid , while a plug of quartz wool in the catch crucible acted as a sieve .the ta crucibles were ` 3-cap crucibles ' , with a built - in sieve .the crucibles were placed in a flat - bottom quartz ampoule with some quartz wool above and below the crucibles , to prevent possible cracking caused by differential thermal expansion between the quartz and the crucible and to provide cushioning during the decanting process .the quartz ampoule was evacuated and filled with a partial pressure of ar , so that the pressure in the ampoule was nearly atmospheric at the highest temperature , and was then sealed .the ampoule was then placed in a box furnace , and subject to a heat treatment determined from dta experiments ( see sec .after the final temperature had been reached , the ampoule was taken out of the furnace , inverted into the cup of a centrifuge and quickly spun to decant the liquid from the crystals . for initial characterization , we measured powder x - ray diffraction patterns on one or several finely ground crystals from the growth yield with a rigaku miniflex+ diffractometer employing cu - k radiation .the patterns were analyzed with rietica , using a le bail - type of refinement .for the evaluation of measured dta curves , we compared them to simulated , ideal , dta curves . for the simulations, we considered a hypothetical binary system , the - system ( fig . 1 ) .this system is composed of and that melt congruently at 700 and 1000 or 1400 , respectively , peritectic compounds and , at and , respectively , and a eutectic alloy of composition with a eutectic temperature of 600 . at its decomposition temperature , 800 , peritectic decomposes into a liquid with composition and solid , which in turn decomposes , at 900 , into a liquid with composition and solid . for the liquidus lines we chose second - order polynomials with the concentration of as a variablethis functional dependence can be considered realistic , and is easily evaluated . and and two peritectics , and .the dotted line is for melting at 1400 .the dashed vertical lines indicate compositions for which dta curves were calculated.,scaledwidth=60.0% ] a full discussion , see e.g. ref . 
, of dta is beyond the scope of this paper and we shall limit our consideration to a rather simple model .we will consider the dta as a black box , that produces a signal proportional to the temperature derivative of the enthalpy of the sample .we assume that the enthalpies of formation for the phases , , , and are equal , and that the specific heats for the solid phases and the liquid ( regardless of composition ) , are constant and equal .then the simulated dta signal is proportional to the temperature derivative of the fraction of solid to liquid , determined by the well - known lever law ( see e.g.ref .our dta measurements were usually carried out at heating and cooling rates of 10 to 40 / min , which does not usually result in an equilibrium distribution of phases .therefore , a cooling curve was calculated assuming that once a phase has solidified it does not dissolve back , or react with other solid phases. then the composition of the liquid follows the liquidus line and the final solid contains a nonequilibrium distribution of phases .the calculated heating curve represents the heating curve of this nonequilibrium distribution of phases under the assumption that the kinetics are such that also during heating the liquidus line is followed .although the assumptions are not fully realistic , the patterns recognizable in experimentally observed curves are reproduced well by our model . during a dta measurement ( even in the absence of undercooling ) , there is a difference between the events observed in the heating and cooling curves .the melting and solidification events have the same onset temperature , but show a width ( dependent on the heating or cooling rate ) . to include this effect in our simulations as an instrument response function, we assumed this temperature lag is temperature independent and can be described by the ` standard distribution ' , that is often used to describe waiting times . in fig .2a the calculated dta curves for an alloy of composition are presented ( see dashed vertical line in fig . 1 ) .events at both the eutectic ( 600 ) and liquidus ( 780 ) temperatures are clearly observed .the eutectic is a much more sharply defined event than the liquidus , because the observed thermal event is proportional to the temperature derivative of the fraction of solid - to - liquid . in fig . 2b ,the calculated dta curves for an alloy of composition are shown ( see dashed vertical line in fig . 1 ) .the eutectic ( 600 ) , and the peritectic decomposition of ( 800 ) are clearly observed in the calculations , whereas the liquidus ( 840 ) is less obvious . in fig .2c , the calculated dta curves for an alloy of composition are shown for the case that melts at 1000 ( see dashed vertical line in fig . 1 , crossing the lower - lying liquidus line ) .the eutectic ( 600 ) , the peritectic temperature of ( 800 ) , the peritectic temperature of ( 900 ) and the liquidus ( 956 ) are visible . 
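a minimal numerical version of this cooling-curve model can be written in a few lines. all numbers below (liquidus polynomial, compositions, peak widths) are made up for illustration, the "standard distribution" instrument response of the text is replaced by a simple gaussian smoothing, and only a single primary solidification plus the final eutectic are tracked, not the full multi-peritectic bookkeeping of figs. 2.

import numpy as np
from scipy.ndimage import gaussian_filter1d

# hypothetical primary solidification of a line compound at composition x_c out of a
# liquid of starting composition x0, with a parabolic liquidus ending at the eutectic (x_eu, T_eu)
x_c, x0, x_eu, T_eu = 0.60, 0.40, 0.20, 600.0
T_liq = lambda x: T_eu + 1500.0 * (x - x_eu) ** 2   # second-order-polynomial liquidus

x = np.linspace(x0, x_eu, 4000)           # liquid composition follows the liquidus on cooling
T = T_liq(x)                              # corresponding temperatures (decreasing)
f_liq = (x_c - x0) / (x_c - x)            # mass balance, assuming solid never redissolves
signal = -np.gradient(1.0 - f_liq, T)     # dta signal ~ solid fraction formed per degree of cooling

T_grid = np.linspace(T_eu - 50.0, T[0] + 20.0, 3000)
dta = np.interp(T_grid, T[::-1], signal[::-1], left=0.0, right=0.0)
# remaining liquid freezes at the eutectic; peak area equals the leftover liquid fraction
dta += f_liq[-1] / (1.5 * np.sqrt(2 * np.pi)) * np.exp(-0.5 * ((T_grid - T_eu) / 1.5) ** 2)
dta = gaussian_filter1d(dta, sigma=25)    # crude stand-in for the instrument response

print(T_liq(x0))                          # liquidus temperature of the starting composition

the resulting curve shows the qualitative features discussed above, a weak, spread-out liquidus signal followed by a sharp eutectic spike; the full model in the text additionally tracks the peritectic reactions, which is what produces the extra events in fig. 2c.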
note that in the simulations for figs .2c the peritectic temperatures for the -phase and the eutectic are visible because of the non - equilibrium nature of the model .such dta curves can be used , especially in the case that the underlying phase diagram is unknown , to determine the temperature range over which crystals can be grown .for example , the dta curves for a sample of composition show that the primary solidification for that composition can be grown by slowly cooling between 800 and 600 .in addition , to separate the crystals from the liquid , the sample should be decanted above 600 . after the growth ,the crystals will be identified as , e.g. by x - ray diffraction .crystals of the -phase form for and can be grown by slowly cooling between the liquidus , 850 , and 800 , and separated by decanting above 800 . however , choosing the composition is quite demanding : the maximum useful temperature range for growth is only about 30 - 50 .moreover , the weight fraction of crystals to liquid will be low for this composition .finally , crystals of the -phase form for and can be grown ( for the lower - lying liquidus line in fig .1 ) by cooling slowly between 970 , and separated by decanting above 900 . of the phase diagram in fig . 1, for ( * a * ) melting at 1000 and ( * b * ) at 1400 . the dotted line in ( * b * ) represents a cooling curve with undercooling by 50 taken into account.,scaledwidth=60.0% ] the simulations indicate that the event associated with the liquidus can be quite weak ( see figs .2a and b ) . as we mentioned above, the strength of the thermal signature for a liquidus is highly dependent on the slope of the liquidus curve , and can be very hard to detect experimentally . consider an identical phase diagram , except that melts at 1400 ( dotted in fig . 1 ) . as a resultthe liquidus curve above 900 is considerably steeper . in fig . 3 , modelled dta cooling curves for alloys from both hypothetical systemsare presented . as can be seen , the dta signature for the system with a lower - melting and a lower slope is much easier to detect , i.e. removing more solid from the liquid per yields a stronger dta signal . under the assumption that there is no time lag due to the dissolution of the last solid phase, the calculated heating curves can be considered realistic representations of actual measurements .however , experimental cooling curves may be shifted to lower temperatures due to undercooling .in many systems , the nucleation of the crystalline phase from the liquid can be slow .since the dta measurements were made at reasonably high cooling rates this may lead to significant undercooling .( note that in an actual growth experiment , typically at much slower cooling rates , there is probably less undercooling . ) in an undercooled melt , when nucleation does occur , there is a rapid growth of the solid phase . in a simulation for the composition with melting at 1400 , allowing undercooling by 50 , the solidification is very clearly visible in the dta curve ( fig .3b , dotted line ) .thus whereas we may not detect the true liquidus temperature we may be able to determine both an upper bound given by the maximum temperature to which the sample was heated , and a lower bound given by the solidification event for the liquidus . we can give the upper bound , because when not all crystals have been dissolved ( i.e. the liquidus temperature has not been reached ) these serve as perfect nucleation centers , preventing undercooling . 
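the effect of undercooling on the visibility of the liquidus can be quantified with the same toy system used in the sketch above (again with entirely made-up numbers): when nucleation is delayed, all the solid that should have formed between the true liquidus and the nucleation temperature appears essentially at once.

import numpy as np

x_c, x0, x_eu, T_eu = 0.60, 0.40, 0.20, 600.0
T_liq = lambda x: T_eu + 1500.0 * (x - x_eu) ** 2
x_of_T = lambda T: x_eu + np.sqrt((T - T_eu) / 1500.0)   # invert the liquidus (branch above x_eu)

dT_under = 50.0
T_nuc = T_liq(x0) - dT_under                    # nucleation temperature of the undercooled melt
x_nuc = x_of_T(T_nuc)                           # liquid composition prescribed by the liquidus there
f_sol_burst = 1.0 - (x_c - x0) / (x_c - x_nuc)  # solid fraction released in the nucleation burst
print(T_liq(x0), T_nuc, f_sol_burst)

for these toy numbers a few tens of percent of the sample solidifies in the burst, which is why an undercooled liquidus shows up as a strong, sharp exotherm in the cooling curve even when the equilibrium liquidus signal would be weak.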
note that we have frequently observed that peritectic temperatures and occasionally the eutectic temperature are also undercooled .the growth of tbal provides an example of the importance of identifying the right crucible material , and the role dta measurements can play in this selection .furthermore , it also provides an example of the efficient determination of a very narrow temperature range , by dta , over which crystals can be grown .the crystal structure and magnetic properties , that involve anisotropic exchange interactions , of polycrystalline tbal were published some time ago . for an estimate of a composition for the initial melt for solution growth ,we considered the binary phase diagram of the tb - al system . to our knowledge , this has only been predicted , from the systematics of different rare - earth - al binary systems , but not verified experimentally in detail .especially the tb - rich side ( above 56% tb ) is uncertain .this predicted phase diagram includes 2 eutectics , at 3.5% ( 642 ) and at 77% tb ( 903 ) , and 5 compounds : tbal , tbal , tbal , tb , and tb .of these , tbal melts congruently , whereas tbal and tbal form peritectically .even though tb and tb are indicated to form peritectically as well , congruent melting can not be excluded due to a lack of experimental data . since the phase diagram around this composition was indicated to be uncertain , a dta experiment , rather than a slow and expensive growth served as an inexpensive and quick check , using only small amounts of starting materials .the primary - solidification line for tbal reportedly lies between 57% and 67% tb and 1079 - 986 .therefore , for the experiments described below , we chose to use an alloy with composition tb . in our experience , an alloy with more than 10 - 15% rare earth can not be reliably held in an al crucible , because of thermite - type reactions .furthermore , we considered it possible that the tb alloy , being also rich in al , would attack ta , the other crucible material we regularly used in the past .mgo , on the other hand , could be stable enough to use . to test the crucibles , we performed dta experiments in the three available crucible materials : al , ta , and mgo . in order to prevent direct contact of unreacted elements with the crucibles , we alloyed samples of approximately 40 mg by arc - melting prior to the dta experiments .dta heating and cooling cycles were performed three times between 1240 and 600 , at heating rates of 40 / min and cooling rates of 10 / min .the curves obtained from the first two cycles , between 800 and the highest temperature reached , are shown in fig .al in ( a ) al,(b ) ta , and ( c ) mgo , measured upon heating with a 40 / min rate and cooling with a 10 / min rate.,scaledwidth=60.0% ] upon heating , in both the first and the second heating cycle , an endothermic event occurred near 970 in all three cases .besides a shift of the baseline , probably because the contact between the sample and the thermocouple changed after melting , the mgo and ta - crucible curves show no significant difference between the first and second heating curves , and no clear events at higher temperatures .the two al-crucible heating curves , however , are very different : a clear exothermic bump between 1070 and 1120 is visible in the first heating curve , but not in the second one . furthermore , the endothermic event near 970 is less pronounced in the second heating curve , and followed by a pronounced second endothermic event that peaks near 1100 . 
upon cooling ,( note that the non - linear but smooth behavior at elevated temperatures in the cooling curves are due to stabilization of the furnace - ramp rate , rather than due to thermal events ) the mgo and ta - crucible curves show no significant difference between the first and the second cooling cycle .in the al-crucible curves , there are several differences between the first and second cooling curves .particularly , near 1210 , a weak exothermic event may be observed in the first cooling curve , but not in the second cooling curve . in both first and second cooling cycles , pronounced exothermic peaks were observed near 1060 and 960 .note that the peak near 1060 is _ only _ observed in the al-crucible curves .these results are consistent with a thermite - type reaction of the alloy tb with the al crucible , reducing the amount of tb in the metallic liquid , and increasing the amount of al .the exothermic bump in the first heating cycle , between 1080 and 1120 ( fig .4a ) then is associated with this reaction .the event that occurred upon cooling at 1060 , is likely due to the peritectic temperature of tbal , predicted to be at 1079 .furthermore , the highest - temperature event in the first cooling cycle was no longer present in the second cooling cycle , maybe due to the sample becoming so rich in al that it could no longer fully melt ( the nearest compound richer in al , al tb reportedly melts at 1514 ) .since both ta and mgo showed no evidence of a reaction , both are good candidates as a crucible material .( although we worried that the tb alloy would attack ta , there is no clear indication of such an attack from the dta results . ) for the growth experiment , we chose mgo as the crucible material .al measured upon heating with a 10 / min rate and cooling with a 10 / min rate.,scaledwidth=60.0% ] the heat treatment for the growth experiment is determined from the details of the dta curves . in fig . 5 ,the relevant parts of a third heating and cooling curve ( measured at 10 / min for both heating and cooling ) are shown for an mgo - crucible experiment . upon heating , a sharp endothermic peakis observed with an onset temperature of about 965 .this is followed by a weak step near 1015 - 1035 . upon cooling ,two events are observed .there is a weak exothermic bump with an onset temperature of about 1010 , followed by a strong exothermic peak with an onset temperature of about 960 .note the resemblance between these experimental results and the simulation in fig . 2a , and also note that there is some undercooling of the liquidus , - 30 , that makes the liquidus much more visible upon cooling , as in the simulation in fig .the measurements suggest that crystals of the primarily solidifying compound for the composition tb can be grown by cooling slowly between and , followed by a decant above .taking into account the possibility of undercooling , this heat treatment is very demanding ( limited temperature range for growth ) , and may be influenced by differences in thermometry between the growth furnace and the dta . for the growth experiment , an arc - melted button ( g )was first heated to 1200 , kept at this temperature for 2 h for homogenization , then it was cooled in h to 1020 , near the observed liquidus in the heating curve of the dta experiment .finally , it was cooled in 10 h to 975 , which is near the temperature of the onset of the endothermic event in the heating curve of the dta experiment . at this temperature ,the sample was decanted . 
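the heat treatment just described can be written down as a short furnace program. the helper below is purely illustrative (the function, its segment structure and the ramp times are assumptions, not taken from the paper); it simply turns a dta-derived growth window into controller set points.

# hypothetical furnace-program builder: turn a dta-derived growth window into
# (target temperature, duration) segments for a box-furnace controller
def growth_program(T_max, T_liquidus, T_decant, soak_h=2.0, fast_h=3.0, slow_h=10.0):
    segments = [
        ("heat and homogenize", T_max, soak_h),
        ("fast cool to just above the liquidus", T_liquidus + 5, fast_h),
        ("slow cool through the primary field", T_decant, slow_h),
        ("decant (spin off the remaining liquid)", T_decant, 0.0),
    ]
    for label, T, hours in segments:
        print(f"{label:45s} -> {T:7.1f} C over {hours:4.1f} h")
    return segments

growth_program(T_max=1200, T_liquidus=1015, T_decant=975)

the program mirrors the treatment used for the tb alloy: homogenize above the liquidus, step quickly to just above it, cool slowly through the primary field and decant just above the next thermal event.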
in the growth crucible ,large crystals of several mm were found ( see the photograph in fig .powder - x - ray diffraction identifies the crystals as tbal , with space group _ pbcm _ , and lattice parameters a=5.85(3 ) , b=11.4(3 ) , and c=5.63(3 ) , in agreement with the reported crystal structure .[ cols="^,^,^ " , ] crystals of tbal could probably have been obtained from the alloy tb based upon the available binary phase diagram alone , cooling slowly below the reported liquidus temperature ( 1060 ) and decanting above the reported peritectic temperature for tb ( 986 ) .however , we have often observed that binary phase diagrams with rare - earth components need revision ( see e.g. ) , and the tb - al binary phase diagram was already known to be uncertain .for the dta simulations , we have assumed that the solidification of an alloy follows the liquidus . under these assumptions, it seems from our dta experiments on tb , that the solidification of tbal is immediately followed by a last , eutectic , solidification at a temperature substantially higher than the eutectic temperature reported ( 903 ) .if the alloy indeed followed the liquidus , the published binary phase diagram requires re - examination .however , further experiments , which are outside the scope of this paper , would be required to clarify this .we are currently investigating the ternary pr - ni - si system , including the liquidus surface of the pr - rich corner . approximately 20 ternary intermetallic compounds have been reported for the pr - ni - si system , in accordance with what was reported for ce - ni - si , and nd - ni - si .one of those is the compound pr . for many pr - ni - si compounds ,the range of compositions that form the primary - solidification surface is very narrow , sometimes down to a few percent .therefore , it is useful to quickly and systematically test different compositions via dta analysis . in this section ,we show that our dta and growth experiments , on an an alloy of composition pr , resulted in the identification and optimized growth of the compound that solidifies primarily : pr .a rod of several grams of composition pr was prepared by arc - melting and drop - casting . since the rod had cooled down very quickly , we considered it homogeneous on the scale of the dta samples . therefore , a piece of this rod , of the typical size for a dta experiment or a growth , was considered representative for the whole rod .two pieces were taken , about 40 mg for the dta experiment , and about 2.25 g for the growth experiment .for the experiments , we used ta crucibles as experience showed us that , whereas ta may be expected to be attacked by ni and si , these alloys ( with about 50% of pr ) do not appear to attack the crucible .ni measured upon heating and cooling with a 10 / min rate.,scaledwidth=60.0% ] for the dta experiment , the sample was heated and cooled three times between and at 10 / min .after the first heating cycle , during which the sample settled in the crucible , the measurements were reproducible . in fig .7 , the relevant parts of the third - cycle curves are shown .the curves are substantially noisier than those shown for the tbal experiment .this may be because the thermal contact between the ta crucible and its ceramic liner varied , or because the mass of the ta crucible was substantially greater than that of the sample inside it . in spite of the noise, we were able to extract information , useful for crystal growth , from the experiment . 
in the cooling curve , four sharp exothermic eventscan be observed , with onset temperatures of about 1025 , 950 , 730 , and 685 .this experimental cooling curve can roughly be compared to the simulation of the hypothetical binary alloy , see fig . 2c . in that simulation , upon cooling , four events occur , the liquidus , two events associated with peritectics , and the eutectic . the experimental liquidus is probably sharpened by undercooling , see fig . 3 . in the experimental heating curve ,only one event with an onset temperature of about 690 , can be observed clearly . at higher temperatures, there may be an event near 990 ( a weak peak ) , and between - 1090 ( a broad step ) .note that , whereas we do not fully understand the differences between these heating and cooling curves ( particularly in the lower temperature range ) , for crystal growth we only need to know the temperature region over which only the primary solidification grows , i.e. between the liquidus temperature and the temperature where secondary phases may start to grow . the heat treatment for crystal growth might be proposed based upon the dta cooling curve alone , but we used the heating curve for some guidance . for estimation of the liquidus , we examined at the highest - temperature events . in the cooling curve , the shape of the peak near 1025 appears consistent with undercooling ( c.f .3b ) . in the heating curve, the broad step between - 1090 may be associated with that peak and appears similar to the simulated liquidus in fig .2b . in the cooling curve, the second - highest temperature event appears as a sharp peak near 940 suggesting a decanting temperature higher than 940 ( but lower than 1025 ) .the weak peak in the heating curve at 990 may suggest that secondary phases can start to grow below that temperature . since this temperature falls between the two highest - temperature events in the cooling curve , we considered it safe to decant at a temperature slightly above 990 . therefore , we used the following heat treatment for the growth experiment .the sample was heated to 1190 in 5 h , and allowed to equilibrate for 2 h. after that , it was cooled to 1100 in 2 h. after this it was cooled to 1000 in 50 h , after which the sample was decanted . in the crucible , large mm - sized blocky crystals were found . a photograph of one of the crystals is displayed in fig .powder - x - ray diffraction identified the crystals as pr , with space group is _ pnma _ , and lattice parameters a=23.32(3 ) , b=4.302(3 ) , and c=13.84(3 ) , in agreement with the reported crystal structure .the compound is only know by its crystal structure , therefore we are currently investigating its low - temperature thermodynamic and transport properties .the combined dta and crystal - growth experiments demonstrate that composition pr is part of the primary phase field of the compound pr .the growth of ymn provides an example of how dta can help make finding the right composition for solution growth very efficient . the ternary intermetallic compoundymn has been known at least since 1971 , but has only been synthesized in polycrystalline form .recently , it was reported that ymn has a narrow pseudogap in the spin excitation spectrum .this prompted us to try to grow single crystals .the published partial triangulated ternary isotherm for the y - mn - al system at 600 suggests that a growth may be attempted from an al - rich liquid .therefore , we tried to grow it using a composition approximately halfway between al and ymn : y . 
we chose an al crucible , since in our experience such an alloy will likely not attack it , and we started with pieces of the elements . the sample was cooled in 15 h between and 950 , then decanted .the mm - sized prismatic crystals in the growth crucible were identified as ymn , which was not reported in the ternary isotherm , but does appear in literature . as a next step, we alloyed by arc - melting a total of about 1 g of the starting elements in the ymn ratio .although we had some losses due to evaporation of mn , about 2% , after arc melting , an x - ray powder diffraction experiment indicated the sample to be mainly ymn .then we performed dta , in an al crucible , on a mg piece .up to 1350 there was only one noticeable thermal event in both the heating and cooling curves , at temperatures between 1220 - 1240 .this , combined with the diffraction , is an indication that ymn is congruently melting at 1220 - 1240 .a congruently melting material is part of its own primary solidification surface .therefore , we decided to try a composition nearby ymn .the melting temperature of the desired phase is higher than the reported liquidus temperature for an alloy of composition mn , , therefore it seemed possible to grow ymn out of a composition close to the binary mn - al - line .mn measured upon heating and cooling with a 20 / min rate.,scaledwidth=60.0% ] we decided to try an alloy of composition y using al crucibles .we alloyed a few grams by arc - melting , and again had losses ( about 3% ) due to mn evaporation .in order to make a small sample representative for the whole , the arc - melted button was coarsely ground and the powder thoroughly mixed .about 20 mg of the powder was used for a dta experiment .the results for heating and cooling at 20 / min , between 1000 and 1250 , are shown in fig .8 . in both the heating and the cooling curve , two very pronounced events are visible . in the heating curve ,peaks are seen near 1110 and 1200 , while in the cooling curve events occur at onset temperatures of and .a weak event is observed at lower temperatures of 1030 - 1050 in both the heating and cooling curve .for the determination of growth parameters only the two highest - temperature events are important , therefore we did not measure down to still lower temperatures .these experimental curves can be compared to the simulated curves in fig . 2c .the dta experiment suggests that crystals can be grown and separated from the remaining melt by cooling an alloy of composition y slowly below ( above the highest - temperature peak in the heating curve ) and decanting above ( above the second - highest - temperature peak in the heating curve ) . for a growth experiment, we started with appropriate amounts of pieces of the elements .the sample was first heated to 1250 for equilibration , and then cooled in 1 h to 1200 , below which is was cooled to 1160 in 60 h. 
at this temperature the sample was decanted .well - separated prismatic crystals were found in the growth crucible .a photograph of two of those crystals is presented in fig .powder - x - ray diffraction identified the crystals as ymn , with space group _ i4mmm _ , and lattice parameters a=8.86(1 ) , c=5.12(1 ) , in agreement with the reported crystal structure .the examples presented here address how dta can help in determining growth parameters for solution growth , without detailed knowledge of phase diagrams .as shown with the example of tbal , dta can sometimes help in identifying the right crucible material , and , moreover , it can help pinpointing a very narrow temperature range over which to grow crystals .the example of the growth of pr shows that the combination of dta and growth experiments can help in determining the primarily solidifying compound out of a given metallic liquid , while limiting the growth to that of the primary .finally , the example of ymn shows how dta can help in the quick determination of the primary phase field for a compound .extensions of the method can be sought in including other crucible materials for dta , e.g. bn . or , in order to reduce problems with elements that have high vapor pressures , in sealing the ta - dta crucibles .however , problems still exist with elements that have a high vapor pressure and can not be held in ta .as was already discussed by fisk and remeika , one of the great advantages of solution growth is the economy of the method . by including dta in the procedure to optimize growth, it can be economized even further , especially in terms of material costs . furthermore , although dta generally shows that `` something occurs at a certain temperature '' , the combination of a dta experiment with a growth experiment , if successful , can lead to definite conclusions regarding the primarily solidifying compound out of a metallic liquid of certain composition .for this it is not necessary to establish a full phase diagram , a dta experiment on a sample of the composition of interest is sufficient .the authors wish to thank j. fredericks , s. chen , b. k. cho , m. huang , d. wu , t. a. lograsso , s. l. budko , g. lapertot for their kind help in discussing and preparing samples .the financial support from the us department of energy is gratefully acknowledged : ames laboratory is operated for the us department of energy by iowa state university under contract no .this work was supported by the director for energy research , office of basic energy sciences .
to obtain single crystals by solution growth , an exposed primary solidification surface in the appropriate , but often unknown , equilibrium alloy phase diagram is required . furthermore , an appropriate crucible material is needed , necessary to hold the molten alloy during growth , without being attacked by it . recently , we have used the comparison of realistic simulations with experimental differential thermal analysis ( dta ) curves to address both these problems . we have found : 1 ) complex dta curves can be interpreted to determine an appropriate heat treatment and starting composition for solution growth , without having to determine the underlying phase diagrams in detail . 2 ) dta can facilitate identification of appropriate crucible materials . dta can thus be used to make the procedure to obtain single crystals of a desired phase by solution growth more efficient . we will use some of the systems for which we have recently obtained single - crystalline samples using the combination of dta and solution growth as examples . these systems are tbal , pr , and ymn . a1 . thermal analysis , solidification , a2 . growth from high - temperature solutions , single crystal growth , b1 . rare - earth compounds
the rise of m2 m communications introduced necessity for efficient random access mechanisms , motivating new research approaches that put novel views on the traditional solutions , such as the slotted aloha ( sa ) .one of the promising directions in this respect is the use of successive interference cancellation ( sic ) in the slotted aloha framework , which enables to exploit collisions and thereby boost the throughput .the use of sic in framed sa was originally proposed in .a systematic treatment of the concept was presented in the seminal paper by liva , where the analogies between sic in framed sa and iterative belief - propagation ( bp ) decoding or erasure - correcting codes were identified .this opened the possibility to use the theory and tools of codes - on - graphs , laying the foundations of the coded random access .the ideas of coded random access in a setting with framed sa were further developed in , where the main message is that the use of sic , coupled with a proper access strategy , grants a throughput that tends to 1 asymptotically i.e. , when number of users .the application of coded random access in the original slotted aloha framework , where the users perform access on a slot basis , rather than on a frame basis , was proposed in , introducing the approach of frameless aloha .the operation of frameless aloha is inspired by rateless codes : the slots are `` added '' to the contention process until the base station decides to terminate the contention ; the contention termination criterion can be based , for example , on throughput maximization . in was shown that a simple version of the scheme , where the users access the slots with probability that is uniform both over users and slots , leads to throughput values that are the highest in the reported literature for practical number of users in the range ] , , where , and =\beta ] , , ; 2 ) is the channel coefficient of ; and ( 3 ) is the noise .the received powers $ ] , , are assumed to be independent and identically distributed ( iid ) random variables , that depend on the transmit powers , the statistical distribution of the distance between the user and the bs and the stochastic phenomena on the wireless link ; also , their values do not change during the contention period .the bs stores all received slots ( i.e. , the received composite signals ) and after each received slot , performs sic until there are no new degree one slots , or higher degree slots that are exploitable due to the capture effect , as explained in section [ sec : csa ] .the above process is repeated until the bs terminates the contention period by sending a new beacon ; this effectively and a posteriori determines the value of . for the sake of simplicitywe assume a perfect sic , i.e. 
, there is no residual interference power remaining after sic is performed .finally , we introduce the criterion for contention termination .denote by : instantaneous fraction of resolved users and instantaneous throughput respectively , where is number of resolved users ( the term + 1 in the denominator of takes into account the slot used for the beacon transmission ) .the termination criterion consists of two conditions : the contention is terminated _ either _ when , _ or _when , where and are the respective thresholds , chosen such that the expected throughput is maximized .here we briefly treat the case when packets arrive through a frequency non - selective rayleigh fading channel , a scenario for which the results are presented section [ sec : results ] .we define as a random variable that represents received signal - to - noise ratio ( snr ) of user , i.e. , , and assume that , , are independent and identically exponentially distributed with mean : a user transmission is captured in slot of degree when its sinr is larger than a capture ratio , i.e. , when : where represents the user s snr , , , are snrs of the interfering users , and .the condition can be rewritten as : where and .the above model implies that , in the case when there are no interfering transmission ( i.e. , when ) , a user transmission is recovered only if , i.e. , the received snr has to be sufficiently high . in other words , a user transmission may not be always recovered from a degree one slot , as it is assumed in the simplified sic scenario , outlined in section [ sec : csa ] .also , it is straightforward to show that , with rayleigh fading , the probability that a user transmission is successfully recovered from a singleton slot is : = e^{-\frac{b}{\bar{\gamma}}}. \label{eq : deg_one}\end{aligned}\ ] ]and - or tree evaluation is a standard tool used for derivation and assessment of the asymptotic performance of erasure - correcting codes that when decoded by the iterative bp algorithm . as such, it can be applied for derivation of the asymptotic performance ( i.e. , when ) of coded slotted aloha , as presented in .we proceed with a brief overview of the and - or evaluation , to the extent necessary for a seamless incorporation of the capture effect . for the general introduction on the and - or tree evaluation, we refer the interested reader to . and - or tree evaluationis concerned with the evaluation of the probabilities that the left - side nodes ( i.e , user transmissions ) remain unknown ( i.e. , unrecovered ) through the iterations of the sic algorithm , see fig .[ fig : graph ] .this is modeled through the exchange of messages flowing between user and slot nodes and carrying the information about the state of the corresponding transmission : not recovered / recovered , which is described with a message value 0/1 , respectively . in each iteration , the probability that a message value is 0/1 is updated according to the following rules .consider a user , who has transmitted replicas of the packet , see fig .[ fig : probs]a ) , and assume that the probability that the incoming message value is 0 is , i.e. , a replica has not been resolved with probability . the probability that the value of the outgoing message is is : i.e., the value of a outgoing message on a edge is 0 only if all incoming messages on the other edges are 0 ( the `` or '' update rule ) . averaging over yields : where denotes the iteration , , . 
] and is the probability that a message stems from a node of degree : where is probability that a user performed transmissions .note that is the same as in the standard and - or tree evaluation framework ; the impact of the capture effect is expressed in the message updates performed in slots . consider a slot whose degree is , see fig .[ fig : probs]b ) .the probability that the value of an outgoing message is 1 is : where expresses probability that a user transmission is recovered in a slot of degree , when out of interfering transmissions have been canceled due to sic ( i.e. , out of interfering transmissions remain ) .more specifically , for represents the contribution of the capture effect that may happen on the yet `` unknown '' messages and which may lead to the user recovery .the combinatorial expression is due to the symmetry of the problem setting : the received snrs of all interfering transmissions are iid random variables and the occurrence of the `` appropriate '' capture effect on any out of interfering transmissions is a priori equally likely .we note that was introduced in ; also , setting and for yields the standard `` and '' update rule , when it is assumed that there are no noise and no captures . since perfect sic is assumed , it is easy to show that : where stems from , and where is the probability of the event , defined in the following way : at least captures occurred in the slot of degree , among these is the capture related to the user transmission which corresponds to the outgoing message , and this capture occurred as the -th capture .it is easy to verify that events , , are mutually exclusive ; we proceed by characterizing the probabilities .denote by the received snr of the user transmission corresponding to the outgoing message , and by , , the received snrs of the interfering users . due to the symmetry of the problem setting and the perfect sic ,the probability of is : = \frac{t!}{(t - h + 1 ) ! }\cdot \nonumber \\ & \pr [ x_{i_1 } \geq b ' y_{i_1 } \geq ... \geq x_{i_{h-1 } } \geq b ' y_{i_{h-1 } } \geq x \geq b ' y_{i_h } ] , \end{aligned}\ ] ] where and , , see . in other words ,any ordering of received snrs such that is the -th largest is a priory likely , which is reflected in . at this pointwe note that the computation of in general case is a challenge in its own right , which can be solved using the evaluation method presented in , and refer the interested reader to this work for the details .also , for it can be shown that in the rayleigh fading case : = \frac{e^{-\frac{b}{\bar{\gamma}}}}{(1+b)^{t } } , \end{aligned}\ ] ] where we used the fact that is a random variable with gamma distribution .averaging over slot degrees leads to : where denotes the iteration , , and is the probability that a message stems from a slot of degree : where is the probability that slot degree is . using and specializes and for the frameless aloha : finally, the asymptotic probability of user resolution and the expected throughput are computed as : we conclude by noting that and show the _ expected _ asymptotic performance as functions of the statistical descriptions both of the graph and the capture effect , and as such they are not related to the frameless stopping criterion , as introduced in section [ sec : model ] ., b ) maximum expected throughput , and c ) the corresponding optimal expected slot degree , as functions of the ratio of number of slots and number of users , for capture threshold and ratio of the capture threshold to average snr . ] fig . 
[ fig : and - or ] shows the asymptotic performance obtained by the and - or tree evaluation : a ) the maximum probability of user resolution , b ) the corresponding maximum expected throughput , and c ) the optimum average slot degree for which and are achieved , as functions of .the results are presented for and ratio of the capture threshold to the average snr . as expected, the increase in adversely affects the throughput .also , for fixed , increase in lowers the throughput ; this could be expected as well , as lower implies : 1 ) higher probability of recovering a user transmission from a degree one slot , see and , and 2 ) more chance for the alignment of the received snrs such that capture occurs in higher - degree slots , cf .( [ eq : capture ] ) . comparing the results for shows that higher throughput is obtained for higher average slot degrees , i.e. , the higher slot - access probabilities , see , which could be expected as well . finally , fig .[ fig : and - or ] shows that the ratio for which the maximum throughput occurs , decreases as this maximum increases .this is due to the behavior of , i.e. , the sooner starts to rise , the higher the , see fig .[ fig : and - or]a ) and . on the other hand , the behavior of reflects the fact that , for more pronounced capture effect ( i.e. , for lower and lower ) and adequate , more users are resolved sooner .the values of the overall maximum throughput and the corresponding , and from fig .[ fig : and - or ] are listed in table [ tab : asymptotic ] , and compared to a scenario where the impacts of both capture effect and noise are neglected . obviously , when the impact of noise is low , i.e. , low , see , capture effect provides for substantially higher throughputs compared to the scenario without capture effect . in the case with a considerable impact of the noise ,i.e. , when , the probability of recovering transmission from a degree one slot is only , see ; this adversely impacts the asymptotically achievable throughput , as shown in table [ tab : asymptotic ] .we conclude by noting that the optimal is substantially higher in scenarios with capture effect , i.e. , capture effect favors more collisions per slot .
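as a practical illustration of the and - or tree evaluation described above , the following python sketch ( our own simplification , not the authors ' code ) iterates the asymptotic recursion for frameless aloha with poisson degree approximations ; it keeps only the noise - limited recovery of singleton slots ( probability exp( -b / mean snr ) in the rayleigh case ) and omits the higher - order capture terms , so it underestimates the throughput achievable with capture :

import numpy as np

def frameless_aloha_asymptotic(m_over_n, mean_slot_degree, b=2.0, snr_db=10.0, iters=2000):
    """Simplified and-or tree iteration for frameless ALOHA (Poisson degrees).
    Only the noise-limited recovery of singleton slots is modelled; the
    higher-order capture terms derived in the text are omitted."""
    mean_snr = 10 ** (snr_db / 10.0)            # average SNR, linear scale
    p_singleton = np.exp(-b / mean_snr)         # P(recover a singleton slot), Rayleigh fading
    big_g = mean_slot_degree                    # expected slot degree (N * p_a)
    small_g = big_g * m_over_n                  # expected user degree (M * p_a)
    x = 1.0                                     # P(edge message = "unresolved")
    for _ in range(iters):
        q = p_singleton * np.exp(-big_g * x)    # slot -> user: all interferers cancelled
        x = np.exp(-small_g * q)                # user -> slot: no replica resolved elsewhere
    p_resolved = 1.0 - x                        # asymptotic probability of user resolution
    throughput = p_resolved / m_over_n          # resolved users per slot
    return p_resolved, throughput

for ratio in (0.6, 0.8, 1.0, 1.2):
    pr, th = frameless_aloha_asymptotic(ratio, mean_slot_degree=3.0)
    print(f"M/N = {ratio:.1f}: P_R = {pr:.3f}, throughput = {th:.3f}")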
the analogies between successive interference cancellation ( sic ) in slotted aloha framework and iterative belief - propagation erasure - decoding , established recently , enabled the application of the erasure - coding theory and tools to design random access schemes . this approach leads to throughput substantially higher than the one offered by the traditional slotted aloha . in the simplest setting , sic progresses when a successful decoding occurs for a single user transmission . in this paper we consider a more general setting of a channel with capture and explore how such physical model affects the design of the coded random access protocol . specifically , we assess the impact of capture effect in rayleigh fading scenario on the design of sic - enabled slotted aloha schemes . we provide analytical treatment of frameless aloha , which is a special case of sic - enabled aloha scheme . we demonstrate both through analytical and simulation results that the capture effect can be very beneficial in terms of achieved throughput .
the potential speedup of quantum algorithms is demonstrated by shor s factoring algorithm , which is exponentially faster than any known classical algorithm .several other quantum algorithms , which are more efficient than their classical counterparts were introduced .factorization is of special interest due to its role in current methods of cryptography .although the origin of the speed - up offered by quantum algorithms is not fully understood , there are indications that quantum entanglement plays a crucial role .in particular , it was shown that quantum algorithms that do not create entanglement can be simulated efficiently on a classical computer .it is therefore of interest to quantify the entanglement produced by quantum algorithms and examine its correlation with their efficiency .this requires to develop entanglement measures for the quantum states of multiple qubits that appear in quantum algorithms .recently , the groverian measure of entanglement was introduced and used for the evaluation of entanglement in certain pure quantum states of multiple qubits . using computer simulations of the evolution of quantum states during the operation of a quantum algorithm, one can obtain the time evolution of the entanglement .such analysis was performed for grover s search algorithm with various initial states and different choices of the marked states .it was shown that grover s iterations generate highly entangled states in intermediate stages of the quantum search process , even if the initial state and the target state are product states . in this paperwe analyze the quantum states that are created during the operation of shor s factoring algorithm .the entanglement in these states is evaluated using the groverian measure .it is found that the entanglement is generated during the pre - processing stage .when the quantum fourier transform ( qft ) is applied to the resulting states , their entanglement remains unchanged .this feature is unique to periodic quantum states , such as those that result from the pre - processing stage of shor s algorithm .when other states , such as product states or random states are fed into the qft , their entanglement does change .another interesting feature is that the entanglement is found to be correlated with the speedup achieved by the quantum factoring algorithm compared to classical algorithms .this means that the cases where no entanglement is created are those in which classical factoring is efficient .the paper is organized as follows . in sec .[ sec : algorithm ] we briefly review shor s factoring algorithm , the qft algorithm , and the quantum circuit used to perform it . in sec .[ sec : groverian ] we describe the groverian entanglement measure and the numerical method in which it is calculated . in sec .[ sec : ent ] we use the groverian measure to evaluate the entanglement created by shor s algorithm .the results are discussed in sec .[ sec : discussion ] and summarized in sec .[ sec : summary ] .shor s algorithm factorizes a given non - prime integer , namely , it finds integers and , such that their product .the algorithm consists of three parts : ( a ) pre - processing stage , in which the quantum register is prepared using classical algorithms and quantum parallelism ; ( b ) quantum fourier transform , which is applied on the output state of the previous stage ; ( c ) measurement of the register and post - processing using classical algorithms . 
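to make the role of the two classical parts concrete , here is a minimal python sketch ( our own illustration , not the algorithm as run on a quantum computer ) : the order of modulo is found by brute force , standing in for the quantum period finding of stages ( a ) and ( b ) , and stage ( c ) then converts the order into factors :

from math import gcd

def find_order(x, n):
    # smallest r > 0 with x**r = 1 (mod n); brute force stands in for
    # the quantum period finding of stages (a) and (b)
    r, y = 1, x % n
    while y != 1:
        y = (y * x) % n
        r += 1
    return r

def classical_postprocessing(n, x):
    # stage (c): turn the order r into non-trivial factors of n, if possible
    if gcd(x, n) != 1:
        return (gcd(x, n),)        # lucky case: x already shares a factor with n
    r = find_order(x, n)
    if r % 2 == 1:
        return None                # odd order: restart with another x
    y = pow(x, r // 2, n)
    if y == n - 1:
        return None                # trivial square root: restart with another x
    return gcd(y - 1, n), gcd(y + 1, n)

print(classical_postprocessing(15, 7))   # (3, 5): the order of 7 modulo 15 is 4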
given an integer to be factorized , choose any integer , and find the integer that satisfies prepare a register of qubits ( later referred to as the main register ) in the equal superposition state next , use quantum operations to calculate for all the indices , , of the basis states above , and store the results in an auxiliary register , giving rise to the joint state this essentially completes the pre - processing stage .however , in order to present the next stage of the algorithm more clearly , it is helpful to measure the auxiliary register in the computational basis .suppose that the result of the measurement is a state , where and is the smallest positive integer that gives the value .the order of modulus is defined as an integer that satisfies .the equality is thus satisfied for any integer . from eq .( [ eq : repitition ] ) it follows that the measurement will select from the main register all values of , where is the largest integer which is smaller than .the state of the register after the measurement is therefore -qubit register .the operator is the hadamard gate .the operators , and are the controlled - phase gates , where , 2 and 3 , respectively ., width=321 ] the quantum fourier transform is given by where the quantum circuit of the qft is shown in fig .[ fig:1 ] . to obtain the transformation in eq .( [ eq : qft1 ] ) , the qubits of register in the input ( and throughout the quantum circuit ) are indexed by , from bottom to top .the output of the circuit is stored in register , whose qubits are indexed from top to bottom .we define the operator to be the hadamard gate applied to qubit , and the operator ( where ) to be a controlled phase operator , which applies a phase of only if both qubits and are .we also define for , where we follow the standard notation for quantum operators , namely , those on the right hand side operate first . with these definitionsthe sequence of quantum operations that perform the qft is given by the number of one - qubit and two - qubit gates required in the quantum circuit which performs qft is polynomial in the size of the register . in the simple case in which divides , namely , one obtains where is defined in eq .( [ eq : phi_l ] ) .the resulting state is a superposition of all basis states with indices which are products of . if is not a divisor , namely , is not an integer , eq .( [ eq : qft ] ) should be modified such that the large amplitude states are those which correspond to integers adjacent to , .our choice of in eq .( [ eq:<q < ] ) ensures that with high probability the measurement will yield only states whose indices are the nearest integers to .the third part of the algorithm starts with a measurement of the register .it yields an integer approximation , , of one of the values , .thus , is approximately an integer multiple of . here , again , our choice of in eq .( [ eq:<q < ] ) ensures that in most cases there exist another integer which satisfies . as a result a continued fraction expansion of it is possible to efficiently find and .there is only one such approximation which satisfies eq .( [ eq : approx ] ) for .thus , the correct value of is obtained .if is even we can define which satisfies from eq .( [ eq : x^2 - 1 ] ) we obtain that and are candidates for having a common divisor with . using euclid s greatest common divisor ( gcd ) algorithm , this common divisoris found and the factoring process is completed .consider a quantum algorithm , given by the unitary operator , applied to the equal superposition state . 
for a certain class of quantum algorithms ,the final , or target state is a computational basis state .this state stores the correct result of the calculation , which can be extracted by measurement .not all quantum algorithms can be expressed in this form , because the final state , before the measurement is done , may be a superposition state . however , in the case of grover s search algorithm with a single marked state , this description applies .consider the case in which such algorithm , , is applied to an arbitrary pure state .the probability of success is defined as the probability that the measurement will still give the state .this probability is given by .the success probability can be used to evaluate the entanglement of the state . to this end , before the algorithm is applied , one applies a local unitary operator , , on each qubit .these operators are chosen such that the success probability of the algorithm will be maximized .the maximal success probability is using eq .( [ eq : m = ae ] ) the success probability can be expressed by this can be re - written as where the s are single - qubit states .( [ eq : pmax ] ) means that for a given initial state , the maximal success probability of such algorithm , , is equal to the maximal overlap of with any product state .the groverian measure of entanglement is defined by for the case of pure states , for which is defined , it is closely related to an entanglement measure introduced in refs . and was shown to be an entanglement monotone .the latter measure is defined for both pure and mixed states .it can be interpreted as the distance between the given state and the nearest separable state and expressed in terms of the fidelity of the two states .based on these results , it was shown that satisfies : ( a ) , with equality only when is a product state ; ( b ) can not be increased using local operations and classical communication ( locc ) . therefore , is an entanglement monotone for pure states .a related result was obtained in ref . , where it was shown that the evolution of the quantum state during the iteration of grover s algorithm corresponds to the shortest path in hilbert space using a suitable metric .consider a pure quantum state of qubits in order to find we form a convenient representation of the tensor product states used in eq .( [ eq : pmax ] ) .the state of each qubit in the product state is given by .\label{eq : e_j}\ ] ] let us denote where , is the most significant bit in the binary representation of .the overlap between and the product state is given by .it can then be written as the phases only introduce a global phase which can be ignored . the groverian entanglement measure for the state is given by namely , the dimension of the parameter space in which the maximization is obtained is . however , the number of terms summed up in the calculation of increases exponentially with the number of qubits . therefore , to make the calculation of feasibleone should minimize the number of evaluations of . the commonly used steepest descent algorithm, requires a large number of evaluations of and is thus computationally inefficient .here we accelerate the calculation by performing the maximization analytically and separately for a single pair of and . during each maximization step , all the other parameters are held fixed . in the maximizationwe have a function of the form where and depend on the other parameters .the maximization of vs. 
and leads to where and . using this method , the number of evaluations of is significantly reduced . to find the global maximum , we then perform several rounds of maximization over all the parameters . trying different initial conditions we find that the convergence to the global maximum is fast and no other local maxima are detected . shor s factoring algorithm includes a pre - processing stage followed by qft . here we analyze the quantum states generated in each of these stages and evaluate their entanglement using the groverian measure . here we evaluate the time evolution of the groverian entanglement during the qft process , shown in fig . [ fig:1 ] . the groverian measure is evaluated after each operation of the operator . the operators are local and do not change the entanglement . we first perform this analysis for general quantum states and then focus on the specific quantum states that appear in the factoring algorithm . to examine the effect of qft on the groverian entanglement we construct an ensemble of random product states as well as random states of qubits . the state of each qubit in the random product states is described by eq . ( [ eq : e_j ] ) where and are chosen randomly . the random states are drawn from an isotropic distribution in the -dimensional hilbert space . these states turn out to be highly entangled . in fig . [ fig:2 ] we present the time evolution of the groverian measure during the processing of qft on three random product states as well as on a random state of nine qubits . for the random product states one observes that during most time steps the entanglement remains unchanged . most of the variation takes place at specific times , common to all the different states . clearly , the entanglement is generated by the controlled phase operators . the large variations in are found to take place when is small , namely when is applied on pairs of adjacent qubits . the groverian measure during the operation of qft on a highly entangled random state is also shown in fig . [ fig:2 ] . it exhibits only small variations with no obvious regularity . using .
the dotted line ( with zero entanglement )shows the factorization of using .the dashed line shows the factorization of using .,width=321 ] in fig .[ fig:3 ] we present the time evolution of the groverian measure during qft , when it is applied on states obtained from the pre - processing stage of shor s factoring algorithm .the different lines correspond to the factorization process of different numbers .surprisingly , for all numbers that we have tested , the entanglement was essentially unchanged throughout the process , as implied by the horizontal lines .this is in contrast to the behavior observed when qft is applied to general quantum states .a special property of the states generated by the pre - processing is that they are periodic .this motivated us to examine the time evolution of the groverian measure during qft of general periodic states .the state ( up to normalization factor ) is a periodic state of qubits , with period and shift .the summation is over all integers such that , where .it was found that the groverian measure essentially does not change during the qft process of such states , and that the changes which do occur vanish exponentially with the number of qubits .the value of the groverian measure for these states depends almost solely on the odd part of the period .more precisely , for a periodic state with period ( where is odd ) , we obtain .this is easy to explain for states with a period , which are known to be tensor product states . for these states , thus the correct result of is obtained . for general periodic states we do not have an analytical derivation of the expression for .having found that the qft stage of shor s algorithm does not alter the entanglement of states created by the pre - processing stage , it is clear that all the entanglement is produced during pre - processing .we have evaluated this entanglement generated during the factoring process of all the integers in the range . to factorize an integer , , one has to choose another integer . in our analysis, we examined all possible choices within this range , and for each of them we applied the pre - processing stage as described in sec .[ sec : algorithm ] . at the end of the pre - processing stagewe evaluated the groverian measure of the resulting state of the main register , following a measurement of the auxiliary register . in fig .[ fig:4 ] we present the groverian measure for the states obtained after pre - processing vs. for .each dot represents the groverian measure after pre - processing for the integer and for a specific choice of .the solid line represents the function .we observe that all the dots are below this line , which resembles the upper bound of the groverian measure , namely that for any state of qubits . and.,width=321 ] additionally , there are many values of and choices of for which the groverian measure is , namely the factoring process does not involve any entanglement . 
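for small registers , the groverian measure values discussed here can be reproduced with a simple alternating maximization over product states . the following python sketch is our own illustration of such a procedure ( it parametrizes each qubit by a normalized two - component vector rather than by the angles used above , which is equivalent up to an irrelevant phase ) and is not the code used to produce the figures :

import numpy as np
from functools import reduce

def groverian_measure(psi, n, sweeps=100, seed=0):
    """Estimate G = sqrt(1 - Pmax) of an n-qubit pure state given as a vector
    of 2**n amplitudes, by alternating single-qubit maximization of the overlap
    with a product state (illustrative sketch only)."""
    rng = np.random.default_rng(seed)
    psi = np.asarray(psi, dtype=complex).reshape([2] * n)
    e = [v / np.linalg.norm(v) for v in
         rng.normal(size=(n, 2)) + 1j * rng.normal(size=(n, 2))]
    for _ in range(sweeps):
        for j in range(n):
            # contract every qubit except j with its current trial state
            others = [e[k].conj() for k in range(n) if k != j]
            rest = reduce(np.kron, others) if others else np.ones(1)
            v = np.moveaxis(psi, j, 0).reshape(2, -1) @ rest
            e[j] = v / np.linalg.norm(v)         # optimal state of qubit j
    product = reduce(np.kron, e)
    pmax = abs(np.vdot(product, psi.reshape(-1))) ** 2
    return np.sqrt(max(0.0, 1.0 - pmax))

# example: the 3-qubit ghz state has Pmax = 1/2, hence G = 1/sqrt(2) ~ 0.707
ghz = np.zeros(8); ghz[0] = ghz[7] = 1 / np.sqrt(2)
print(round(groverian_measure(ghz, 3), 3))

we now return to the ( , ) pairs for which the computed groverian measure vanishes .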
for these particular choices , it should thus be possible to perform the factoring of efficiently using a classical algorithm . we find that for some of the pairs of and which produce no entanglement , gcd , thus a divisor of can be easily found classically . the rest of these pairs are found to satisfy , for some integer , which means that gcd or gcd are divisors of , which can be easily found by classical algorithms . we thus find that in cases in which no entanglement is produced by the quantum algorithm , it offers no speedup compared to classical algorithms . this is consistent with the assumption that the entanglement generated by a quantum algorithm is correlated with the speedup it provides . it is found that the states prepared by the pre - processing stage of shor s algorithm , like all periodic states , exhibit the property that their groverian entanglement does not change throughout the qft stage . one may take the view that the groverian entanglement somehow represents the amount of quantum information present in a quantum state . this is rather like the von neumann entropy . taking this view , our result may seem natural because the information needed to perform the factoring is already present after the pre - processing stage . the qft only rearranges the information such that it can be extracted by measurement . it is found that the groverian measure of the states generated by shor s algorithm is lower than that of random states , which are almost maximally entangled , with . yet , the maximal entanglement created by the algorithm exhibits the same functional behaviour , where is replaced by . considering the fact that shor s algorithm is exponentially faster than its known classical counterparts , it is expected to use all the entanglement available . thus , our result provides further indication that classical algorithms are unlikely to perform factoring in polynomial time . unlike shor s algorithm , grover s search algorithm is only polynomially more efficient than its classical counterparts . grover s algorithm also creates entanglement , which is bounded by a constant lower than unity . a different approach to the analysis of the entanglement generated by shor s factoring algorithm was presented in ref . , where the bi - partite entanglement between the main register and the auxiliary register was evaluated during both the pre - processing and qft stages , using the negativity as an entanglement measure . it was found that the entanglement is primarily generated during the pre - processing stage , in agreement with our results . the quantum states created during the operation of shor s factoring algorithm have been analyzed and the entanglement in these states was evaluated using the groverian measure . it was found that the entanglement is generated during the pre - processing stage and remains unchanged during the qft stage . it was shown that the latter feature is unique to periodic states , such as those obtained from the pre - processing stage , while qft does affect the entanglement of general quantum states . another interesting feature is that the entanglement is found to be correlated with the speedup achieved by the quantum algorithm compared to classical algorithms . this means that the cases where no entanglement is created are those in which classical factoring is efficient .
the intermediate quantum states of multiple qubits , generated during the operation of shor s factoring algorithm are analyzed . their entanglement is evaluated using the groverian measure . it is found that the entanglement is generated during the pre - processing stage of the algorithm and remains nearly constant during the quantum fourier transform stage . the entanglement is found to be correlated with the speedup achieved by the quantum algorithm compared to classical algorithms .
as a model of intrinsically hard combinatorial satisfaction problems , the random -satisfiability ( -sat ) problem has been extensively studied in the last twenty years . recent major progress includes mean - field predictions and rigorous bounds on the satisfiability threshold , mean - field predictions on various structural transitions in the solution space of a random -sat formula , and new efficient stochastic algorithms . statistical physics theory predicted that the solution space of a satisfiable random -sat formula ( ) divides into exponentially many gibbs states as the constraint density is beyond a clustering ( dynamic ) transition point . for , it was proved that the solution space gibbs states are extensively separated from each other , but whether the same picture holds for is still an open question . recent empirical studies revealed that for random -sat formulas with the clustering transition imposes no fundamental restriction on the performance of some stochastic search algorithms such as walksat and chainsat . for example , the chainsat process is able to find solutions for a random -sat formula with constraint density well beyond the clustering transition value , although during the search process the number of unsatisfied constraints of the formula never increases . the most efficient stochastic algorithm for large random -sat formulas is survey propagation which , for the random -sat problem , is able to find solutions at constraint densities extremely close to the satisfiability threshold . to understand the high efficiency of these and other stochastic search algorithms , it is desirable to have more detailed knowledge on the energy landscape and the solution space structure of the random -sat problem ( see , e.g. , refs . for some very recent efforts ) . such knowledge will also be very helpful for designing new stochastic search algorithms . a random -sat formula contains variables and clauses , with ( ) being the constraint density . each variable has a spin , and each clause prohibits randomly chosen variables from taking a randomly specified spin configuration of the possible ones . the configurations that satisfy a formula form its solution space . the hamming distance of two solutions is defined as where if and otherwise . two solutions and are regarded as nearest neighbors if they differ on just one variable , i.e. , . the organization of the solution space can be studied graphically by representing each solution as a vertex and connecting every pair of unit - distance solutions by an edge . then the solution space can be regarded as a collection of solution clusters , each of which is a connected component of the solution space in its graphical representation . how many solution clusters does this astronomically huge graph contain ? what is the size distribution of these clusters ? what are the distributions of the minimal , the mean , and the maximal distances between two clusters ? how are the solutions in each cluster organized ? these questions are fundamental to a complete understanding of the random -sat problem , but they are very challenging and so far only a few rigorous mathematical answers have been achieved . mean - field statistical physics theory is able to give a prediction on the number of solution gibbs states of a given size , but whether there is a strict one - to - one correspondence between solution gibbs states , which are defined according to statistical correlations of the solution space , and solution clusters is not yet completely clear . following our previous work ref .
in this paper we focus on one of the structural aspects of the solution space , namely the organization of a single connected component ( a solution cluster ) .the internal structure of a solution cluster is explored by unbiased and biased random walk processes .we examine mainly solution clusters reached by a very slow belief propagation decimation algorithm , but it appears that the qualitative results are the same for solution clusters reached by various other algorithms .we can verify that the studied solution clusters correspond to the single ( statistically relevant ) gibbs state of the given formulas if the constraint density is lower than , the clustering transition point where exponentially many gibbs states emerge .we find that the solutions in such a giant cluster already aggregate into many different communities when is still much lower than . in a solution cluster ,solutions of the same community are more densely connected with each other than with the other solutions , and the mean hamming distance of solutions belonging to the same community is shorter than the mean solution - solution hamming distance of the whole cluster .the entropy density of a solution community is calculated by the replica - symmetric cavity method of statistical physics and is found to be different for different communities of the same cluster .when the constraint density exceeds , we have the same observation that non - trivial community structures are present in the single solution clusters reached by several stochastic search algorithms .these numerical results are interpreted in terms of the following proposed evolution picture of the solution space of a random -sat formula : ( 1 ) as the number of constraints of the formula increases and becomes close to from below , many relatively densely connected solution communities emerge in the solution spaces and these communities are linked to each other by various inter - community edges ; ( 2 ) the intra- and inter - community connection patterns both evolve with , and finally the single giant component of the solution space breaks into many clusters of various sizes ( probably at ) , each of which contains a set of communities ; ( 3 ) as further increases , the intra- and inter - community connection patterns in each solution cluster keep evolving , leading to the breaking of a solution cluster into sub - clusters .the following section describes the numerical methods used in this paper .the simulation results on random -sat and -sat formulas are reported in sec .[ sec:3sat ] and sec .[ sec:4sat ] , respectively .we conclude this work in sec .[ sec : conclusion ] .a solution cluster contains a huge number of solutions , with being the entropy density .a solution in this cluster is connected to other solutions , .empirically we found that the degrees of the solutions in a cluster are narrowly distributed with a mean much less than ( see fig .[ fig : degreeprofile ] for an example ) .therefore the solutions of a cluster can be regarded as almost equally important in terms of connectivity .however , the connection pattern of the solution cluster can be highly heterogeneous .solutions of a cluster may form different communities such that the edge density of a community is much larger that of the whole cluster ( fig .[ fig : communityschematic ] ( upper panel ) gives a schematic picture , where darker circles indicate solution communities with higher edge densities ) .the communities may even further organize into super - communities to form a hierarchical 
structure .if a random walker is following the edges of such a community - rich solution cluster , it will be trapped in different communities most of the time and only will spend a very small fraction of its time traveling between different communities .if solutions are sampled by the random walker at equal time interval , the sampled solutions contains useful information about the community structure of the solution cluster at a resolution level that depends on .( color online ) the degree distribution of solutions from a solution cluster .the three curves correspond to three random -sat formulas of variables and constraint density . to get a degree distribution , solutions are _uniformly sampled _ from a solution cluster by a markov chain process .suppose at time the solution is being visited .a variable is chosen with probability from the whole set of variables .if this variable can be flipped without violating any constraint of the formula , it is flipped and the solution is updated to at time , otherwise the old solution is kept at time .we set and sample solutions at an equal time interval of . ]( color online ) ( upper panel ) schematic view of solution communities in a single solution cluster .the mean edge density in the whole cluster ( the largest circle ) is less than the edge densities of individual communities ( small circles ) . a path of single - spin flips linking solutions and of two different communities is shown by the black coiled trajectory .( lower panel ) entropy density as a function of the overlap with a given reference solution .if is a concave function ( case i ) , a rectilinear line with slope can only be tangent to at one point ; if is not concave , then a rectilinear line with certain slop may be tangent to at two points and . in the interval of , may be monotonic [ case ii(a ) ] or be non - monotonic [ case ii(b ) ] ., scaledwidth=90.0% ] two slightly different random walk processes are used in this paper to explore the structure of single solution clusters .the first one is spinflip of ref . , which prefers to flip newly discovered unfrozen variables . starting from an initial solution denoted as at time , the spinflip process explores a solution cluster by jumping between nearest - neighboring solutions .the set of discovered unfrozen ( flippable ) variables is initially empty .suppose the walker resides on at time .the set of flippable variables in this solution is divided into two sub - sets : set contains all the variables that have already been flipped at least once , set contains the remaining flippable variables . in the time interval the spin of a randomly chosen variable in set ( if ) or set ( if otherwise ) is flipped . 
at time the walker is then in a nearest - neighbor of , and the updated set of unfrozen variables is .a unit time of spinflip corresponds to flips .as newly discovered unfrozen variables are flipped by spinflip with priority , the random walker probably can escape from the local region of the initial solution quicker than an unbiased random walker .however we have checked that this slight bias is not at all significant to the simulation results .there are two reasons :first the random walk process occurs in a high - dimensional space , and second , after a brief transient time the set of newly discovered unfrozen variables becomes empty most of the time .we also use the unbiased random walk process in some of the simulations .the unbiased random walk differs from spinflip in that at each elementary solution update , a variable is uniformly randomly chosen from the set of flippable variables and flipped .as we just mentioned , spinflip converges to the unbiased random walk as the simulation time becomes large enough ( e.g. , ) .a number of solutions are sampled with equal time interval during the random walk process for clustering analysis .the overlap between any two sampled solutions and is defined by we can obtain an overlap histogram from the sampled solutions .a hierarchical minimum - variance clustering analysis is performed on these sampled solutions ( the same method was used by hartmann and co - workers to study the ground state - spaces of some optimization problems ) .initially each solution is regarded as a group , and the distance between two groups is just the hamming distance . at each step of the clustering , two groups and that have the smallest distanceare merged into a single group .the distance between and another group is calculated by where denotes the number of solutions in group .a dendrogram of groups is obtained from this clustering analysis , and the matrix of hamming distances of the sampled solutions is drawn with the solutions being ordered according to this dendrogram . we should emphasize that , by the above - mentioned random walk processes , solutions of a cluster are sampled with probability proportional to its connectivity rather than with equal probability .we can also sample solutions uniformly random by a slight change of the random walk process as explained in the caption of fig .[ fig : degreeprofile ] .we have checked that the results of this paper are not qualitatively changed by this different sampling method .this may not be surprising : for one hand , the degrees of different solutions of the same cluster are very close to each other , and for the other hand , if there is many communities in a solution cluster , their trapping effects will be felt by different random walk processes . for a solution community ,some of the important statistical quantities are the entropy density , the mean overlap between two solutions of the community , and the mean overlap between a solution of the community and a solution outside of the community .the entropy density is defined by where is the number of solutions in the community .following ref . 
we use the replica - symmetric cavity method of statistical physics to evaluate the values of these quantities .the replica - symmetric cavity method is equivalent to the belief propagation ( bp ) method of computer science .suppose is a sampled solution from a solution community .with respect to this solution , a partition function is defined as = { \sum\limits_{\vec{\sigma}}}^\prime \exp\bigl [ n x q(\vec{\sigma}^1 , \vec{\sigma})\bigr ] \ , \ ] ] where means that only the solutions of the formula are summed . when the reweighting parameter , all solutions contribute equally to the partition function , which is just equal to the total number of solutions . at the other limit of , only those solutions with contribute significantly to . at a given value of , eq .( [ eq : partitionfunction ] ) can be expressed as \ , \ ] ] where is the total number of solutions whose overlap value with is equal to . is referred to as the entropy density of solutions at overlap value .when is large , the summation of eq .( [ eq : partitionfunction-2 ] ) is contributed almost completely by the terms with the maximum value of the function . at a given , the relevant overlap value to therefore determined by and the corresponding entropy density at this value is related to by a legendre transform .the following bp iteration scheme is used to determine the overlap and entropy density as a function of .the function is then obtained from these two data sets by eliminating .when applying the replica - symmetric cavity method to a single random -sat formula , first one needs to define two cavity quantities and : \ , \label{eq : eta - i - a } \\u_{a\rightarrow i } & = & \ln\bigl [ 1-\prod\limits_{j\in \partial a \backslash i } p_{j\rightarrow a}(-j_a^j ) \bigr ] \ .\label{eq : u - a - i}\end{aligned}\ ] ] in the above two equations , is the ( cavity ) probability of variable to take the spin value if it is not constrained by constraint ; denotes the set of variables that are involved in constraint , and is identical to except that variable is missing ; is the satisfying spin value of variable for constraint ( i.e. , ( respectively ) if ( ) satisfies ) .the cavity quantity is the log - likelihood of constraint being satisfied by variables other than variable .the following bp iteration equations can be written down for and ( see , e.g. , refs . ) : \ .\label{eq : u - a - i - iter}\end{aligned}\ ] ] in eq .( [ eq : eta - i - a - iter ] ) , denotes the set of constraints in which is involved , is the a subset of with being removed .after a fixed - point solution is obtained at a given value of for the set of cavity quantities , the overlap is then calculated by the following equation where is the average value of at the reweighting parameter , and is equal to the entropy density is expressed as where \ , \\\delta s_a & = & \ln\bigl [ 1-\prod\limits_{i\in \partial a } \frac{1+j_a^i + ( 1-j_a^i ) e^{\eta_{i\rightarrow a } } } { 2(1+e^{\eta_{i\rightarrow a } } ) } \bigr ] \ .\end{aligned}\ ] ] at a given value of , one can also estimate the mean overlap between two solutions of the solution space by as we will demonstrate in the next two sections , when the reweighting parameter is equal to certain critical values , the calculated entropy density and overlap may change discontinuously with . 
furthermore , at certain range of the parameter , the bp iteration equations may have two fixed - points with different values and values .such behaviors are caused by the non - concavity of the entropy density function .as shown in fig .[ fig : communityschematic ] ( lower panel ) , if is non - concave , then at certain critical value , eq .( [ eq : qvariation ] ) has two solutions at and , with .when is slightly larger than , we have . therefore the partition function is dominantly contributed by solutions of overlap value , and the total number of these solutions is , while the solutions with overlap form a state .when is slightly smaller than , then and the reverse is true : is contributed predominantly by solutions with overlap , and the total number of these solutions is , and the solutions at overlap form a metastable state . at ,the two fixed - point solutions of the bp iteration equations correspond to these two maximal points of .the non - concavity of at certain range of overlap values is a strong indication that the solution space has non - trivial structures , which might be the existence of many solution clusters , or the existence of many solution communities in the solution cluster of , or both .the reweighting parameter in eq .( [ eq : partitionfunction ] ) can be regarded as an external field which biases the spin of each variable to . at the limit of , for the non - concave cases shown in ii(a ) and ii(b ) of fig .[ fig : communityschematic ] , a real first - order phase - transition will occur at between an energy - favored phase with overlap and an entropy - favored phase with overlap .( color online ) simulation results for a random -sat formula with variables and constraint density : number of discovered unfrozen variables versus the evolution time of spinflip ( upper ) ; the overlap histogram of sampled solutions and the matrix of hamming distances of these solutions for this formula ( lower left ) and for its shuffled version ( lower right ) . ]( color online ) the entropy density at a given overlap value with a reference solution .( a ) results for two solutions s- and s- of the lower left system of fig .[ fig:3sat4p25n1 m ] .( b ) results for two solutions s- and s- of the lower right system of fig .[ fig:3sat4p25n1 m ] .the inset of ( a ) and ( b ) shows the overlap value as a function of the reweighting parameter of the replica - symmetric cavity method ., title="fig : " ] ( color online ) the entropy density at a given overlap value with a reference solution .( a ) results for two solutions s- and s- of the lower left system of fig .[ fig:3sat4p25n1 m ] .( b ) results for two solutions s- and s- of the lower right system of fig .[ fig:3sat4p25n1 m ] .the inset of ( a ) and ( b ) shows the overlap value as a function of the reweighting parameter of the replica - symmetric cavity method ., title="fig : " ] as a first example , fig .[ fig:3sat4p25n1 m ] shows the simulation results for a random -sat formula of .the constraint density of this formula is very close to the satisfiability threshold , and the initial solution for the spinflip random walk process was obtained by survey propagation . 
the solid line in the upper panel of fig .[ fig:3sat4p25n1 m ] is the number of accumulated unfrozen variables .we notice that this number increases only slowly ( almost logarithmically ) with evolution time , , and only of the variables are found to be unfrozen at time .the lower left panel of fig .[ fig:3sat4p25n1 m ] is the overlap histogram and the matrix of hamming distances of sampled solutions ( with equal interval of ) .as indicated by the fact that only a quarter of the variables have been touched , the random walk process probably has visited only a small fraction of the whole solution cluster in the relatively short evolution time of .however , the overlap histogram and the hamming distance matrix clearly demonstrate that the explored portion of the solution cluster is far from being homogeneous .the overlap histogram has several peaks , and the hamming distance matrix shows that the sampled solutions can be divided into two large groups , each of which can be further divided into several sub - groups .the overlap of the visited solutions with the initial solution has several sudden drops as a function of , and each of these drops is preceded by a plateau of overlap value ( data not shown ) .all these simulation results are consistent with the proposal that several solution communities exist in the studied solution cluster .the solutions of each community are more densely connected to each other than to the outsider solutions .because of the dominance of intra - community connections in each solution community , a random walker in a community - rich graph will be trapped in a single community for a long time before it jumps into another community and discovers new unfrozen variables .this proposed multi - trap mechanism may be the reason of the logarithmic increase of .guided by the hamming distance matrix of fig .[ fig:3sat4p25n1 m ] ( lower left ) , we choose two sampled solutions , solution s- and s- for entropy calculations .the overlap between s- and s- is , and they are suggested by fig . [ fig:3sat4p25n1 m ] ( left lower ) as belonging to two different communities . for s- ,the bp iteration is convergent as long as the reweighting parameter is in the range of ( see fig . [fig : entropya4p25]a ) . at ,bp reports an entropy density and an overlap value with s- .the overlap as a function of has a rapid change at ( the same behavior is observed for the entropy density ) , indicating a rapid change of the statistical property of the solution cluster at as viewed from s- . 
for s- , bp is convergent when ; at the entropy density is , and the overlap value is . two fixed - points of bp are obtained at for s- ( fig . [ fig : entropya4p25]a ) , indicating that there is a well - formed community of solutions whose mean overlap with s- is , and that this community is embedded in a larger community of mean overlap with s- . the same numerical experiment is also carried out for a random -sat formula of and , starting from an initial solution obtained by walksat , and for a set of random -sat formulas of and , using initial solutions obtained by belief propagation decimation ( see the following subsection ) . the results of these simulations suggest that the existence of community structure in single solution clusters is a general property of random -sat formulas . given a solution for a formula , we can shuffle the connection pattern of to produce a maximally randomized formula under the constraints that ( i ) is still a solution of , ( ii ) each variable participates in the same number of clauses as in and its spin value satisfies the same number of clauses as in , and ( iii ) each clause is satisfied by the same number of spins of as in . when we run spinflip starting from for the shuffled formula , we are unable to detect any community structures . for the -sat formula of studied above , the simulation results obtained on a shuffled formula are also shown in fig . [ fig:3sat4p25n1 m ] . the number of discovered unfrozen variables for this shuffled system has a sigmoid form as a function of , and it already reaches a high value of at time . the overlap histogram of the sampled solutions ( time interval ) has a gaussian form , and the hamming distance matrix of these sampled solutions is featureless . these and additional shuffling experiments confirm that community structure is present only in a solution cluster of a random -sat formula but not in that of a shuffled formula . the entropy calculations further confirm this point . for the randomized graph of fig . [ fig:3sat4p25n1 m ] ( lower right ) , we have chosen the two most separated solutions s- and s- ( with an overlap value ) to perform the entropy calculations . the bp iteration is able to converge even when the reweighting parameter decreases to zero , and at the same entropy density value of is reached ( see fig . [ fig : entropya4p25]b ) . the overlap as a function of does not show any signal of discontinuous behavior . ( color online ) the overlap histogram ( in semi - logarithmic plot ) and the matrix of hamming distances of sampled solutions for a random -sat formula of variables . spinflip first runs for steps starting from a solution obtained by belief propagation decimation . solutions are then sampled at equal time interval of . the upper panel corresponds to , ( left ) and ( right ) ; the lower panel corresponds to ( left ) and ( right ) . the most probable overlap values in the shown overlap histograms of ( ) , and ( ) are in agreement with the mean overlap values predicted by the replica - symmetric cavity method for the same formulas , indicating that the solution space for these formulas is composed of one single giant component .
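the shuffling procedure defined earlier in this section can be realized by repeated swaps of variable occurrences between clauses , restricted to occurrences that are of the same type ( satisfying or unsatisfying ) with respect to the reference solution . the sketch below is our own reading of constraints ( i ) - ( iii ) , not the authors' implementation ; literal signs are rebuilt after each swap so that the reference solution keeps satisfying exactly the same number of literals in every clause .

```python
import random

def shuffle_formula(clauses, ref, n_swaps, seed=1):
    """Randomize a CNF formula while keeping `ref` a solution.

    clauses: list of clauses, each a list of literals (+(i+1) / -(i+1) for variable i).
    ref: reference solution, a list of +1/-1 spins.
    A swap exchanges the variables of two occurrences that are both satisfying
    (or both unsatisfying) under `ref`; literal signs are rebuilt from `ref`, so that
    (i) ref stays a solution, (ii) every variable keeps its degree and its number of
    satisfying occurrences, and (iii) every clause keeps its number of satisfied literals.
    """
    rng = random.Random(seed)
    clauses = [list(c) for c in clauses]

    def satisfying(lit):
        return (lit > 0) == (ref[abs(lit) - 1] > 0)

    pos = {True: [], False: []}                  # occurrence positions, split by type
    for a, c in enumerate(clauses):
        for k, lit in enumerate(c):
            pos[satisfying(lit)].append((a, k))
    kinds = [k for k in (True, False) if len(pos[k]) >= 2]
    if not kinds:
        return clauses
    done = 0
    while done < n_swaps:
        kind = rng.choice(kinds)
        (a, ka), (b, kb) = rng.sample(pos[kind], 2)
        if a == b:
            continue
        va, vb = abs(clauses[a][ka]), abs(clauses[b][kb])
        # reject swaps that would put the same variable twice into one clause
        if vb in map(abs, clauses[a]) or va in map(abs, clauses[b]):
            continue
        sign = 1 if kind else -1                 # +1: keep literal satisfied, -1: keep it unsatisfied
        clauses[a][ka] = vb if ref[vb - 1] * sign > 0 else -vb
        clauses[b][kb] = va if ref[va - 1] * sign > 0 else -va
        done += 1
    return clauses
```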
( color online ) structure of the solution cluster examined in fig . [ fig : transition ] ( upper left , ) . ( a ) the entropy density of solutions at a given overlap value with reference solution s- and s- . ( b ) two overlap evolution trajectories starting from s- and s- . an evolution trajectory is obtained by an unbiased random walk starting from either s- or s- , the overlap of the visited solution with the starting solution is recorded during the random walk process . in ( a ) the two dashed lines are fitting curves of the quadratic form . the fitting parameters are , ( fitting range being , for s- ) and , ( , for s- ) . the inset of ( a ) shows the overlap value as a function of the reweighting parameter . ( color online ) same as fig . [ fig : a3p825 ] , but the solution cluster is the one studied in fig . [ fig : transition ] ( upper right ) , with . the dashed curve in ( a ) is a quadratic fitting curve with fitting parameters , ( for s- , fitting range being ) . krzakala _ et al . _ predicted that a clustering transition occurs in the solution space of a random -sat formula at the critical constraint density . at this point , exponentially many gibbs states emerge in the solution space , with a few of these states dominating the solution space . a gibbs state of the mean - field statistical physics theory is defined mainly in terms of the correlation property of the solution space . it is regarded as a set of solutions within which there are no long - range point - to - set correlations . for a large random -sat formula , whether there is a one - to - one correspondence between a solution cluster ( which is defined as a connected component of the solution space ) and a gibbs state of statistical physics is still an open question . but even if there is not a strict one - to - one correspondence , it is natural to believe that a solution cluster and a gibbs state of solutions are closely related .
in this section , we investigate the structure of a single solution cluster of a random -sat formula at close to by extensive spinflip simulations on random -sat formulas of size . ten random -sat formulas are generated at each of the constraint density values , and for each of these formulas a solution is constructed using belief propagation decimation , which is then used by spinflip as the starting point . the belief propagation decimation program fixes the variables of the input formula sequentially with an interval of at least iterations , and it assigns a spin value to a variable according to the predicted marginal spin distribution . we have chosen such an extremely slow fixing protocol with the hope of being able to pick a solution uniformly at random from the solution space . for and , we are able to calculate the entropy density of the whole solution space of a formula and the mean overlap between two solutions using the replica - symmetric cavity method , with all the cavity fields initially set to zero . we have verified that the mean overlap and entropy density values of the solution clusters explored by spinflip are in agreement with the statistical physics predictions . this is consistent with the belief that the whole solution space is ergodic and has only a single ( statistically relevant ) solution cluster . for , the replica - symmetric cavity method no longer converges on a single formula , and therefore we are not sure whether the explored solution clusters are the dominating clusters . this latter ambiguity may not be too significant , as we are mainly interested in the properties of the solution cluster before the clustering transition . in each run of spinflip , the random walk first runs at least time steps starting from the input solution , and then solutions are sampled at an equal time interval of . before the sampling of solutions , spinflip has enough time to flip almost all the variables ; therefore , during the later solution sampling process , spinflip actually performs an unbiased random walk . the overlap histograms and hamming distance matrices of the sampled solutions at show only weak heterogeneous features ( a typical example is shown in fig . [ fig : transition ] upper left ) ; but as increases , the heterogeneity of the solution cluster becomes more and more evident ( for , a typical example is shown in fig . [ fig : transition ] upper right ) . these results might indicate that only weak community structure is present in the studied solution clusters of . however , we must be careful when drawing conclusions from figures such as fig . [ fig : transition ] , as the community structures revealed by spinflip also depend on the time interval of solution sampling . even if the solution cluster is composed of extremely many communities , if is of the same order as the typical trapping times of the communities , two sampled solutions of spinflip will only have a low probability of belonging to the same community . then the hamming distance matrix of the sampled solutions will be very homogeneous . for the case of fig . [ fig : transition ] ( upper left ) , we find that is comparable to the typical trapping time of a community ( see fig .
[fig : a3p825]b ) . if is chosen to be ten times shorter , the sampled solutions show very evident community structures also at ( data not shown ) . the clustering analysis of sampled solutions is complemented by entropy calculations . for the example of shown in fig . [ fig : transition ] ( upper left ) , we have calculated the entropy densities of solutions at a given overlap with two reference solutions s- and s- . the results are shown in fig . [ fig : a3p825 ] . for solution s- , as the reweighting parameter decreases to , both the entropy density and the overlap show a sudden change . this behavior indicates that s- is contained in a solution community of entropy density and of mean overlap with s- . on the other hand , the whole solution cluster has an entropy density and mean overlap with s- . we have performed an unbiased random walk simulation starting from s- ( see fig . [ fig : a3p825]b ) to find that the overlap as a function of evolution time ( in logarithmic scale ) indeed has an evident plateau at before it eventually decays to . for the solution s- , fig . [ fig : a3p825]a shows that there is a region of the reweighting parameter within which two fixed - point solutions of the bp iteration equations coexist . one of the fixed - points of bp describes the statistical property of the solution community , which has an entropy density and mean overlap with s- , while the other fixed - point describes the statistical property of the whole solution cluster , which has an entropy density and mean overlap with s- . if we perform an unbiased random walk process in the solution cluster starting from solution s- , we find that the overlap with s- stays at a plateau value of for a long time until it suddenly ( in logarithmic scale ) drops to a value of ( see fig . [ fig : a3p825]b ) , in agreement with the replica - symmetric bp results . similar results are obtained from other sampled solutions . from the different entropy density values of the communities and the fact that the two reference solutions s- and s- have a small overlap of , we conclude that they belong to different communities of the same solution cluster . and from the fact that the entropy density of the examined solution cluster is the same as the entropy density of the whole solution space ( the latter is obtained by the replica - symmetric bp with both random and zero initial conditions ) , we conclude that this solution cluster is actually the only statistically relevant solution cluster of the whole solution space . qualitatively the same results are obtained for the other studied random -sat formulas of and . we therefore conclude that many solution communities have already formed in the single statistically relevant solution cluster of a large random -sat formula at constraint density . if the solution cluster breaks into many connected components at the clustering transition point , this ergodicity breaking can be understood as the final separation of groups of communities caused by the loss of inter - community links . when the constraint density is beyond the clustering transition value , all the explored single solution clusters of the random -sat formulas demonstrate clear community structures , according to the overlap histograms and hamming distance matrices of the sampled solutions ( see fig . [ fig : transition ] upper right for a typical example ) . the existence of community structure in single solution clusters is also confirmed by entropy calculations .
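for very small formulas , the quantity estimated by the reweighted bp , namely the entropy density of solutions at a given overlap with a reference solution , can be computed exactly by exhaustive enumeration . the sketch below is such a brute - force check , meant only to illustrate the definition ; it does not replace the cavity computation used for the large formulas studied here , and the example formula is hypothetical .

```python
import itertools
import math
from collections import Counter

def clause_satisfied(clause, spins):
    return any((lit > 0) == (spins[abs(lit) - 1] > 0) for lit in clause)

def entropy_vs_overlap(clauses, n, ref):
    """Exact ln N(q) / n for a tiny formula: N(q) counts solutions at overlap q
    with the reference solution `ref` (spins +1/-1, q = (1/n) sum_i s_i ref_i)."""
    counts = Counter()
    for spins in itertools.product((-1, 1), repeat=n):
        if all(clause_satisfied(c, spins) for c in clauses):
            q = sum(s * r for s, r in zip(spins, ref)) / n
            counts[q] += 1
    return {q: math.log(m) / n for q, m in sorted(counts.items())}

# toy usage (hypothetical 3-SAT formula on a dozen variables):
# clauses = [[1, -2, 3], [2, 4, -5], ...]
# ref = a solution found by any means
# print(entropy_vs_overlap(clauses, 12, ref))
```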
as an example, we show in fig .[ fig : a3p925]a the results of the replica - symmetric cavity method on a solution cluster that corresponds to fig .[ fig : transition ] upper right ( ) .we choose solution s- and s- ( with mutual overlap ) as two reference solutions ( similar results are obtained for other sampled solutions ) .for s- , the entropy density and overlap value change suddenly when the reweighting parameter is decreased to , indicating that s- belongs to a solution community of entropy density and mean overlap with s- .this solution community is itself contained in a larger community of entropy density and mean overlap with s- . the evolution trajectory of the overlap value with s- as obtained from an unbiased random walk process ( fig .[ fig : a3p925]b ) , which has a series of plateaus of decreasing heights , is consistent with such a nested ( hierarchical ) organization of communities . for s-, the entropy data suggest that it belongs to a different community of entropy density , whose mean overlap with s- is .this solution community itself form a subgraph of a larger community of entropy density and of mean overlap with s- .the overlap evolution trajectory starting from s- jumps between the values of and at .this jumping behavior demonstrates that the unbiased random walker is able to visit the solution community of s- frequently .this probably indicates that the community of s- is one of the largest communities of the solution cluster . for the studied solution cluster at ,when the reweighting parameter is very small ( for s- and for s- ) , we are unable to find a fixed - point for the replica - symmetric bp equations .as approaches zero , the corresponding dominating solutions probably are distributed into different solution clusters , and the replica - symmetric cavity method is no longer sufficient to describe their statistical properties .( color online ) simulation results on a random -sat formula with variables and constraint density .( upper ) number of discovered unfrozen variables versus the evolution time of spinflip , starting from five different initial solutions .( lower left and lower right ) the overlap histogram of sampled solutions from one initial solution and the matrix of hamming distances of these solutions for this formula ( lower left ) and its shuffled version ( lower right ) . ]( color online ) the entropy density curves as a function of overlap . 
( a ) results obtained by choosing two reference solutions s- and s- in the solution cluster of fig . [ fig:4sat9p46n100k ] ( lower left ) . ( b ) results obtained by choosing two reference solutions s- and s- in the solution cluster of fig . [ fig:4sat9p46n100k ] ( lower right ) . the inset in each sub - figure is the overlap value as a function of the reweighting parameter . we perform simulations on a single large random -sat formula of variables . the constraint density of the formula is , beyond the clustering transition point . five solutions were obtained using belief propagation decimation for this formula ; was then shuffled with respect to each of these solutions to obtain five new formulas ( see sec . [ sec:3sat4p25n1 m ] ) . the number of discovered unfrozen variables as a function of the evolution time of spinflip on these ten instances is shown in fig . [ fig:4sat9p46n100k ] ( upper panel ) . there is no qualitative difference between the curves of the original formula and those of the shuffled formulas , as compared with the results of the random -sat case in fig . [ fig:3sat4p25n1 m ] . the random walk process is able to flip most of the variables at least once in an evolution time of , both on the original and on the shuffled formulas . the lower left and lower right panels of fig . [ fig:4sat9p46n100k ] are , respectively , the overlap histogram and hamming distance matrix of sampled solutions at time interval for the original formula and for one of its shuffled versions , with the random walk process starting from the same initial solution . from these two figures , we infer that the solution clusters of both the original and the shuffled formula have non - trivial community structures . this is another important difference compared with the random -sat results shown in fig . [ fig:3sat4p25n1 m ] , where the solution cluster of the shuffled formula does not show community structure . for the solution cluster of fig . [ fig:4sat9p46n100k ] ( lower left ) , we choose two solutions s- and s- ( with an overlap of ) for entropy calculations . the entropy density curves as a function of the overlap with these two solutions are shown in fig . [ fig : entropya9p46n100k]a . for s- , the replica - symmetric bp iteration equations have two fixed points when the reweighting parameter is in the range of . the fixed point with corresponds to the local solution community of s- , which has an entropy density of and mean overlap with s- . the other fixed point with probably corresponds to the whole solution space , which has an entropy density at . for s- , the bp iteration equations are convergent for and but are divergent for . we infer that s- is associated with a solution community of entropy density , whose mean overlap with s- is . these entropy results confirm the indication of fig . [ fig:4sat9p46n100k ] ( lower left ) that s- and s- belong to two different communities ( of the same cluster ) .
as the constraint density of the formula is beyond the clustering transition point , its solution space very probably is composed of many extensively separated solution clusters . in agreement with this expectation , the mean - field cavity method predicts that the mean overlap of the whole solution space to the explored solution cluster is . for the solution cluster of the shuffled formula studied in fig . [ fig:4sat9p46n100k ] ( lower right ) , we also choose two solutions s- and s- ( with mutual overlap ) for entropy calculations . the results shown in fig . [ fig : entropya9p46n100k]b confirm that the solution cluster of the shuffled formula has different communities . the community of s- has an entropy density and a mean overlap with s- , while that of s- has an entropy density and a mean overlap with s- . as indicated by the small breaks of the curve of s- in fig . [ fig : entropya9p46n100k]b , the local community of s- probably is a sub - graph of a larger community of entropy density , whose mean overlap with s- is . the entropy density of the whole solution space as obtained at is . the mean overlap of the whole solution space to either of the two reference solutions is . same as fig . [ fig : a3p825 ] , but the solution cluster is for a -sat formula of , whose hamming distance matrix is shown in the lower left panel of fig . [ fig : transition ] . similar to sec . [ sec:3satn20k ] , we continue to investigate whether solution communities have formed in the solution space of a random -sat formula before the clustering transition point . for each of the constraint densities , ten random -sat formulas of variables are generated , and a solution is obtained by belief propagation decimation for each of these formulas . we then use the same random walk protocol as mentioned in sec . [ sec:3satn20k ] to sample a large number of solutions for clustering analysis . two typical solution - clustering results , one for a formula with and the other for a formula with , are shown in fig . [ fig : transition ] lower left and lower right . our simulation results reveal that the connection patterns of all these studied solution clusters at are far from being homogeneous . the lower panel of fig . [ fig : transition ] indicates that there are already many small solution communities in the solution cluster of , and that the community structures of the solution cluster become more and more pronounced as increases . to be more quantitative , we have calculated the statistical properties of solution communities by performing bp iterations ( with a reweighting parameter ) starting from various sampled solutions . we show as an example the results of the entropy calculations performed on two solutions s- and s- of the solution cluster of fig . [ fig : transition ] ( lower left ) , with . similar to what we have observed before , as the reweighting parameter decreases , the entropy density and overlap values predicted by the replica - symmetric cavity method show several small sudden changes , and at the bp equations have more than one fixed - point solution .
from these results , we estimate that the solution community that contains s- has an entropy density of and a mean overlap with s- , while the solution community of s- has an entropy density and a mean overlap with s- . both of these two solution communities probably have non - trivial internal structures , as indicated by the sudden small drops of the overlap value as a function of ( see the inset of fig . [ fig : a9p10]a ) . the whole solution cluster has an entropy density and a mean overlap with either of these two reference solutions . these results are confirmed by the two overlap evolution trajectories shown in fig . [ fig : a9p10]b , which show several plateaus at in the semi - logarithmic plot . the fact that the overlap values with s- and s- fluctuate at long times around the theoretically predicted value of confirms that the studied solution cluster is the only statistically relevant cluster of the whole solution space . in summary , this work studied the solution space statistical properties of large random - and -sat formulas by extensive random walk simulations and by the replica - symmetric cavity method of statistical physics . a solution space is mapped to a huge graph , in which each vertex represents an individual solution and an edge between two vertices means that the two corresponding solutions differ on just one variable . a solution cluster of the solution space is defined as a connected component of solutions , and a solution community of a solution cluster is a set of solutions which are more similar to each other and more densely inter - connected with each other than with the outsider solutions of the solution cluster . the results of this paper suggest that , as the constraint density of a random -sat ( ) formula increases , the solution space of the formula first forms many solution communities before the solution space experiences a clustering transition at the critical constraint density . for , the results of this paper also suggest that the individual solution clusters of the solution space ( which may correspond to different solution gibbs states ) still have rich internal community structures . the entropy density of a single solution community in a solution cluster is calculated by belief propagation iteration with a reweighting parameter . from the observed discontinuity of the overlap ( with a given reference solution ) at certain critical values of , we infer that the solution communities can be regarded as well - defined thermodynamic phases of the partition function eq . ( [ eq : partitionfunction ] ) .
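the connection between a non - concave entropy density and the observed discontinuity of the overlap can be illustrated with a toy curve . in the sketch below the function s(q) is an arbitrary made - up non - concave example , not one computed for any formula ; for each value of the reweighting field x the dominant overlap maximizes s(q) + x q , and it jumps at a critical value of x .

```python
import numpy as np

# a made-up non-concave "entropy density" with two competing branches (toy numbers)
q = np.linspace(0.0, 1.0, 2001)
s = np.maximum(0.66 - 3.0 * (q - 0.30) ** 2,      # branch around small overlap
               0.40 - 3.0 * (q - 0.90) ** 2)      # branch around large overlap

def dominant_overlap(x):
    # solutions at overlap q carry weight exp[N (s(q) + x q)]; for large N the
    # partition function is dominated by the maximizer of s(q) + x q
    return q[np.argmax(s + x * q)]

for x in np.linspace(0.0, 1.0, 11):
    print(f"x = {x:4.2f}   dominant overlap q = {dominant_overlap(x):.3f}")
# q(x) increases smoothly along one branch and then jumps discontinuously near
# x ~ 0.43 to the other branch: the analogue of the first-order transition
# between the entropy-favored and the field-favored phase discussed in the text.
```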
as the constraint density of a random -sat formula increases , the density of inter - community connections in its solution space will decrease . therefore the solution space will split into many solution clusters as becomes large enough . very probably the splitting of the solution space is not a gradual process , with solution clusters being divided from the single giant component one after another , but rather a highly cooperative process in which ( exponentially ) many solution clusters emerge at a critical constraint density . if this is really the case , it is very interesting to know whether in the thermodynamic limit of the value of is identical to . one way to check this is to perform simulations on the solution space using two mutually attractive random walkers . one may also simultaneously follow the evolution processes of many different solution communities of the same random -sat formula as a function of the constraint density . the main qualitative results of this paper are expected to be applicable also to large random -sat formulas with . they may also be applicable to other random constraint satisfaction problems such as the random coloring problem . we have not yet investigated the lowest value of at which solution communities begin to emerge in the solution space of a random -sat formula . this is an important open question for future studies . hz thanks silvio franz and marc mézard for helpful discussions and kitpc ( beijing ) , lptms ( orsay ) , and nordita ( stockholm ) for hospitality . this work was partially supported by the nsfc ( 10774150 ) and the china 973-program ( 2007cb935903 ) . the computer simulations were performed on the hpc cluster of itp .
the solution space of a -satisfiability ( -sat ) formula is a collection of solution clusters , each of which contains all the solutions that are mutually reachable through a sequence of single - spin flips . knowledge of the statistical properties of solution clusters is valuable for a complete understanding of the solution space structure and the computational complexity of the random -sat problem . this paper explores single solution clusters of random - and -sat formulas through unbiased and biased random walk processes and the replica - symmetric cavity method of statistical physics . we find that the giant connected component of the solution space has already formed many different communities when the constraint density of the formula is still lower than the solution space clustering transition point . solutions of the same community are more similar to each other and more densely connected with each other than with the other solutions . the entropy density of a solution community is calculated using belief propagation and is found to be different for different communities of the same cluster . when the constraint density is beyond the clustering transition point , the same behavior is observed for the solution clusters reached by several stochastic search algorithms . taken together , the results of this work suggest a refined picture of the evolution of the solution space structure of the random -sat problem ; they may also be helpful for designing new heuristic algorithms .
in evolutionary processes , populations acquire changes to their gene content by mutational or recombinational events during reproduction .if those changes improve the adaptation of the organism to its environment , individuals carrying the modified genome have a better chance to survive and leave more offspring in the next generation . through the interplay ofrepeated mutation and selection , the genetic structure of the population evolves and beneficial alleles increase in frequency . in a constant environmentthe population may thus end up in a well adapted state , where beneficial mutations are rare or entirely absent and only combinations of several mutations can further increase fitness . to describe this kind of process ,sewall wright introduced the notion of a fitness landscape . here , the genotype is encoded by the coordinates of some suitable space and the degree of adaptation or reproductive success is modeled as a real number , called fitness , which is identified with the height of the landscape above the corresponding genotype .the evolutionary process of repeated mutation and selection is thus depicted as a hill climbing process .mutations lead to the exploration of new genotypes and selection forces populations to move preferentially to genotypes with larger fitness .if more than one mutation is necessary to increase fitness , the population has reached a local fitness peak .note that some caution is necessary when applying this picture , as the way in which genotypes are connected to one another does not correspond to the topology of a low - dimensional euclidean space but is more appropriately described by a graph or network ( see below ) .the underlying structure is well known from other areas of science , such as spin glasses in statistical physics and optimization problems in computer science .the concept of fitness landscapes has been very fruitful for the understanding of evolutionary processes .while earlier work in this field has been largely theoretical and computational , in recent years an increasing amount of experimental fitness data for mutational landscapes has become available , see ref . for a review .analysis of such data sets provides us with the possibility of a better understanding of the biological mechanisms that shape fitness landscapes and helps us to build better models .thus , identifying properties of fitness landscapes that yield relevant information on evolution is an important task .one such property that has attracted considerable interest is epistasis .epistasis implies that the change in fitness that is caused by a specific mutation depends on the configurations at other loci , or groups of loci , in the genome . in other words , epistasis is the interaction between different loci in their effect on fitness .interactions that only affect the strength of the mutational effect are referred to as magnitude epistasis , while interactions that change a mutation from beneficial to deleterious or vice versa are referred to as sign epistasis . in the absence of sign epistasis, the fitness landscape contains only a single peak and fitness values fall off monotonically with distance to that peak .if sign epistasis is present , the landscape can present several peaks and valleys , which has important implications for the mutational accessibility of the different genotypes and shortens the path to the next fitness optimum .thus the absence of sign epistasis implies a smooth landscape , while landscapes with sign epistasis are rugged . 
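the verbal classification of pairwise epistasis given above can be stated operationally . the sketch below is our own illustration for a single pair of loci ; the four fitness values in the example are hypothetical numbers , and the classification follows the definitions of magnitude , sign and reciprocal sign epistasis used in the text .

```python
def classify_pair(w00, w10, w01, w11):
    """Classify epistasis between two loci from the four fitness values
    w[ab] of the genotypes (a, b) with a, b in {0, 1}."""
    eps = w11 - w10 - w01 + w00            # pairwise epistasis
    d_a_in_0 = w10 - w00                   # effect of a mutation at locus a, background b = 0
    d_a_in_1 = w11 - w01                   # effect of a mutation at locus a, background b = 1
    d_b_in_0 = w01 - w00
    d_b_in_1 = w11 - w10
    if eps == 0:                           # exact comparison; real data would need a tolerance
        return "no epistasis (additive)"
    sign_a = d_a_in_0 * d_a_in_1 < 0       # the mutation at a changes the sign of its effect
    sign_b = d_b_in_0 * d_b_in_1 < 0
    if sign_a and sign_b:
        return "reciprocal sign epistasis"
    if sign_a or sign_b:
        return "sign epistasis"
    return "magnitude epistasis"

# hypothetical example: both single mutations are deleterious but the double
# mutant is the fittest genotype -> reciprocal sign epistasis
print(classify_pair(w00=1.0, w10=0.8, w01=0.7, w11=1.2))
```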
beyond the question of the presence of epistasis, one would like to be able to make more detailed statements about _ how much _ of it is present or _ in which way _ epistasis is realized in the landscape .a very helpful tool to answer these kinds of questions is the fourier decomposition of fitness landscapes introduced in ref .this decomposition makes use of graph theory to expand the landscape into components that correspond to interactions between loci .the coefficients of the decomposition corresponding to interactions between a given number of loci can be combined to yield the _amplitude spectrum_. calculating amplitude spectra numerically for data obtained from models or experiments is straightforward in principle , but so far only a small part of the information contained in the spectra is actually used .to improve this situation , it is important to understand how biologically meaningful features of a fitness landscape are reflected in its amplitude spectrum . in this paper , we take a first step in this direction by analytically calculating spectra for some of the most popular landscape models : the -model introduced by kauffman , two versions of the rough mount fuji ( rmf ) model , and a generic model with correlations that decay exponentially with distance on the landscape .thanks to the linearity of the amplitude decomposition , linear superpositions of these landscapes can also be treated .we calculate the spectra by exploiting their connection to fitness correlation functions originally established in ref .moreover , we compare some experimentally obtained spectra to the predictions of the models to see what features can be explained by these models and which can not . in the next sectionwe begin by introducing the definitions of fitness landscapes and their amplitude spectra on more rigorous mathematical grounds .on the molecular level the genotype of an organism is encoded in a sequence of letters taken from the alphabet of nucleotide base pairs with cardinality .point mutations replace single letters by others , altering the sequence and therefore the properties of the organism .a similar description applies to the space of proteins , where the cardinality of the encoding alphabet equals the number of amino acids .by contrast , in the context of classical genetics the units making up the genotype are genes occurring in different variants ( alleles ) , which again can be described as letters in some alphabet .this provides a coarse - grained view of the genome in which also complex mutational events are represented by replacing one allele by another . for simplicity, fitness landscapes are often defined on sequences comprised of elements of a _ binary _ alphabet , where a common choice is . in the present articlewe prefer the symmetric alphabet for mathematical convenience . referring to the discussion in the preceding paragraph ,we emphasize that the elements of the binary alphabet do not generally stand for bases or encoded proteins but rather indicate whether a particular mutation is present in a gene or not .therefore the restriction to single changes in the sequence does not imply that the treatment is limited to point mutations .all possible sequences of a given length constructed from the alphabet with cardinality form a metric space called the _ hamming space _ .it can be expressed as , where denotes the cartesian product and is the complete graph with nodes . 
for a binary alphabetthe are hypercubes .their metric is called the _ hamming distance _ , which equals the number of single mutational steps required to transform one sequence into the other . to quantify the degree of adaptation or reproductive success of an organism carrying the genotype ,a real number called fitness is assigned to the corresponding sequence according to to precisely define the different notions of epistasis introduced above , we consider two sequences with .let and , and denote the sequences with a mutation at the locus by and , respectively , with ) .if for some , the fitness landscape is called _epistatic_. if the effect is called _ magnitude epistasis _ , while for it is called _ sign epistasis_. furthermore , the landscape is said to contain _ reciprocal sign epistasis _ if there are pairs of mutations such that , with denoting the sequence mutated at loci and . a landscape with sign epistasisis said to be _ rugged _ , while landscapes containing no epistasis or only magnitude epistasis are called _smooth_. non - epistatic landscapes are also called _ additive _ , as here the individual effects of mutations add up independently .the presence of sign epistasis severely limits which paths on the landscape are accessible to evolution .landscapes that display reciprocal sign epistasis may contain several local fitness maxima , while those that do not have a single maximum .the existence of reciprocal sign epistasis is a necessary but not sufficient condition for the existence of multiple maxima . for an example of a sufficient condition for multiple maxima based on local properties of the landscape see .the _ adjacency matrix _ of the hamming space encodes the neighborhood relations between sequences , and is defined as with denoting the identity of matrices , the graph laplacian is then defined by , and its action on the fitness function yields for and denoting the element of , the eigenfunctions of are given by with and .the corresponding eigenvalues are and thus the degeneracy is .the set of all eigenfunctions forms an orthonormal basis and the landscape can be expressed in terms of a decomposition , called _ fourier expansion _ , which reads see fig .[ fig : eigenfct ] for the visualization of three eigenfunctions on the hypercube . while the s contain the information about the relative influence of the non - epistatic contributions on fitness , the higher order coefficients with describe the relative strength of the contributions of of interacting loci .the zero order coefficient is proportional to the mean fitness of the landscape , where the prefactor reflects the normalization of the .the amplitude spectrum quantifies the relative contributions of the complete sets of to the epistatic interactions .following ref . , we consider _ random field models _ of fitness landscapes where individual instances of the ensemble ( _ realizations _ ) are constructed from random variables according to some specified rule ( see sects .[ nkmodel ] and [ rmfmodel ] ) , and define amplitude spectra as averages over the realizations .two kinds of averages appear : averaging over realizations at a constant point in , and _ spatially _ averaging over all points in . here and in the following angular brackets denote averaging over the realizations of the landscape , while an overbar denotes a spatial average over , as for example in for the definition of the amplitude spectrum the two types of averages need to be distinguished .the first one reads for and . 
for an additive landscape and for a landscape with epistasis . in was used as a quantifier for the amount of epistasis found in empirical fitness landscapes . note that the values of for different landscapes are comparable because of the normalization . another way to define the amplitude spectrum is through with for all . the zero order coefficient is not defined in terms of the fourier coefficients , but is proportional to the mean covariance , , as defined in ( note that the corresponding definition given in appears to be incorrect ) . the main difference between the and the consists in whether averaging is performed separately on the terms in the fraction or on the fraction as a whole . as it is often easier to calculate a fraction of averages than an average of a fraction , the present work concentrates on the . while the are not generally normalized , , a normalized amplitude spectrum can easily be constructed through . in ref . it was shown that the differently averaged spectra are related to different types of fitness correlation functions . the _ direct correlation function _ is defined for all sequences of a given hamming distance as . this correlation function is linked to the normalized amplitude spectrum , , according to , where the are orthogonal functions depending on the underlying graph structure . on the other hand , the _ autocorrelation function _ is defined as , where denotes a _ simultaneous _ average over all possible pairs ( ) with as well as over the realizations of the landscape . ( the definition used in contains a constant that is independent of ; the proof of theorem 5 in can be carried out analogously for the definition ( [ eq : autocorr ] ) without this constraint . ) this autocorrelation function is linked to the amplitude spectrum according to . again , the difference between eq . ( [ eq : kt1 ] ) and eq . ( [ eq : directcorr ] ) lies in how the averaging is performed . for hamming graphs , the functions are closely related to the _ krawtchouk polynomials _ . for the binary case : , where , unless stated otherwise , here and in the rest of the paper , binomial coefficients are understood to be defined as . our primary interest is in the calculation of analytical expressions of the for known . thus , an inversion of eq . ( [ eq : kt1 ] ) is needed . this can be achieved by exploiting the orthogonality of the krawtchouk polynomials with respect to the binomial distribution , which implies that . multiplying eq . ( [ eq : kt1 ] ) by and summing over thus yields , and we conclude that . now , the calculation of amplitude spectra from autocorrelation functions is possible , and at least numerically any spectrum can be calculated from a given correlation function . but for some landscape models even exact analytical solutions can be obtained , as will be shown in the following sections . -model with and different values of . the simplest random field model of a fitness landscape is the _ house - of - cards _ ( hoc ) model . in this model , the fitness values are assigned randomly to genotypes according to , where the are independent and identically distributed ( i.i.d . ) random variables drawn from some distribution . without loss of generality we assume that the have vanishing mean , and finite variance .
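the amplitude spectrum of any complete landscape on the binary hypercube can be computed numerically by projecting the fitness values onto the walsh - type eigenfunctions of the graph laplacian and summing the squared coefficients order by order . the numpy sketch below is our own illustration with an assumed normalization convention ; for an uncorrelated house - of - cards landscape the resulting spectrum should be close to the exact result quoted just below , i.e. proportional to the number of coefficients of each interaction order .

```python
import numpy as np
from math import comb
from itertools import product

def walsh_coefficients(f, L):
    """f: 2**L fitness values, f[g] for genotype g encoded by the bits of an integer.
    Returns the coefficients of f in the orthonormal Walsh basis, i.e. in the
    eigenfunctions of the graph Laplacian of the binary hypercube."""
    a = np.asarray(f, dtype=float).reshape([2] * L)
    H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2.0)    # single-locus transform
    for axis in range(L):
        a = np.moveaxis(np.tensordot(H, a, axes=([1], [axis])), 0, axis)
    return a                      # interaction order of a[idx] = number of 1s in idx

def amplitude_spectrum(f, L):
    a = walsh_coefficients(f, L)
    B = np.zeros(L + 1)
    for idx in product((0, 1), repeat=L):
        B[sum(idx)] += a[idx] ** 2
    B[0] = 0.0                    # the order-0 coefficient only carries the mean
    return B / B.sum()            # normalized spectrum (assumed convention)

# house-of-cards check: i.i.d. random fitness values should give a spectrum close
# to comb(L, p) / (2**L - 1), i.e. flat once divided by the degeneracy comb(L, p)
L = 8
spec = amplitude_spectrum(np.random.default_rng(0).normal(size=2 ** L), L)
print(spec)
print(spec / np.array([comb(L, p) for p in range(L + 1)]))   # ~constant for p >= 1
```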
the amplitude spectrum of the hoc - model is known to be , which also follows from eq .( [ eq : bq ] ) .although the hoc - model has been widely used for the modeling of adaptation , there is by now substantial experimental evidence that the assumption of uncorrelated fitness values overestimates the ruggedness of real fitness landscapes .it is therefore necessary to consider more complex models , which include fitness correlations in a biologically meaningful way .a prototypical model with tunable ruggedness is kauffman s -model , which assumes random epistatic interactions within groups of loci of fixed size and fixed membership . in the classical version ,each locus from a sequence of total length interacts with a set of other loci from the same sequence , which together with the locus itself constitute the -neighborhood _ _ of locus . to take into account more general setups ,the constraint of being a member of the neighborhood will be relaxed here .thus , defining , the -neighborhood is the set .the fitness is assigned as follows : let be random functions with binary arguments . for each of the combinations of the arguments , the are chosen as i.i.d .random variables with variance .the fitness landscape is then defined as thus , each is equivalent to a hoc - landscape of size . for ,respectively , the landscape is maximally rugged and reduces to the totally uncorrelated hoc -model , while for , respectively , all fitness contributions are independent , and the model is fully additive . by changing the ruggedness of the fitness landscape can be tuned . to complete the definition of the model , it has to be specified how the elements of the neighborhoods are chosen . in the most commonly used versions of the model , the interacting lociare either picked at random or taken to be adjacent along the sequence .a third possibility is to subdivide the sequence into blocks of size , such that within blocks every locus interacts with every other but blocks are mutually independent .although the construction of the neighborhoods affects certain properties of the landscapes such as the number of local fitness maxima and the evolutionary accessibility of the global maximum , it turns out that the autocorrelation function does not depend on it .the autocorrelation function of the -model can be calculated starting from eq .( [ eq : autocorr ] ) and is given by see fig .[ fig : autocorrbplk ] .note that previously some incorrect expressions for the correlation functions have been reported in the literature which led to the erroneous conclusion that the choice of the neighborhood affects the amplitude spectra . inserting ( [ rdnk ] ) into eq .( [ eq : bq ] ) yields the evaluation of this expression is somewhat technical and can be found in appendix a. 
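the -model construction just described , with randomly chosen interaction neighborhoods , is straightforward to simulate , and the autocorrelation function can then be estimated from random pairs of genotypes at fixed hamming distance and compared with the analytical expression ( [ rdnk ] ) . the sketch below is our own illustration ; the neighborhood convention ( each locus together with randomly chosen partners ) , the parameter values and the monte carlo estimator are assumptions made for the example .

```python
import numpy as np

def make_nk(L, k, rng):
    """NK landscape with random neighborhoods on binary sequences of length L.
    Each locus i interacts with itself and k - 1 further loci; a lookup table of
    i.i.d. standard normal contributions holds one entry per configuration of
    the neighborhood.  Returns a function mapping a 0/1 array to a fitness value."""
    neigh = [np.concatenate(([i], rng.choice(np.delete(np.arange(L), i),
                                             size=k - 1, replace=False)))
             for i in range(L)]
    tables = [rng.normal(size=2 ** k) for _ in range(L)]
    weights = 2 ** np.arange(k)
    def fitness(g):
        return sum(tables[i][int(g[neigh[i]] @ weights)] for i in range(L))
    return fitness

def autocorrelation(fitness, L, d, n_pairs, rng):
    """Monte Carlo estimate of the fitness correlation of genotype pairs at
    hamming distance d (covariance, normalized by the value at d = 0)."""
    f1, f2 = [], []
    for _ in range(n_pairs):
        g1 = rng.integers(0, 2, size=L)
        g2 = g1.copy()
        g2[rng.choice(L, size=d, replace=False)] ^= 1     # flip d distinct loci
        f1.append(fitness(g1))
        f2.append(fitness(g2))
    f1, f2 = np.array(f1), np.array(f2)
    return (np.mean(f1 * f2) - f1.mean() * f2.mean()) / np.var(f1)

rng = np.random.default_rng(1)
nk = make_nk(L=16, k=4, rng=rng)
print([round(autocorrelation(nk, 16, d, 4000, rng), 3) for d in range(6)])
```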
the final result is remarkably simple ( see fig .[ fig : autocorrbplk ] for illustration ) .as expected , the fourier coefficients vanish for and the known case of the hoc - model is reproduced for .moreover , the coefficients satisfy the symmetry and are maximal for , as was previously conjectured in .the -model is already a very flexible model and offers many possibilities for tuning .an even more general model is obtained by considering _superpositions _ of -models , in the sense of -fitness landscapes being added independently .let be a family of -fitness landscapes with neighborhood sizes , .then its superposition is defined by since the different -landscapes are independent , the correlation functions are additive , with statistical weights where .the sum is over all landscapes with neighborhoods of size and contains a zeroth order term that shifts the correlation function by a constant .the amplitude spectrum of the superposition is thus of the form note that the consistent interpretation of an empirical fitness landscape as a superposition of -landscapes requires all to be positive .nevertheless , it can be useful to consider superpositions containing negative to calculate amplitude spectra of fitness landscapes constructed by different means ( see section [ rmfmodel ] for an example ) .interestingly , expression ( [ eq : lkcompo ] ) is also obtained from another type of generalized -model , giving rise to a different biological interpretation of the decomposition .consider again fitness values that are constructed as sums of fitnesses corresponding to hoc - landscapes associated to -like neighborhoods , where is an integer that can be different from , and is the size of the neighborhood , drawn from some distribution .furthermore , for simplicity assume that the variances of the are all the same .the reasoning behind this model is to retain the idea of interacting groups of loci that is inherent in the -model , but to relax the rather unrealistic condition that all these groups are of the same size .rather , it is assumed that there exist some typical distribution for the sizes of the groups . following the procedure explained in , the corresponding autocorrelation functionis easily shown to be which trivially leads to expression ( [ eq : lkcompo ] ) with .the coefficients obtained from the decomposition of experimentally obtained spectra in terms of -spectra could therefore also be interpreted as a probability distribution for the sizes of interacting neighborhoods .again , this interpretation is only consistent if all weights are positive . here, it seems reasonable to expect that for large enough landscapes should become continuous in the sense that the distribution becomes monotonic over large contiguous parts of its support .another model with tunable epistatic effects is the _ rough mount fuji _ ( rmf ) model , which is constructed by superimposing a purely additive model and a hoc - landscape according to in ref . 
, and the were parameters to be determined empirically from experimental data .here we instead choose as some arbitrary constant , the as i.i.d .random variables , and as another set of i.i.d .random variables with and , compare to the construction of the hoc - model above in sect .[ nkmodel ] .note that , in contrast to the , the do not depend on .the amount of ruggedness is controlled by fixing the variance of the hoc - component , , and the mean of the absolute values of the slopes of the additive model , .the important limiting cases , the hoc - model and the purely additive model , are obtained in the limits and , respectively . , , and various values of .] , , and various values of . ] in the following we write , where is a constant independent of , and the are i.i.d .random variables with and .note that choosing the same mean value for all the s singles out the _ reference sequence _ . on average, the fitness of sequence decays linearly with the hamming distance to the reference sequence and the mean slope is . setting yields a simpler version of the rmf - model that was introduced in . to calculate the autocorrelation function of the rmf - model ,it is convenient to rewrite the fitness as , where . making use of the vanishing mean values of the s and the s, the autocorrelation function reads the covariance of the deterministic part has been evaluated elsewhere and the terms containing random variables can easily be calculated , yielding in order to obtain the spectrum we write as a linear combination of correlation functions of the -model with different s , i.e. with expansion coefficients and for all other s . the can now be calculated making use of the linearity of equation ( [ eq : bq ] ) , yielding in fig . [fig : autocorrbprmf ] , autocorrelation functions and amplitude spectra for the rmf - model with and various choices of are shown .note that the generality of the superposition ansatz made it possible to calculate the for the rmf - model , although the relation to the -model is not obvious at first sight . having in mind that the zeroth component does not contain information about epistasis , we adopt , for the rest of this paper , a more general definition of rmf - landscapes as superpositions of -landscapes with all components being equal to zero , except for , , and an arbitrary zeroth order coefficient that may be of any sign .( main ) and the renormalized spectrum ( inset ) for exponentially decaying fitness correlations . in the inset , the exponential decay is obvious . ]the motivation for the present paper is to identify typical features of amplitude spectra of fitness landscapes and to make use of them for extracting information about the underlying biological system . in the preceding two sections we considered well - established statistical models of fitness landscapes and computed their spectra . as will be further illustrated in sect .[ experimentalfl ] , this analysis provides criteria to judge whether a measured spectrum can be explained by these models or not and , if so , one can use the biological picture behind the model to try to interpret the findings . however , when faced with experimental data , none of the presented models may be general enough to give a good description .if this is the case , an alternative ansatz is to start with a presumably generic correlation function and calculate the corresponding spectrum , which can be compared to the data .this may then also guide the search for improved models . 
here, we consider a correlation function that decays exponentially with hamming distance with .the resulting expression for the spectrum obtained from eq .( [ eq : bq ] ) , is most easily evaluated using the known form of the generating function of the krawtchouk polynomials and the fact that these polynomials are self - dual in the sense of indeed , inserting ( [ self - dual ] ) into ( [ bqexp ] ) and using ( [ k_gen ] ) yields defining this expression can be rewritten as corresponding to .we conclude that if the spectrum normalized with respect to the number of , , decays exponentially with , then the correlations decay exponentially with distance on the hypercube , see fig .[ fig : bpexp ] . although we are , at the moment , lacking simple stochastic models that produce exponentially decaying correlations , spectra of the form ( [ eq : bexp ] ) have recently been found for fitness landscapes obtained from a dynamical model of molecular signal transduction .it would be interesting to see whether one can construct stochastic models that do not enter too deeply into the dynamics at the cellular level but contain a simple and generic mechanism that gives rise to such correlations .exponentially decaying correlations have also been reported in a recent large - scale study of the fitness landscape of hiv-1 . however , the correlation function calculated in that article is different from the one studied here , as it averages over correlations between fitness values of mutants that are connected by random walks of some length and not over fitness values corresponding to states separated by hamming distance .such random walk correlation functions are also connected in a simple manner to the amplitude spectra , but the relation is different from the one considered here . therefore our results are not directly applicable to these observations .in this section we compare the model spectra to several experimentally measured `` fitness '' landscapes .the quotation marks indicate that not all of the landscapes presented here actually correspond to fitness , but rather to some proxy of it . to be able to compare spectra , the landscapes should be as large and as complete as possible .the four landscapes considered are a six locus landscapes obtained by hall _et al_. for the yeast _ saccharomyces cerevisiae _ , an eight locus landscape for the fungus _ aspergillus niger _ presented by franke _et al_. , and two nine locus landscapes for the plant _ nicotiana tabacum _ studied by omaille _et al_. . a comparative analysis of these ( and other ) empirical landscapes can be found in .all spectra presented in this section were calculated directly by decomposing the fitness landscapes in terms of the eigenfunctions of the graph laplacian . while the first two landscapes mentioned above measure growth rate as a quantifier of fitness , the landscapes presented in measure enzymatic specificity of terpene synthases , that is , the relative production of 5-epi - aristolochene and premnaspirodiene , respectively . as for these landscapesonly 418 out of 512 fitness values were measured , the missing data is estimated by fitting a multidimensional linear model to the measured landscape .the fitness values of states for which there are no measurements are then replaced by the values given by the linear model . on the contrary , for the _a. 
niger _ landscape considered in , missing fitness values were argued to correspond to non - viable mutants and are therefore set to zero .the way of estimating missing values obviously affects the spectra , but some estimation is necessary to be able to carry out the analysis .we now ask whether the experimental spectra can be expressed as superpositions of -spectra of the form ( [ eq : lkcompo ] ) ( recall that the rmf - model is a particular case of such a superposition ) . of course , such a decomposition is always possible , but the assumption that the biological mechanism responsible for the spectra is really the additive interplay of fixed groups of loci of characteristic sizes is only reasonable if all the coefficients are positive . simply solving the linear system of equations ( [ eq : lkcompo ] )generally yields several negative coefficients .more satisfactory results are obtained by fitting a function of the form ( [ eq : lkcompo ] ) to the data by means of a least square procedure , constraining the coefficients to positive values . here , two ansatzes are considered .first a fit containing all coefficients is carried out , with none of the fixed to zero _ a priori_. this is done to check whether a superposition of type ( [ linearlkmodel2 ] ) with a continuous neighborhood size distribution is appropriate .second , sparse fits containing as few nonzero s as possible are carried out to verify if the landscape could be biologically interpreted as a superposition of a small number of -landscapes of different interaction ranges .one way of selecting s that can be neglected in the fit is to identify those coefficients obtained in the full fit that are much smaller than the others . in all cases ,the term proportional to in ( [ eq : lkcompo ] ) is not considered as it can always be trivially fixed to fit . in fig .[ fig : empirical ] the data for the normalized amplitudes ( black dots ) is shown together with the fit ( green curve ) and the hoc - component ( red dashed line ) of the fit . for the _a. niger _ landscape in error estimates for the fitness values were available , enabling the calculation of error bars to the spectrum .this is done by constructing ensembles of landscapes with fitness values , where is the mean of the replicate experimental measurements of the fitness of genotype and the are normally distributed random numbers with -dependent standard deviations obtained from the replicate measurements .note that the influence of the measurement errors on the spectra is very small and only exceeds the symbol size for the highest component ( ) .at least for this case one can therefore safely exclude that the hoc - component of the spectrum is generated by measurement errors . as can be seen in fig . [fig : empirical](a ) , the spectrum of the yeast landscape is nicely fitted by an ansatz where only and are assumed to be different from zero .this is evidently a superposition of an additive and a hoc - landscape and therefore a rmf - landscape .only the value at seems too small to be fitted by the model .however , this value corresponds to a single component of the decomposition ( [ eq : ft ] ) and the large deviation may be due to the lack of averaging . also for the _a. 
niger _ landscape from a nice and sparse fit with nonzero coefficients , , and is obtained ( see fig.[fig : empirical](b ) ) .the significant value of implies that there are important interactions between pairs of loci .a rmf -landscape is therefore not an appropriate model of this system .note that this conclusion differs from the analysis presented in , where a reasonable fit to the rmf - model was found for a particular epistasis measure , the number of accessible pathways .this illustrates the importance of using more than one topographic measure for the comparison between empirical and model landscapes . for the spectrum of the 5-epi - aristolochene _n. tabacum _landscape from , the fitting yields reasonable results for an ansatz allowing only , , and to be different from 0 ( see fig . [fig : empirical](c ) ) .this might indicate that , apart from the non epistatic part and the simple pair interactions , there are one or several groups consisting of six strongly interacting alleles . using the same ansatz for the premnaspirodiene landscape yields less convincing results , as the large part of the spectrum seems to be poorly fitted ( see fig .[ fig : empirical](d ) ) . introducing more components into the fitting ansatz yields better results for this part of the spectrum , but such ansatzes can hardly be considered sparse anymore . using the full ansatz to fit the different landscapesdoes not yield any qualitative improvement for the first three landscapes and provides no evidence for an underlying continuous neighborhood size distribution . only for the premnaspirodiene _n. tabacum _ landscape does the fit for the spectrum improve notably , but the obtained spectrum does not support the idea of a continuous distribution of neighborhood sizes ( not shown ) . in general , such a continuous distribution is more likely to emerge for larger landscapes than the relatively small data sets considered here , which suffer from insufficient averaging over groups of loci of different sizes .one should be aware that failing to obtain a reasonable decomposition of an empirical landscape in terms of -spectra does not _ a priori _ rule out the possibility that the landscape is in fact shaped by the mechanisms assumed by a superposition of -models .for example , the failure may be due to an _ inappropriate _ fitness measure , in the following sense .suppose that there exists a fitness proxy , , whose decomposition in terms of -landscapes is sparse , but the proxy actually measured in experiments is , with being some nonlinear function .the decomposition of may then not be sparse anymore and the biological mechanism that shapes the landscape may be obscured .finally , it was checked whether any of the spectra are compatible with the expression ( [ eq : bexp ] ) corresponding to an exponentially decaying correlation function , but no reasonable correspondence was found .of course , this does not allow for the conclusion that exponentially decaying correlations are an unrealistic assumption .possibly , it may again be necessary to go to larger landscape sizes to see such behavior .also , the way in which the mutations constituting the landscape are selected may have an influence on the observed correlations ( see e.g. ) .exploiting the connection between amplitude spectra and fitness autocorrelation functions of fitness landscapes over the boolean hypercube , the amplitude spectrum of kauffman s -model was calculated exactly and found to be of the simple form ( [ eq : lkspectra ] ) . 
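as a practical complement, the two numerical steps used for the empirical landscapes above (projection of a complete landscape onto the eigenfunctions of the graph laplacian, i.e. the walsh functions, followed by a non-negative least-squares fit to model spectra) can be sketched in a few lines. the helper names and the two-column basis below are ours and purely illustrative; an actual analysis would use the exact model spectra derived in the text as basis functions.

```python
import numpy as np
from itertools import product
from scipy.optimize import nnls

def amplitude_spectrum(f, L):
    # normalized amplitude spectrum B_q (q = 1..L) of a complete landscape f,
    # given as an array of length 2**L; genotype g is the bit string (g_1,...,g_L)
    states = np.array(list(product([0, 1], repeat=L)))
    B = np.zeros(L + 1)
    for subset in product([0, 1], repeat=L):
        s = np.array(subset)
        walsh = (-1.0) ** (states @ s)            # eigenfunction of the graph Laplacian
        B[s.sum()] += (f @ walsh / 2 ** L) ** 2   # squared Fourier coefficient
    return B[1:] / B[1:].sum()

rng = np.random.default_rng(0)
L = 6
states = np.array(list(product([0, 1], repeat=L)))
additive = states @ rng.normal(size=L)            # non-epistatic landscape
hoc = rng.normal(size=2 ** L)                     # house-of-cards landscape
measured = amplitude_spectrum(0.7 * additive + 0.3 * hoc, L)

# non-negative least-squares fit of the measured spectrum to a (here: two-column)
# basis of model spectra, mimicking the constrained fits of the empirical data
basis = np.column_stack([amplitude_spectrum(additive, L), amplitude_spectrum(hoc, L)])
coeffs, _ = nnls(basis, measured)
print(measured.round(3), coeffs.round(3))
```

for the landscapes of six to nine loci discussed here the exhaustive loop over all subsets is cheap; for substantially larger systems a fast walsh-hadamard transform would be preferable.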
by superimposing -landscapes the spectra of rmf - type modelscould also be obtained .in addition , an -like model with a distribution of neighborhood sizes was introduced and its spectrum was calculated .such an extension of the -model is reasonable , because it can not be assumed in general that every locus interacts with the same number of other loci .this model thus offers more flexibility to fit experimental data . as a last example , the spectrum of a model with exponentially decaying correlations was computed .the hoc , rmf and -models are frequently used for analyzing evolutionary processes , classifying fitness landscape properties and fitting experimental data .therefore a lot of effort has been invested in the understanding of these models , but the link to experimental data is still rather weak .the amplitude spectra calculated in this article should facilitate quantitative comparisons in future studies .the spectra contain a large amount of information about the landscape topography , and it is important to understand how the spectrum encrypts this information in order to be able to interpret the spectra of measured fitness landscapes . as an exemplary application of our results ,four experimental landscapes were fitted by means of the model spectra .three of them could be fitted very nicely with sparse superpositions of -models , while for the fourth one the obtained fit seems less convincing . in none of the cases evidence for a continuous neighborhood size distribution found , which might be due to the small sizes of the landscapes discussed in this article .we claim that the fitting of amplitude spectra can be a useful tool for data analysis , but it has to be emphasized that the spectra can not be assigned to model landscapes in a unique way . also , the collection of models presented here is by no means exhaustive . obtaining analytical expressions for the amplitude spectra of other classes of fitness landscapes is desirable and should prove helpful in guiding the search for suitable models of experimental landscapes . finally , it is important to mention that there are interesting and biologically relevant properties of fitness landscapes that can not be obtained from their spectra , such as , for example , the number of local fitness maxima and the number of selectively accessible pathways . while it was shown in ref . that the ruggedness measure based on the fourier decomposition correlates with both quantities , there is no strict correspondence between these measures of epistatic interactions .amplitude spectra do not distinguish between different kinds of epistasis , i.e. magnitude , sign , or reciprocal sign epistasis , in a qualitative way .therefore , if one is interested in this distinction , other epistasis measures have to be included in the analysis .we thank b. schmiegelt , p.f .stadler and d.m .weinreich for useful discussions and correspondence , and d. hall for providing the original data of the _ s. cerevisiae _ landscape .this work was supported by dfg within sfb 680 , sfb - tr 12 , spp 1590 and the bonn cologne graduate school for physics and astronomy .to evaluate the expression ( [ bq_nk ] ) , an alternative but equivalent formulation for the krawtchouk polynomials is needed . 
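before following the algebra, the two krawtchouk identities invoked in this appendix (a standard form of the self-duality relation and the generating function) can be checked numerically; the sketch below uses the usual binary krawtchouk definition, and the helper name is ours.

```python
import numpy as np
from math import comb

def krawtchouk(k, x, n):
    # binary Krawtchouk polynomial K_k(x; n) in its standard summation form
    return sum((-1) ** j * comb(x, j) * comb(n - x, k - j) for j in range(k + 1))

n = 8
# self-duality:  C(n, x) K_k(x; n) = C(n, k) K_x(k; n)
assert all(comb(n, x) * krawtchouk(k, x, n) == comb(n, k) * krawtchouk(x, k, n)
           for k in range(n + 1) for x in range(n + 1))

# generating function:  sum_k K_k(x; n) z**k = (1 - z)**x * (1 + z)**(n - x)
for x in range(n + 1):
    gen = np.poly1d([-1, 1]) ** x * np.poly1d([1, 1]) ** (n - x)
    assert np.allclose(gen.coeffs[::-1], [krawtchouk(k, x, n) for k in range(n + 1)])
```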
with we obtain the summation over can be carried out using the identity which yields at this point we relax the condition ( [ convention ] ) of positivity on the entries of the binomial coefficients .this allows us to perform an ` upper negation ' in the first binomial factors in eq.([bq_sum2 ] ) , the remaining sum over can now be evaluated using the vandermonde identity , and with another upper negation we arrive at the final result ( [ eq : lkspectra ] ) .franke , j. , klzer , a. , de visser , j.a.g.m . & krug , j. ( 2011 ) .evolutionary accessibility of mutational pathways ._ plos comput biol _* 7(8 ) * , e1002134 .franke , j. & krug , j. ( 2012 ) . ._ j. stat .phys . _ * 148 * , 705722 .stoll , t. ( 2011 ) .reconstruction problems for graphs , krawtchouk polynomials , and diophantine equations . in _ structural analysis of complex networks _ , ed . by m. dehmer ( birkhuser , boston ) pp. 293317 .fontana , w. , stadler , p.f ., bornberg - bauer , e.g. , griesmacher , t. , hofacker , i.l . , tacker , m. , tarazona , p. , weinberger , e.d . & schuster , p. ( 1993 ) .rna folding and combinatory landscapes .e _ * 47 * , 20832099 .
starting from fitness correlation functions, we calculate exact expressions for the amplitude spectra of fitness landscapes as defined by p.f. stadler [j. math. chem. 20, 1 (1996)] for common landscape models, including kauffman's -model, rough mount fuji landscapes and general linear superpositions of such landscapes. we further show that correlations decaying exponentially with the hamming distance yield exponentially decaying spectra similar to those reported recently for a model of molecular signal transduction. finally, we compare our results for the model systems to the spectra of various experimentally measured fitness landscapes. we claim that our analytical results should be helpful when trying to interpret empirical data and guide the search for improved fitness landscape models.
fitness landscapes, sequence space, epistasis, fourier decomposition, experimental evolution
full diversity and low decoding complexity have been considered as two fundamental properties which a good space - time block code ( stbc ) should possess for multiple - input multiple - output ( mimo ) wireless communications .the first orthogonal stbc ( ostbc ) was proposed by alamouti which can achieve full transmit diversity for two transmit antennas .inspired by the alamouti scheme , seminal studies focused on the designs of ostbc for its unique orthogonal code structure which ensures a single symbol maximum likelihood ( ml ) decoding .however , ostbc suffers from the reduced symbol rate with an increase of the number of transmit antennas , especially when complex constellations are used . in spite of full diversity advantage , ostbc fails to achieve full channel capacity in mimo channels . to address the problem of low symbol rate and capacity loss in ostbc , linear dispersion code ( ldc )was proposed as a full - diversity scheme that is constructed linearly in space and time .the ldc design can be viewed as a linear combination of a fixed set of dispersion matrices with the transmitted symbols ( or equivalently , combining coefficients ) .diagonal algebraic space - time ( dast ) block codes in and threaded algebraic space - time ( tast ) codes in were also proposed as two typical algebraic designs which can obtain both full diversity and full rate with moderate ml decoding complexity .however , it is noted that the aforementioned high - rate codes rely on ml decoding to collect full diversity which has high decoding complexity .efficient designs of stbc with low decoding complexity were proposed , such as coordinate interleaved orthogonal design ( ciod ) with single - symbol ml decoding in and quasi - orthogonal stbc ( qostbc ) in , but their rates are restricted by the rates of ostbc .recently , full diversity achieving stbc based on linear receivers , such as the minimum mean square error ( mmse ) receiver and zero - forcing ( zf ) receiver , were studied and proposed , .however , it was shown in that the rates of these stbc based on linear receivers are not more than one . to address the complexity and rate tradeoff , a general decoding scheme with code design criterion , referred to as partial interference cancellation ( pic ) group decoding algorithm , was proposed in . in the pic group decoding ,the symbols to be decoded are divided into several groups after a linear pic operation and then each group is decoded separately with ml decoding .therefore , the pic group decoding can be viewed as an intermediate decoding between ml decoding and zf decoding . apparently, pic group decoding complexity depends on the number of symbols to be decoded in each group .moreover , a successive interference cancellation ( sic)-aided pic group decoding was proposed in .based on the design criterion of stbc with pic group decoding derived in , a systematic design of stbc achieving full diversity under pic group decoding was developed in . in subsequent work , a new design of stbc having an alamouti - toeplitz structure was proposed in which provides a lower pic decoding complexity compared with the design in .however , the decoding complexity of the stbc in is equivalent to a joint decoding of complex symbols . in this paper, we propose a design of stbc with pic group decoding that can achieve both full diversity and low decoding complexity .the decoding complexity is equal to a joint decoding of _ real _ symbols for transmit antennas , i.e. , only half decoding complexity of the stbc in . 
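for concreteness, the alamouti scheme cited above, which attains full transmit diversity with two antennas and single complex symbol ml decoding, can be sketched as follows; the detector, the qpsk constellation and the noise level are illustrative assumptions and not part of any of the cited designs.

```python
import numpy as np

rng = np.random.default_rng(1)

def alamouti_detect(y, h, constellation):
    # single-symbol ML detection for the 2x1 Alamouti scheme over two symbol periods
    h1, h2 = h
    y1, y2 = y
    z1 = np.conj(h1) * y1 + h2 * np.conj(y2)      # decision statistic for s1
    z2 = np.conj(h2) * y1 - h1 * np.conj(y2)      # decision statistic for s2
    gain = abs(h1) ** 2 + abs(h2) ** 2
    return (constellation[np.argmin(np.abs(z1 - gain * constellation))],
            constellation[np.argmin(np.abs(z2 - gain * constellation))])

qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
s1, s2 = qpsk[rng.integers(0, 4, size=2)]
h = ((rng.normal() + 1j * rng.normal()) / np.sqrt(2),
     (rng.normal() + 1j * rng.normal()) / np.sqrt(2))
noise = 0.05 * (rng.normal(size=2) + 1j * rng.normal(size=2))
y = (h[0] * s1 + h[1] * s2 + noise[0],                      # period 1: (s1, s2) sent
     -h[0] * np.conj(s2) + h[1] * np.conj(s1) + noise[1])   # period 2: (-s2*, s1*) sent
print(alamouti_detect(y, h, qpsk), (s1, s2))
```

the two decision statistics decouple because the combining step reduces the channel to a single scaled gain per symbol, which is the property the higher-rate designs discussed in this paper try to retain at lower decoding cost.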
for the proposed stbc , real and imaginary parts of information symbols are parsed into diagonal layers and encoded by linear transform matrices , respectively .the full diversity can be achieved by the proposed stbc with under pic group decoding and with any under pic - sic group decoding , respectively .the code rate is equal to . in particular , for transmit antennas the code has real symbol pairwise ( i.e. single complex symbol ) decoding .furthermore , the code rate is .it should be noted that the existing stbc with single complex symbol ( or real symbol pairwise ) decoding , such as qostbc , , and ciod etc . in symbol rates not larger than one .also the codes with linear receivers have single complex symbol decoding but their rates can not be above one either .simulation results show that the proposed code outperforms the ciod in and the qostbc with the optimal rotation in for transmit antennas at the same bandwidth efficiency .moreover , our code guarantees full diversity without performance loss compared with other pic group decoding based stbc in , and , but a half decoding complexity is reduced .it should be mentioned that the major difference between the code in and the one proposed in this paper is that a complex - valued linear transform matrix is used for input complex signal vector to construct the code in , whereas in this paper two real - valued linear transform matrices are used for real and imaginary parts of the signals , respectively . by doing so , half decoding complexity can be reduced .the rest of this paper is organized as follows .the system model is outlined in section ii . in section iii ,a systematic design of stbc is proposed and a few code design examples are also given .the full diversity is proved under pic group decoding in section iv . in sectionv , simulation results are presented .finally , we conclude the paper in section vi .the following notations are used throughout this paper .column vectors ( matrices ) are denoted by boldface lower ( upper ) case letters .superscripts , and stand for conjugate , transpose , and conjugate transpose , respectively . denotes the field of complex numbers and denotes the real field . denotes the identity matrix , and denotes the matrix whose elements are all .additionally , and represent the real part and the imaginary part of variables , respectively .consider a mimo system with transmit and receive antennas .data symbols are first encoded into a space - time block code of size where is block length of the codeword . in this paper , can be represented in a general dispersion form as follows : where the data symbols are selected from a normalized complex constellation such as qam , and are constant matrices called dispersion matrices . then , the received signal from antennas can be arranged in a matrix as follows where is the channel matrix of size with the entries being independent and identically distributed ( i.i.d ) .the channels are assumed to experience the quasi - static fading . is the noise matrix whose elements are also i.i.d distributed . denotes the average signal - to - noise ratio ( snr ) per receive antenna , and the transmitted power is normalized by the factor such that the average energy of the coded symbols transmitting from all antennas during one symbol period is one .we suppose that channel state information is available at receiver only .to decode the transmitted sequence , we need to extract from . 
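a minimal sketch of this transmission model for a linear dispersion code is given below; the dispersion convention X = sum_i (Re(s_i) A_i + j Im(s_i) B_i), the placement of the normalization sqrt(rho/mu) and the alamouti dispersion matrices used as an example are assumptions made for illustration only.

```python
import numpy as np

rng = np.random.default_rng(2)

def received_block(s, A, B, H, rho, mu):
    # quasi-static block-fading model: Y = sqrt(rho/mu) * X @ H + W, with the
    # linear dispersion codeword X = sum_i (Re(s_i) * A_i + 1j * Im(s_i) * B_i)
    X = sum(si.real * Ai + 1j * si.imag * Bi for si, Ai, Bi in zip(s, A, B))
    W = (rng.normal(size=(X.shape[0], H.shape[1]))
         + 1j * rng.normal(size=(X.shape[0], H.shape[1]))) / np.sqrt(2)
    return np.sqrt(rho / mu) * X @ H + W

# Alamouti code written in dispersion form (T = M = 2, two complex symbols)
A = [np.eye(2), np.array([[0., 1.], [-1., 0.]])]
B = [np.diag([1., -1.]), np.array([[0., 1.], [1., 0.]])]
s = np.array([1 + 1j, -1 + 1j]) / np.sqrt(2)
H = (rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))) / np.sqrt(2)   # M x N channel
Y = received_block(s, A, B, H, rho=10.0, mu=2.0)                            # T x N block
```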
through some operations, we can get an equivalent signal model from ( [ ry ] ) as where is a received signal vector , is a noise vector , and is an equivalent channel matrix . denote , , and .then , we can rewrite ( [ eqyc ] ) as a real matrix form given by = \sqrt{\frac{\rho}{\mu}}\mathcal{h } \left[\begin{array}{c } \mathbf{s}_r\\ \mathbf{s}_i \end{array}\right ] + \left[\begin{array}{c } \mathbf{w}_r\\ \mathbf{w}_i \end{array}\right ] , \end{aligned}\ ] ] where has real column vectors for . in ,a new decoding scheme was proposed , referred to as pic group decoding which aims to address the rate and complexity tradeoff of the code while achieving full diversity . in the pic group decoding, the equivalent channel matrix is divided into a number of column groups with columns for group , , and .then , for group a group zf is applied to cancel the interferences coming from all the other groups , i.e. , , followed by a joint decoding of symbols corresponding to the group .note that the interference cancellation ( i.e. , the group zf ) mainly involves with linear matrix computations , whose computational complexity is small compared to the joint decoding with an exhaustive search of all candidate symbols in one group . to evaluate the decoding complexity of the pic group decoding, we mainly focus on the computational complexity of the joint decoding of each group under the pic group decoding algorithm .the joint decoding complexity can be characterized by the number of frobenius norms calculated in the decoding process . in the pic groupdecoding algorithm , the complexity is then .it can be seen that the pic group decoding provides a flexible decoding complexity which can vary from the zf decoding complexity to the ml decoding complexity .an sic - aided pic group decoding algorithm , namely pic - sic group decoding was also proposed in .similar to the blast detection algorithm , the pic - sic group decoding is performed after removing the already - decoded symbol set from the received signals to reduce the interference .if each group has only one symbol , then the pic - sic group decoding will be equivalent to the blast detection . in ,full - diversity stbc design criteria were derived when the pic group decoding and the pic - sic group decoding are used at the receiver . in the following ,we cite the main results of the stbc design criteria proposed in .[ prop1 ] ( * ? ? ?* theorem 1 ) [ _ full - diversity criterion under pic group decoding _ ] for an stbc with the pic group decoding , the full diversity is achieved when 1 . the code satisfies the full rank criterion , i.e. , it achieves full diversity when the ml receiver is used ; _ and _ 2 . forany , , any non - zero linear combination over of the vectors in the group does not belong to the space linearly spanned by all the vectors in the remaining vector groups , as long as , i.e. , where is the index set corresponding to the vector group and .[ prop2 ] [ _ full - diversity criterion under pic - sic group decoding _ ] for an stbc with the pic - sic group decoding , the full diversity is achieved when 1 . the code satisfies the full rank criterion , i.e. , it achieves full diversity when the ml receiver is used ; _ and _ 2 . 
at each decoding stage , for , which corresponds to the current to - be decoded symbol group , any non - zero linear combination over of the vectors in does not belong to the space linearly spanned by all the vectors in the remaining groups corresponding to yet uncoded symbol groups , as long as .in this section , a systematic design of linear dispersion stbc is presented and two design examples are given for four and six transmit antennas , respectively .suppose that is even .our proposed stbc is of size , and given by where the codeword matrices for and are given by , ~\mathbf{b}_{m , t , p}\label{b } = \left[\begin{array}{cc } \mathbf{c}^1_i & \mathbf{c}^2_i \\\mathbf{c}^2_i & -\mathbf{c}^1_i \end{array}\right].\end{aligned}\ ] ] note that and are both real matrices of size ( ) . and are real and imaginary parts of which is given by ,\end{aligned}\ ] ] with and the th diagonal layer from left to right written as the vector , given by ^t.\end{aligned}\ ] ] moreover , the real and imaginary parts of are given by respectively where can be different linear transform matrices chosen from and , and the vector are given by ^t.\end{aligned}\ ] ] the rate of the code is which is the same as that of stbcs with pic group decoding proposed in and .it should be mentioned that the code structure ( [ new ] ) is similar to the one in .the main difference is that in a linear transform matrix is used for input complex symbol vectors in the code construction and the matrix does not have to be real - valued , whereas in the design of in ( [ new ] ) , two real linear transform matrices and are used .later , we will see that the proposed code in ( [ new ] ) with pic group decoding of real - valued signals yields lower decoding complexity than the one in .\1 ) for four transmit antennas consider the case with transmit antennas . according to the design in ( [ new ] ) , we have where , \end{aligned}\ ] ] with ^t = \mathbf{\theta}_{a } \left[\begin{array}{cc } s_{\{2(i-1)+1\},r } & s_{\{2(i-1)+2\},r } \end{array}\right]^t ] for . for simplicity , the same linear transform matrix is used for and as ,\end{aligned}\ ] ] with .then , the codeword matrix of is written as , \end{aligned}\ ] ] the rate of the code is and equal to that of in ( * ? ? ?* eq . ( 29 ) ) and in ( * ? ? ?* eq . ( 37 ) ) .\2 ) for six transmit antennas for given ,the code with six transmit antennas is designed as follows where \end{aligned}\ ] ] with = \mathbf{\theta}_a \left[\begin{array}{c } s_{\{3(i-1)+1\},r}\\ s_{\{3(i-1)+2\},r}\\ s_{\{3(i-1)+3\},r } \end{array}\right],\end{aligned}\ ] ] and \end{aligned}\ ] ] with = \mathbf{\theta}_b \left[\begin{array}{c } s_{\{3(i-1)+1\},i}\\ s_{\{3(i-1)+2\},i}\\ s_{\{3(i-1)+3\},i } \end{array}\right],\end{aligned}\ ] ] for .the same linear transform matrix is used for and as \end{aligned}\ ] ] then , the codeword can be written as .\end{split}\ ] ] the code rate for is .in this section , we prove that our proposed stbc can obtain full diversity under pic group decoding and have a lower decoding complexity compared with and .define as the difference between symbols and .following the proof of ( * ? ? ?* theorem 1 ) , three cases should be considered separately in terms of and as follows 1 . 
both and [ case1 ] +consider .after some row / column permutations , a different codeword matrix can be written as follows \ ] ] where is a matrix and ,i=1,2 .\end{aligned}\ ] ] + from ( [ roxir ] ) and ( [ roxii ] ) , we deduce that there exists at least one vector such that , , because the signal space diversity is obtained from the linear transform matrix .then , we have that is full rank , which can be proved with a similar proof given in ( * ? ? ?* theorem 1 ) .hence , can guarantee full diversity with ml decoding .likewise , it is obvious that can also achieve full diversity since .+ therefore , the code can achieve full diversity under ml decoding . only [ case2 ] + as we mentioned in case [ case1 ] ) , can achieve full diversity under ml decoding if . considering forms the real part in ( [ new ] ) , the code can achieve full diversity under ml decoding . only + similar to case [ case2 ] ) , being the imaginary part of our proposed code can achieve full diversity under ml decoding , which is sufficient to prove that has a property of full diversity . by observing all three cases ,we conclude that the proposed code in ( [ new ] ) can achieve full diversity under ml decoding .compared with the pic grouping schemes derived in and , the separated linear transform of real and imaginary parts of the information symbols in the proposed code contributes to the real symbol decoding . in the following ,we show the main result of the proposed stbc when a pic group decoding with a particular grouping scheme is used at the receiver , as follows .[ tpic ] consider a mimo system with transmit antennas and receive antennas over block fading channels .the stbc as describe in ( [ new ] ) with two diagonal layers in each submatrix is used at the transmitter .the real equivalent channel matrix is .if the received signal is decoded using the pic group decoding with the grouping scheme , where for , i.e. , the size of each real group is equal to , then the code achieves the full diversity .[ m4co ] for the proposed code with transmit antennas in ( [ m4 ] ) , real symbol pairwise ml decoding is achieved in each group , which is equivalent to single complex symbol ml decoding .table i shows the comparison of pic group decoding complexity between the new code in ( [ m4 ] ) and the codes in and . according to this table ,it is obvious that the proposed code for transmit antennas further reduce the decoding complexity to real symbol pairwise ( i.e. , single complex symbol ) decoding in each pic group . in order to prove _ theorem [ tpic ]_ , let us first introduce the following definition and lemma .[ orde ] let be groups of vectors .vector groups are said to be orthogonal if for , is orthogonal to the remaining vector groups . 
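this group-wise orthogonality is easy to verify numerically for a given equivalent channel realization; the small helper below (our own naming, used only for illustration) simply tests that every cross gram block vanishes.

```python
import numpy as np

def groups_orthogonal(G, groups, tol=1e-10):
    # definition [orde]: the column groups of G are mutually orthogonal iff every
    # cross Gram block G_a^T G_b (a != b) is numerically zero
    for a in range(len(groups)):
        for b in range(a + 1, len(groups)):
            if np.max(np.abs(G[:, groups[a]].T @ G[:, groups[b]])) > tol:
                return False
    return True
```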
[ leq ] consider the system described in _ theorem [ tpic ] _ with as follows = \sqrt{\frac{\rho}{\mu}}\mathcal{h } \left[\begin{array}{c } \mathbf{s}^1_{1,r}\\ \mathbf{s}^1_{2,r}\\ \mathbf{s}^2_{1,r } \\\mathbf{s}^2_{2,r}\\ \mathbf{s}^1_{1,i}\\ \mathbf{s}^1_{2,i}\\ \mathbf{s}^2_{1,i}\\ \mathbf{s}^2_{2,i } \end{array } \right ] + \left[\begin{array}{c } \mathbf{w}_{r}^1\\ \mathbf{w}_{r}^2\\ \mathbf{w}_{i}^1\\ \mathbf{w}_{i}^2 \end{array}\right],\end{aligned}\ ] ] where the vector are given by ( [ eqn : sip ] ) for and .the equivalent channel matrix is expressed as \\ & = & \left[\begin{array}{cccccccc}\label{picl } \mathbf{g}_1 & \mathbf{g}_2 & \ldots & \mathbf{g}_8 \end{array } \right],\end{aligned}\ ] ] where , { \rm{and}~ } \mathcal{h}_{p , i}^i=\left[\begin{array}{c } \mathbf{0}_{(p-1)\times ( m/2)}\\ \mathrm{diag}(\mathbf{h}_i^{i})\\ \mathbf{0}_{(2-p)\times ( m/2 ) } \end{array } \right],\end{aligned}\ ] ] for and .the channel coefficient vector is evenly divided into two groups with ^t ] . is the channel gain from the transmit antenna to the single receive antenna for . a proof of _ lemma 1 _ is given in appendix .note that ( * ? ? ?* corollary 1 ) proves that the full diversity conditions only need to be proved for one receive antenna case .thus , we only consider the miso system model ( i.e. ) . first , after some column / row permutations , ( [ lemmaeq ] )can be rewritten as & [ \mathbf{g}_5^ { ' } & \mathbf{g}_7^ { ' } ] & [ \mathbf{g}_2^ { ' } & \mathbf{g}_4^ { ' } ] & [ \mathbf{g}_6^ { ' } & \mathbf{g}_8^ { ' } ] \end{array } \right],\nonumber\\ & = & \left[\begin{array}{cccc } \mathcal{f}^{r}_1 & -\mathcal{f}^{i}_1 & \mathbf{0}_{2\times m } & \mathbf{0}_{2\times m } \\\mathcal{f}^{i}_1 & \mathcal{f}^{r}_1 & \mathbf{0}_{2\times m } & \mathbf{0}_{2\times m } \\ \mathcal{f}^{r}_2 & -\mathcal{f}^{i}_2 & \mathcal{f}^{r}_1 & -\mathcal{f}^{i}_1 \\\mathcal{f}^{i}_2 & \mathcal{f}^{r}_2 & \mathcal{f}^{i}_1 & \mathcal{f}^{r}_1\\ \vdots & \vdots & \mathcal{f}^{r}_2 & -\mathcal{f}^{i}_2\\ \vdots & \vdots & \mathcal{f}^{i}_2 & \mathcal{f}^{r}_2 \\ \mathcal{f}^{r}_{\frac{m}{2 } } & -\mathcal{f}^{i}_{\frac{m}{2 } } & \vdots & \vdots \\ \mathcal{f}^{i}_{\frac{m}{2 } } & \mathcal{f}^{r}_{\frac{m}{2 } } & \vdots & \vdots \\ \mathbf{0}_{2\times m } & \mathbf{0}_{2\times m } & \mathcal{f}^{r}_{\frac{m}{2 } } & -\mathcal{f}^{i}_{\frac{m}{2}}\\ \mathbf{0}_{2\times m } & \mathbf{0}_{2\times m } & \mathcal{f}^{i}_{\frac{m}{2 } } & \mathcal{f}^{r}_{\frac{m}{2 } } \end{array } \right],\end{aligned}\ ] ] where both and are real matrix given by ,\\ \mathcal{f}^i_j&=&\left[\begin{array}{cc}\label{fi } \mathbf{f}_{j , j_i } & \mathbf{f}_{j,{\{j+\frac{m}{2}\}}_i } \\ -\mathbf{f}_{j,{\{j+\frac{m}{2}\}}_i } & \mathbf{f}_{j , j_i } \end{array } \right],\end{aligned}\ ] ] for . is a real vector with being the row of the linear transform matrix for and with and being the real and imaginary part of for , respectively .it is worthwhile to mention that from ( [ lemmaeq ] ) , in ] are related to , while in ] are associated with . in appendix, it is shown that the orthogonality between each groups is irrelevant to the linear transform matrices and .therefore , for simplicity is used for both and .next , we prove that any non - zero linear combination of the vectors in over does not belong to the space linearly spanned by all the vectors in the vector groups . for any ,i.e. 
, where is a column vector .for any nonzero , we have following three cases .a ) : : if and , then it must exist a minimum index ( ) such that is nonzero and a minimum index ( ) such that is nonzero. therefore , must be all zeros and must be all zeros , too .b ) : : if and , then it must exist a minimum index ( ) such that is nonzero. therefore , must be all zeros and must be all zeros , too .c ) : : and , then it must exist a minimum index ( ) such that is nonzero .therefore , must be all zeros and must be all zeros , too .next , we first focus on the case of a ) .the proof is presented in terms of and . a1 ) : : + in this case , ( [ eqhp1 ] ) can be expressed as .\end{aligned}\ ] ] where . by observing the row to the row in ( [ eqhp2 ] ) ,the vector groups are all zeros , and is orthogonal to the vector groups .thus , it is obvious that in these four rows , any non - zero linear combination of the vectors in over does not belong to the space linearly spanned by all the vectors in the vector groups .+ furthermore , according to _ definition [ orde ] _ , the vector groups are orthogonal in these four rows .consequently , in these four rows , any non - zero linear combination of the vectors in over does not belong to the space linearly spanned by all the vectors in the vector groups .considering all rows in ( [ eqhp2 ] ) , any non - zero linear combination of the vectors in over does not belong to the space linearly spanned by all the vectors in the vector groups .a2 ) : : + in this case , ( [ eqhp1 ] ) can be expressed as .\end{aligned}\ ] ] + it is seen that from the row to the row in ( [ eqhp3 ] ) the groups are all zeros .similarly , we have that in these four rows , any non - zero linear combination of the vectors in over does not belong to the space linearly spanned by all the vectors in the vector groups .additionally , the vector groups are orthogonal .similar to case a1 ) , we have that any non - zero linear combination of the vectors in over does not belong to the space linearly spanned by all the vectors in the vector groups .a3 ) : : + in this case , ( [ eqhp1 ] ) can be expressed as + .\end{aligned}\ ] ] + as for this case , the vector groups are all zeros from the row to the row , and the vector groups are orthogonal in ( [ eqhp4 ] ) . similar to the proof for case a1 ) , we have that any non - zero linear combination of the vectors in over does not belong to the space linearly spanned by all the remaining vectors in . to summarize all the casesa1)-a3 ) , we then conclude that for and any non - zero linear combination of the vectors in over does not belong to the space linearly spanned by all the vectors in the vector groups .if the case b ) occurs , i.e. , and , then ( [ eqhp1 ] ) can be written as a similar form to ( [ eqhp4 ] ) by replacing by for all .the proof is the same as that of case a3 ) .if the case c ) occurs , i.e. 
, and , then ( [ eqhp1 ] ) can be written as a similar form to ( [ eqhp3 ] ) by replacing by for all .the proof is the same as that of case a2 ) .therefore , we have proved that for any any non - zero linear combination of the vectors in over does not belong to the space linearly spanned by all the vectors in the vector groups .similarly , we can prove that any non - zero linear combination of the vectors in over does not belong to the space linearly spanned by all the vectors in the remaining vector groups , for .note that is a row permutation of for , respectively .we prove that any non - zero linear combination of the vectors in over does not belong to the space linearly spanned by all the vectors in the remaining vector groups , for . according to _ proposition 1 _ , the proof of _ theorem [ tpic ] _is completed . in the preceding discussion , the new code in ( [ new ] )is proved to achieve the full diversity under pic group decoding when only . in the following, we will further show that with any value can obtain full diversity under pic - sic group decoding .consider a mimo system with transmit antennas and receive antennas over block fading channels .the stbc as described in ( [ new ] ) with diagonal layers is used at the transmitter .the equivalent channel matrix is .if the received signal is decoded using the pic - sic group decoding with the grouping scheme and with the sequential order , where for , i.e. , the size of each real group is equal to , then the code achieves the full diversity . the code rate of the full - diversity stbc can be up to symbols per channel use .the proof is similar to that of _ theorem 1_. note that for the code in _ lemma 1 _ can be written as an alternative form similar to the one in ( [ eqhp1 ] ) except the expansion of column dimensions .with aid of _ proposition 2 _ , it is simple to follow the proof for the case of in section iv - b to prove _ theorem 2_. the detailed proof is omitted .in this section , we present some simulation results for four transmit antennas and four receive antennas . in all simulations , the channel model follows that described in section [ model ] . in fig .[ fig:1 ] , four kinds of stbcs are compared : guo - xia s code proposed in ( * ? ? ?* eq . ( 40 ) ) , in ( * ? ? ?* eq . ( 29 ) ) , in ( * ? ? ?* eq . ( 37 ) ) and the new code given in ( [ m4 ] ) .note that all the codes presented in fig .[ fig:1 ] have the same rate of , and 64-qam constellation is used so that we keep the same bandwidth efficiency of 8 bps / hz for each code .[ fig:1 ] shows the bit error rate ( ber ) for four codes based on pic group decoding .firstly , as expected guo - xia s code , , and the new code can achieve full diversity at high snr .then , one can observe that has a very similar performance to and guo - xia s code since we use the same real linear transform matrix for the case .however , compared with and guo - xia s code , the code further increases the number of pic groups and allows two real symbols ( i.e. single complex symbol ) to be decoded in each pic group without performance loss . in fig .[ fig:2 ] , ciod of rate 1 in ( * ? ? ?* eq . ( 85 ) ) and qostbc of rate 1 in ( * ? ? ?* eq . ( 39 ) ) with ml decoding are compared with the code with pic group decoding . in order to make a fair performance comparison , the symbols are chosen from a 256qam signal set for ciod and qostbc , and 64qam for the code .thus , the code has the same bandwidth efficiency with ciod and qostbc at bps / hz . 
note that qostbc with optimal transformation has a very similar performance to ciod .moreover , one observe that the code outperforms both ciod and qostbc by db . as for this case , the decoding complexity of new code ( real symbols pair - wise ) is equivalent to that of qostbc ( real symbols pairwise ml decoding ) and ciod ( single complex symbol ml decoding ) .[ fig:3 ] presents the performance comparison between the code in and the proposed code with pic and pic - sic group decoding , respectively . here, 64qam is used to keep the same bandwidth efficiency of bps / hz .it can be observed that has a very similar performance to under both pic and pic - sic group decoding .in addition , it is shown that both and can achieve full diversity under pic - sic group decoding , but lose full diversity when pic group decoding is employed which is validated by theorem 1 .in this paper , we proposed a systematic design of stbc that can achieve full diversity with the pic group decoding . by coding the real and imaginary parts of the complex symbols vector independently , the proposed code has a reduced pic group decoding complexity , which is equivalent to a joint decoding of real symbols for transmit antennas .the full diversity of the proposed stbc with diagonal layers was proved for pic group decoding with and pic - sic group decoding with any , respectively .it is worthwhile to mention that for transmit antennas the code admits real symbol pairwise decoding and the code rate is .simulation results show that our proposed code can achieve full diversity with a lower decoding complexity than other existing codes .consider the system described in _ theorem [ tpic ] _ with receive antenna . according to the system model given in ( [ ry ] ) , the matrix form of represented as with the expansion of ( [ eqn : y ] ) , we rewrite as a matrix form we substitute the codeword matrices ( [ ab ] ) into ( [ eqn : yr1 ] ) and([eqn : yi1 ] ) .then , we can obtain \\ = & \sqrt{\frac{\rho}{\mu } } \left(\left[\begin{array}{cc } \mathbf{c}_r^{1 } & \mathbf{c}_r^{2}\\ -\mathbf{c}^{2}_r & \mathbf{c}^{1}_r\\ \end{array}\right ] \left[\begin{array}{c } \mathbf{h}_{r}^1\\ \mathbf{h}_{r}^2 \end{array}\right ] - \left[\begin{array}{cc } \mathbf{c}_i^{1 } & \mathbf{c}_i^{2}\\ \mathbf{c}^{2}_i & -\mathbf{c}^{1}_i\\ \end{array}\right ] \left[\begin{array}{c }\mathbf{h}_{i}^1\\ \mathbf{h}_{i}^2 \end{array}\right]\right ) + \left[\begin{array}{c } \mathbf{w}_{r}^1\\ \mathbf{w}_{r}^2 \end{array}\right]\\ = & \sqrt{\frac{\rho}{\mu } } \left[\begin{array}{c } \mathbf{c}_r^{1}\mathbf{h}_{r}^1+\mathbf{c}_r^{2 } \mathbf{h}_{r}^2-\mathbf{c}_i^{1}\mathbf{h}_{i}^1-\mathbf{c}_i^{2 } \mathbf{h}_{i}^2\\ -\mathbf{c}_r^{2}\mathbf{h}_{r}^1+\mathbf{c}_r^{1 } \mathbf{h}_{r}^2-\mathbf{c}_i^{2}\mathbf{h}_{i}^1+\mathbf{c}_i^{1 } \mathbf{h}_{i}^2 \end{array}\right ] + \left[\begin{array}{c } \mathbf{w}_{1,r}\\ \mathbf{w}_{2,r } \end{array}\right ] , \end{split}\ ] ] \\ = & \sqrt{\frac{\rho}{\mu } } \left(\left[\begin{array}{cc } \mathbf{c}_r^{1 } & \mathbf{c}_r^{2}\\ -\mathbf{c}^{2}_r & \mathbf{c}^{1}_r \end{array}\right ] \left[\begin{array}{c } \mathbf{h}_{i}^1\\ \mathbf{h}_{i}^2 \end{array}\right ] + \left[\begin{array}{cc } \mathbf{c}_i^{1 } & \mathbf{c}_i^{2}\\ \mathbf{c}^{2}_i & -\mathbf{c}^{1}_i\\ \end{array}\right ] \left[\begin{array}{c } \mathbf{h}_{r}^1\\ \mathbf{h}_{r}^2 \end{array}\right]\right ) + \left[\begin{array}{c } \mathbf{w}_{i}^1\\ \mathbf{w}_{i}^2 \end{array}\right]\\ = & \sqrt{\frac{\rho}{\mu } } \left[\begin{array}{c } 
\mathbf{c}_r^{1}\mathbf{h}_{i}^1+\mathbf{c}_r^{2 } \mathbf{h}_{i}^2+\mathbf{c}_i^{1}\mathbf{h}_{r}^1+\mathbf{c}_i^{2 } \mathbf{h}_{r}^2\\ -\mathbf{c}_r^{2}\mathbf{h}_{i}^1+\mathbf{c}_r^{1 } \mathbf{h}_{i}^2+\mathbf{c}_i^{2}\mathbf{h}_{r}^1-\mathbf{c}_i^{1 } \mathbf{h}_{r}^2 \end{array}\right ] + \left[\begin{array}{c } \mathbf{w}_{1,i}\\ \mathbf{w}_{2,i } \end{array}\right ] , \end{split}\ ] ] where , and .let ] .furthermore , according to the code structure in ( [ c ] ) , ( [ eqn : yr ] ) and ( [ eqn : yi ] ) can be rewritten as \\ = & \sqrt{\frac{\rho}{\mu}}\left[\begin{array}{c } \sum^2_{p=1}\mathbf{c}_{p , r}^{1}\mathbf{h}_{r}^1+\sum^2_{p=1}\mathbf{c}_{p , r}^{2 } \mathbf{h}_{r}^2-\sum^2_{p=1}\mathbf{c}_{p , i}^{1}\mathbf{h}_{i}^1-\sum^2_{p=1}\mathbf{c}_{p , i}^{2 } \mathbf{h}_{i}^2\\ -\sum^2_{p=1}\mathbf{c}_{p , r}^{2}\mathbf{h}_{r}^1+\sum^2_{p=1}\mathbf{c}_{p , r}^{1 } \mathbf{h}_{r}^2-\sum^2_{p=1}\mathbf{c}_{p , i}^{2}\mathbf{h}_{i}^1+\sum^2_{p=1}\mathbf{c}_{p , i}^{1 } \mathbf{h}_{i}^2 \end{array}\right ] + \left[\begin{array}{c } \mathbf{w}_{r}^1\\ \mathbf{w}_{r}^2 \end{array}\right ] , \end{split}\ ] ] \\ = & \sqrt{\frac{\rho}{\mu } } \left[\begin{array}{c } \sum^2_{p=1}\mathbf{c}_{p , r}^{1}\mathbf{h}_{i}^1+\sum^2_{p=1}\mathbf{c}_{p , r}^{2 } \mathbf{h}_{i}^2+\sum^2_{p=1}\mathbf{c}_{p , i}^{1}\mathbf{h}_{r}^1+\sum^2_{p=1}\mathbf{c}_{p , i}^{2 } \mathbf{h}_{r}^2\\ -\sum^2_{p=1}\mathbf{c}_{p , r}^{2}\mathbf{h}_{i}^1+\sum^2_{p=1}\mathbf{c}_{p , r}^{1 } \mathbf{h}_{i}^2+\sum^2_{p=1}\mathbf{c}_{p , i}^{2}\mathbf{h}_{r}^1-\sum^2_{p=1}\mathbf{c}_{p , i}^{1 } \mathbf{h}_{r}^2 \end{array}\right ] + \left[\begin{array}{c } \mathbf{w}_{i}^1\\ \mathbf{w}_{i}^2 \end{array}\right ] , \end{split}\ ] ] where ,\ , p=1,2;\,\ , i=1,2 .\end{aligned}\ ] ] equivalently , we have = \left[\begin{array}{c } \mathbf{w}_{r}^1\\ \mathbf{w}_{r}^2 \end{array}\right]+\\ & \sqrt{\frac{\rho}{\mu}}\left[\begin{array}{c } \mathcal{h}^1_{1,r}\mathbf{x}_{1,r}^{1}+\mathcal{h}^1_{2,r}\mathbf{x}_{2,r}^{1 } + \mathcal{h}^2_{1,r}\mathbf{x}_{1,r}^{2}+\mathcal{h}^2_{2,r}\mathbf{x}_{2,r}^{2}- \mathcal{h}^1_{1,i}\mathbf{x}_{1,i}^{1}-\mathcal{h}^1_{2,i}\mathbf{x}_{2,i}^{1 } -\mathcal{h}^2_{1,i}\mathbf{x}_{1,i}^{2}-\mathcal{h}^2_{2,i}\mathbf{x}_{2,i}^{2}\\ -\mathcal{h}^1_{1,r}\mathbf{x}_{1,r}^{2}-\mathcal{h}^1_{2,r}\mathbf{x}_{2,r}^{2}+\mathcal{h}^2_{1,r}\mathbf{x}_{1,r}^{1}+\mathcal{h}^2_{2,r}\mathbf{x}_{2,r}^{1}-\mathcal{h}^1_{1,i}\mathbf{x}_{1,i}^{2}-\mathcal{h}^1_{2,i}\mathbf{x}_{2,i}^{2}+\mathcal{h}^2_{1,i}\mathbf{x}_{1,i}^{1}+\mathcal{h}^2_{2,i}\mathbf{x}_{2,i}^{1 } \end{array}\right ] , \end{split}\ ] ] =\left[\begin{array}{c } \mathbf{w}_{i}^1\\ \mathbf{w}_{i}^2 \end{array}\right]+\\ & \sqrt{\frac{\rho}{\mu}}\left[\begin{array}{c } \mathcal{h}^1_{1,i}\mathbf{x}_{1,r}^{1}+\mathcal{h}^1_{2,i}\mathbf{x}_{2,r}^{1 } + \mathcal{h}^2_{1,i}\mathbf{x}_{1,r}^{2}+\mathcal{h}^2_{2,i}\mathbf{x}_{2,r}^{2}+ \mathcal{h}^1_{1,r}\mathbf{x}_{1,i}^{1}+\mathcal{h}^1_{2,r}\mathbf{x}_{2,i}^{1 } + \mathcal{h}^2_{1,r}\mathbf{x}_{1,i}^{2}+\mathcal{h}^2_{2,r}\mathbf{x}_{2,i}^{2}\\ -\mathcal{h}^1_{1,i}\mathbf{x}_{1,r}^{2}-\mathcal{h}^1_{2,i}\mathbf{x}_{2,r}^{2}+\mathcal{h}^2_{1,i}\mathbf{x}_{1,r}^{1}+\mathcal{h}^2_{2,i}\mathbf{x}_{2,r}^{1}+\mathcal{h}^1_{1,r}\mathbf{x}_{1,i}^{2}+\mathcal{h}^1_{2,r}\mathbf{x}_{2,i}^{2}-\mathcal{h}^2_{1,r}\mathbf{x}_{1,i}^{1}-\mathcal{h}^2_{2,r}\mathbf{x}_{2,i}^{1 } \end{array}\right ] \end{split}\ ] ] where , { \rm{and}~ } \mathcal{h}_{p , i}^i=\left[\begin{array}{c } \mathbf{0}_{(p-1)\times ( m/2)}\\ 
\mathrm{diag}(\mathbf{h}_i^{i})\\ \mathbf{0}_{(2-p)\times ( m/2 ) } \end{array } \right],\end{aligned}\ ] ] for and .next , we gather the equations , , and to form a real system as follows &= & \sqrt{\frac{\rho}{\mu } } \left[\begin{array}{cccccccc } \mathcal{h}^1_{1,r}&\mathcal{h}^1_{2,r}&\mathcal{h}^2_{1,r}&\mathcal{h}^2_{2,r}&-\mathcal{h}^1_{1,i}&-\mathcal{h}^1_{2,i}&-\mathcal{h}^2_{1,i}&-\mathcal{h}^2_{2,i}\\ \mathcal{h}^2_{1,r}&\mathcal{h}^2_{2,r}&-\mathcal{h}^1_{1,r}&-\mathcal{h}^1_{2,r}&\mathcal{h}^2_{1,i}&\mathcal{h}^2_{2,i}&-\mathcal{h}^1_{1,i}&-\mathcal{h}^1_{2,i}\\ \mathcal{h}^1_{1,i}&\mathcal{h}^1_{2,i}&\mathcal{h}^2_{1,i}&\mathcal{h}^2_{2,i}&\mathcal{h}^1_{1,r}&\mathcal{h}^1_{2,r}&\mathcal{h}^2_{1,r}&\mathcal{h}^2_{2,r}\\ \mathcal{h}^2_{1,i}&\mathcal{h}^2_{2,i}&-\mathcal{h}^1_{1,i}&-\mathcal{h}^1_{2,i}&-\mathcal{h}^2_{1,r}&-\mathcal{h}^2_{2,r}&\mathcal{h}^1_{1,r}&\mathcal{h}^1_{2,r } \end{array } \right ] \left[\begin{array}{c } \mathbf{x}^1_{1,r}\\\mathbf{x}^1_{2,r}\\ \mathbf{x}^2_{1,r } \\\mathbf{x}^2_{2,r } \\ \mathbf{x}^1_{1,i } \\ \mathbf{x}^1_{2,i } \\\mathbf{x}^2_{1,i } \\\mathbf{x}^2_{2,i } \end{array } \right]\\ & + & \left[\begin{array}{c } \mathbf{w}_{r}^1\\ \mathbf{w}_{r}^2\\ \mathbf{w}_{i}^1\\ \mathbf{w}_{i}^2 \end{array}\right].\end{aligned}\ ] ] in order to obtain the equivalent signal model in ( [ eqyr ] ) , using ( [ roxir ] ) and ( [ roxii ] ) we can rewrite ( [ eqn : ye1 ] ) as = \sqrt{\frac{\rho}{\mu}}\mathcal{h } \left[\begin{array}{c } \mathbf{s}^1_{1,r}\\ \mathbf{s}^1_{2,r}\\ \mathbf{s}^2_{1,r } \\ \mathbf{s}^2_{2,r}\\ \mathbf{s}^1_{1,i}\\ \mathbf{s}^1_{2,i}\\ \mathbf{s}^2_{1,i}\\ \mathbf{s}^2_{2,i } \end{array } \right ] + \left[\begin{array}{c } \mathbf{w}_{r}^1\\ \mathbf{w}_{r}^2\\ \mathbf{w}_{i}^1\\ \mathbf{w}_{i}^2 \end{array}\right],\end{aligned}\ ] ] where the equivalent real channel matrix is given by \\ & = & \left[\begin{array}{cccccccc}\label{pic } \mathbf{g}_1 & \mathbf{g}_2 & \ldots & \mathbf{g}_8 \end{array } \right ] .\end{aligned}\ ] ] according to _ definition [ orde ] _, we obtain that the groups are orthogonal , and the groups are orthogonal as well .the authors would like to thank tianyi xu for his reading and comments on this manuscript .v. tarokh , n. seshadri , and a. calderbank , `` space - time codes for high data rate wireless communications : performance criterion and code construction , '' _ ieee trans .inf . theory _ ,44 , pp . 744765 , mar .v. tarokh , h. jafarkhani , and a. r. calderbank , `` space - time block codes from orthogonal designs , '' _ ieee trans .inf . theory _ ,vol . 45 , pp .14561467 , july 1999 . also ,`` corrections to ` space - time block codes from orthogonal designs ' , '' _ ieee trans .inf . theory _46 , p. 314, jan . 2000 .k. lu , s. fu , and x .-xia , `` closed form designs of complex orthogonal space - time block codes of rates for or transmit antennas , '' _ ieee trans .inf . theory _43404347 , dec . 2005 .h. wang and x .-g xia , `` upper bounds of rates of complex orthogonal space - time block codes , '' _ ieee trans .inf . theory _ , vol 49 , pp .27882796 , oct .2003 .x. guo and x .- g .xia , `` on full diversity space - time block codes with partial interference cancellation group decoding , '' _ ieee trans .inf . theory _ ,55 , pp . 43664385 , oct .2009 . also , `` corrections to ` on full diversity space - time block codes with partial interference cancellation group decoding ' , '' http://www.ece.udel.edu/~xxia/correction_guo_xia.pdf .w. zhang , t. 
xu , and x .-xia , `` two designs of space - time block codes achieving full diversity with partial interference cancellation group decoding , '' _ ieee trans .inf . theory _ , submitted .http://arxiv.org/abs/0904.1812v3 w.zhang , l. shi , and x .-xia , `` full diversity space - time block codes with low - complexity partial interference cancellation group decoding , '' _ ieee trans ._ , submitted .http://arxiv.org/abs/1003.3908 [ http://arxiv.org/abs/1003.3908 ]j. boutros and e. viterbo , `` signal space diversity : a power and bandwidth efficient diveristy technique for the rayleigh fading channel , '' _ ieee trans .inf . theory _ ,14531467 , july 1998 .
in this paper, we propose a systematic design of space-time block codes (stbc) which can achieve high rate and full diversity when the partial interference cancellation (pic) group decoding is used at receivers. the proposed codes can be applied to any number of transmit antennas and admit a low decoding complexity while achieving full diversity. for transmit antennas, in each codeword real and imaginary parts of complex information symbols are parsed into diagonal layers and then encoded, respectively. with pic group decoding, it is shown that the decoding complexity can be reduced to a joint decoding of real symbols. in particular, for transmit antennas, the code has real symbol pairwise (i.e., single complex symbol) decoding that achieves full diversity and the code rate is . simulation results demonstrate that the full diversity is offered by the newly proposed stbc with the pic group decoding.
mimo systems, space-time block codes, partial interference cancellation, decoding complexity
in structural mechanics , design optimization is the decision - making process that aims at finding the best set of design variables which minimizes some cost model while satisfying some performance requirements . due to the inconsistency between these two objectives ,the optimal solutions often lie on the boundaries of the admissible space .thus , these solutions are rather sensitive to uncertainty either in the parameters ( _ aleatory _ ) or in the models themselves ( _ epistemic _ ) ._ reliability - based design optimization _ ( rbdo ) is a concept that accounts for uncertainty all along the optimization process .basically , the deterministic performance model is wrapped into a more realistic probabilistic constraint which is referred to as the _ failure probability_. despite its attractive formulation , the application field of rbdo is still limited to academic examples .this is mostly due to the fact that it is either based on simplifying assumptions that might not hold in practice ; or in contrast , it requires computationally intensive stochastic simulations that are not affordable for real industrial problems .the present work attempts to propose an efficient strategy that would _ in fine _ bring the rbdo application field to more sophisticated examples , closer to real engineering cases . in other words ,the challenge is _ (i ) _ to provide an optimal safe design within a few hundred evaluations of the performance models and _ ( ii ) _ to be able to quantify and minimize the errors induced by the various assumptions that are made along the development of the resolution strategy .the remaining part of this introduction is devoted to the formulation of the rbdo problem the authors attempt to solve . a short literature review is also provided as an argument for the presently proposed surrogate - based resolution strategy .section [ sec : kriging ] introduces the kriging surrogate model .a specific emphasis is put on the _ epistemic nature _ of the prediction error that is then used in section [ sec : doe ] in order to _ quantify _ and _ minimize _ the surrogate error .section [ sec : hsbrbdo ] involves the adaptive refinement strategy of the kriging surrogate into a nested reliability - based design optimization loop .the convergence of the approach is finally heuristically demonstrated in section [ sec : appli ] through a few academic examples from the rbdo literature . given a parametric model for the random vector describing the environment of the system to be designed, the most basic formulation for the rbdo problem reads as follows: in this formulation , is the objective function to be minimized with respect to the design variables , while satisfying to deterministic soft constraints bounding the so - called _ admissible design space _ defined by the analyst .note that in most applications these soft constraints consist in simple analytical functions that prevent the optimization algorithm from exploring regions of the design space that have no physical meaning ( _ e.g. _ negative or infinite dimensions ) , so that these constraints are inexpensive to evaluate .a _ deterministic design optimization _ ( ddo ) problem would simply require additional performance functions describing system failure with respect to the specific code of practice . as opposed to the previous soft constraints, these functions often involve the output of an expensive - to - evaluate black - box function _ e.g. 
_ a finite element model .rbdo differs from ddo in the sense that these constraints are wrapped into probabilistic constraints . is the minimum safety requirement expressed here in the form of an acceptable _ probability of failure _ which may be different for each performance function .such probabilities of failure are conveniently defined in terms of the following multidimensional integrals : one should notice that , in the present formulation , the design vector is a set of _ hyperparameters _ defining the random vector . in other words , in this work , design variables are exclusively considered as hyperparameters in the joint probability density function of the random vector because it will later simplify the computation of _ reliability sensitivity _ _ i.e. _ the gradients of the failure probability .there is however no loss of generality since deterministic design variables might possibly be considered as artificially random ( either normal or uniform ) with small variance _i.e. _ sufficiently close to zero . one could possibly argue that this formulation lacks full probabilistic consideration because the cost function is defined in a deterministic manner as it only depends on the hyperparameters of the random vector .a more realistic formulation should eventually account for the randomness of the cost function possibly induced by the one in .however , the present formulation is extensively used in the rbdo literature for simplicity . note however that thanks to the rather low complexity of usual cost models ( analytical functions ) , an accurate simulation - based estimation of a mean cost , say } ] is a useful measure to check if the kriging surrogate is accurate enough for reliability analysis or not , and it is used in this paper as a stopping criterion for the proposed refinement procedure .note that due to the inconvenient order of magnitude of low probabilities , it is more meaningful to work with the generalized reliability indices that are defined as follows : , in the proposed applications , the accuracy criterion is usually set to and the refinement stops when : are estimated by means of simulation techniques ( monte - carlo or subset simulation ) , the proposed accuracy criterion should account for the additional uncertainty induced by the lack of simulations . to do so , in the present paper , and estimatesare replaced with their respective lower and upper 95% confidence bounds based on their associated _ variance of estimation_. thus , should be selected in accordance with the given number of simulations used for reliability estimation . in order to summarize the proposed refinement procedure, we provide the pseudo - code in algorithm [ alg : doe ] .first , we initialize the empty doe , the uniform refinement pseudo - pdf for the first space - filling doe and we select the level of confidence in the metamodel . then , we generate the candidate population from the density function by means of any well - suited mcmc simulation technique ( using _ e.g. _ slice sampling ) .this population is reduced to its clusters center using -means clustering being given .the performance function is evaluated onto these newly selected points and a new kriging model is built from the updated doe .note that the kriging model construction step involves the maximum likelihood estimation of the autocorrelation parameters $ ] .the refinement pseudo - pdf is also updated .finally , a reliability analysis ( using _ e.g. 
_ subset simulation ) is performed onto the three approximate failure subsets , and in order to compute the proposed error measure .the doe is enriched if and while this error measure exceeds a given tolerance . , , refine : = , figure [ fig : doe_refinement ] illustrates the proposed adaptive refinement strategy applied to a nonlinear limit - state surface from .the upper subfigures show the contours of the refinement pseudo - pdf at each refinement step together with the candidate population generated by slice sampling and its clusters center obtained by -means clustering being given .for this application , the weighting pdf was selected as the uniform density in the -radius hypersphere .it can be observed that the refinement criterion features several modes as argued earlier in this section . in the lower subfiguresone can see the real limit - state surface represented as the dashed black curve , its kriging prediction represented as the black line and its associated margin of uncertainty which is bounded below by the red line and above by the blue line .another interpretation of these figures is that any point within the blue bounded shape is positive with a 95% confidence and any point inside the red bounded shape is negative with the same confidence level .the proposed strategy to solve the rbdo problem in eq .( [ eq : rbdo_ria ] ) consists in nesting the previously introduced kriging surrogate together with the proposed refinement strategy within a classical ( but efficient ) nested rbdo algorithm . in this section ,we describe the space where the kriging surrogates are built . indeed , observing that building the kriging surrogates from an empty doe for each nested reliability analysis ( _ e.g. _ in the space of the standard normal random variables ) would be particularly inefficient , it is proposed to build and refine one unique _ global _ kriging surrogate for all the nested reliability analyses .such a globality can be achieved by working in the so - called _ augmented reliability space _ such as defined in . in , the augmented reliability space is defined as the tensor product between the space of the standardized normal random variables and the design space : , but the dimension of this space ( ) suffers from both the number of random variables and the number of design variables .it is also argued here that this space may cause some loss of information as the performance functions are not in bijection with that augmented space .in contrast , in and in the present approach , the dimension of the augmented reliability space is kept equal to by considering that the design vector simply augments the uncertainty in the random vector .indeed , the augmented random vector has a pdf which accounts for both an _ instrumental uncertainty _ in the design choices and the aleatory uncertainty in the random vector . under such considerations, reads as follows: is the pdf of given the parameters and is the pdf of that can be assumed uniform on the design space .an illustration of this augmented pdf is provided in figure [ fig : augmentedpdf ] in the univariate case .the augmented reliability space is spanned by the axis on the left in this simple case .the doe should cover uniformally a sufficiently large _ confidence region _ of this augmented pdf in order to make the surrogate limit - state surfaces accurate wherever they can potentially be evaluated along the optimization process .more precisely , they should be accurate for extreme design choices ( _ i.e. 
_ located onto the boundaries of the optimization space ) and extreme values of the marginal random vector ( to be able to compute reliability indices as large as _ e.g. _ ) .a confidence region is essentially the multivariate extension of the univariate concept of _ confidence interval_. under the previous general assumptions , it is hard to give a mathematical form to the contour of this region .however , one may easily build an hyperrectangular region that bounds the confidence region of interest .indeed , such an hyperrectangular region is defined as the tensor product of the confidence intervals on the augmented margins . in order to compute the quantiles bounding these confidence intervals, one should additionally assume that the design parameters are exclusively involved in the definition of the margins _ i.e. _ no parameters in the dependence structure ( the _ copula _ ) as it will never be the case in most rbdo applications . for each margin, the lower quantiles ( at the probability level ) and upper quantiles ( at the probability level ) are respectively solutions of the following optimization problems: where are the quantile functions of the margins . if the domain is rectangular and if one is able to derive an analytical expression for the quantile functions of the margins and their derivatives with respect to the parameters , then these optimization problems might be solved analytically .however assuming a more general setup where one has only numerical definition of these quantities , these problems can be efficiently solved by means of a simple gradient - based algorithm due to the convenient properties of the quantile functions namely , the monotony with respect to the location and shape parameters .finally , the sought hyperrectangle can be easily defined by means of the following indicator function: , the normalizing constant of this pdf could be easily derived ( hyperrectangle volume ) though it is not required by the refinement procedure proposed in section [ sec : doe ] .the kriging surrogate together with its adaptive refinement procedure is finally plugged into a double - loop rbdo algorithm .the outer optimization loop is performed by means of the polak - he optimization algorithm .provided an initial design , this algorithm proceeds iteratively in two steps : _( i ) _ the direction of optimization is determined solving a quasi - sqp sub - optimization problem and _ ( ii ) _ the step size is approximated by the goldstein - armijo approximate line - search rule .the nested reliability and reliability sensitivity analyses are performed with the _ subset simulation _ variance reduction technique onto the kriging surrogates .the subset simulation technique for reliability sensitivity analysis is detailed in .briefly , it takes advantage of the definition of the failure probability given in eq .( [ eq : pfdef ] ) . indeed , pointing out that the limit - state equation does not explicit ly depend on the design variables , the differentiation of the failure probability only requires the differentiation of the joint pdf which can be derived analytically when the probabilistic model is defined in terms of margin distributions and copulas .the trick is inspired from importance sampling and proceeds as follows : }.\end{aligned}\ ] ] the latter quantity is known to have an unbiased consistent estimator which reads as follows : where the sample is the same as the one used for the estimation of the failure probability . 
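to make the last point concrete , the following minimal sketch post - processes a single monte carlo sample to estimate both the failure probability and its derivative with respect to a distribution hyperparameter through the derivative of the log - density ( the score function ) . the limit state , the lognormal parameterization and the numerical values are illustrative stand - ins and not those of the paper ; the paper itself differentiates the joint pdf of margins and copulas analytically , of which this is the one - dimensional analogue .

```python
import numpy as np

def lognormal_params(mean, cov):
    """Convert (mean, coefficient of variation) into the (lambda, zeta) parameters."""
    zeta = np.sqrt(np.log(1.0 + cov ** 2))
    return np.log(mean) - 0.5 * zeta ** 2, zeta

def pf_and_gradient(g, mean, cov, n=200_000, seed=0):
    """Failure probability and d Pf / d mean from one sample (score-function trick)."""
    rng = np.random.default_rng(seed)
    lam, zeta = lognormal_params(mean, cov)
    x = rng.lognormal(lam, zeta, size=n)          # sample once
    fail = (g(x) <= 0.0).astype(float)            # failure indicator
    pf = fail.mean()
    score = (np.log(x) - lam) / zeta ** 2         # d log f_X / d lambda
    dpf_dmean = np.mean(fail * score) / mean      # chain rule: d lambda / d mean = 1 / mean
    return pf, dpf_dmean

# placeholder capacity-type limit state: failure when X drops below 0.5
pf, grad = pf_and_gradient(lambda x: x - 0.5, mean=1.0, cov=0.2)
print(pf, grad)   # grad < 0: raising the mean lowers the failure probability
```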
in other words , the estimation of does not require any additional simulation runs : it simply consists in a post - processing of the samples generated for reliability estimation .the concept can be easily extended to the subset simulation technique see for the details .the overall methodology was implemented within the ferum v4.0 toolbox .it makes use of the matlab toolboxes functions ` quadprog ` ( for the sqp sub - optimization problem ) and ` slicesample ` ( to generate samples from the refinement pdf ) .we provide a summarized pseudo - code of the proposed strategy in algorithm [ alg : rbdo ] . , , , , refine : = , optimize : = , , optimize : = the first step of the algorithm consists in finding the hyperrectangular region that bounds the confidence region of the augmented probability density function according to section [ sec : augmentedreliabilityspace ] .once this is done , one may define the uniform weighting density in eq .( [ eq : augmentedweightingpdf ] ) and use it within the adaptive population - based refinement procedure detailed in section [ sec : doe ] and algorithm [ alg : doe ] .kriging models are built for each performance function , and they are refined until they meet the selected accuracy regarding reliability estimation .it is worth noting that the kriging surrogates are built in the augmented reliability space spanned by , but used in the current space of random variables spanned by .as soon as they are accurate enough we perform surrogate - based reliability and reliability sensitivity analysis in order to propose an improved design . a quasi - sqp algorithmis then used in order to determine the best improvement direction ; and the goldstein - armijo approximate line - search rule is used to find the best step size along that direction .the current design is improved and the kriging model accuracy for reliability estimation is being checked at the improved design . the convergence is obtained if the optimization has converged ( using the regular criteria in gradient - based deterministic optimization ) and if the kriging models allow a sufficiently accurate reliability estimation according to the proposed error measure .in this section , the proposed adaptive nested surrogate - based rbdo strategy is applied to some examples from the literature for performance comparison purposes .all the kriging surrogates are sequentially refined in order to achieve an empirical error measure on the estimation of the reliability indices .in essence , the purpose of this first basic example is to validate the proposed algorithm with respect to a reference analytical solution .let us consider a long simply - supported rectangular column with section subjected to a constant service axial load .provided and its constitutive material is characterized by a linear elastic behavior through its young s modulus , its critical buckling load is given by the euler formula : allows one to formulate the performance function which will be involved in the probabilistic constraint as: the probabilistic model consists in the 3 independent random variables given in table [ tab : eulerbuckling_stoch ] .llcc * variable * & * distribution * & * mean * & * c.o.v .* + ( mpa ) & lognormal & & + ( mm ) & lognormal & & + ( mm ) & lognormal & & + ( mm ) & deterministic & & + applying the performance function as a design rule in a fully deterministic fashion ( _ i.e. 
_ using the means of the random variates ) allows to determine the service load so that the initial deterministic design mm satisfies the limit - state equation : the reliability - based design problem consists in finding the optimal means and of the random width and height .the optimal design is the one that minimizes the average cross section area which is approximated as follows: should also satisfy the following deterministic constraint: in order to ensure that the euler formula is applicable , as well as the following safety probabilistic constraint: is the generalized target reliability index . due to the use of lognormal random variates in the probabilistic model together with the simple performance function ( multiplications and divisions ), the problem can be handled analytically .indeed , the isoprobabilistic transform of the limit - state surface equation in terms of standard normal random variates is straightforward and leads to a linear equation .and it finally turns out after basic algebra that the hasofer - lind reliability index ( associated with the exact failure probability ) reads: and denote the parameters of the lognormal random variates .the optimal solution of the rbdo problem is then simply derived by saturating the two constraints in log - scale ( _ i.e. _ with respect to and ) and this leads to the square cross section with parameters : the proposed numerical strategy is applied in order to solve the rbdo problem numerically .the refinement procedure of the limit - state surface is initialized with an initial doe of points and points are sequentially added to the doe if it is not accurate enough for reliability estimation .+ the convergence of the algorithm is depicted in figure [ fig : eulerbuckling_convergence ] for two runs starting from different initial designs .run # 1 is initiated with the optimal deterministic design mm whereas run # 2 is initiated with an oversized design mm .convergence is achieved as all the constraints ( deterministic and reliability - based ) are satisfied and both the cost and design variables have reached a stable value .the algorithm converges to the exact solution derived in the previous subsection which is the square section with width mm the approximation of the exact solution is only due to the numerical error .note that the reliability - based optimal design is 15% higher than the optimal deterministic design for the chosen reliability level ( ) .this optimum is reached using only 20 evaluations of the performance function thanks to the kriging surrogate .the doe used for this purpose is enriched only once and it is then accurate enough for all the design configurations including the optimal design . running the same rbdo algorithm without using the kriging surrogates ( _ i.e. _ using subset simulation onto the real performance function for the nested reliability and reliability sensitivity analyses ) requires about evaluations of the performance function for the same number of iterations of the optimizer and converges to the same optimal design .this simple mechanical example is extensively used in the rbdo literature as a benchmark for numerical methods in rbdo . in this paper, we use the results from the article by as reference .it consists in a short column with rectangular cross section .it is subjected to an axial load and two bending moments and whose axes are defined with respect to the two axes of inertia of the cross section . 
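before turning to the oblique - bending column , the analytical reliability index of the buckling example can be checked numerically . the sketch below assumes weak - axis buckling of a simply supported column , i.e. a critical load of the form pi^2 e b h^3 / ( 12 l^2 ) ; the exact expression and the numerical values used in the paper are not reproduced in this excerpt , so the inputs are placeholders . since e , b and h are lognormal and the limit state is a pure product , the log - transform makes it linear in standard normal variables and the hasofer - lind index has a closed form , which a crude monte carlo run confirms .

```python
import numpy as np
from scipy import stats

def ln_params(mean, cov):
    zeta = np.sqrt(np.log(1.0 + cov ** 2))
    return np.log(mean) - 0.5 * zeta ** 2, zeta

def beta_closed_form(mu_E, cov_E, mu_b, cov_b, mu_h, cov_h, L, F_ser):
    # assumed limit state: pi^2 * E * b * h^3 / (12 L^2) - F_ser <= 0 means failure
    lamE, zE = ln_params(mu_E, cov_E)
    lamb, zb = ln_params(mu_b, cov_b)
    lamh, zh = ln_params(mu_h, cov_h)
    mean_ln = np.log(np.pi ** 2 / (12.0 * L ** 2)) + lamE + lamb + 3.0 * lamh
    std_ln = np.sqrt(zE ** 2 + zb ** 2 + (3.0 * zh) ** 2)
    return (mean_ln - np.log(F_ser)) / std_ln      # exact Hasofer-Lind index

def beta_monte_carlo(mu_E, cov_E, mu_b, cov_b, mu_h, cov_h, L, F_ser, n=500_000):
    rng = np.random.default_rng(0)
    E = rng.lognormal(*ln_params(mu_E, cov_E), n)
    b = rng.lognormal(*ln_params(mu_b, cov_b), n)
    h = rng.lognormal(*ln_params(mu_h, cov_h), n)
    pf = np.mean(np.pi ** 2 * E * b * h ** 3 / (12.0 * L ** 2) <= F_ser)
    return -stats.norm.ppf(pf)                     # generalized reliability index

# placeholder values only; the paper's actual data are not reproduced here
args = dict(mu_E=10_000.0, cov_E=0.10, mu_b=250.0, cov_b=0.05,
            mu_h=250.0, cov_h=0.05, L=3_000.0, F_ser=2.4e6)
print(beta_closed_form(**args), beta_monte_carlo(**args))   # both close to 2.07
```

the oblique - bending column taken up next is handled the same way numerically , but admits no such closed form .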
such a load combination is referred to as _oblique bending _ due to the rotation of the neutral axis . under the additional assumption that the material is elastic perfectly plastic, the performance function describing the ultimate serviceability of the column with respect to its yield stress reads as follows: the stochastic model originally involves three independent random variables whose distributions are given in table [ tab : column_stomodel ] . note that in the original paper , the design variables and are considered as deterministic . since the present approach only deals with design variables that defines the joint pdf of the random vector , they are considered here as gaussian with a small coefficient of variation and the optimization is performed with respect to their mean and . the objective function is formulated as follows : where the product is the expected failure cost which is chosen as proportional to the construction cost .the search for the optimal design is limited to the designs that satisfy the following geometrical constraints : with , and the minimum reliability is chosen as .lllcc & * distribution * & * mean * & * c.o.v .* + & ( n.mm ) & lognormal & & 30% + & ( n.mm ) & lognormal & & 30% + & ( n ) & lognormal & & 20% + & ( mpa ) & lognormal & 40 & 10% + & ( mm ) & gaussian & & 1% + & ( mm ) & gaussian & & 1% + the results are given in table [ tab : column_results ] . in this table , denotes the hasofer - lind reliability index ( form - based ) , and denotes the generalized reliability index estimated by _ subset simulation _ with a coefficient of variation less than 5% .the deterministic design optimization ( ddo ) was performed using the mean values of all the variables in table [ tab : column_stomodel ] without considering uncertainty and thus leads to a 50% failure probability .note that the corresponding optimal cost does not account for the expected failure cost .the other lines of table [ tab : column_results ] shows the results of the rbdo problem .the first row gives the reference results from .the number of performance function calls was not given in the original paper .however it may be estimated to given the methodology the authors used and assuming they targeted a 5% coefficient of variation on the failure probability in their monte - carlo simulation .the second row provides the results from a form - based nested rbdo algorithm ( ria ) .this latter approach seems to lead to a slightly better design though it is due to the first - order reliability assumptions that are not conservative in this case .indeed , subset simulation leads to a little lower generalized reliability index ( with a 5% c.o.v . ) which in turns slightly increases the failure - dependent objective function to .this example shows that form - based approaches can mistakenly lead to non conservative optimal designs without any self - quantification of the possible degree of non conservatism .the third row gives the results obtained by the same nested rbdo algorithm , using however the subset simulation technique as the reliability ( and reliability sensitivity ) estimator .finally , the last row gives the results obtained when using kriging as a surrogate for the limit - state surface .the kriging model used for this application used a constant regression model and a squared exponential autocovariance model .it was initialized with a 50-point doe and sequentially refined with points per refinement iteration .llccr * method * & * opt .design * ( mm ) & * cost * ( mm ) & * # of func . 
calls * & * reliability * + * ddo * & & & 50 & + + reference ( dsa ) & & & & + form - based ( ria ) & & & 9472 & + present w / o kriging & & & & + present w/ kriging & & & 140 & + another interesting fact about this example is that the reliability constraint is not saturated at the optimum : the algorithm converges at a higher reliability level as illustrated in figure [ fig : column_convergence ] .this is due to the specific formulation of the cost function in eq .( [ eq : rbdo_royset ] ) that accounts for a failure cost that is indexed onto the failure probability . indeed , the cost function behaves itself as a constraint and the optimal reliability level is formulated in terms of an acceptable risk ( probability of occurrence times consequence ) instead of an acceptable reliability index .this mechanical example is originally taken from .it consists in the study of the failure modes of the bracket structure pictured in figure [ fig : bracket_mech ] .the bracket structure is loaded by its own weight due to gravity and by an additional load at the right tip .the two failure modes under considerations are : * the maximum bending in the horizontal beam ( cd , at point b ) should not exceed the yield strength of the constitutive material , so that the first performance function reads as follows : where the maximum bending stress reads : * the maximum axial load in the inclined member ( ab ) should not exceed the euler critical buckling load ( neglecting its own weight ) , so that the second performance function reads as follows : where the critical euler buckling load is defined as : and the resultant of axial forces in member ab reads ( neglecting its own weight ) : the probabilistic model for this example is the collection of independent random variables given in table [ tab : bracket_stoch ] .note that the coefficient of variation of the random design variables is kept constant along the optimization as in the original paper .lllcc & * distribution * & * mean * & * c.o.v .* + & ( kn ) & gumbel & 100 & 15% + & ( gpa ) & gumbel & 200 & 8% + & ( mpa ) & lognormal & 225 & 8% + & ( kg / m ) & weibull & 7860 & 10% + & ( m ) & gaussian & 5 & 5% + & ( mm ) & gaussian & & 5% + & ( mm ) & gaussian & & 5% + & ( mm ) & gaussian & & 5% + the rbdo problem consists in finding the rectangular cross sections of the two structural members that minimize the expected overall structural weight which is approximated as follows : while satisfying a minimum reliability requirement equal to with respect to the two limit - states in eq .( [ eq : bracket_lsf1 ] ) and eq .( [ eq : bracket_lsf2 ] ) considered independently .the search for the optimal design is bounded to the following hyperrectangle : ( in mm ) .the comparative results are provided in table [ tab : bracket_results ] considering the results from as reference .llccc * method * & * opt .design * ( mm ) & * cost * ( kg ) & * # of func . calls * & * reliability * + * * ddo w/ psf** & & 2632 & 40 & + + sora & & 1675 & 1340 & + ria & & 1675 & 2340 & + present w / o kriging & & 1550 & & + present w/ kriging & & 1610 & & + the first row gives the deterministic optimal design that was obtained by using _ partial safety factors _ ( psf ) .it can be seen from the reliability indices that these psf provide a significant safety level .however , one may rather want to find an even lighter design allowing for a lower safety level .to do so , the rbdo formulation of the problem is solved . 
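since the reliability indices quoted in these comparisons are checked with subset simulation , a compact textbook version of that estimator is sketched below ( intermediate probability 0.1 , component - wise metropolis resampling in standard normal space ) . it is not the ferum implementation used by the authors , and the limit state is an arbitrary placeholder with a known exact index .

```python
import numpy as np
from scipy import stats

def subset_simulation(g, dim, n_per_level=2000, p0=0.1, max_levels=15, seed=0):
    rng = np.random.default_rng(seed)
    u = rng.standard_normal((n_per_level, dim))
    y = np.apply_along_axis(g, 1, u)
    pf = 1.0
    for _ in range(max_levels):
        n_seed = int(p0 * n_per_level)
        idx = np.argsort(y)[:n_seed]              # most "failed" samples
        threshold = max(y[idx[-1]], 0.0)          # intermediate level (>= 0)
        pf *= np.mean(y <= threshold)
        if threshold <= 0.0:                      # true failure domain reached
            return pf
        # repopulate the level by Markov chains started from the seeds
        seeds_u, seeds_y = u[idx], y[idx]
        chains_u, chains_y = [seeds_u], [seeds_y]
        n_steps = n_per_level // n_seed - 1
        cur_u, cur_y = seeds_u.copy(), seeds_y.copy()
        for _ in range(n_steps):
            cand = cur_u + rng.uniform(-1.0, 1.0, cur_u.shape)   # symmetric proposal
            accept = rng.random(cur_u.shape) < np.exp(
                0.5 * (cur_u ** 2 - cand ** 2))   # ratio of N(0,1) densities, per component
            prop = np.where(accept, cand, cur_u)
            prop_y = np.apply_along_axis(g, 1, prop)
            keep = prop_y <= threshold            # stay in the intermediate domain
            cur_u = np.where(keep[:, None], prop, cur_u)
            cur_y = np.where(keep, prop_y, cur_y)
            chains_u.append(cur_u.copy()); chains_y.append(cur_y.copy())
        u, y = np.vstack(chains_u), np.hstack(chains_y)
    return pf

# placeholder linear limit state with exact reliability index beta = 3
g = lambda u: 3.0 - u[0]
pf = subset_simulation(g, dim=2)
print(pf, -stats.norm.ppf(pf))   # generalized index, close to 3
```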
used the sora technique which is a decoupled form - based approach .the reliability indices at the optimal design were checked using the subset simulation technique ( targeting a coefficient of variation less than 5% ) and revealed that the form - based approach slightly underestimates the first optimal reliability index in this case . the ria technique which is a standard double - loop form - based approach provides the same solution but it is less efficient . implementing the proposed approach without plugging the kriging surrogates converges to similar results but clearly confirms that direct simulation - based approaches are not tractable for rbdo . replacing the performance function by their kriging counterpartsallows to save a significant number of simulations ( opposed to ) and in addition , to provide an error measure on the reliability estimation as opposed to the form - based approaches .however , one may note the disparities between the proposed designs .first , the disparity between form - based methods and the presently proposed strategies is explained by the conservatism of the form assumptions in this case .then , the disparity between the two present approaches is certainly due to the flatness of the sub - optimization problem and the stochastic nature of the simulation - based reliability estimation .the convergence of the algorithm is depicted in figure [ fig : bracket_convergence ] .the aim of the present paper was to develop a strategy for solving reliability - based design optimization ( rbdo ) problems that is applicable to engineering problems involving time - consuming performance models .starting with the premise that simulation - based approaches are not affordable when the performance function involves the output of an expensive - to - evaluate computer model , and that the mpfp - based approaches do not allow to quantify the error on the estimation of the failure probability , an approach based on kriging and subset simulation is explored .the strategy has been tested on a set of examples from the rbdo literature and proved to be competitive with respect to its form - based counterparts .indeed , convergence is achieved with only a few dozen evaluations of the real performance functions .in contrast with the form - based approaches , the proposed error measure allows one to quantify and sequentially minimize the surrogate error onto the final quantity of interest : the optimal failure probability . it is important to note that the numerical efficiency of the proposed strategy mainly relies upon the properties of the space where the kriging surrogates are built : the so - called _ augmented reliability space_. 
this space is obtained by considering that the design variables in the rbdo problem simply augments the uncertainty in the random vector involved in the reliability problem .building the surrogates in such a space allows one to reuse them from one rbdo iteration to the other and thus saves a large number of performance functions evaluations .it is also worth noting that the original refinement strategy proposed in section [ sec : doe ] makes it possible to add several observations in the design of experiments at the same time , and thus to benefit from the availability of a distributed computing platform to speed up convergence .however , as already mentioned in the literature , it was observed that the number of experiments increases with the number of variables involved in the performance functions and that the kriging strategy loses numerical efficiency when the doe contains more than a few thousands experiments although such an amount of information is not even available in real - world engineering cases .this latter point requires further investigation .a problem involving a nonlinear - finite - element - based performance function and 10 variables is currently investigated and will be published in a forthcoming paper .deheeger f , lemaire m ( 2007 ) support vector machine for efficient subset simulations : 2smart method . in : proc .10th int . conf . on applications of stat . and prob .in civil engineering ( icasp10 ) , tokyo , japan macqueen j ( 1967 ) some methods for classification and analysis of multivariate observations . in : le cam j lm & neyman ( ed ) proc .5 berkeley symp . on math .stat . & prob . ,university of california press , berkeley , ca , vol 1 , pp 281297
the aim of the present paper is to develop a strategy for solving reliability - based design optimization ( rbdo ) problems that remains applicable when the performance models are expensive to evaluate . starting with the premise that simulation - based approaches are not affordable for such problems , and that the most - probable - failure - point - based approaches do not make it possible to quantify the error on the estimation of the failure probability , an approach based on both metamodels and advanced simulation techniques is explored . the kriging metamodeling technique is chosen to surrogate the performance functions because it allows one to genuinely quantify the surrogate error . the surrogate error on the limit - state surfaces is propagated to the failure probability estimates in order to provide an empirical error measure . this error is then sequentially reduced by means of a population - based adaptive refinement technique until the kriging surrogates are accurate enough for reliability analysis . this original refinement strategy makes it possible to add several observations to the design of experiments at the same time . reliability and reliability sensitivity analyses are performed by means of the _ subset simulation _ technique for the sake of numerical efficiency . the adaptive surrogate - based strategy for reliability estimation is finally embedded in a classical gradient - based optimization algorithm in order to solve the rbdo problem . the kriging surrogates are built in a so - called augmented reliability space , which makes them reusable from one nested rbdo iteration to the next . the strategy is compared to other approaches available in the literature on three academic examples in the field of structural mechanics .
the existence of global strong solutions for system has been proven in in dimension 2 , and more recently in dimension 3 for small data in a framework where the deformations of the solid are limited in regularity .the main result of this second part is theorem [ maintheorem ] , that we can state as follows : let be . then for small enough in , there exists a deformation satisfying the hypotheses * h1**h4 * given above and also such that the solution of system satisfies the notation are explained in section [ secdef ] , below .the proof of this theorem is based on the preliminary stabilization of the linearized system in part i. the idea is the following : if the perturbations of the system ( which are represented by the initial conditions ) are small enough , the behavior of the nonlinear system is close to the evolution of the linearized system .the strategy we follow in this second part consists in rewriting system in cylindrical domains ( that means in domains whose the space component does not depend on time ) , by defining a change of variables and a change of unknowns .then we focus on the nonlinear system thus obtained . the feedback boundary control obtained in part i enables us to stabilize the nonhomogeneous linear part of this system , whereas we have now to consider as a control function a deformation of the solid which satisfies the nonlinear constraints * h1**h4 * given above .so the difficulty is to define properly from this boundary feedback control a deformation of the solid which stabilizes the full nonlinear system .for that the boundary feedback control is extended inside the solid as it is done in the section 6 of part i ; the deformation thus obtained satisfies the linearized version of the constraints stated in the hypotheses * h1**h4*. then we project the displacement associated with this mapping on a set representing the displacements satisfying the nonlinear constraints .the deformation obtained is said to be _admissible _ , that is to say it lies in a well - chosen functional space and it obeys the hypotheses * h1**h4 * ( see definition [ defcontrol ] ) .this projection method enables us to decompose the deformation velocity - on the fluid - solid interface - into two parts : the first part of this decomposition stabilizes the linear part of the nonlinear system , whereas the remaining part satisfies good lipschitz properties with respect to the boundary feedback control .this point is essential if we want to prove by a fixed point method that such a deformation stabilizes the nonlinear system .+ besides the technical aspects of this work , a particular difficulty is the consideration of a control that has to satisfy nonlinear physical constraints . the originality of our contribution can be read in a perspective which concerns the study and the control of the swim of a deformable structure at an intermediate reynolds number . 
by controlling the velocity of the environing fluid in a bounded domain ,the solid stabilizes to zero the full system which is already dissipative ( because of the viscosity ) , but the strength lies in the fact that this stabilization is obtained for all prescribed exponential decay rate .+ other papers treat of this issue : let us quote for instance the work of khapalov where the incompressible navier - stokes equations are considered .more recently let us mention the work of chambrion & munnier dealing with perfect fluids , and where geometric methods have been used .concerning the swim at a low reynolds number , let us quote the recent paper where prescribed types of deformations are considered .at a high reynolds number , the work of glass & rosier consists in applying the coron return method in order to prove the local controllability of the position and the velocity of a boat , which is able to impose a velocity on a part of its boundary in order to move itself in a fluid satisfying the incompressible euler equations .the result we give concerns only the velocities of the system .the functional framework is given in section [ secdef ] for the unknowns as well as for the control function and the changes of variables .then the change of variables and the change of unknowns are introduced in section [ secchange ] .technical results in relation with the changes of variables are stated and proven in appendix a and appendix b. in section [ secchange ] we also give the nonlinear system written in cylindrical domains , in a form which enables us to make the connection with the linearized system .the main result of part i is used in section [ linearsec ] where the feedback stabilization of the nonhomogeneous linear system is treated. then we construct in section [ secdecompcontrol ] an _ admissible _ deformation which is supposed to stabilize the full nonlinear system in section [ secnonlinear ] . the link with the actual unknownsis made in section [ secconclusion ] as a conclusion .the cofactor matrix associated with some matrix field is denoted by .let us keep in mind that when this matrix is invertible we have the property let us introduce some functional spaces .we use the notation ^d \ \text{or } [ { \mathrm{h}}^s(\omega)]^{d^k},\end{aligned}\ ] ] for some positive integer , for all bounded domain of or .the velocity will be considered in the following functional space that we endow and define with the norm given by analogously we can define spaces of type and for all time - depending domain , where and are non - negative integers . the pressure will be considered in ; at each time it is determined up to a constant that we fix such that .thus in particular from the poincar - wirtinger inequality the pressures defined in can be estimated in as follows or will denote more generally a generic positive constant which does not depend on time or on the unknowns . ] the same estimate will be considered for other functions which play the role of a pressure in . + more classically in the cylindrical domain we set and we keep in mind the continuous embedding let us introduce some functional spaces for the solid displacements - defined in the domain - and the changes of variables which will be defined in the domain . 
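before moving on to the spaces for the solid displacements , the normalization of the pressure used above can be illustrated numerically : once the free additive constant is fixed by imposing a zero mean , the poincare - wirtinger inequality controls the pressure by its gradient . the sketch below checks this on the unit interval , where the sharp constant is 1 / pi ; the sample field is purely illustrative .

```python
import numpy as np

n = 4000
x = (np.arange(n) + 0.5) / n                    # midpoint grid on (0, 1)
dx = 1.0 / n
p = np.cos(2 * np.pi * x) + 3.7                 # arbitrary field + arbitrary constant
p = p - np.sum(p) * dx                          # fix the constant: zero spatial mean
grad_p = np.gradient(p, dx)

l2 = lambda f: np.sqrt(np.sum(f ** 2) * dx)     # discrete L2 norm
print(l2(p), l2(grad_p) / np.pi, l2(p) <= l2(grad_p) / np.pi + 1e-6)
```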
we will mainly consider mappings satisfying , and thus we will consider the displacements where the space is defined as follows we endow it with the scalar product which makes it be a hilbert space , because of the continuous embedding indeed , for , we have the following estimates in the rewriting of system in cylindrical domains , and in the final fixed point method , the mapping which has the most important role is denoted by .we could consider the same type of functional space for this mapping in , but we will only need to consider - and we will only have - estimates of in the space thus for more clarity we set the main reason for which we choose to consider the aforementioned lagrangian mappings in such functional spaces is the following : the changes of variables will be - indirectly - obtained through extensions of the deformations of the solid ; we will need to consider displacements which have the regularity indicated in the spaces given above , and for that we will have to consider displacements of the solid which lie in a hilbert space ( for the projection method ) and which satisfy at least let us specify the notion of _ admissible _ control and the definition we give to the stabilization of the main system .[ defcontrol ] let be .a deformation is said _admissible _ for the nonlinear system if the displacement obeys if is a -diffeomorphism from onto for all , and if for all it satisfies the following hypotheses first , the constraint which forces to be a -diffeomorphism can be relaxed if the control stays close to the identity .indeed , in this work the data are assumed to be small enough , so that we will lead to consider the displacement small enough in the space and so in ; thus we can assume that this constraint is always satisfied .[ defstab ] we say that system is stabilizable with an arbitrary exponential decay rate if for each there exists an it admissible deformation ( in the sense of definition [ defcontrol ] ) and a positive constant - depending only on , and - such that the solution of system satisfies order to use a change of unknowns which will enable us to rewrite the main system , we first extend to the whole domain the mappings , initially defined on .then by this extension - denoted by - we will define new unknowns whose interest lies in the fact that they are supposed to satisfy a system written in cylindrical domains , that is to say domains whose the space component does not depend on time . for a vector field and a rotation which provides an angular velocity , and for an _ admissible _ deformation - in the sense of definition [ defcontrol ] -the aim of this subsection is to construct a mapping which has to satisfy the properties such that for all the function maps onto , onto , and leaves invariant the boundary . for that ,let us first define an intermediate extension .we can extend the mapping to as follows : + if we assume that the mapping satisfies the hypothesis * h2 * , that is to say for all we have the condition and if we also assume that the function is small enough in , then there exists a mapping satisfying the existence and other properties of such an extension are summed up in the statement of proposition [ lemmaxtension ] , in appendix a. 
from this intermediate extension , the purpose in now to define the extension aforementioned .+ the mapping is obtained from by composing it to the left by the rigid displacement we can not do the same thing for obtaining the mapping from because of the boundary condition on that has to be preserved .thus we define an extension of to the whole domain . for that we can use the same process which has been introduced in , and thus construct which satisfies for all the following properties : so we define as follows and we denote by its inverse for all .+ the constructions of the mappings aforementioned are quite technical , that is why we develop the details of these constructions only in appendix a , in the same time as the regularity deduced on . however , let us note an important point : in appendix a , the definition of the mapping is conditioned by a smallness assumption on the solid deformation . since the deformation which will be chosen in sections [ secnonlinear ] and [ secconclusion ] for stabilizing the full nonlinear system will actually depend only on - and be controlled by - the initial data that we will assume small enough , it is possible to proceed like this .+ note that the definition of such a change of variables is also done in or in .the change of variables we use in this paper has the same properties as the ones constructed in these articles .but the way we proceed here is not the same : in the mapping is constructed by extending the eulerian velocity to the fluid part , but this means is not suitable in a framework where the role of the lagrangian mapping representing the deformation of the solid is central and where its regularity is limited ; indeed the velocity is defined through which is itself defined on the domain . concerning the means used in , the problem solved for constructingthe mapping in this paper is similar to ours , but it requires a smallness assumption on the time existence . herethe hypothesis we make is the smallness of in an infinite time horizon space .we use the change of variables given above in order to transform system into a system which deals with non - depending time domains . forthat we set the change of unknowns for and , and [ remarkc ] let us notice that if and are given , then by using the second equality of we see that satisfies the cauchy problem so is determined in a unique way .thus it is obvious to see that , in , and are also determined in a unique way .moreover , since we have and since the mapping depends only on , and the control , we finally see that if is given , then is determined in a unique way . for what follows ,it is convenient to define the mappings the regularity , dependence with respect to the unknowns , and estimates for the mappings and are given in appendix b. 
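one ingredient of remark [ remarkc ] can be made concrete : once the angular velocity is prescribed , the rotation is the unique solution of a linear matrix cauchy problem and can be recovered numerically . the sketch below assumes the spatial - frame convention dR/dt = skew(omega) R , R(0) = I ( the paper's exact convention is not reproduced here ) and uses matrix - exponential steps so that the iterates stay on so(3) up to round - off ; the angular velocity signal is arbitrary .

```python
import numpy as np
from scipy.linalg import expm

def skew(w):
    """Skew-symmetric matrix such that skew(w) @ v = w x v."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def integrate_rotation(omega, t_final, n_steps=2000):
    dt = t_final / n_steps
    R = np.eye(3)
    for k in range(n_steps):
        w = omega((k + 0.5) * dt)                # midpoint sample of omega
        R = expm(skew(w) * dt) @ R               # exact step for frozen omega
    return R

omega = lambda t: np.array([0.3 * np.sin(t), 0.1, 0.2 * np.exp(-t)])
R = integrate_rotation(omega, t_final=5.0)
print(np.linalg.norm(R.T @ R - np.eye(3)), np.linalg.det(R))   # ~0 and ~1
```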
+ then , like in , system , whose unknowns are , is rewritten in the cylindrical domain as the following system , whose unknowns are : where $[\,\cdot\,]_i$ specifies the i - th component of a vector and the right - hand - side terms are defined by
\begin{aligned}
[\mathbf{L}\tilde{u}(y,t)]_i &= [\nabla \tilde{u}(y,t)\, \Delta \tilde{Y}(\tilde{X}(y,t),t)]_i + \nabla^2 \tilde{u}_i(y,t) : \left(\nabla \tilde{Y}\, \nabla \tilde{Y}^T\right)(\tilde{X}(y,t),t) , \\
\mathbf{M}(\tilde{u},\tilde{h}',\tilde{\omega})(y,t) &= -\nabla \tilde{u}(y,t)\, \nabla \tilde{Y}(\tilde{X}(y,t),t) \left(\tilde{h}'(t) + \tilde{\omega}(t) \wedge \tilde{X}(y,t) + \frac{\partial \tilde{X}}{\partial t}(y,t)\right) , \\
\mathbf{N}\tilde{u}(y,t) &= \nabla \tilde{u}(y,t)\, \nabla \tilde{Y}(\tilde{X}(y,t),t)\, \tilde{u}(y,t) , \\
\mathbf{G}\tilde{p}(y,t) &= \nabla \tilde{Y}(\tilde{X}(y,t),t)^T\, \nabla \tilde{p}(y,t) , \\
\tilde{\sigma}(\tilde{u},\tilde{p})(y,t) &= \nu \left(\nabla \tilde{u}(y,t)\, \nabla \tilde{Y}(\tilde{X}(y,t),t) + \nabla \tilde{Y}(\tilde{X}(y,t),t)^T\, \nabla \tilde{u}(y,t)^T\right) - \tilde{p}(y,t)\, \mathrm{I}_{\mathbb{R}^3} ,
\end{aligned}
and for , we now set the following change of unknowns : the idea of this second change of unknowns is the following : if we find a control such that the quadruplet is bounded in some infinite - time horizon space , then the intermediate unknowns will be stabilized with an exponential decay rate , and it will be sufficient to deduce from that the same property for the actual unknowns ( see section [ secconclusion ] ) . + the system satisfied by is then transformed into with [ remarkcc ] an important remark is the following : since system and the system satisfied by above are equivalent , and since for an _ admissible _ control satisfying the constraint the compatibility condition is satisfied for system , in system the underlying compatibility condition enables us to have automatically the following equality as soon as on .
then in the evolution equation the operator becomes .this latter is stable , so that , , we can estimate let us keep in mind that we have , and so we have also the following estimate for some independent constant .let us consider a boundary stabilizing control which can be chosen in a feedback form , as described above .the purpose of this section consists in defining from this boundary function a deformation which is _ admissible _ in the sense of definition [ defcontrol ] , and which has to have a satisfying lipschitz behavior with respect to the function . the main result of this section is the following : [ thdecompsuper ]let be satisfying if is small enough in , then there exists a mapping which is _ admissible _ in the sense of definition [ defcontrol ] , and which satisfies moreover , if two functions and are close enough to in , then the _ admissible _ deformations and that they define respectively satisfy with when goes to .the proof of this theorem is divided into two steps . in the first one ,construct from a boundary function an internal solid deformation satisfying the linearized versions of constraints . then in a second time we project the displacement on a space of displacements which define _ admissible _ deformations .the first step of the proof of theorem [ thdecompsuper ] consists in defining from a boundary control an internal deformation which satisfies the linearized versions of constraints . for that , let us remind a result of part i , which is a consequence of the addition of proposition 5 and corollary 1 of section 6 : [ lemmaxistence2 ] for satisfying the following system admits a unique solution in , for large enough .this function satisfies the conditions and there exists a positive constant such that besides , if , then the solutions and associated with and respectively satisfy for obtained from , we now define as follows thus the estimates and become the result of this proposition could be actually reduced to saying that there exists a linear continuous operator from to , but the means that we use for obtaining it ( that is to say by considering system - ) is actually useful for a key point in the second part of the proof .let us consider a control which has been obtained in the previous subsection from a boundary velocity . 
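before describing how this control is projected , it may help to illustrate on a finite - dimensional analogue -- not the infinite - dimensional operators of the paper -- how the feedback above achieves an arbitrary prescribed decay rate : stabilizing the shifted operator A + lambda I ( here with a plain lqr gain obtained from the continuous algebraic riccati equation on a small toy pair ) pushes every closed - loop eigenvalue of A below the prescribed rate .

```python
import numpy as np
from scipy.linalg import solve_continuous_are

lam = 1.5                                       # prescribed decay rate
A = np.array([[0.0, 1.0], [2.0, -0.5]])         # unstable toy dynamics
B = np.array([[0.0], [1.0]])
Q, R = np.eye(2), np.eye(1)

P = solve_continuous_are(A + lam * np.eye(2), B, Q, R)   # stabilize the shifted system
K = np.linalg.solve(R, B.T @ P)                 # feedback gain, u = -K x
closed_loop = A - B @ K
print(np.linalg.eigvals(closed_loop).real)      # all real parts < -lam
```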
instead of projecting on a set of controls satisfying the nonlinear constraints required by definition [ defcontrol ] , we prefer projecting the displacements , because we choose the space as an hilbertian framework .we denote the displacement by the goal of this subsection is to define ( in a suitable way ) a mapping which satisfies the nonlinear constraints of definition [ defcontrol ] .we associate with it the displacement so that the wanted mapping is now .we can decompose such a mapping as follows let us define the differentiable mapping and the spaces where note that is a space where lie and .that is why the constraints satisfied by and are the same .note that the space takes into account the nonlinear constraints adapted to the displacements .the purpose of this paragraph is to project any displacement on the set , provided that the displacement is close enough to .the definition of such a projection is given by : [ thdecompsuperbis ] let be .if is small enough in , then there exists a unique mapping such that moreover , we have that thus we denote by the projection so obtained .+ if the displacements and are close enough to in , then with and , and when goes to .note that in this statement we do not need to assume that .but the way we have constructed such that in the previous subsection will be useful in the proof below .the proof of this theorem is an application of theorem 3.33 of ( page 74 ) , that we state as follows : let be a hilbert space , a banach space , and a mapping of class from to , such that .let be , and such that is surjective. then there exists such that if , then the following optimization problem under equality constraints admits a unique solution .moreover , the mapping so obtained is . in order to apply this theorem with the only nontrivial assumption to be verified is that the mapping is surjective . for that , let us consider an antecedent of this triplet can be obtained as where is the solution of the following system for large enough , with the previous study of system - ( see section 6 of part i ) can be straightforwardly adapted to get the existence of a solution in for such a system , and thus a displacement .+ since the projection so obtained is , we can notice that its differential at is the identity , and thus a taylor development shows that for and close to , the estimate is obtained in considering a taylor development around for the mapping : \left(z^{\ast}_{\zeta_2 } - z^{\ast}_{\zeta_1 } \right ) \\ & & + o\left(\|z^{\ast}_{\zeta_2 } - z^{\ast}_{\zeta_1 } \|_{\mathcal{w}_{\lambda}(s_{\infty}^0 ) } \right).\end{aligned}\ ] ] since is continuous at , we have and thus we obtain the announced estimate .then , from the displacement we can define a deformation as follows this deformation is _ admissible _ in the sense of definition [ defcontrol ] .+ the interest of such a decomposition ( namely with given by theorem [ thdecompsuper ] ) lies in the fact that the _ admissible _ control so decomposed will enable us to stabilize the nonhomogeneous linear part of system thanks to the term ( see the previous section ) , whereas the residual term satisfies the property , which leads to by combining the second and the third inequality to the estimate , we then obtain respectively and . +lastly , the estimate is reformulated as follows where when goes to , and where the inequality combined to and leads to the estimate , and the inequality combined to and leads to the estimate .system is transformed into system . 
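the projection step of theorem [ thdecompsuperbis ] can be mimicked numerically : the unconstrained displacement produced by the feedback is replaced by the closest displacement satisfying the equality constraints . in the sketch below the two constraints -- zero mean and zero first moment of a discretized one - dimensional displacement -- are placeholders standing in for the actual self - propelled constraints * h1**h4 * , which are not reproduced in this excerpt ; the minimization is carried out with slsqp .

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
s = np.linspace(0.0, 1.0, 40)                   # material points of the solid
z_star = 0.05 * rng.standard_normal(s.size)     # small unconstrained displacement

constraints = (
    {"type": "eq", "fun": lambda z: np.mean(z)},        # placeholder: "no net translation"
    {"type": "eq", "fun": lambda z: np.mean(s * z)},    # placeholder: "no net moment"
)
res = minimize(lambda z: np.sum((z - z_star) ** 2), x0=z_star,
               constraints=constraints, method="SLSQP")
z_proj = res.x
print(np.mean(z_proj), np.mean(s * z_proj))     # both ~0: constraints satisfied
print(np.linalg.norm(z_proj - z_star))          # stays close to z_star for small data
```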
before proving the stabilization to zero , with an arbitrary exponential decay rate , of system , let us first prove the stability of system for all , for some well - chosen deformation . in system , the mapping has to be _ admissible _ ( in the sense of definition [ defcontrol ] ) .it has to be chosen also in order to stabilize the linear part of this system . for that, we decompose formally the function on as follows let us choose in system the function in the following feedback form : provided that is small enough , by theorem [ thdecompsuper ] we can now define an _ admissible _deformation . from this deformationwe can define a change of variables and the corresponding mappings and ( see ) which enables us to define the right - hand - sides of system ; more precisely , we rewrite this system as follows with note that the mapping is entirely defined by and so by the unknowns , while the mappings and are entirely defined by and the unknowns , and thus by .note that the projection method used in order to define an _ admissible _deformation from a boundary control has been made in infinite time horizon , with regards to the functional spaces considered .it implies that the nonlinear system above is noncausal , that means that the control chosen in a feedback form anticipate _ a priori _ at some time the behavior of the unknowns for later times . in practicewe could define a projection method for increasing times , but the corresponding lipschitz estimates - in infinite time horizon - obtained above do not hold anymore .anyway , let us show that this system admits a unique solution , and thus that it makes sense for all time .[ thstabnonlinx ] for small enough in , system admits a unique solution in , and there exists a positive constant such that let us set a solution of system can be seen as a fixed point of the mapping where satisfies with and which is obtained from by theorem [ thdecompsuper ] .the system above is actually the nonhomogeneous linear system introduced in section [ linearsec ] . in particular the estimate gives some estimates given below are obtained by using a result stated in the appendix b of ( proposition b.1 ) , and that we remind in lemma [ lemmagrubb ] .let us also keep in mind the regularities provided by proposition [ lemmaktilde ] for mappings and .[ lemmah301 ] there exists a positive constant such that for all in we have the only delicate point that consists in verifying that in . for that, let us consider the i - th component of ; we write with and we apply lemma [ lemmagrubb ] with , and to obtain [ lemmah302 ] there exists a positive constant such that for all in we have there is no particular difficulty for obtaining these estimates .[ lemmah3 ] there exists a positive constant such that for all we have the quantity lies in .we apply lemma [ lemmagrubb ] with , and in order to get for proving the regularity , we first write the quantity lies in , so that we have the estimates [ estfmfi ] there exists a positive constant such that for all in we have with there is no particular difficulty for proving the other two estimates , if we refer to the respective expressions of , and given by , and . 
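the estimates above feed the contraction argument carried out next , whose mechanism can be caricatured on a scalar model : the solution is a fixed point of a map with a quadratic nonlinearity , so picard iteration converges only when the datum is small enough . the toy problem below is not derived from the coupled system ; it merely illustrates why the smallness assumption is needed .

```python
def picard(f, n_iter=200):
    """Iterate u <- f + u**2; a contraction near 0 only for small f (f <= 1/4)."""
    u = 0.0
    for _ in range(n_iter):
        u = f + u ** 2
        if abs(u) > 1e6:
            return None          # iteration leaves every bounded ball
    return u

for f in (0.05, 0.10, 0.20):     # small data: converges to the small fixed point
    print(f, picard(f))
print(0.50, picard(0.50))        # data too large: no fixed point, iteration blows up
```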
for some radius ,let us define the ball with and where the constant appears in the estimate .note that is a closed subset of .+ first , for small enough , we claim that for the function is small enough in , because of the estimates of lemma [ lemmah3 ] and the continuity of the operator ( see part i , section 5 ) .so by theorem [ thdecompsuper ] we can define the corresponding _ admissible _ deformation , the mappings and stem , and thus - in virtue of remark [ remarkcc ] - we can define properly the mapping in . letbe , for small enough . in the estimates provided by the previous lemmas [ lemmah301 ] , [ lemmah302 ] , [ lemmah3 ] , [ estfmfi ] , note that from the estimate , , of proposition [ lemmaktilde ] combined to and of theorem [ thdecompsuper ] we can deduce in particular , for enough the mapping is well - defined , and so we can write the estimate that we remind thus we have this shows that for small enough the ball is stable by the mapping .let and be in .we set and we also denote and next and the _ admissible _ deformation given by theorem [ thdecompsuper ] which are defined by and respectively . + for small enough the quadruplet satisfies the system with the deformations and induce respectively the mappings and which - partially - define the right - hand - sides above . + the right - hand - sides , , , and can be expressed as quantities which are multiplicative of the differences for instance ,the nonhomogeneous divergence condition can be written as then the estimates of lemmas [ lemmah301 ] , [ lemmah302 ] , [ lemmah3 ] , [ estfmfi ] can be adapted for this right - hand - sides , so that the estimates , , of proposition [ lemmaktilde ] combined to the estimates and of theorem [ thdecompsuper ] enable us to prove that for small enough we have then the estimate can be applied for the quadruplet : then we have for small enough and thus the mapping is a contraction in .it admits a unique fixed point , that is to say there exists a unique solution of system . the announced estimate is easily deduced from what precedes .after having proven theorem [ thstabnonlinx ] for the unknowns of system , let us remind the relations given by - and and the relations given by let us deduce the stabilization of the main system in the sense of definition [ defstab ] .[ maintheorem ] for small enough in , system is stabilizable with an arbitrary exponential decay rate , in the sense of definition [ defstab ] .first , from remark [ remarkc ] , we can see that for given the quadruplet is determined in a unique way ; in particular we can define the rotation associated with the angular velocity as the solution of the following problem now by considering we can deduce from the estimate of theorem [ thstabnonlinx ] that and by considering we can deduce from the estimate of theorem [ thstabnonlinx ] and the estimates of proposition [ lemmaktilde ] that , since we have , the following estimates hold finally by considering the equalities and for the -st component of vector the following equality we can use the estimates of lemma [ lemmaktilde ] to conclude the proof with the estimate us consider an _ admissible _ deformation - in the sense of definition [ defcontrol ] - which satisfies , in particular , for all the following condition the regularity considered for the datum in this section is the goal of this subsection is to extend to the whole domain the mappings and , initially defined respectively on and .the process we use is not the same as the one given in . 
instead of extending the eulerian flow given by the deformation of the solid ,we directly extend the deformation of the solid , because the difference in our case lies in the fact that the regularity of the dirichlet data - written in eulerian formulation on the time - dependent boundary - is limited .+ the goal is to construct a mapping such that let us remind a result stated in the appendix b of ( proposition b.1 ) , which treats of sobolev regularities for products of functions , and that we state as : [ lemmagrubb ] let , , and in . if and , then there exists a positive constant such that ( i ) when , + ( ii ) with , , , + ( iii ) except that if equality holds somewhere in ( ii ) .a consequence of this lemma is the following result .[ lemmecomatrice ] let be in .then and , if is small enough in , there exists a positive constant such that besides , if and are small enough in , there exists a positive constant such that for proving , the case is obvious . for the general case ,let us show that the space is stable by product . for that, let us consider two functions and which lie in this space .applying lemma [ lemmagrubb ] with and , we get for the regularity in , we write using the continuous embedding , we get and thus the desired regularity . thus the space is an algebra .the estimate is obtained by the differentiability of the mapping ( see for instance ) ; more precisely , we have so that we get the estimate can be obtained by the mean - value theorem , so its proof is left to the reader .let us first extend the deformation of the solid to the fluid domain , in a mapping that we have already denoted by .[ lemmaxtension ] let be an _admissible _ deformation , in the sense of definition [ defcontrol ] .let us assume that is small enough in , that is to say that the function is small enough in .then there exists a mapping satisfying and such that for some positive constant independent of . besides , if and are two displacements small enough in , then the solutions and of problem , corresponding to and as data respectively , satisfy given the initial datum for , let us consider the system derived in time , as follows this system can be viewed as a modified nonlinear divergence problem , that we state as with + a solution of this system can be viewed as a fixed point of the mapping where satisfies the classical divergence problem indeed , let us first verify that for we have .for that , we remind from the previous lemma that , and we first use the result of lemma [ lemmagrubb ] with and to get for the regularity in , we write where we have used the continuous embedding . 
thus there exists a positive constant such that the estimate shows in particular that the mapping is well - defined .moreover , for the divergence problem there exists a positive constant ( see for instance ) such that and also thus there exists a positive constant such that let us consider the set with notice that a mapping satisfies in particular the following inequality , obtained in the same way we have proceeded to get the embedding : then the inequality combined to the estimates and show that for we have and thus for small enough , is stable by .notice that is a closed subset of .let us verify that is a contraction in .+ for and in , we denote which satisfies the divergence system and thus the estimate for tackling the lipschitz property of the nonlinearity , we write by reconsidering the steps of the proof of the estimate and by using , we can verify that for small enough the mapping is a contraction in .thus admits a unique fixed point in .+ for the estimate , if and are two solutions corresponding to and respectively , let us just write the system satisfied by the difference : then the methods used above can be similarly applied to this system in order to deduce from it the announced result . + let us now consider , and which provides .let us construct a mapping such that and we can not solve this problem as we have done for problem , because the proof would require the unknowns and arbitrarily small enough , a thing that we can not assume , even _ a posteriori_. instead of that , we utilize the mapping provided by proposition [ lemmaxtension ] , and we search for a mapping such that such a mapping has to satisfy for that , let us proceed as in : we consider a cut - off function , such that in a vicinity of and in a vicinity of .we define the function so that , and we construct as the solution of the following cauchy problem we can verify ( see for instance ) that the mapping so obtained has the desired properties , and thus we can set since and are invertible , the mapping is invertible , and we denote by its inverse . the mapping presents the same type of regularity as the mapping .we sum its properties in the following proposition .+ [ restildex ] [ resx ] let be an _admissible _ control - in the sense of definition [ defcontrol ] - and the extension of provided by proposition [ lemmaxtension ] ( for small enough in ) .let be the mapping given by . 
for all , the mapping is a -diffeomorphism from onto , from onto , and from onto .we denote by its inverse at some time .we have the proof for the regularity of can be straightforwardly deduced from lemma [ lemmax21 ] in the appendix b of this chapter .we do not give more detail in this section , because here the aim is only to get a change of variables which enables us rewrite the main system as an equivalent one written in fixed domains ( see section [ secchange ] ) .let us remind that for proposition [ lemmaxtension ] enables us to define the extension satisfying and for and which provides such that we can define through the problem where is a regular cut - off function , and then we define and [ lemmakstar ] for and small enough in , let and be the solutions of problem ( see proposition [ lemmaxtension ] ) corresponding to the data and respectively .if we denote by and the inverses of and respectively , we have let us remind the estimate , obtained for and small enough in : first , let us give two intermediate estimates : and provided by the equality and by the fact that is an algebra .then the estimate is obtained by writing the change of variables given by a mapping is slightly the same as the one utilized in ; in considering the writing the steps of the proofs of lemmas 6.11 and 6.12 of can be then repeated , with the difference that in infinite time horizon we rather have where is bounded when are close to and are close to . in order to estimate and , we first apply the grnwall s lemma on in order to get besides , it is easy to see that so that is controlled by . then the term can be treated by writing finally , it is easy to verify that [ lemmax21 ] let and be defined by where , , and are given in the assumptions of the previous lemmas .then we have where and is bounded when goes to .let us write for tackling the difference , let us apply lemma a.3 of the appendix of ; we get the estimate and thus the regularity in . the regularity in be also obtained by applying lemma a.3 of for the time derivative of .+ for the term , we apply lemma a.2 of the appendix of ; we get the estimate and thus the regularity in . here again the regularity in the space is obtained by applying the same lemma on the time derivative .[ lemmaktilde ] let and be defined by where and are given in the assumptions of the previous lemma .then where and is bounded when goes to . for proving ,it is sufficient to write and to apply the previous lemma .the estimate can be proven exactly like the estimate .finally , for the estimate we denote by and the i - th component of and respectively , and we write the equality and apply lemma [ lemmagrubb ] .( mr1156428 ) g. grubb and v. a. solonnikov , boundary value problems for the nonstationary navier - stokes equations treated by pseudo - differential methods , math .scand . , * 69 * ( 1991 ) , no . 2 , 217290 ( 1992 ) .( mr2371113 ) [ 10.1016/j.anihpc.2006.06.008 ] j. p. raymond , _ stokes and navier - stokes equations with nonhomogeneous boundary conditions _ , ann .h. poincar anal .non linaire , * 24 * ( 2007 ) , no . 6 , 921951 .( mr2393436 ) [ 10.1007/s00205 - 007 - 0092 - 2 ] j. san martn , j. f. scheid , t. takahashi and m. tucsnak , _ an initial and boundary value problem modeling of fish - like swimming _ , arch ., * 188 * ( 2008 ) , no . 3 , 429455 .
in this second part we prove that the full nonlinear fluid - solid system introduced in part i is stabilizable by deformations of the solid which have to satisfy nonlinear constraints . some of these constraints are physical and guarantee the _ self - propelled _ nature of the solid . the proof is based on the boundary feedback stabilization of the linearized system . from this boundary feedback operator we construct a deformation of the solid which satisfies the aforementioned constraints and stabilizes the nonlinear system . the proof relies on a fixed - point method . sébastien court
normal function of the human spine is possible due to a complex interaction of its components ( i.e. , vertebrae , ligaments , discs , rib cage , and muscles ) .age , trauma , spinal disorders , and a host of other parameters can disrupt this interaction to an extent that in certain cases surgery may be required to restore normal function .several spinal disorders have been described in from a mechanical perspective .an understanding of these disorders can assist in the design and development of spinal instrumentation . as biomechanics begins to be intertwined with tissue engineering , a better understanding of the particular disordersmay also provide insight into ` biological ' solutions .in particular , the center of rotation of the upper cervical spine is an important biomechanical landmark that is used to determine upper neck moment , particularly when evaluating injury risk in the automotive environment .also , new vehicle safety standards are designed to limit the amount of neck tension and extension seen by out - of - position motor vehicle occupants during airbag deployments .the criteria used to assess airbag injury risk are currently based on volunteer data and animal studies due to a lack of bending tolerance data for the adult cervical spine . also , lumbar spine pathology accounts for billions of dollars in societal costs each year .although the symptomatology of these conditions is relatively well understood , the mechanical changes in the spine are not .previous direct measurements of lumbar spine mechanics have mostly been performed on cadavers .the methods for in vivo studies have included imaging , electrogoniometry , and motion capture .few studies have directly measured in vivo lumbar spine kinematics with in - dwelling bone pins .in vivo 3d motion of the entire lumbar spine has recently been tracked during gait in . using a direct ( pin - based ) in vivo measurement method ,the motion of the human lumbar spine during gait was found to be triaxial .this appears to be the first 3d motion analysis of the entire lumbar spine using indwelling pins .the results were similar to previously published data derived from a variety of experimental methods .the traditional _ principal loading hypothesis _ , which describes general spinal injuries in terms of spinal tension , compression , bending , and shear , is insufficient to predict and prevent the cause of the back - pain syndrome .its underlying mechanics is simply not accurate enough . on the other hand , to be recurrent , musculo - skeletal injury must be associated with a histological change , i.e. , the modification of associated tissues within the body .however , incidences of _ functional _ musculoskeletal injury , e.g. 
, lower back pain , generally shows little evidence of _ structural _ damage .the incidence of injury is likely to be a continuum ranging from little or no evidence of structural damage through to the observable damage of muscles , joints or bones .the changes underlying functional injuries are likely to consist of torn muscle fibers , stretched ligaments , subtle erosion of join tissues , and/or the application of pressure to nerves , all amounting to a disruption of function to varying degrees and a tendency toward spasm .for example , in a review of experimental studies on the role of mechanical stresses in the genesis of intervertebral disk degeneration and herniation , the authors dismissed simple mechanical stimulations of functional vertebra as a cause of disk herniation , concluding instead that a complex mechanical stimulation combining forward and lateral bending of the spine followed by violent compression is needed to produce posterior herniation of the disk . considering the use of models to estimate the risk of injurythe authors emphasize the need to understand this complex interaction between the mechanical forces and the living body .compressive and shear loading increased significantly with exertion load , lifting velocity , and trunk asymmetry . also , it has been stated that up to two thirds of all back injuries have been associated with trunk rotation .in addition , load lifting in awkward environment places a person at risk for low back pain and injury .these risks appear to be increased when facing up or down an inclined surface .the safe spinal motions ( flexion / extension , lateral flexion and rotation ) _ are _ governed by standard euler s rotational intervertebral dynamics coupled to newton s micro - translational dynamics . on the other hand , the unsafe spinal events , the main cause of spinal injuries , are caused by intervertebral se(3)jolts , the sharp and sudden , delta ( forces + torques ) combined , localized both in time and in space .these localized intervertebral se(3)jolts do not belong to the standard newton euler dynamics .the only way to monitor them would be to measure in vivo " the rate of the combined ( forces + torques) rise .it is well known that the mechanical properties of spinal ligaments and muscles are rate dependent . as elongation rate increases ,ligaments generally exhibit higher stiffness , higher failure force , and smaller failure strain .previous studies have shown that high - speed multiplanar loading causes soft tissue injury that is more severe as compared to sagittal loading .this paper proposes a new locally coupled loading rate hypothesis , which states that the main cause of both soft and hard tissue spinal injury is a localized euclidean jolt , or , an impulsive loading that strikes a localized spine in several coupled degrees - of - freedom ( dof ) simultaneously . to show this ,based on the previously defined covariant force law , we formulate the coupled newton euler dynamics of the local spinal motions and derive from it the corresponding coupled dynamics .the is the main cause of two forms of local discontinuous spinal injury : ( i ) hard tissue injury of local translational dislocations ; and ( ii ) soft tissue injury of local rotational disclinations .both the spinal dislocations and disclinations , as caused by the , are described using the cosserat multipolar viscoelastic continuum model . 
while we can intuitively visualize the se(3)jolt , for the purpose of simulation we use the necessary simplified , decoupled approach ( neglecting the 3d torque matrix and its coupling to the 3d force vector ) .note that decoupling is a kind of linearization that prevents chaotic behavior , giving an illusion of full predictability . in this decoupled framework of reduced complexity , we define : the cause of hard spinal injuries ( discus hernia ) is a linear 3d jolt vector hitting some intervertebral joint the time rate - of - change of a 3d force vector ( linear jolt = mass linear jerk ) .the cause of soft spinal injuries ( back pain syndrome ) is an angular 3axial jolt hitting some intervertebral joint the time rate - of - change of a 3axial torque ( angular jolt = inertia moment angular jerk ) .this decoupled framework has been implemented in the human biodynamics engine , a world class neuro musculo skeletal dynamics simulator ( with 270 dofs , the same number of equivalent muscular actuators and two level neural reflex control ) , developed by the present author at defence science and technology organization , australia .this kinematically validated human motion simulator has been described in a series of papers and books , + , + .in the language of modern biodynamics ,, , the general spinal motion is governed by the euclidean se(3)group of 3d motions ( see figure [ ivspine ] ) . within the spinal se(3)group we have both se(3)kinematics ( consisting of the spinal se(3)velocity and its two time derivatives : se(3)acceleration and se(3)jerk ) and the spinal se(3)dynamics ( consisting of se(3)momentum and its two time derivatives : se(3)force and se(3)jolt ) , which is the spinal kinematics the spinal mass inertia distribution . informally , the _ _localized spinal se(3)jolt _ _ is a sharp and sudden change in the localized spinal se(3)force acting on the localized spinal mass inertia distribution .that is , a ` delta'change in a 3d force vector coupled to a 3d torque vector , striking the certain local point along the vertebral column .in other words , the localized spinal se(3)jolt is a sudden , sharp and discontinues shock in all 6 coupled dimensions of a local spinal point , within the three cartesian ( )translations and the three corresponding euler angles around the cartesian axes : roll , pitch and yaw .if the se(3)jolt produces a mild shock to the spine , it causes mild , soft tissue spinal injury , usually resulting in the back pain sindrome .if the se(3)jolt produces a hard shock to the spine , it causes severe , hard tissue spinal injury , with the total loss of movement .therefore , we propose a new _ combined loading rate hypothesis _ of the local spinal injury instead of the old principal loading hypothesis . 
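as a rough numerical illustration of the two decoupled quantities just defined , the following python sketch estimates the linear jolt ( mass times linear jerk ) and the angular jolt ( inertia moment times angular jerk ) from sampled kinematic data by finite differences . the sampling step , the synthetic motion signals and the segment mass and inertia values are illustrative assumptions and are not parameters of the human biodynamics engine .

```python
# A rough sketch of the decoupled jolt quantities defined above:
#   linear jolt  = mass * linear jerk            = m * d^3 x / dt^3
#   angular jolt = inertia moment * angular jerk = I * d^2 omega / dt^2
# All signals and constants below are made up for illustration.
import numpy as np

dt = 1e-3                                       # sampling step of the motion data [s]
t  = np.arange(0.0, 0.5, dt)
# synthetic joint position [m] and angular velocity [rad/s] time series
x     = np.c_[0.01 * np.sin(40 * t), np.zeros_like(t), np.zeros_like(t)]
omega = np.c_[np.zeros_like(t), 2.0 * np.sin(30 * t), np.zeros_like(t)]

m = 1.2                                         # assumed segment mass [kg]
I = np.diag([1e-3, 1e-3, 1e-3])                 # assumed principal inertia moments [kg m^2]

# finite-difference derivatives along the time axis
jerk         = np.gradient(np.gradient(np.gradient(x, dt, axis=0), dt, axis=0), dt, axis=0)
angular_jerk = np.gradient(np.gradient(omega, dt, axis=0), dt, axis=0)

linear_jolt  = m * jerk                         # rate of change of the 3d force vector
angular_jolt = angular_jerk @ I.T               # rate of change of the 3-axial torque

print(np.abs(linear_jolt).max(), np.abs(angular_jolt).max())
```

since third derivatives amplify measurement noise , real motion - capture or load - cell data would need low - pass filtering before such differencing .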
this new hypothesis has actually been supported by a number of individual studies , both experimental and numerical , as can be seen from the following brief review .one of the first dynamical studies of the head neck system s response to impulsive loading was performed in .the response of a human head / neck / torso system to shock was investigated in , using a 3d numerical and physical models ; the results indicated that the head , cervical muscles and disks in the lumbar region were subjected to the greatest _ force changes _ and thus were most likely to be injured .dependent changes in the lumbar spine s resistance to bending was investigated in , with the objective to show how time related factors might affect the risk of back injury ; the results suggested that the risk of bending injury to the lumbar discs and ligaments would depend not only on the loads applied to the spine , but also on _ loading rate_. cyclic loading tests were performed by to investigate the mechanical responses at different loading rates ; the results indicated that faster _ loading rate _ generated greater stress decay , and disc herniation was more likely to occur under higher loading rate conditions .anterior shear of spinal motion segments was experimentally investigated in ; kinematics , kinetics , and resultant injuries were observed ; dynamic loading and flexion of the specimens were found to increase the ultimate load at failure when compared with quasi - static loading and neutral postures .experimental evidence concerning the distribution of forces and moments acting on the lumbar spine was reviewed in , pointing out that it was necessary to distribute the overall forces and moments between ( and within ) different spinal structures , because it was the _ concentration of force _ which caused injury , and elicited pain .small magnitudes of axial torque was shown to in drake05 to alter the failure mechanics of the intervertebral disc and vertebrae in _ combined loading _ situations . a finite element model of head and cervical spine based on the actual geometry of a human cadaver specimenwas developed in , which predicted the _nonlinear moment - rotation relationship _ of human cervical spine .vertebral end - plate fractures as a result of high rate pressure loading were investigated in , where a slightly exponential relationship was found between _ peak pressure _ and its _ rate of development_. the localized spinal se(3)jolt is rigorously defined in terms of differential geometry + .briefly , it is the absolute time derivative of the covariant force 1form ( or , co - vector field ) applied to the spine at a certain local point . with this respect ,recall that the fundamental law of biomechanics the so called _ covariant force law _ , states : is formally written ( using the einstein summation convention , with indices labelling the three local cartesian translations and the corresponding three local euler angles ) : denotes the 6 covariant components of the localized spinal se(3)force co - vector field , represents the 6 covariant components of the localized spinal inertia metric tensor , while corresponds to the 6 contravariant components of localized spinal se(3)acceleration vector - field .now , the covariant ( absolute , bianchi ) time derivative of the covariant se(3)force defines the corresponding localized spinal se(3)jolt co - vector field : denotes the 6 contravariant components of the localized spinal se(3)jerk vector - field and overdot ( ) denotes the time derivative . 
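the explicit euler rotation factors and the coupled newton euler equations are written out in the following paragraphs ; as a small self - contained sketch of the se(3) representation used here , the code below composes three euler factors into a rotation matrix and packs it together with a micro - translation into a 4x4 homogeneous matrix . the form of the first factor ( taken as the standard rotation about the x - axis ) and all numerical values are assumptions made for illustration only .

```python
# Minimal sketch of an SE(3) element (rotation + micro-translation) acting on a
# point; the x-axis form of R_phi and all numbers are illustrative assumptions.
import numpy as np

def euler_rotation(phi, psi, theta):
    """R = R_phi @ R_psi @ R_theta (rotations about the x-, y- and z-axes)."""
    c, s = np.cos(phi), np.sin(phi)
    R_phi = np.array([[1, 0, 0], [0, c, -s], [0, s, c]])
    c, s = np.cos(psi), np.sin(psi)
    R_psi = np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])
    c, s = np.cos(theta), np.sin(theta)
    R_theta = np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])
    return R_phi @ R_psi @ R_theta

def se3(R, b):
    """Homogeneous 4x4 representation of the pair (R, b) in SE(3)."""
    g = np.eye(4)
    g[:3, :3] = R
    g[:3, 3] = b
    return g

R = euler_rotation(0.05, -0.02, 0.10)           # small intervertebral rotations [rad]
b = np.array([0.0, 0.001, 0.002])               # micro-translation [m]
g = se3(R, b)
p = np.array([0.01, 0.0, 0.03, 1.0])            # a material point, homogeneous coords
print(g @ p)                                     # rotated-then-translated point
```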
are the christoffel s symbols of the levi civita connection for the se(3)group , which are zero in case of pure cartesian translations and nonzero in case of rotations as well as in the full coupling of translations and rotations . in the following , we elaborate on the localized spinal se(3)jolt concept ( using vector and tensor methods ) and its biophysical consequences in the form of the localized spinal dislocations and disclinations .briefly , the of localized spinal motions is defined as a semidirect ( noncommutative ) product of 3d intervertebral rotations and 3d intervertebral micro translations , most important subgroups are the following ( see appendix for technical details ) : in other words , the gauge of intervertebral euclidean micro - motions contains matrices of the form where is intervertebral 3d micro - translation vector and is intervertebral 3d rotation matrix , given by the product of the three eulerian intervertebral rotations , , performed respectively about the by an angle about the by an angle and about the by an angle ( see ) , , ~~r_{\psi } = \left [ \begin{array}{ccc } \cos \psi & 0 & \sin \psi \\ 0 & 1 & 0 \\ -\sin \psi & 0 & \cos \psi% \end{array}% \right ] , ~~r_{\theta } = \left [ \begin{array}{ccc } \cos \theta & -\sin \theta & 0 \\ \sin \theta & \cos \theta & 0 \\ 0 & 0 & 1% \end{array}% \right ] .\ ] ] therefore , natural intervertebral is given by the coupling of newtonian ( translational ) and eulerian ( rotational ) equations of intervertebral motion . to support our locally coupled loading rate hypothesis , we formulate the coupled newton euler dynamics of localized spinal motions within the .the forced newton euler equations read in vector ( boldface ) form denotes the vector cross product , of two vectors and equals , where is the angle between and , while is a unit vector perpendicular to the plane of and such that and form a right - handed system . ] spinal segment s ( diagonal ) mass and inertia matrices , ) are not diagonal but rather full positive definite symmetric matrices with coupled mass and inertia products .even more realistic , fully coupled mass inertial properties of a spinal segment are defined by the single non - diagonal positive definite symmetric mass inertia matrix , the so - called material metric tensor of the , which has all nonzero mass inertia coupling products .however , for simplicity , in this paper we shall consider only the simple case of two separate diagonal matrices ( ) .] defining the localized spinal mass inertia distribution , with principal inertia moments given in cartesian coordinates ( ) by volume integrals on localized spinal density , ^{t}\qquad \text{and\qquad } % \mathbf{\omega } \equiv { \omega } ^{i}=[\omega _ { 1},\omega _ { 2},\omega _ { 3}]^{t}\]](where ^{t} ] denotes the skew - symmetric part of .similarly , the third equation ( [ dis3 ] ) in components reads }\,dx^{k}\wedge dx^{i}\wedge dx^{j},\text{\qquad or } \\ q_{ijk } & = & -6\partial _ { k}\alpha _ { \lbrack ij]}.\end{aligned}\ ] ] the second equation ( [ dis2 ] ) in components reads }\,dx^{k}\wedge dx^{i}\wedge dx^{j},\text{\qquad or } \\ \dot{q}_{ijk } & = & 6\partial _ { k}s_{[ij]}.\end{aligned}\ ] ] finally , the first equation ( [ dis1 ] ) in components reads in words , we have : * the 2form equation ( [ dis1 ] ) defines the time derivative of the dislocation density as the ( negative ) sum of the disclination current and the curl of the dislocation current . 
*the 3form equation ( [ dis2 ] ) states that the time derivative of the disclination density is the ( negative ) divergence of the disclination current . *the 3form equation ( [ dis3 ] ) defines the disclination density as the divergence of the dislocation density , that is , is the _ exact _ 3form . *the bianchi identity ( [ dis4 ] ) follows from equation ( [ dis3 ] ) by _ poincar lemma _ and states that the disclination density is conserved quantity , that is , is the _ closed _3form . also , every 4form in 3d space is zero . from these equations, we can conclude that localized spinal dislocations and disclinations are mutually coupled by the underlaying , which means that we can not separately analyze translational and rotational spinal injuries a fact which _ is not _ supported by the literature .based on the previously developed covariant force law , in this paper we have formulated a new coupled loading rate hypothesis , which states that the main cause of localized spinal injury is an external , an impulsive loading striking the spinal segment in several degrees - of - freedom , both rotational and translational , combined . to demonstrate this ,we have developed the vector newton euler mechanics on the euclidean of localized spinal micro - motions . in this way, we have precisely defined the concept of the , which is a cause of rapid localized spinal discontinuous deformations : ( i ) mild rotational disclinations and ( ii ) severe translational dislocations .based on the presented model , we argue that we can not separately analyze localized spinal rotations from translations , as they are in reality coupled . to prevent spinal injuries we need to develop the _ internal se(3)jolt awareness_. to maintain a healthy spine , we need to prevent localized se(3)jolts from striking any part of the spine in any human motion or car crash conditions .special euclidean group , ( the semidirect product of the group of rotations with the corresponding group of translations ) , is the lie group consisting of isometries of the euclidean 3d space .an element of is a pair where and the action of on is the rotation followed by translation by the vector and has the expression using homogeneous coordinates , we can represent as follows , with the action on given by the usual matrix vector product when we identify with the section . in particular , given and , we have or as a matrix vector product , ivancevic and ivancevic 2006a goel , v.k . ,sairyoa , k. , vishnubhotl , s.l . , biyania , a. , ebraheim , n. , spine technology handbook , chapter 6 - spine disorders : implications for bioengineers , elsevier , ( 2006 ) .nightingale , r.w . ,chanceya , v.c . , ottaviano , d. , luck , j.f . ,tran , l. , prange , m. , myersa , b.s . ,flexion and extension structural properties and strengths for male cervical spine segments , j. biomech . , * 40*(3 ) , 535 - 542 , ( 2007 ) .rozumalski , a. , schwartz , m.h . ,wervey , r. , swanson , a. , dykes , d.c . ,novacheck , t. , the in vivo three - dimensional motion of the human lumbar spine during gait , gait posture , 18585041 ( p , s , e , b , d ) , ( 2008 ) ivancevic , v. , ivancevic , t. , geometrical dynamics of complex systems : a unified modelling approach to physics , control , biomechanics , neurodynamics and psycho - socio - economical dynamics .springer , dordrecht , ( 2006 ) .drake , j.d . ,aultman , c.d . ,mcgill , s.m . 
, callaghan , j.p . , the influence of static axial torque in combined loading on intervertebral joint failure mechanics using a porcine model , clin . biomech . , * 20*(10 ) , 1038 - 1045 , ( 2005 ) . bilby , b.a . , eshelby , j.d . , dislocation and the theory of fracture . in : fracture , an advanced treatise , vol . i , microscopic and macroscopic fundamentals , liebowitz , h. ( ed . ) , academic press , new york and london , 99 - 182 , ( 1968 ) .
the prediction and prevention of spinal injury is an important aspect of preventive health science . the spine , or vertebral column , represents a chain of 26 movable vertebral bodies , joined together by transversal viscoelastic intervertebral discs and longitudinal elastic tendons . this paper proposes a new _ locally coupled loading rate hypothesis _ , which states that the main cause of both soft and hard tissue spinal injury is a _ localized euclidean jolt _ , or , an impulsive loading that strikes a localized spine in several coupled degrees - of - freedom simultaneously . to show this , based on the previously defined _ covariant force law _ , we formulate the coupled newton euler dynamics of the local spinal motions and derive from it the corresponding coupled se(3)jolt dynamics . the se(3)jolt is the main cause of two basic forms of spinal injury : ( i ) hard tissue injury of local translational dislocations ; and ( ii ) soft tissue injury of local rotational disclinations . both the spinal _ dislocations and disclinations _ , as caused by the se(3)jolt , are described using the cosserat multipolar viscoelastic continuum model . _ keywords : _ localized spinal injury , coupled loading rate hypothesis , coupled newton euler dynamics , euclidean jolt dynamics , spinal dislocations and disclinations . * contact information : * dr . vladimir ivancevic , human systems integration , land operations division , defence science & technology organisation , australia , po box 1500 , 75 labs , edinburgh sa 5111 , tel : + 61 8 8259 7337 , fax : + 61 8 8259 4193 , e - mail : vladimir.ivancevic.defence.gov.au
statistical decision problems in engineering applications require to perform state estimation of a dynamic system under uncertainty as to signal presence .this includes fault detection and diagnosis in a dynamical system control , target detection and tracking , image and speech segmentation , speaker identification and source separation , blind deconvolution of communication channels .application of sequential decision rules to the above scenario arouses much interest since it promises a considerable gain in sensitivity , measured by the reduction in the average sample number ( asn ) , with respect to fixed sample size ( fss ) procedures .these advantages are particularly attractive in remote radar surveillance , where the signal amplitude is weak compared to the background noise and stringent detection specifications can be met only by processing multiple frames as in ) . in this case , fss techniques usually result to be inefficient while sequential procedures are known to increase the sensitivity of power - limited systems or , alternatively , to reduce the asn .the adoption of sequential procedures , however , poses some difficulties : since the instant when the procedure stops sampling is not determined in advance ( it is a random stopping time , indeed ) the set of trajectories of the dynamic system to be considered ( i.e. the parameter space ) has an infinite cardinality . on the other hand , sequential testing rules have been already extended to the case of composite hypotheses . in sprt is adopted in a radar framework assuming a prior on the parameter space , in turn consisting of a finite number of elements ( the radar resolution cells ). sub - optimal sequential classification procedures ( also called multi - hypotheses tests ) were also proposed during the past years , such as for the case of independent and identically distributed ( i.i.d . )observations and for the more general setting of non i.i.d . observations .however , all of these studies were restricted to a finite cardinality of the parameter space , an overly restrictive condition , which corresponds to requiring that the dynamic system may only lie in a determined state , with no transition allowed .few works in the past have studied sequential problems for hidden markov models ( hmms ) , which are known to admit a dynamical system representation in the sense of control theory . in the performances of sprts for model estimation in parametrized hmms and the cumulative sum ( cusum ) procedure for change point detection in hmms are studied , while addresses the quickest detection of transient signals represented as hmms using a cusum - like procedure , with possible applications to the radar framework .this paper addresses the problem of sequential detection and trajectory estimation of the state evolution of a dynamical system observed through noisy measurements . in the above framework ,its contributions can be summarized as follows .* at the design stage , a sequential procedure is defined with no restriction as to the parameter space cardinality .the detection part of the procedure realizes an sprt while , in order to estimate the system state trajectory , a gated estimator is defined , in the sense that estimation is enabled by the result of the detection operation .* it is known that wald s sprt for testing simple hypotheses based on i.i.d .observations has a number of remarkable properties , the most appealing being the fact that it simultaneously minimizes the expected sample size under both hypotheses . 
these properties , however , fail to hold when the observations are not i.i.d ., as it happens when they are generated by a dynamic system . in this paper ,a deep asymptotic analysis for the detection part is given and sufficient conditions under which these properties hold are stated , consistent with previous results in . in particular , it is shown that under a set of rather mild conditions the test ends with probability one and its stopping time is almost surely minimized in the class of tests with the same or smaller error probabilities . furthermore , reinforcing one of such conditions , it is also shown that any moment of the stopping time distribution is first - order asymptotically minimized in the same class of tests .* at the application stage , the general problem of multi - frame target detection and tracking for radar surveillance is considered : in this way , previous limitations on target mobility imposed by other studies are avoided . *finally , a thorough performance analysis is given , aimed primarily at showing the correctness of the asymptotic analysis and at investigating the effects of system parameters .the superiority of sequential detection and estimation rules with respect to fss techniques is also shown in the afore - mentioned radar application .the rest of the paper is organized as follows .next section presents the elements of the problem while section [ seq_rule_section ] addresses the sequential detection and estimation problem .section [ asym_an_section ] presents the asymptotic results while section [ radar_appl_sec ] covers the radar surveillance problem .finally , section [ num_res_sec ] is devoted to the presentation of numerical results , while concluding remarks are given in section [ conclusions ] .for reader s sake , some notation , used throughout the rest of the paper , is first introduced .[ [ notation ] ] notation + + + + + + + + in what follows , all random variables are defined on a common probability space and are denoted with capital letters .lower case letters are used to denote realizations of random variables while calligraphic letters to denote sets within which random variables take values .-algebras are denoted using script letters , being the smallest -algebra generated by the random variable . will be used to denote segments of random variables taken from the process : specifically , for , and . is the operator of expectation : a subscript will be added in case of ambiguity , so that and are expectation when is the true state of nature and hypothesis is true , respectively . denotes the kullback - leibler divergence operator .the acronyms a.s . and a.e .stands for almost sure and almost everywhere . denotes the set of natural numbers , i.e. , the set of integers , the set of real numbers and the set of positive real numbers .finally , the notation means that .consider a dynamic system with a markov evolution . , , is the state vector at time and is the state space , with cardinality . in particular, forms a discrete - time , homogeneous markov chain with given initial distribution and transition probabilities , . a sequence of states , often called trajectory ,is denoted with and has density , with respect to the counting measure . 
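as a minimal sketch of the state model just described , the fragment below evaluates the prior density of a trajectory with respect to the counting measure , i.e. the product of the initial distribution and the one - step transition probabilities along the trajectory , and checks that these densities sum to one over all trajectories of a fixed length ; the two - state chain and its parameters are illustrative assumptions . the observation model is introduced next .

```python
# Sketch: prior density of a trajectory x_1:k of a finite Markov chain,
# p(x_1:k) = pi(x_1) * prod_n p(x_{n-1}, x_n), plus a normalization check.
# The two-state chain below is an illustrative assumption.
import itertools
import numpy as np

P   = np.array([[0.9, 0.1], [0.2, 0.8]])    # assumed transition probabilities
pi0 = np.array([0.5, 0.5])                  # assumed initial distribution

def trajectory_density(x):
    p = pi0[x[0]]
    for a, b in zip(x, x[1:]):
        p *= P[a, b]
    return p

k = 6
total = sum(trajectory_density(x) for x in itertools.product(range(2), repeat=k))
print(trajectory_density((0, 0, 1, 1, 1, 0)), total)   # total is 1.0 up to rounding
```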
is observed through a set of noisy measurements .the measurement process is , and the sample space of each is , being a -algebra of subsets of .consider a -finite measure on .if the signal is present , is a hmm : given a realization of , is a sequence of conditionally independent random variables , each having density with respect to . on the other hand ,if the measurements contain only noise , is an i.i.d .process , each having density with respect to .thus , for every , the joint distribution of has conditional density with respect to . given these elements , one is to sample the process sequentially and decide , as soon as possible , if measurements are generated by noise alone or if they come from a dynamic system . in the latter case , it can be also required to estimate the system trajectory which has generated such measurements .the parameter space , then , is . as in ,whose focus , however , was on non - sequential decision rules , there is a mutual coupling of detection and estimation and two different strategies may be adopted .indeed , the structure of the decision rule can be chosen so as to improve the detection or the estimation performance .the former case is called a weakly coupled ( or uncoupled ) design while the latter a strongly coupled ( or coupled ) design . in both cases ,the estimator is enabled by the detection operation : this gating , however , can be ( possibly ) optimal for the detection or for the estimation . however , the problem of designing sequential procedures for detection and estimation is considerably more difficult than that of devising fss procedures and the approach taken in general is to extend and generalize the sprt designing a practical , possibly sub - optimal , rule . in this paperthe uncoupled strategy is adopted , this choice being motivated by a number of reasons : it has a very simple structure ; as shown in section [ asym_an_section ] , it exhibits many optimal properties ; detection is the primary interest in many practical applications , as for example , radar surveillance problems later discussed .a sequential decision rule is the pair , where is a stopping rule and a terminal decision rule .since detection and estimation are performed in parallel , the terminal decision rule is itself composed of a detection rule for testing the signal presence and of a trajectory estimator , i.e. . the proposed ( non - randomized ) sequential decision rule is , then , [ sprt_rule ] where , being the likelihood ratio of to . notice that the pair is an sprt for testing ` noise only ' against the alternative ` signal present ' , no matter of its trajectory . , then , is the hypothesis that has density , .the strength of such a sequential test is the pair of probabilities of errors of the first and second kind , and , respectively ( often , in detection problems , is referred to as probability of false alarm , , and as probability of miss , ) .denoting with the stopping time and with its conditional distribution , is the probability that given a realization of , for any ; the relationship between and is : and , for . 
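the following python sketch illustrates the decision rule just described on a toy model : the log - likelihood ratio of ` signal present ' ( computed with a scaled forward recursion for the hmm ) to ` noise only ' is updated sequentially and compared with two thresholds , and the map trajectory estimate ( viterbi ) is computed only when the signal hypothesis is accepted , i.e. estimation is gated by detection . the gaussian emission model , the two - state chain and the wald - style thresholds are illustrative assumptions and not the exact design analyzed here .

```python
# Toy SPRT between "noise only" (H0) and "HMM signal present" (H1), with a MAP
# (Viterbi) trajectory estimate gated by the detection outcome.  Model
# parameters and the Wald-style thresholds are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

P   = np.array([[0.9, 0.1], [0.2, 0.8]])   # transition probabilities (assumed)
pi0 = np.array([0.5, 0.5])                 # initial distribution (assumed)
mu  = np.array([1.0, -1.0])                # state-dependent emission means (assumed)

def f1(z):                                  # emission densities f(z|x), x = 0, 1
    return np.exp(-0.5 * (z - mu) ** 2) / np.sqrt(2 * np.pi)

def f0(z):                                  # noise-only density f(z|theta_0)
    return np.exp(-0.5 * z ** 2) / np.sqrt(2 * np.pi)

alpha, beta = 1e-3, 1e-2                    # target false-alarm / miss probabilities
logA, logB = np.log((1 - beta) / alpha), np.log(beta / (1 - alpha))

def viterbi(zs):
    """MAP trajectory estimate of the hidden states given z_1:k."""
    logP, logpi = np.log(P), np.log(pi0)
    delta = logpi + np.log(f1(zs[0]))
    back = []
    for z in zs[1:]:
        cand = delta[:, None] + logP
        back.append(cand.argmax(axis=0))
        delta = cand.max(axis=0) + np.log(f1(z))
    path = [int(delta.argmax())]
    for b in reversed(back):
        path.append(int(b[path[-1]]))
    return path[::-1]

def sprt(measurements):
    """Sequential test; returns (decision, stopping time, gated MAP trajectory)."""
    alpha_fwd, logp1, logp0, zs = None, 0.0, 0.0, []
    for k, z in enumerate(measurements, start=1):
        zs.append(z)
        alpha_fwd = pi0 * f1(z) if alpha_fwd is None else (alpha_fwd @ P) * f1(z)
        c = alpha_fwd.sum()                 # scaling keeps the forward recursion stable
        logp1 += np.log(c)
        alpha_fwd /= c
        logp0 += np.log(f0(z))
        llr = logp1 - logp0                 # ln Lambda_k
        if llr >= logA:
            return "H1", k, viterbi(np.array(zs))   # detection gates estimation
        if llr <= logB:
            return "H0", k, None
    return "undecided", len(zs), None

# usage: simulate a signal-present record and feed it sample by sample
X = [0]
for _ in range(200):
    X.append(rng.choice(2, p=P[X[-1]]))
Z = mu[X] + rng.standard_normal(len(X))
print(sprt(iter(Z)))
```

feeding the same routine with noise - only measurements drawn from the standard normal density typically makes the test stop on the lower threshold instead , declaring ` noise only ' .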
]these probabilities of error are given by ] , ] , .[ asym_cond_4 ] there exists a constant such that ] are finite , for all .[ asym_cond_5 ] for every and .[ asym_cond_6 ] the matrix containing the transition probabilities is invertible .since the markov chain is homogeneous and has a finite state space , condition [ asym_cond_1 ] corresponds to requiring that be stationary and ergodic , which will be seen to imply to be stationary and ergodic as well , an essential property for the limiting theorem to be presented . as concerns condition [ asym_cond_2 ] , it can be shown ( recursively ) that it ensures the two densities and are not -a.e .equal for every : otherwise , for some it could not be possible to discriminate between statistical populations drawn from these two distributions , i.e. detection could not be possible . finally , [ asym_cond_3 ] [ asym_cond_6 ] are essentially ` regularity ' conditions which allow to derive the limiting behaviour of the log - likelihood ratios .notice , furthermore , that the moment conditions [ asym_cond_4 ] imply [ asym_cond_3 ] since there always exists a finite constant such that , for any .it turns out that the validity of properties ( [ sprtprop2 ] ) ( [ sprtprop4 ] ) is highly influenced by the limits ] , which proves the first inequality .the other one can be proved similarly . in order to guarantee finiteness of the expected sample size and to obtain its first - order asymptotic minimization , condition [ asym_cond_3 ]must be strengthened requiring [ asym_cond_4 ] [ asym_cond_6 ] to hold .indeed , the following can be proved ( proof is given in the appendix ) .[ asym_optimality ] suppose that conditions [ asym_cond_1 ] , [ asym_cond_2 ] , [ asym_cond_4 ] [ asym_cond_6 ] are fulfilled , and are chosen so that the test belongs to and , as .then , for every , <+\infty ] the transition probability matrix and {x\in\mathcal{s}} ] converges to a function uniformly in ( * ? ? ? * and theorem 4.3 ) ; these two properties will be used to derive the convergence rate of the sequence .to this end , define the function = \notag\\ = & \limsup_{k\rightarrow+ \infty}\frac{1}{k}\ln \operatorname{e}_{h_1}\left [ \lambda_k^p ( \mathbf{z}_{1:k})\right].\notag\end{aligned}\ ] ] given the uniform convergence in and recalling that , property ( [ boug1 ] ) implies that =\ln h(p),\ ; \forall\,p\in i,\notag\ ] ] and then , , so that , from property ( [ boug2 ] ) , . denote now with the set . since , from condition [ asym_cond_4 ] , \leq & \operatorname{e}_{h_1}\left [ \prod_{n=1}^k\max_{x\in\mathcal{s}}\left(\frac{f(z_n| x)}{f(z_n|\theta_0)}\right)^a \right]\leq\notag\\ \leq & \left(\sum_{y\in\mathcal{s}}\sum_{x\in\mathcal{s}}\operatorname{e}_y\left [ \left(\frac{f(z_1| x)}{f(z_1|\theta_0)}\right)^a\right ] \right)^k<+\infty,\notag\end{aligned}\ ] ] it follows that and , thus , the interior part of contains the point .this and the fact that implies that converges to exponentially in ( * ? ? ?* exercise 2.3.25 ) , ( * ? ? ?* theorem iv.1 ) . of random variables is said to converge exponentially to a constant if , for any sufficiently small , there exists a constant c such that . ]the exponential convergence is obviously much stronger than the a.s .convergence granted by theorem [ theorem_conv_as ] .indeed , the former implies that and , which in turn implies that converges -quickly to for any ( * ? ? ?* lemma 3 ) . 
of random variables is said to converge -quickly to a constant , for some , if <+\infty ] .on the other hand , lemma [ convexity_div ] and jensen inequality allow to write for every realization of , whereby =\notag\\ = & \sum_{x\in\mathcal{s } } \operatorname{e}_{h_1}\big[p\big(\{x_1=x\}|\mathbf{z}_{-\infty:0}\big)\big]\operatorname{d } \big ( f(\cdot|x)\vert f(\cdot|\theta_0)\big)=\notag\\ = & \sum_{x\in\mathcal{s}}\overline{\pi}(x)d\big(f(\cdot|x)\vert f(\cdot|\theta_0 ) \big).\notag\end{aligned}\ ] ] the upper bound on can be proved similarly . from ( [ perm_condition ] ) , setting for some , it follows that has the same value for every : this , along with proposition [ prop_bound_sup ] , demonstrates the upper bound on . as to the lower bound , exploiting lemma [ convexity_div ] and jensen inequality , it follows that for every probability vector on , where , is the set of all of the possible permutations of and . from the demonstration of theorem [ theorem_conv_as ] , equations ( [ mixture_0 ] ) and ( [ lambda_sign ] ) , can be written also as $ ] , and thus , exploiting ( [ min_divergence ] ) , it results that =\notag\\ = & \textstyle \operatorname{d } \left ( \sum_{x\in\mathcal{s } } \frac{1}{m}f(\cdot|x)\vert f(\cdot|\theta_0)\right),\notag\end{aligned}\ ] ] and the lower bound is proved .the bounds on can be proved similarly .v. dragalin , a. g. tartakovsky , and v. veeravalli , `` multihypothesis sequential probability ratio test part i : asymptotic optimality , '' _ ieee trans .inform . theory _45 , no . 7 , pp . 24482461 , nov . , `` multihypothesis sequential probability ratio test part ii : accurate asymptotic expansions for the expected sample size , '' _ ieee trans .inform . theory _46 , no . 4 , pp .13661383 , july 2000 .emanuele grossi was born in sora , italy on may 10 , 1978 .he received with honors the dr .degree in telecommunication engineering in 2002 and the ph.d .degree in electrical engineering in 2006 , both from the university of cassino , italy . from february 2005he spent six month at the department of electrical & computer engineering of the university of british columbia , vancouver , as a visiting scholar .since february 2006 , he is assistant professor at the university of cassino .his research interests concern wireless multiuser communication systems , radar detection and tracking , and statistical decision problems with emphasis on sequential analysis .marco lops was born in naples , italy on march 16 , 1961 .he received the dr .degree in electronic engineering from the university of naples in 1986 .+ from 1986 to 1987 he was in selenia , roma , italy as an engineer in the air traffic control systems group . in 1987he joined the department of electronic and telecommunications engineering of the university of naples as a ph.d .student in electronic engineering .he received the ph.d .degree in electronic engineering from the university of naples in 1992 .+ from 1991 to 2000 he has been an associate professor of radar theory and digital transmission theory at the university of naples , while , since march 2000 , he has been a full professor at the university of cassino , engaged in research in the field of statistical signal processing , with emphasis on radar processing and spread spectrum multiuser communications .he also held teaching positions at the university of lecce and , during , and , he was on sabbatical leaves at university of connecticut , rice university , and princeton university , respectively .
the problem of detection and possible estimation of a signal generated by a dynamic system when a variable number of noisy measurements can be taken is here considered . assuming a markov evolution of the system ( in particular , the pair signal - observation forms a hidden markov model ) , a sequential procedure is proposed , wherein the detection part is a sequential probability ratio test ( sprt ) and the estimation part relies upon a maximum - a - posteriori ( map ) criterion , gated by the detection stage ( the parameter to be estimated is the trajectory of the state evolution of the system itself ) . a thorough analysis of the asymptotic behaviour of the test in this new scenario is given , and sufficient conditions for its asymptotic optimality are stated , i.e. for almost sure minimization of the stopping time and for ( first - order ) minimization of any moment of its distribution . an application to radar surveillance problems is also examined . asymptotic optimality , hidden markov models ( hmm ) , sequential detection and estimation , sprt .
string problems related to dna and/or protein sequences are abundant in bioinformatics .well - known examples include the longest common subsequence problem and its variants , the shortest common supersequence problem , and string consensus problems such as the _ far from most string _ problem and the _ close to most string _ problem .many of these problems are strongly _np_-hard and also computationally very challenging . this work deals with a string problem which is known as the _ minimum common string partition _ ( mcsp ) problem .the mcsp problem can technically be described as follows .given are two _ related _ input strings and which are both of length over a finite alphabet .the term _ related _ refers to the fact that each letter appears the same number of times in each of the two input strings .note that being related implies that and have the same length .a valid solution to the mcsp problem is obtained by partitioning ( resp . ) into a set ( resp . ) of non - overlapping substrings such that .the optimization goal consists in finding a valid solution such that is minimal .consider the following example .given are sequences and .obviously , and are related because * a * and appear twice in both input strings , while * c * and * t * appear once .a trivial valid solution can be obtained by partitioning both strings into substrings of length one , that is , .the objective value of this solution is six .however , the optimal solution , with objective value three , is .the mcsp problem has applications , for example , in the bioinformatics field .chen et al . point out that the mcsp problem is closely related to the problem of sorting by reversals with duplicates , a key problem in genome rearrangement .the original definition of the mcsp problem by chen et al . was inspired by computational problems arising in the context of genome rearrangement such as : may a given dna string possibly be obtained by reordering subsequences of another dna string ? in the meanwhile , the general version of the problem was shown to be _np_-hard .other papers concerning problem hardness consider problem variants such as , for example , the -mcsp problem in which each letter occurs at most times in each input string .the 2-mcsp problem was shown to be apx - hard in .jiang et al . proved that the decision version of the mcsp problem where indicates the size of the alphabet is _np_-complete when .a lot of research has been done concerning the approximability of the problem .cormode and muthukrishnan , for example , proposed an -approximation for the _ edit distance with moves _ problem , which is a more general case of the mcsp problem .other approximation approaches were proposed in .chrobak et al . studied a simple greedy approach for the mcsp problem , showing that the approximation ratio concerning the 2-mcsp problem is 3 , and for the 4-mcsp problem the approximation ratio is in . in the case of the general mcsp problem, the approximation ratio lies between and , assuming that the input strings use an alphabet of size .later kaplan and shafir improved the lower bound to .kolman proposed a modified version of the simple greedy algorithm with an approximation ratio of for the -mcsp .recently , goldstein and lewenstein proposed a greedy algorithm for the mcsp problem that runs in time .he introduced another a greedy algorithm with the aim of obtaining better average results .damaschke was the first one to study the fixed - parameter tractability ( fpt ) of the problem .later , jiang et al . 
showed that both the -mcsp and mcsp problems admit fpt algorithms when and are constant parameters .fu et al . proposed an time algorithm for the general case and an time algorithm applicable under certain constraints .finally , in recent years researchers have also focused on algorithms for deriving high quality solutions in practical settings .ferdous and sohel rahman , for example , developed a - ant system metaheuristic .blum et al . proposed a probabilistic tree search approach .both works applied their algorithm to a range of artificial and real dna instances from .the first integer linear programming ( ilp ) model , as well as a heuristic approach on the basis of the proposed ilp model , was presented in .the heuristic is a 2-phase approach which in the first phase aims at covering most of the input strings with few but long substrings , while in the second phase the so - far uncovered parts of the input strings are covered in the best way possible .experimental results showed that for smaller problem instances with applying a solver such as cplex to the proposed ilp is currently state - of - the - art . for larger problem instances ,runtimes are typically too high and best results are usually obtained by the heuristic from . in this paperwe introduce an alternative ilp model for solving the mcsp problem .we show that the lp - relaxations of both models are equally strong from a theoretical point of view . an extensive experimental comparison with the model from shows , however , that cplex is able to derive feasible integer solutions much faster with the new model .moreover , the results when given the same computation time as for solving the existing ilp model are significantly better .the remainder is organized as follows . in section [ sec : mip ] , the ilp model from as well as the newly proposed ilp model are described .a polyhedral comparison of the two models is performed in section [ sec : comparison ] .the experimental evaluation on problem instances from the related literature as well as on newly generated problem instances is provided in section [ sec : experiments ] .finally , in section [ sec : conclusions ] we draw conclusions and give an outlook on future work .in the following we first review the existing ilp model for solving the mcsp as proposed in .subsequently , the new alternative model is presented .the existing ilp model from is based on the notion of _ common blocks_. therefore we will henceforth refer to this model as the _ common blocks model_. a common block of input strings and is a triple where is a string which appears as substring in at position and in at position , with . let the length of a common block be its string s length , i.e. , .let us now consider the set of all existing common blocks of and .any valid solution to the mcsp problem can then be expressed as a subset of , i.e. , , such that : 1 . , that is , the sum of the lengths of the common blocks in is equal to the length of the input strings . 2 . for any two common blocks it holds that their corresponding strings neither overlap in nor in .the ilp uses for each common block a binary variable indicating its selection in the solution .in other words , if , the corresponding common block is selected for the solution . on the other side , if , common block is not selected . 
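before stating the model , the following sketch shows how the set of common blocks can be enumerated for two related input strings , here written as s1 and s2 ; the toy strings are our own and not one of the benchmark instances . the model below then attaches one binary variable to each enumerated triple .

```python
# Sketch: enumerate all common blocks of two related strings, i.e. all triples
# (t, k1, k2) such that substring t starts at position k1 in s1 and k2 in s2.
# The toy strings are our own; positions are reported 1-based as in the text.
def common_blocks(s1, s2):
    n = len(s1)
    blocks = []
    for k1 in range(n):
        for k2 in range(n):
            length = 0
            while (k1 + length < n and k2 + length < n
                   and s1[k1 + length] == s2[k2 + length]):
                length += 1
                blocks.append((s1[k1:k1 + length], k1 + 1, k2 + 1))
    return blocks

s1, s2 = "agactagtta", "actagtagat"          # toy related strings (same letter counts)
B = common_blocks(s1, s2)
print(len(B), "common blocks, e.g.", B[:5])
```

the printed count grows roughly cubically with the string length in the worst case , which is exactly the blow - up in the number of variables discussed below .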
\begin{align}
(\mathrm{ilp}_{\mathrm{blocks}})\quad \min\; & \sum_{i=1}^{m} x_i \label{eqn:objorig}\\
\text{s.t.}\; & \sum_{i\in\{1,\ldots,m\}\,:\,k^1_i\leq j< k^1_i+|t_i|} x_i = 1 , && j=1,\ldots,n \label{eqn:const2}\\
& \sum_{i\in\{1,\ldots,m\}\,:\,k^2_i\leq j< k^2_i+|t_i|} x_i = 1 , && j=1,\ldots,n \label{eqn:const3}\\
& x_i\in\{0,1\} , && i=1,\ldots,m \nonumber
\end{align}
the objective function minimizes the number of selected common blocks . equations ( [ eqn : const2 ] ) ensure that each position $j$ of string $s_1$ is covered by exactly one selected common block and that selected common blocks do not overlap . equations ( [ eqn : const3 ] ) ensure the same with respect to $s_2$ . note that equations ( [ eqn : const2 ] ) ( and also ( [ eqn : const3 ] ) ) implicitly guarantee that the sum of the lengths of the selected blocks is equal to $n$ . finally , note that the number of variables in model $\mathrm{ilp}_{\mathrm{blocks}}$ is of order $O(n^3)$ in the worst case . an aspect which the above model does not effectively exploit is the fact that , frequently , some string appears multiple times at different positions as a substring in $s_1$ and/or $s_2$ . for example , assume that string *ac* appears five times in $s_1$ and four times in $s_2$ . model $\mathrm{ilp}_{\mathrm{blocks}}$ will then consider $5\cdot 4=20$ different common blocks , one for each pairing of an occurrence in $s_1$ and an occurrence in $s_2$ . especially when the cardinality of the alphabet is low and $n$ is large , it is likely that some smaller strings appear very often and induce a huge set of possible common blocks . to overcome this disadvantage , we propose the following alternative modeling approach . let $T$ denote the set of all ( unique ) strings that appear as substrings at least once in both $s_1$ and $s_2$ . for each $t\in T$ , let $Q^1_t$ and $Q^2_t$ denote the sets of all positions between $1$ and $n$ at which $t$ starts in input strings $s_1$ and $s_2$ , respectively . we now use binary variables $y^1_{t,k}$ for each $t\in T$ , $k\in Q^1_t$ , and $y^2_{t,k}$ for each $t\in T$ , $k\in Q^2_t$ . in case $y^1_{t,k}=1$ , the occurrence of string $t$ at position $k$ in input string $s_1$ is selected for the solution ( analogously for $y^2_{t,k}$ and $s_2$ ) . on the other side , if $y^1_{t,k}=0$ , the occurrence of string $t$ at position $k$ in input string $s_1$ is not selected . the new alternative model , henceforth also referred to as the _ common substrings model _ , can then be expressed as follows .
\begin{align}
(\mathrm{ilp}_{\mathrm{substr}})\quad \min\; & \sum_{t\in T}\sum_{k\in Q^1_t} y^1_{t,k} \label{eqn:obj}\\
\text{s.t.}\; & \sum_{t\in T}\ \sum_{k\in Q^1_t\,:\,k\leq j< k+|t|} y^1_{t,k} = 1 , && j=1,\ldots,n \label{eqn:const4}\\
& \sum_{t\in T}\ \sum_{k\in Q^2_t\,:\,k\leq j< k+|t|} y^2_{t,k} = 1 , && j=1,\ldots,n \label{eqn:const5}\\
& \sum_{k\in Q^1_t} y^1_{t,k} = \sum_{k\in Q^2_t} y^2_{t,k} , && t\in T \label{eqn:const6}\\
& y^1_{t,k}\in\{0,1\} , && t\in T ,\ k\in Q^1_t \nonumber\\
& y^2_{t,k}\in\{0,1\} , && t\in T ,\ k\in Q^2_t \nonumber
\end{align}
the objective function counts the number of substrings chosen in $s_1$ ; note that $\sum_{t\in T}\sum_{k\in Q^2_t} y^2_{t,k}$ would yield the same value . equations ( [ eqn : const4 ] ) and ( [ eqn : const5 ] ) ensure that for each position $j$ of input string $s_1$ ( respectively , $s_2$ ) exactly one covering substring is chosen . these equations consider for each position $j$ all substrings $t$ for which the starting position $k$ is at most $j$ and $j$ is less than $k+|t|$ . equations ( [ eqn : const6 ] ) ensure that each string $t$ is chosen the same number of times within $s_1$ and $s_2$ . similarly as in $\mathrm{ilp}_{\mathrm{blocks}}$ , the requirement that the sum of the lengths of the selected substrings has to sum up to $n$ follows implicitly from ( [ eqn : const4 ] ) and ( [ eqn : const5 ] ) . concerning the number of variables involved in model $\mathrm{ilp}_{\mathrm{substr}}$ , the following can be observed . a string of length $n$ has exactly $n(n+1)/2$ substrings of size greater than zero . in the worst case , input strings $s_1$ and $s_2$ are equal , which means that $n(n+1)$ variables are generated . therefore , in the general case , the new model has $O(n^2)$ variables . we compare the two ilp models by projecting solutions of $\mathrm{ilp}_{\mathrm{substr}}$ , expressed in terms of the variables $y^1_{t,k}$ and $y^2_{t,k}$ , into the space of the variables $x_i$ , $i=1,\ldots,m$ , from $\mathrm{ilp}_{\mathrm{blocks}}$ . a corresponding solution is obtained by pairing , for each $t\in T$ , the values assigned to the occurrences of $t$ in $s_1$ with the values assigned to its occurrences in $s_2$ . let $\mathrm{lp}_{\mathrm{blocks}}$ and $\mathrm{lp}_{\mathrm{substr}}$ be the linear programming relaxations of models $\mathrm{ilp}_{\mathrm{blocks}}$ and $\mathrm{ilp}_{\mathrm{substr}}$ , respectively , obtained by relaxing the integrality conditions .
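as a complement to the formal comparison that follows , the two formulations can also be explored numerically on small instances . the sketch below builds the common substrings model with the open - source pulp library and its default cbc solver ( an assumption made here so that the sketch stays self - contained ; the computational study itself relies on cplex ) , using 0 - based start positions and the toy strings from the earlier sketch . declaring the variables as continuous with bounds 0 and 1 instead of binary gives the lp relaxation , whose value can then be compared with that of the common blocks model .

```python
# Sketch of the common-substrings model built with PuLP (CBC as default solver
# is an assumption); 0-based start positions, toy strings of our own.
import pulp

s1, s2 = "agactagtta", "actagtagat"            # toy related strings (same letter counts)
n = len(s1)

def occurrences(s, t):
    """0-based start positions of substring t in s."""
    return [k for k in range(len(s) - len(t) + 1) if s[k:k + len(t)] == t]

# T: unique strings occurring in both inputs; Q1[t], Q2[t]: their start positions
T  = sorted({s1[i:j] for i in range(n) for j in range(i + 1, n + 1)} &
            {s2[i:j] for i in range(n) for j in range(i + 1, n + 1)})
Q1 = {t: occurrences(s1, t) for t in T}
Q2 = {t: occurrences(s2, t) for t in T}

prob = pulp.LpProblem("mcsp_common_substrings", pulp.LpMinimize)
y1 = {(t, k): pulp.LpVariable(f"y1_{i}_{k}", cat="Binary")
      for i, t in enumerate(T) for k in Q1[t]}
y2 = {(t, k): pulp.LpVariable(f"y2_{i}_{k}", cat="Binary")
      for i, t in enumerate(T) for k in Q2[t]}

prob += pulp.lpSum(y1.values())                 # objective: number of chosen substrings

for j in range(n):                              # each position of s1 covered exactly once
    prob += pulp.lpSum(y1[t, k] for t in T for k in Q1[t] if k <= j < k + len(t)) == 1
for j in range(n):                              # each position of s2 covered exactly once
    prob += pulp.lpSum(y2[t, k] for t in T for k in Q2[t] if k <= j < k + len(t)) == 1
for t in T:                                     # same multiplicity of t in s1 and s2
    prob += pulp.lpSum(y1[t, k] for k in Q1[t]) == pulp.lpSum(y2[t, k] for k in Q2[t])

prob.solve(pulp.PULP_CBC_CMD(msg=False))
chosen = [(t, k) for (t, k), v in y1.items() if (v.value() or 0) > 0.5]
print("objective:", pulp.value(prob.objective), "substrings chosen in s1:", chosen)
```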
in the followingwe show that both models describe the same polyhedron in the space of -variables and are thus equally strong from a theoretical point .the polyhedron defined by is contained in .we show that for any feasible solution to , the solution in terms of the -variables obtained by is also feasible in . for equations replacing yields which corresponds to the left side of and is thus always equal to one .equations are correspondingly fulfilled . for constraintswe obtain for each and they are therefore also always fulfilled .last but not least , also and trivially hold due to and . the polyhedron defined by is contained in .due to the correspondence , equations can be written in terms of the -variables and therefore also hold for any feasible solution of .correspondingly , equations are always fulfilled for any solution of .if one is interested in a specific solution in terms of the -variables for a feasible solution expressed by -variables , it can be easily derived by considering each and assigning values to variables with in an iterative , greedy fashion so that relations are fulfilled for any and .a feasible assignment of such values must always exist as an individual variable exists for each possible pair of positions in and positions in , due to constraints , and the variable domains . from the above results, we can directly conclude the following . corresponds to when projected into the domain of -variables , and therefore and yield the same lp - values and are equally strong .both and were implemented using gcc 4.7.3 and ibm ilog cplex v12.1 .the experimental results were obtained on a cluster of pcs with 2933 mhz intel(r ) xeon(r ) 5670 cpus having 12 nuclei and 32 gb ram . moreover ,cplex was configured for single - threaded execution .two different benchmark sets were used for the experimental evaluation .the first one was introduced by ferdous and sohel rahman in for the evaluation of their ant colony optimization approach .this set contains in total 30 artificial instances and 15 real - life instances consisting of dna sequences , that is , .remember , in this context , that each problem instance consists of two related input strings .moreover , the benchmark set consists of four subsets of instances .the first subset ( henceforth labelled group1 ) consists of 10 artificial instances in which the input strings have lengths up to 200 .the second subset ( group2 ) consists of 10 artificial instances with input string lengths in ] . finally , the fourth subset ( real ) consists of 15 real - life instances of various lengths in $ ] .the second benchmark set that we used is new .it consists of 10 uniformly randomly generated instances for each combination of and alphabet size . in total , this set thus consists of 300 benchmark instances .the results for the four subsets of instances from the benchmark set by ferdous and sohel rahman are shown in tables [ tab : results : group1]-[tab : results : real ] , in terms of one table per instance subset .the structure of these tables is as follows .the first and second columns provide the instance identifiers and the input string length , respectively. then the results of and are shown by means of five columns each .the first column provides the objective values of the best solutions found within a limit of 3600 cpu seconds . 
in case optimality of the corresponding solutionwas proven by cplex , the value is marked by an asterisk .the second column provides computation times in the form x / y , where x is the time at which cplex was able to find the first valid integer solution , and y the time at which cplex found the best ( possibly optimal ) solution within the 3600s limit .the third column shows optimality gaps , which are the relative differences in percent between the values of the best feasible solutions and the lower bounds at the times of stopping the runs .the fourth column provides lp gaps , i.e. , the relative differences between the lp relaxation values and the best ( possibly optimal ) integer solution values .and were equal .] finally , the last column lists the numbers of variables of the ilp models .the best result for each problem instance is marked by a grey background , and the last row of each table provides averages over the whole table .the following observations can be made .first , apart from the instances of group1 which are all solved with both models to optimality , the results for subsets group2 , group3 and real are clearly in favor of model . only in one out of 35 cases ( leaving group1 aside ) a better resultis obtained with , and in further four cases the results obtained with are matched . in all remaining casesthe solutions obtained with are better than those obtained with .this observation is confirmed by a study of the optimality gaps .they are significantly smaller for than for .one of the main reasons for the superiority of model over is certainly the difference in the number of the variables .for the instance of group1 , needs , on average , times more variables than .this factor seems to grow with growing instance size .concerning instances of group2 , requires , on average , times more variables .the corresponding number for group3 is .another reason for the advantage of over is that symmetries are avoided .finally , a last observation concerns the computation times : the first feasible integer solution is found for , on average , in about of the time that is needed in the case of .the results for the new set of problem instances are presented in table [ tab : results : new ] .each line provides the results of both and averaged over the 10 instances for a combination between and .the results are presented for each ilp model by means of six table columns .the first five represent the same information as was provided in the case of the first benchmark set .an additional sixth column ( with heading * # opt * ) indicates for each row how many ( out of 10 ) instances were solved to optimality . the additional last table column ( with heading * impr . in * ) indicates the average improvement in solution quality of over .the results permit , basically , to draw the same conclusions as in the case of the results for the instance set treated in the previous subsection .the application of cplex to outperforms the application of cplex to both in final solution quality and in the computation time needed to find the first feasible integer solution .these differences in results become more pronounced with increasing input string length and with decreasing alphabet size . in the case of ,for example , the solutions provided by are on average better than those provided by .the superiority of over is also indicated by the number of instances that were solved to optimality : 160 out of 300 in the case of , and 183 out of 300 in the case of . 
in order to facilitate the study of the computation times at which the first integer solutions were found ,these times are graphically shown for different values of in three different barplots in figure [ fig : firstsoltime ] .the charts clearly show that the advantages of over are considerable .in fact , the numbers concerning are so small ( in comparison to the ones concerning ) that the bars are not visible in these plots .moreover , these advantages seem to grow with increasing alphabet size .this means that , even though the differences in solution quality are negligible when , the first integer solutions are found much faster in the case of .the average gap sizes concerning the quality of the best solutions found and the best lower bounds at the time of termination are plotted in the same way in the three charts of figure [ fig : gapsize ] .these charts clearly show that , for all combinations of and , the average gap is smaller in the case of .finally , figure [ fig : variables ] shows evolution of the number of variables needed by the two models for instances of different sizes .while ( meta-)heuristic approaches are the state - of - the - art for approximately solving large instances of the mcsp , instances with string lengths of less than about 1000 letters can be well solved with an ilp model in conjunction with a state - of - the - art solver like cplex . in this workwe have proposed the model based on _ common substrings _ that reduces symmetries appearing in the formerly suggested _ common blocks _ model . while our polyhedral analysis indicated that both models are equally strong w.r.t .their linear programming relaxations , there are significant differences in the computational difficulties to solve these models .the new formulation allows for finding feasible solutions of already reasonable quality in substantially less time and also yields better final solutions in most cases where proven optimal solutions could not be identified within the time limit .an important reason for this is to be found in the number of variables needed by the two models .while the existing model from the literature requires variables ( where is the length of the input strings ) , the new model only requires variables . in future workit would be interesting to consider extended variants of the mcsp , in particular such where the input strings need not to be related . in biological applications this would give a greater flexibility as sequences that were also affected by other kinds of mutations can be compared in terms of their reordering of subsequences .another interesting generalization would be to consider more than two input strings .the newly proposed ilp model appears to be a promising basis also for these variants .c. blum acknowledges support by grant tin2012 - 37930 - 02 of the spanish government .in addition , support is acknowledged from ikerbasque ( basque foundation for science ) .our experiments have been executed in the high performance computing environment managed by rdlab ( http://rdlab.lsi.upc.edu ) and we would like to thank them for their support . c. blum , j. a. lozano , and p. pinacho davidson .iterative probabilistic tree search for the minimum common string partition problem . in m.j. blesa , c. blum , and s. voss , editors , _ proceedings of hm 20104 9th international workshop on hybrid metaheuristics _ , volume 8457 of _ lecture notes in computer science _ , pages 154154 .springer verlag , berlin , germany , 2014 .m. chrobak , p. kolman , and j. 
sgall .the greedy algorithm for the minimum common string partition problem . in k.jansen , s. khanna , j. d. p. rolim , and d ron , editors , _ proceedings of approx 2004 7th international workshop on approximation algorithms for combinatorial optimization problems _ , volume 3122 of _ lecture notes in computer science _ , pages 8495 .springer berlin heidelberg , 2004 .p. damaschke. minimum common string partition parameterized . in k.a. crandall and j. lagergren , editors , _ proceedings of wabi 2008 8th international workshop on algorithms in bioinformatics _, volume 5251 of _ lecture notes in computer science _ , pages 8798 .springer berlin heidelberg , 2008 .s. m. ferdous and m. s. rahman . solving the minimum common string partition problem with the help of ants . in y. tan , y. shi , and h. mo , editors , _ proceedings of icsi 2013 4th international conference on advances in swarm intelligence _ ,volume 7928 of _ lecture notes in computer science _ , pages 306313 .springer berlin heidelberg , 2013 .b. fu , h. jiang , b. yang , and b. zhu .exponential and polynomial time algorithms for the minimum common string partition problem . in w.wang , x. zhu , and d .- z .du , editors , _ proceedings of cocoa 2011 5th international conference on combinatorial optimization and applications _ , volume 6831 of _ lecture notes in computer science _ , pages 299310 .springer berlin heidelberg , 2011 .a. goldstein , p. kolman , and j. zheng .minimum common string partition problem : hardness and approximations . in r.fleischer and g. trippen , editors , _ proceedings of isaac 2004 15th international symposium on algorithms and computation _ , volume 3341 of _ lecture notes in computer science _ , pages 484495 .springer berlin heidelberg , 2005 .i. goldstein and m. lewenstein .quick greedy computation for minimum common string partitions . in r.giancarlo and g. manzini , editors , _ proceedings of cpm 2011 22nd annual symposium on combinatorial pattern matching _ , volume 6661 of _ lecture notes in computer science _ , pages 273284 .springer berlin heidelberg , 2011 .a novel greedy algorithm for the minimum common string partition problem . in i.mandoiu and a. zelikovsky , editors , _ proceedings of isbra 2007 third international symposium on bioinformatics research and applications _ ,volume 4463 of _ lecture notes in computer science _ ,pages 441452 .springer berlin heidelberg , 2007 .p. kolman .approximating reversal distance for strings with bounded number of duplicates . in j.jedrzejowicz and a. szepietowski , editors , _ proceedings of mfcs 2005 30th international symposium on mathematical foundations of computer science _ ,volume 3618 of _ lecture notes in computer science _ ,pages 580590 .springer berlin heidelberg , 2005 .p. kolman and t. wale .reversal distance for strings with duplicates : linear time approximation using hitting set . in t.erlebach and c. kaklamanis , editors , _ proceedings of waoa 2007 4th international workshop on approximation and online algorithms _ , volume 4368 of _ lecture notes in computer science _ , pages 279289 .springer berlin heidelberg , 2007 .d. shapira and j. a. storer .edit distance with move operations . in a.apostolico and m. takeda , editors , _ proceedings of cpm 2002 13th annual symposium on combinatorial pattern matching _, volume 2373 of _ lecture notes in computer science _ , pages 8598 . springer berlin heidelberg , 2002 .
In the minimum common string partition (MCSP) problem, two related input strings are given. "Related" means that both strings consist of the same set of letters, with each letter appearing the same number of times in each of the two strings. The MCSP seeks a minimum-cardinality partitioning of one string into non-overlapping substrings that is also a valid partitioning of the second string. The problem has applications in bioinformatics, e.g., in the analysis of related DNA or protein sequences. For strings with lengths below about 1000 letters, a previously published integer linear programming (ILP) formulation yields satisfactory results when solved with a state-of-the-art solver such as CPLEX. In this work we propose a new, alternative ILP model and compare it to the former one. While a polyhedral study shows the linear programming relaxations of the two models to be equally strong, a comprehensive experimental comparison using real-world as well as artificially created benchmark instances indicates substantial computational advantages of the new formulation.
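To make the problem statement concrete, the sketch below implements the classical greedy heuristic for the MCSP that repeatedly cuts a longest common substring out of the still-unpartitioned parts of both strings. This is a different technique from the ILP models studied above (it is the greedy approach known from the literature) and is included only to illustrate what a common string partition is; all identifiers are illustrative.

```python
from collections import Counter

def related(s1, s2):
    """Both strings consist of the same letters with the same multiplicities."""
    return Counter(s1) == Counter(s2)

def _free_runs(s, used):
    """Maximal contiguous unused segments of s as (start, substring) pairs."""
    runs, i, n = [], 0, len(s)
    while i < n:
        if used[i]:
            i += 1
            continue
        j = i
        while j < n and not used[j]:
            j += 1
        runs.append((i, s[i:j]))
        i = j
    return runs

def _longest_common_substring(a, b):
    """Return (length, start_in_a, start_in_b) of a longest common substring."""
    best = (0, 0, 0)
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
                if dp[i][j] > best[0]:
                    best = (dp[i][j], i - dp[i][j], j - dp[i][j])
    return best

def greedy_mcsp(s1, s2):
    """Greedy common string partition: not optimal in general, but always
    a feasible common partition of two related strings."""
    assert related(s1, s2)
    used1, used2 = [False] * len(s1), [False] * len(s2)
    blocks = []
    while not all(used1):
        best = None  # (length, start position in s1, start position in s2)
        for off1, run1 in _free_runs(s1, used1):
            for off2, run2 in _free_runs(s2, used2):
                l, a, b = _longest_common_substring(run1, run2)
                if l > 0 and (best is None or l > best[0]):
                    best = (l, off1 + a, off2 + b)
        l, p1, p2 = best
        blocks.append(s1[p1:p1 + l])
        for k in range(l):
            used1[p1 + k] = True
            used2[p2 + k] = True
    return blocks
```

On related inputs the loop always terminates: each step removes the same multiset of letters from both strings, so a common letter remains available as long as anything is unpartitioned. The size of the returned block list is an upper bound on the optimum that the ILP models compute exactly.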
the term _ network alignment _ encompasses several distinct but related problem variants . in general ,network alignment aims to find a bijective mapping across two ( or more ) networks so that if two nodes are connected in one network , their images are also connected in the other network(s ) .if such an errorless alignment scheme exists , network alignment can be simplified to the problem of graph isomorphism . however , in general , an errorless alignment scheme may not be feasible across two networks . in that case, network alignment aims to find a mapping with the minimum error and/or the maximum overlap .network alignment has a broad range of applications in systems biology , social sciences , computer vision , and linguistics .for instance , network alignment has been used frequently as a comparative analysis tool in studying protein - protein interaction networks across different species . in computer vision, network alignment has been used in image recognition by matching similar images .it has also been applied in ontology alignment to find relationships among different representations of a database , and in user de - anonymization to infer user / sample identifications using similarity between datasets .a network alignment optimization seeks an assignment of nodes and edges across multiple networks to maximize ( or alternatively minimize ) a cost function with quadratic terms .this problem is closely related to the quadratic assignment problem ( qap ) which is computationally challenging to solve exactly .reference shows that approximating a solution of maximum quadratic assignment problem within a factor better than is not feasible in polynomial time in general .however , owing to numerous applications of quadratic assignment problems in different areas , several algorithms have been designed to solve it approximately : some methods use exact search approaches based on branch - and - bound and cutting plane .these methods can only be applied to very small problem instances owing to their high computational complexity .some methods attempt to solve the underlying qap by linearizing the quadratic term and transforming the optimization into a mixed integer linear program ( milp ) . in practice, the very large number of introduced variables and constraints in linearization of the qap objective function poses an obstacle for solving the resulting milp efficiently .some methods use convex relaxations of the qap to compute a bound on its optimal value .the provided solution by these methods may not be a feasible solution for the original quadratic assignment problem .other methods to solve the network alignment optimization include semidefinite or non - convex relaxations , bayesian inference , message passing or other heuristics .we will review these methods in section [ sec : overview ] . for more details on these methods, we refer readers to references .spectral inference methods have received significant attention in various network science problems such as network clustering and low dimensional embedding problems . 
However, the use of spectral techniques in the network alignment problem has been limited, partially owing to the lack of principled connections between existing spectral network alignment methods and relaxations of the underlying QAP. In fact, the performance of existing spectral network alignment methods has been assessed mostly through limited simulations and/or validations with real data, and an analytical performance characterization is lacking even in simple cases. In this paper, we propose a network alignment framework that uses an orthogonal relaxation of the underlying QAP in a maximum weight bipartite matching optimization. Our method simplifies, in a principled way, the network alignment optimization to a simultaneous alignment of eigenvectors of (transformations of) adjacency matrices, scaled by the corresponding eigenvalues. We show that our framework can not only be employed to provide a theoretical justification for existing heuristic spectral network alignment methods, but it also leads to a new scalable network alignment algorithm which outperforms existing ones over various synthetic and real networks. We prove that our solution is asymptotically exact with high probability for Erdős-Rényi graphs, under some general conditions. The proofs are based on a characterization of the eigenvectors of Erdős-Rényi graphs, along with some spectral perturbation analysis. For an analytical performance characterization of the proposed method, we consider asymptotically large Erdős-Rényi graphs owing to their tractable spectral characterization. Note that finding an isomorphic mapping across asymptotically large Erdős-Rényi graphs is a well-studied problem and can be solved efficiently through canonical labeling; in the network alignment problem, however, the graphs can be non-isomorphic as well. Also note that our optimality analysis only considers finite and asymptotically large graphs; for arguments on infinite graphs, see Section [sec:isomorphism]. Moreover, we generalize the objective function of the network alignment problem to consider both matched and mismatched interactions in a standard QAP formulation. In general, existing scalable network alignment methods only consider maximizing the number of overlapping edges (matches) across two networks and ignore the number of resulting mismatches (interactions that exist in only one of the networks). This limitation can be critical, particularly in applications where networks have different sizes and low similarity. Through analytical performance characterization, simulations on synthetic networks, and real-data analysis, we show that the combination of the algorithmic and qualitative aspects of our network alignment framework leads to improved performance compared to existing network alignment methods.
on simple examples ,we isolate multiple aspects of the proposed algorithm and show that each aspect is critical in the performance of the method .the proposed algorithmic and qualitative improvements can also be adapted to existing network alignment packages .some network alignment formulations aim to align paths or subgraphs across two ( or multiple ) networks .the objective of these methods is different from our network alignment optimization where a bijective mapping across nodes of two networks is desired according to a quadratic assignment problem .solutions of these different methods may be related .for instance , a bijective mapping across nodes of two networks can provide information about conserved pathways and/or subgraphs across networks , and vice versa .many networks have modular structures in which groups of nodes tend to interact with each other more strongly compared to the rest of the network .one can take advantage of such modular structure to split a large network alignment optimization into small subproblems and use semidefinite programming ( sdp ) over each subproblem , which yields a more accurate approximation to the underlying qap at the expense of more computation . here , we consider a special class of modular networks and propose an extension of the spectral network alignment algorithm where spectral information is used to split a large qap into smaller ones over which a sdp - based optimization is performed .this hybrid method has high performance similar to sdp , while enjoying significantly less computational complexity .we illustrate the effectiveness of our spectral network alignment algorithm against some existing network alignment methods over various synthetic network models including erds - rnyi , power law , regular , and stochastic block structures , under different noise models .having illustrated the efficiency of our algorithm both theoretically and through simulations , we apply it to two real - data applications : we use network alignment to compare gene regulatory networks across human , fly and worm species which we infer by integrating genome - wide functional and physical genomics datasets from encode and modencode consortia .we show that our network alignment method infers conserved regulatory interactions across these species despite long evolutionary distances separating these organisms .moreover , we find strong conservation of centrally - connected genes and biological pathways , especially for human - fly comparisons . in a second application, we show that our spectral network alignment algorithm improves our ability to de - anonymize small subgraphs of the twitter follower network sampled in years 2008 and 2009 , where we de - anonymize user ids in 2009 using user ids in 2008 .this application illustrates the extent of personal information that can be retrieved from network structures , and raises additional considerations that need to be addressed in different privacy - related applications .the rest of the paper is organized as follows . in section [ sec :setup ] , we present the network alignment problem and review existent network alignment techniques . in section [ sec : eigenalign - alg ] , we introduce our proposed algorithm and discuss its relationship with the underlying quadratic assignment problem .moreover , we present the optimality of our method over random graphs , under some general conditions . 
in section [ sec : lowrank ] , we consider the trace formulation of the network alignment optimization and introduce a network alignment algorithm which uses higher - order eigenvectors of adjacency graphs to align network structures . in section [sec : modular ] , we consider the network alignment problem of modular networks and introduce an algorithm which solves it efficiently . in section [ sec : eval ] , we compare performance of our method with existent network alignment methods over different synthetic network structures . in section [ sec :inference ] , we introduce our network inference framework to construct integrative gene regulatory networks in different species used in the network alignment application . in section [ sec : regulatory ] , we illustrate applications of our method in comparative analysis of regulatory networks across species . in section [ sec : twitter ] , we explain an application of network alignment in user de - anonymization over twitter follower subgraphs . in section[ sec : proofs ] , we present proofs for the main results of the paper .in this section , we introduce the network alignment problem formulation . let and be two graphs ( networks ) where and represent set of nodes and edges of graph , respectively . by abuse of notation ,let and be their matrix representations as well where iff , for .suppose network has nodes , i.e. , .we assume that networks are un - weighted ( binary ) , and possibly directed .the proposed framework can be extended to the case of weighted graphs as well .let be an binary matrix where means that node in network is mapped to node in network .the pair is called a mapping edge across two networks . in the network alignment setup, each node in one network can be mapped to at most one node in the other network , i.e. , for all , and similarly , for all .let be a vectorized version of .that is , is a vector of length where , . to simplify notation ,define .two mappings and can be matches which cause overlaps , can be mismatches which cause errors , or can be neutrals ( figure [ fig : match - mismatch - illustration]-a ) .[ def : match - mismatch ] suppose and are undirected graphs .let and where and .then , * and are _ matches _ if and . * and are _ mismatches _ if only one of the edges and exists . * and are _ neutrals _ if none of the edges and exists. definition [ def : match - mismatch ] can be extended to the case where and are directed graphs . in this case , mappings and are matches / mismatches if they are matches / mismatches in one of the possible directions .however , it is possible to have these mappings be matches in one direction , while they are mismatches in the other direction ( figure [ fig : match - mismatch - illustration]-b ) .these mappings are denoted as _ inconsistent mappings _, defined as follows : [ def : inconsistent ] let and be two directed graphs and and where and .if edges , , and exist , however , does not exist , then mappings and are _existing network alignment formulations aim to find a mapping matrix which maximizes the number of matches between networks .however , these formulations can lead to mappings which cause numerous mismatches , especially if networks have different sizes and low similarity . in this paper, we propose a more general formulation for the network alignment problem which considers both matches and mismatches simultaneously . 
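As a concrete reading of the match/mismatch/neutral definitions above, the following small sketch classifies a pair of candidate mapping edges (i, j') and (r, s') for two undirected graphs given as 0/1 NumPy adjacency matrices; the representation and the names are assumptions of this illustration.

```python
import numpy as np

def classify_pair(G1, G2, i, r, jp, sp):
    """Classify the mapping edges (i -> jp) and (r -> sp) for undirected
    binary graphs G1 and G2 (0/1 adjacency matrices)."""
    e1 = bool(G1[i, r])    # is there an edge between i and r in G1?
    e2 = bool(G2[jp, sp])  # is there an edge between jp and sp in G2?
    if e1 and e2:
        return "match"     # both edges exist
    if e1 != e2:
        return "mismatch"  # exactly one of the two edges exists
    return "neutral"       # neither edge exists
```

For directed graphs the same test would be applied in each direction separately, which is where the inconsistent mappings of the definition above arise.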
for a given alignment matrix across networks and , we assign an _ alignment score _ by considering the number of matches , mismatches and neutrals caused by : where , , and are scores assigned to matches , neutrals , and mismatches , respectively .note that existing alignment methods ignore effects of mismatches and neutrals by assuming which is restrictive particularly in aligning graphs with different number of nodes and low similarity . in the following ,we re - write as a quadratic assignment formulation .consider two undirected graphs and .we form an _ alignment network _ represented by adjacency matrix in which nodes are mapping edges across the original networks , and the edges capture whether the pair of mapping edges induce matches , mismatches or neutrals ( figure [ fig : framework ] ) .[ def : alignment - network ] let and , where and .=\begin{cases } s_1 , & \text{if } ( i , j ' ) \text { and } ( r , s ' ) \text { are matches},\\ s_2 , & \text{if } ( i , j ' ) \text { and } ( r , s ' ) \text { are neutrals},\\ s_3 , & \text{if } ( i , j ' ) \text { and } ( r , s ' ) \text { are mismatches } , \end{cases}\ ] ] where , , and are scores assigned to matches , neutrals , and mismatches , respectively .we can re - write as follows : =(s_1+s_2 - 2s_3)g_1(i , r)g_2(j',s')+(s_3-s_2)(g_1(i , r)+g_2(j',s'))+s_2.\ ] ] we can summarize and as follows : where represents matrix kronecker product , and is an matrix whose elements are all ones .a similar scoring scheme can be used for directed graphs .when graphs are directed , some mappings can be inconsistent according to definition [ def : inconsistent ] , i.e. , they are matches in one direction and mismatches in another. scores of inconsistent mappings can be assigned randomly to matched / mismatched scores , or to an average score of matches and mismatches ( i.e. , ) . for random graphs ,inconsistent mappings are rare events .for example , suppose network edges are distributed according to a bernoulli distribution with parameter .then , the probability of having an inconsistent mapping for a particular pair of paired nodes across networks is equal to . therefore , their effect in network alignment is negligible particularly for large sparse networks . throughout the paper , for directed graphs , we assume inconsistent mappings have negligible effect unless it is mentioned explicitly .alignment scores , and of can be arbitrary in general .however , in this paper we consider the case where with the following rationale : suppose a mapping matrix has a total of non - zero edges .for example , if networks have nodes and there is no unmapped nodes across two networks , .the total number of matches , mismatches and neutrals caused by this mapping is equal to .thus , for mapping matrices with the same number of mapping edges , without loss of generality , one can assume that , alignment scores are strictly positive ( otherwise , a constant can be added to the right - hand side of ) . in general , mappings with high alignment scores might have slightly different number of mapping edges owing to unmapped nodes across the networks which has a negligible effect in practice . moreover , in the alignment scheme , we wish to encourage matches and penalize mismatches .thus , throughout this paper , we assume . in practice , some mappings across two networks may not be possible owing to additional side information .the set of possible mappings across two networks is denoted by . if , the problem of network alignment is called _unrestricted_. 
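The case analysis behind the scoring rule can be written compactly with Kronecker products, as in the formula above. The sketch below builds the alignment network so that an entry equals s1 for a match, s3 for a mismatch, and s2 for a neutral pair of mapping edges; the numeric default scores are purely illustrative (they only reward matches most and penalize mismatches), and the index convention (mapping (i, j') placed at row i*n2 + j') is an assumption of the sketch.

```python
import numpy as np

def alignment_network(G1, G2, s1=2.0, s2=1.0, s3=0.5):
    """Alignment network for undirected binary graphs G1 (n1 x n1) and
    G2 (n2 x n2).  Row/column index i * n2 + j corresponds to the mapping
    edge (i, j').  Each entry is s1 for a match, s3 for a mismatch and s2
    for a neutral pair of mappings."""
    n1, n2 = G1.shape[0], G2.shape[0]
    J1 = np.ones((n1, n1))
    J2 = np.ones((n2, n2))
    return ((s1 + s2 - 2.0 * s3) * np.kron(G1, G2)
            + (s3 - s2) * (np.kron(G1, J2) + np.kron(J1, G2))
            + s2 * np.kron(J1, J2))
```

The three cases can be checked directly: if both G1(i,r) and G2(j',s') equal one, the entry reduces to s1; if exactly one of them equals one, it reduces to s3; otherwise it equals s2.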
however , if some mappings across two networks are prevented ( i.e. , , for ) , then the problem of network alignment is called _restricted_. in the following , we present the network alignment optimization which we consider in this paper : [ def : network_alignment ] let and be two binary networks .network alignment aims to solve the following optimization : where is defined according to , and is the set of possible mappings across two networks . in the following , we re - write using the trace formulation of a standard qap . here, we consider undirected networks and with and nodes , respectively . without loss of generality , we assume .moreover , we assume that all nodes of the network are mapped to nodes of the network ( i.e. , there are no un - aligned nodes in ) .define , and .we can rewrite the objective function of optimization using as follows : where thus , the network alignment optimization can be reformulated as follows : network alignment problem is an example of a quadratic assignment problem ( qap ) .reference shows that approximating a solution of maximum quadratic assignment problem within a factor better than is not feasible in polynomial time in general .however , owing to various applications of qap in different areas , many methods exist for approximate solutions . in the following , we briefly summarize previous works by categorizing them into four groups and explain advantages and shortcomings of each .for more details on these methods , we refer readers to references . ** exact search methods : * these methods provide a global optimal solution for the quadratic assignment problem. however , owing to their high computational complexity , they can only be applied to very small problem instances .examples of exact algorithms include methods based on branch - and - bound and cutting plane . ** linearizations : * these methods attempt to solve qap by eliminating the quadratic term in the objective function of optimization , transforming it into a mixed integer linear program ( milp ) .an existing milp solver is applied to find a solution for the relaxed problem .examples of these methods are lawler s linearization , kaufmann and broeckx linearization , frieze and yadegar linearization , and adams and johnson linearization . these linearizations can provide bounds on the optimal value of the underlying qap . in general , linearization of the qap objective functionis achieved by introducing many new variables and new linear constraints . in practice, the very large number of introduced variables and constraints poses an obstacle for solving the resulting milp efficiently . * * semidefinite / convex relaxations and bounds : * these methods aim to compute a bound on the optimal value of the network alignment optimization , by considering the alignment matrix in the intersection of the sets of orthogonal and stochastic matrices .the provided solution by these methods may not be a feasible solution of the original quadratic assignment problem .examples of these methods include orthogonal relaxations , projected eigenvalue bounds , convex relaxations , and matrix splittings . 
in the computer vision literature , use spectral techniques to approximately solve qap by inferring a cluster of assignments over the feature network .then , they use a greedy approach to reject assignments with low associations .+ in particular , introduces a convex relaxation of the underlying network alignment optimization based on matrix splitting which provides bounds on the optimal value of the underlying qap .the proposed sdp method provides a bound on the optimal value and additional steps are required to derive a feasible solution .moreover , owing to its computational complexity , it can only be used to align small networks , limiting its applicability to alignment of large real networks . in section [ sec :modular ] , we address these issues and introduce a hybrid method based on our proposed scheme in section [ sec : eigenalign - alg ] , and the semidefinite relaxation of to align large modular network structures with low computational complexity . * * other methods : * there are several other techniques to approximately solve network alignment optimization .some methods use a lagrangian relaxation , bayesian framework , or message passing , or some other heuristics . in section[ sec : eval ] , we assess the performance of some of these network alignment techniques through simulations .network alignment optimization is closely related to the problem of _ graph isomorphism _ ,defined as follows : let and be two binary networks . and are isomorphic if there exists a permutation matrix such that . the computational problem of determining whether two finite graphs are isomorphic is called the _ graph isomorphism problem_. moreover , given two isomorphic networks and , the problem of graph isomorphism aims to find the permutation matrix such that .the computational complexity of this problem is unknown .network alignment and graph isomorphism problems are related to each other . loosely speaking, network alignment aims to minimize the distance between premuted versions of two networks ( or , alternatively to maximize their overlap ) .therefore , if the underlying networks are isomorphic , an optimal solution of the network alignment optimization should be the same ( or close ) to the underlying permutation matrix , where . in the following lemma, we formalize such a connection between the network alignment optimization and the classical graph isomorphism problem : [ lem : iso - align - relation1 ] let and be two isomorphic erds - rnyi graphs such that =p ] .let be a graph resulting from flipping edges of independently and randomly with probability .suppose where is a permutation matrix .let and .then , for any selection of scores , maximizes the expected network alignment objective function of optimization . the proof is presented in section [ subsec : proof - lem : iso - align - relation2 ] . finding an isomorphic mapping across sufficiently large erds - rnyi graphscan be done efficiently with high probability ( w.h.p . 
) through canonical labeling .canonical labeling of a network consists of assigning a unique label to each vertex such that labels are invariant under isomorphism .the graph isomorphism problem can then be solved efficiently by mappings nodes with the same canonical labels to each other .one example of canonical labeling is the degree neighborhood of a vertex defined as a sorted list of neighborhood degrees of vertices .note that , network alignment formulation is more general than the one of graph isomorphism ; network alignment aims to find an optimal mappings across two networks which are not necessarily isomorphic .we now introduce _ eigenalign _ , an algorithm which solves a relaxation of the network alignment optimization leveraging spectral properties of networks . unlike other alignment methods, eigenalign considers both matches and mismatches in the alignment scheme .moreover , we prove its optimality ( in an asymptotic sense ) in aligning erds - rnyi graphs under some technical conditions . in the following ,we describe the eigenalign algorithm : .,height=264 ] [ alg : eigenalign ] let and be two binary networks whose corresponding alignment network is denoted by according to .eigenalign algorithm solves the network alignment optimization in two steps : * * eigenvector computation step : * in this step , we compute , an eigenvector of the alignment network with the maximum eigenvalue . ** linear assignment step : * in this step , we solve the following maximum weight bipartite matching optimization : this framework is depicted in figure [ fig : framework ]. in the rest of this section , we provide intuition on different steps of the eigenalign algorithm through both quadratic assignment relaxation argument as well as a fixed point analysis . in section [ subsec : erdos ] , we discuss optimality of eigenalign over random graphs . in section [ sec : lowrank ] , we will introduce an extension of the eigenalign algorithm which uses higher - order eigenvectors of adjacency graphs to align network structures . in this section , we explain eigenalign as a relaxation of the underlying quadratic assignment optimization . for simplicity , we assume all mappings across networks are possible ( i.e. , ) . in the restricted network alignment setup , without loss of generality , one can eleminate rows and columns of the alignment matrix corresponding to mappings that are not allowed . in the eigen decomposition step of eigenalign , we ignore bijective constraints ( i.e. , constraints and ) because they will be satisfied in the second step of the algorithm through a linear optimization . by these assumptions, optimization can be simplified to the following optimization : to approximate a solution of this optimization , we relax integer constraints to constraints over a hyper - sphere restricted by hyper - planes ( i.e. , and ) . using this relaxation , optimization is simplified to the following optimization : in the following , we show that , the leading eigenvector of the alignment matrix is an optimal solution of optimization .suppose is an optimal solution of optimization .let be a solution of the following optimization which ignores non - negativity constraints : following the rayleigh ritz formula , the leading eigenvector of the alignment matrix is an optimal solution of optimization ( i.e. 
, ) .now we use the following theorem to show that in fact : [ thm : perron ] suppose is a matrix whose elements are strictly positive .let be an eigenvector of corresponding to the largest eigenvalue .then , , .moreover , all other eigenvectors must have at least one negative , or non - real component .since is a solution of a relaxed version of optimization , we have . using this inequality along with perron - frobenius theorem lead to , as the unique solution of optimization .the solution of the eigen decomposition step assigns weights to all possible mappings across networks ignoring bijective constraints .however , in the network alignment setup , each node in one network can be mapped to at most one node in the other network . to satisfy these constraints, we use eigenvector weights in a linear optimization framework of maximum weight bipartite matching setup of optimization . in this part , we analyze computational complexity of the eigenalign algorithm [ alg : eigenalign ] .suppose the number of nodes of networks and are .let be the number of possible mappings across two networks .in an unrestricted network alignment setup , we have . however , in a restricted network alignment , may be significantly smaller than .eigenalign has three steps : * \(i ) the alignment network should be formed which has a computational complexity of because all pairs of possible mappings should be considered .* \(ii ) in the eigen decomposition step , we need to compute the leading eigenvector of the alignment network .this operation can be performed in almost linear time in using qr algorithms and/or power methods .therefore , the worst case computational complexity of this part is . *\(iii ) finally , we use eigenvector weights in a maximum weight bipartite matching algorithm which can be solved efficiently using linear programming or hungarian algorithm .the worst case computational complexity of this step is .if the set has a specific structure ( e.g. , small subsets of nodes in one network are allowed to be mapped to small subsets of nodes in the other network ) , this cost can be reduced significantly . in section [ sec : regulatory ] , we see this structure in aligning regulatory networks across species as genes are allowed to be aligned to homologous genes within their gene families .[ prop : complexity ] the worst case computational complexity of the eigenalign algorithm is .[ remark : greedy ] in this section , we analyze optimality of the eigenalign algorithm over erds - rnyi graphs , for both isomorphic and non - isomorphic cases , and under two different noise models. while real networks often have different structures than erds - rnyi graphs , the analytical tractability and the spectral characterization of these graphs makes it possible to actually prove that the proposed relaxations are asymptotically tight , at least in special cases , providing a rigorous foundation for the proposed eigenalign algorithm . in this section ,we only consider finite and asymptotically large graphs . for arguments on infinite graphs ,see section [ sec : isomorphism ] .suppose is an undirected erds - rnyi graph with nodes where =p ] . in words ,the operation flips edges of independently randomly with probability .* noise model ii : in this model , we have , where and are binary random matrices whose edges are drawn i.i.d . from a bernoulli distribution with =p_e ] . 
under this model ,edges of flip independently randomly with probability , while non - connecting tuples in will be connected in with probability . because is an erds - rnyi graph with parameter ,choosing leads to the expected density of networks and be .we define as follows : where is a permutation matrix . throughout this section, we assume that we are in the restricted network alignment regime : we desire to choose mappings across two networks among possible mappings where , , and . true mappings ( if ) are included in , while the remaining mappings are selected independently randomly .moreover , we choose scores assigned to matches , neutrals and mismatches as , and , respectively , where and .these selections satisfy score conditions .[ thm : erdos - noisy ] let be the solution of the eigenalign algorithm [ alg : eigenalign ] .then , under both noise models and , if , and , then as , the error probability goes to zero : \to 0.\nonumber\ ] ] theorem [ thm : erdos - noisy ] states that , the eigenalign algorithm is able to recover the underlying permutation matrix which relates networks and to each other according to . on the other hand , according to lemma [ lem : iso - align - relation2 ] , this permutation matrix is in fact optimizes the expected network alignment score . [prop : erdos - p - max - qap ] under conditions of theorem [ thm : erdos - noisy ] , the permutation matrix inferred by eigenalign maximizes the expected network alignment objective function defined according to optimization . in noise models and ,if we put , then is isomorphic with because there exists a permutation matrix such that . for this case, we have the following corollary : [ thm : erdos ] let and be two isomorphic erds - rnyi graphs with nodes such that , where is a permutation matrix . under conditions of theorem [ thm : erdos - noisy ] , as , the error probability of eigenalign solution goes to zero .we present proofs of theorem [ thm : erdos - noisy ] and corollary [ thm : erdos ] in sections [ subsec : proof - thm - erdos ] and [ subsec : proof - erdos - noisy ] . in the following , we sketch main ideas of their proofs : since input networks and are random graphs , the alignment network formed according to will be a random graph as wellthe first part of the proof is to characterize the leading eigenvector of this random alignment network . to do this ,we first characterize the leading eigenvector of the expected alignment network which in fact is a deterministic graph . in particular, we prove that eigenvector scores assigned to true mappings is strictly larger than the ones assigned to false mappings . to prove this , we characterize top eigenvalues and eigenvectors of the expected alignment network algebraically .the restricted alignment condition ( i.e. , ) is necessary to have this bound .then , we use wedin sin theorem from perturbation theory , gershgorian circle theorem from spectral matrix theory , and chernoff bound to characterize the leading eigenvector of the random alignment network for sufficiently large .finally , we use chebyshev s inequality to show that the error probability of the eigenalign algorithm is asymptotically zero w.h.p . notethat finding an isomorphic mapping across asymptotically large erds - rnyi graphs ( corollary [ thm : erdos ] ) is a well studied problem and can be solved efficiently through canonical labeling . however, those techniques do not address a more general network alignment problem similar to the setup considered in theorem [ thm : erdos - noisy ] . 
For more details, see Section [sec:isomorphism]. Theorem [thm:erdos-noisy] and Corollary [thm:erdos] consider a restricted network alignment case where . As explained briefly in the proof sketch, and in more detail in Lemma [lem:average], this technical condition is necessary to show that the expected eigenvector scores of true mappings are strictly larger than those of false mappings as . In Section [sec:eval], through simulations, we show that the error of the EigenAlign algorithm is empirically small even in an unrestricted network alignment setup. The EigenAlign algorithm introduced in Section [sec:eigenalign-alg] uses the leading eigenvector of the alignment graph to align graph structures. EigenAlign can be viewed as a rank-one approximation of the linearization of the underlying quadratic assignment problem. In this section, we introduce an extension of EigenAlign which uses higher-order eigenvectors of the adjacency matrices to align network structures. We refer to this extension as _LowRankAlign_. LowRankAlign can be useful especially in cases where the leading eigenvectors of the graphs are not informative; this occurs, for instance, in the alignment of regular graph structures. Moreover, LowRankAlign does not require an explicit formation of the alignment graph, which can be costly for large networks if all mappings across networks are possible. Suppose and are input networks with and nodes. For simplicity, in this section we assume that they are symmetric. Recall that represents the mapping matrix, whose size is . In this section, we use the trace formulation of the quadratic assignment problem. The maximum overlap network alignment optimization can be written as follows: optimization finds a mapping matrix that maximizes the number of overlapping edges (matches) across the two graphs. However, such a mapping can cause numerous mismatches. We discuss a balanced network alignment optimization later in this section. For simplicity, we assume . All discussions can be extended to the case where . Let be the set of all permutation matrices of size . Thus, optimization can be written as follows: let be an optimal solution of optimization . Finding an optimal solution of this optimization is known to be NP-hard. If , we have ; in other words, we can add and subtract multiples of the identity to make the resulting symmetric matrices positive definite, without changing the structure of the problem. Thus, without loss of generality, we assume that the matrices and are positive semi-definite. Several algorithms have considered relaxations/approximations of this optimization, as follows: suppose is a solution of optimization such that (i.e., its distance from the optimal solution is bounded by ). may not be a valid permutation matrix. One way to find a permutation matrix using is to project it onto the space of permutation matrices: however, it has been shown that an optimal solution of this projection optimization performs poorly in practice.
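Before the rounding procedure with a performance guarantee that is proposed next, it may help to see the basic two-step EigenAlign pipeline recapped above written out as code. The sketch assumes the unrestricted setting (every mapping allowed), reuses the Kronecker construction of the alignment network with purely illustrative scores, and uses SciPy's linear assignment routine as the maximum weight bipartite matching solver; it is an illustration of the described pipeline, not the authors' implementation.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def eigen_align(G1, G2, s1=2.0, s2=1.0, s3=0.5):
    """Two-step EigenAlign sketch for undirected binary graphs:
    (1) leading eigenvector of the alignment network,
    (2) maximum weight bipartite matching on the eigenvector weights."""
    n1, n2 = G1.shape[0], G2.shape[0]
    J1, J2 = np.ones((n1, n1)), np.ones((n2, n2))
    # Alignment network: s1 for matches, s3 for mismatches, s2 for neutrals.
    A = ((s1 + s2 - 2.0 * s3) * np.kron(G1, G2)
         + (s3 - s2) * (np.kron(G1, J2) + np.kron(J1, G2))
         + s2 * np.kron(J1, J2))

    # Step 1: leading eigenvector.  A is symmetric with positive entries,
    # so by Perron-Frobenius it can be chosen entrywise positive.
    _, eigvecs = np.linalg.eigh(A)
    v = eigvecs[:, -1]
    if v.sum() < 0:
        v = -v
    W = v.reshape(n1, n2)  # weight assigned to the mapping edge (i, j')

    # Step 2: maximum weight bipartite matching (negate the weights to use
    # the minimizing linear assignment solver).
    rows, cols = linear_sum_assignment(-W)
    return list(zip(rows.tolist(), cols.tolist()))
```

In this dense sketch, forming A explicitly requires (n1*n2)^2 entries of storage, which is exactly the cost that the extension introduced below avoids by not building the alignment graph.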
in the following , we propose an alternative algorithm to efficiently infer a permutation matrix using with a certain performance guarantee : consider the following optimization : this is a maximum weight bipartite matching optimization which can be solved exactly using linear programming .let be an optimal solution of optimization .define [ thm : gaurantee ] let and be optimal solutions of optimizations and , respectively .we have , latexmath:[\ ] ] where the size of the top and bottom blocks are and , respectively .let be the leading eigenvector of . then ,if , , and , we have , suppose is an eigenvector of the matrix with the corresponding eigenvalue .owing to the symmetric structure of the matrix , we have , using in the eigen decomposition equality , we have , thus , we have , where , let be the largest root of , which corresponds to the leading eigenvector of the matrix . to prove the lemma , it is sufficient to show that , this is true if and . to show this , we need to show , which is true under the conditions of the lemma .this completes the proof . if and , we have .thus , lemma [ lemma : c - eigenvector ] leads to .this completes the proof .we use the same setup considered in the proof of theorem [ thm : noiseless - modular ] to form the expected alignment network . considering and , expectedscores of different mapping - pairs illustrated in figure [ fig : proof - combination - ap ] can be approximated as follows : the proof is similar to the one of theorem [ thm : noiseless - modular ] . to use lemma [ lem : perron - increase ] , we need to have , which results in following conditions : because , we then have , to use lemma [ lemma : c - eigenvector ] , we need to have . using , we have : to show the non - negativity of the right - hand side of , it is sufficient to show the non - negativity of polynomial i. this polynomial has two roots at if , because the value of the polynomial i at is positive , if , the polynomial is always non - negative .if , we need to have , which guarantees the non - negativity of polynomial i. the rest of the proof is similar to the one of theorem [ thm : noiseless - modular ] .we use the same setup considered in the proof of theorem [ thm : noiseless - modular ] to form the expected alignment network .suppose and correspond to the expected objective function of the network alignment optimization [ opt : alignment ] using permutation matrices and , respectively , where , we wish to show that , .we have , where and are defined according to . under conditions of theorem [ thm : noiseless - modular ] , we have according to .thus , using and , we need to show that .we have , this polynomial have two roots at because , the minimum root is always negative .moreover , at , the polynomial value is negative .thus , is negative if this completes the proof .we use the same setup considered in the proof of theorem [ thm : noiseless - modular ] to form the expected alignment network . similarly to the proof of theorem [ thm : noiseless - modular - qap ] , we need to show according to .we have , which is positive if this completes the proof .in this paper , we made a principled connection between spectral network alignment techniques and relaxations of the network alignment optimization ( quadratic assignment problem ) . 
Using this connection, we proposed a network alignment algorithm which employs an orthogonal relaxation of the underlying QAP in a maximum weight bipartite matching optimization. Our method simplifies, in a principled way, the network alignment optimization to a simultaneous alignment of eigenvectors of (transformations of) adjacency matrices, scaled by the corresponding eigenvalues. Our framework also provides a theoretical justification for other existing heuristic spectral network alignment methods. The proposed framework advances existing network alignment methods not only in algorithmic aspects, but also in qualitative terms of the network alignment objective function: our formulation considers both matched and mismatched interactions in its optimization and is therefore effective in aligning networks even with low similarity. This is critical in applications where networks have low similarity, such as the comparative analysis of biological networks of distal species. This idea can also be adapted to existing network alignment packages. For Erdős-Rényi graphs, we proved that the proposed algorithm is asymptotically optimal with high probability, under some general conditions. Through simulations, we compared the performance of our algorithm with that of existing network alignment methods based on belief propagation (NetAlign), spectral decomposition (IsoRank), Lagrangian relaxation (Klau optimization), and an SDP-based method. Our simulations illustrated the effectiveness of the proposed algorithm in aligning various network structures such as Erdős-Rényi, power-law, regular, and stochastic block structures, under different noise models. For modular graph structures, we proposed a framework that uses spectral network alignment to split the large network alignment optimization into small subproblems. This enables the use of computationally expensive, but tight, semidefinite programming relaxations over each subproblem. This hybrid method has high performance similar to SDP, with significantly less computational complexity. Designing other SDP-based network alignment methods with low computational complexity for various network models remains a promising direction for future work. Moreover, within our network alignment framework one can use other relaxations of the QAP (for example, by considering the alignment matrix in the intersection of the sets of orthogonal and stochastic matrices) to obtain tighter bounds. We considered two real-data applications. We applied the proposed network alignment algorithm to compare gene regulatory networks across human, fly and worm, which we inferred by integrating genome-wide functional and physical genomics datasets from the ENCODE and modENCODE consortia. Our method inferred conserved regulatory interactions across these species despite the long evolutionary distances separating them. Moreover, we found strong conservation of centrally-connected genes and biological pathways, especially for human-fly comparisons. In a second application, we used network alignment for user de-anonymization over Twitter follower subgraphs sampled in two different years. This application illustrates the extent of personal information that can be retrieved from network structures, and raises additional considerations that need to be addressed in privacy-related applications. This study was designed by S. Feizi, G. Quon and M. Kellis. S.
feizi initiated , developed and analyzed the methods , and performed the experiments .g. quon and m. kellis contributed to the inference and comparative analysis of regulatory networks across species .m. mdard and a. jadbabaie contributed to analysis of the methods , characterizing performance guarantees , and comparison with other optimization techniques .m. mendoza contributed to the inference of regulatory networks and processing of expression datasets .the manuscript was written by s. feizi , and commented on by all the authors .authors thank encode and modencode consortia for collecting and providing genome - wide functional genomics datasets .we thank fontom5 consortium for providing initial tf lists .we thank przemyslaw grabowicz for providing the twitter follower subgraphs .authors acknowledge funding from coding instead of splitting ( 6927164 ) and afosr complex networks program and onr basic research challenge program on decentralized and online optimization .mariana recamonde - mendoza acknowledges the capes foundation / brazil ( bolsista capes - pdse - processo bex n 7137 - 12 - 5 ) .r. singh , j. xu , and b. berger , `` global alignment of multiple protein interaction networks with application to functional orthology detection , '' _ proceedings of the national academy of sciences _ , vol .105 , no .35 , pp . 1276312768 , 2008 .j. flannick , a. novak , b. s. srinivasan , h. h. mcadams , and s. batzoglou , `` graemlin : general and robust alignment of multiple large interaction networks , '' _ genome research _16 , no . 9 , pp . 11691181 , 2006 .b. p. kelley , b. yuan , f. lewitter , r. sharan , b. r. stockwell , and t. ideker , `` pathblast : a tool for alignment of protein interaction networks , '' _ nucleic acids research _suppl 2 , pp .w83w88 , 2004 .d. conte , p. foggia , c. sansone , and m. vento , `` thirty years of graph matching in pattern recognition , '' _ international journal of pattern recognition and artificial intelligence _ , vol .18 , no . 03 , pp . 265298 , 2004 . c. schellewald and c. schnrr ,`` probabilistic subgraph matching based on convex relaxation , '' in _ energy minimization methods in computer vision and pattern recognition_.1em plus 0.5em minus 0.4emspringer , 2005 , pp . 171186s. lacoste - julien , b. taskar , d. klein , and m. i. jordan , `` word alignment via quadratic assignment , '' in _ proceedings of the main conference on human language technology conference of the north american chapter of the association of computational linguistics_.1em plus 0.5em minus 0.4emassociation for computational linguistics , 2006 , pp .112119 .s. melnik , h. garcia - molina , and e. rahm , `` similarity flooding : a versatile graph matching algorithm and its application to schema matching , '' in _ data engineering , 2002 .18th international conference on_.1em plus 0.5em minus 0.4emieee , 2002 , pp .117128 .k. makarychev , r. manokaran , and m. sviridenko , `` maximum quadratic assignment problem : reduction from maximum label cover and lp - based approximation algorithm , '' in _ automata , languages and programming_.1em plus 0.5em minus 0.4emspringer , 2010 , pp .594604 .w. p. adams and t. a. johnson , `` improved linear programming - based lower bounds for the quadratic assignment problem , '' _ dimacs series in discrete mathematics and theoretical computer science _ ,16 , pp . 4375 , 1994 .q. zhao , s. e. karisch , f. rendl , and h. 
wolkowicz , `` semidefinite programming relaxations for the quadratic assignment problem , '' _ journal of combinatorial optimization _ , vol . 2 , no . 1 ,pp . 71109 , 1998 .j. t. vogelstein , j. m. conroy , v. lyzinski , l. j. podrazik , s. g. kratzer , e. t. harley , d. e. fishkind , r. j. vogelstein , and c. e. priebe , `` fast approximate quadratic programming for large ( brain ) graph matching , '' _ arxiv preprint arxiv:1112.5507 _ , 2011 .m. leordeanu and m. hebert , `` a spectral technique for correspondence problems using pairwise constraints , '' in _ computer vision , 2005 .iccv 2005 .tenth ieee international conference on _ , vol .2.1em plus 0.5em minus 0.4emieee , 2005 , pp .14821489 .e. kazemi , h. s hamed , and m. grossglauser , `` growing a graph matching from a handful of seeds , '' in _ proceedings of the vldb endowment international conference on very large data bases _ , vol . 8 , no .epfl - article-207759 , 2015 .e. m. loiola , n. m. m. de abreu , p. o. boaventura - netto , p. hahn , and t. querido , `` a survey for the quadratic assignment problem , '' _european journal of operational research _ ,176 , no . 2 ,pp . 657690 , 2007 .d. l. sussman , m. tang , d. e. fishkind , and c. e. priebe , `` a consistent adjacency spectral embedding for stochastic blockmodel graphs , '' _ journal of the american statistical association _ ,499 , pp . 11191128 , 2012 .a. athreya , v. lyzinski , d. j. marchette , c. e. priebe , d. l. sussman , and m. tang , `` a central limit theorem for scaled eigenvectors of random dot product graphs , '' _ arxiv preprint arxiv:1305.7388 _ , 2013 .e. segal , m. shapira , a. regev , d. peer , d. botstein , d. koller , and n. friedman , `` module networks : identifying regulatory modules and their condition - specific regulators from gene expression data , '' _ nature genetics _ , vol .34 , no . 2 ,pp . 166176 , 2003 .j. kuczynski and h. wozniakowski , `` estimating the largest eigenvalue by the power and lanczos algorithms with a random start , '' _ siam journal on matrix analysis and applications _, vol . 13 , no . 4 , pp . 10941122 , 1992 .e. segal , m. shapira , a. regev , d. peer , d. botstein , d. koller , and n. friedman , `` module networks : identifying regulatory modules and their condition - specific regulators from gene expression data , '' _ nature genetics _34 , no . 2 ,pp . 166176 , 2003 .z. bar - joseph , g. k. gerber , t. i. lee , n. j. rinaldi , j. y. yoo , f. robert , d. b. gordon , e. fraenkel , t. s. jaakkola , r. a. young _ et al ._ , `` computational discovery of gene modules and regulatory networks , '' _ nature biotechnology _ , vol . 21 , no . 11 , pp .13371342 , 2003 .d. marbach , s. roy , f. ay , p. e. meyer , r. candeias , t. kahveci , c. a. bristow , and m. kellis , `` predictive regulatory models in drosophila melanogaster by integrative inference of transcriptional networks , '' _ genome research _ , vol .22 , no . 7 , pp . 13341349 , 2012 .s. a. mccarroll , c. t. murphy , s. zou , s. d. pletcher , c .- s .chin , y. n. jan , c. kenyon , c. i. bargmann , and h. li , `` comparing genomic expression patterns across species identifies shared transcriptional profile in aging , '' _ nature genetics _36 , no . 2 ,pp . 197204 , 2004 .j. o. woods , u. m. singh - blom , j. m. laurent , k. l. mcgary , and e. m. marcotte , `` prediction of gene phenotype associations in humans , mice , and plants using phenologs , '' _ bmc bioinformatics _ , vol .14 , no . 1 ,, 2013 .j. j. faith , b. hayete , j. t. thaden , i. mogno , j. 
wierzbowski , g. cottarel , s. kasif , j. j. collins , and t. s. gardner , `` large - scale mapping and validation of escherichia coli transcriptional regulation from a compendium of expression profiles , '' _ plos biology _ , vol . 5 , no . 1, 2007 .s. roy , j. ernst , p. v. kharchenko , p. kheradpour , n. negre , m. l. eaton , j. m. landolin , c. a. bristow , l. ma , m. f. lin _et al . _ , `` identification of functional elements and regulatory circuits by drosophila modencode , '' _ science _330 , no . 6012 ,17871797 , 2010 .d. j. reiss , n. s. baliga , and r. bonneau , `` integrated biclustering of heterogeneous genome - wide datasets for the inference of global regulatory networks , '' _ bmc bioinformatics _, vol . 7 , no . 1 , p. 280, 2006 .a. greenfield , a. madar , h. ostrer , and r. bonneau , `` dream4 : combining genetic and dynamic information to identify biological networks and dynamical models , '' _ plos one _ , vol . 5 , no . 10 , p. e13397 , 2010 .d. marbach , j. c. costello , r. kffner , n. m. vega , r. j. prill , d. m. camacho , k. r. allison , m. kellis , j. j. collins , g. stolovitzky _ et al ._ , `` wisdom of crowds for robust gene network inference , '' _ nature methods _ , vol . 9 , no . 8 , pp . 796804 , 2012 .d. marbach , r. j. prill , t. schaffter , c. mattiussi , d. floreano , and g. stolovitzky , `` revealing strengths and weaknesses of methods for gene network inference , '' _ proceedings of the national academy of sciences _ , vol .107 , no .14 , pp . 62866291 , 2010 .r. bonneau , d. j. reiss , p. shannon , m. facciotti , l. hood , n. s. baliga , and v. thorsson , `` the inferelator : an algorithm for learning parsimonious regulatory networks from systems - biology data sets de novo , '' _ genome biology _, vol . 7 , no . 5 , p. r36 , 2006 .e. wingender , x. chen , r. hehl , h. karas , i. liebich , v. matys , t. meinhardt , m. pr , i. reuter , and f. schacherer , `` transfac : an integrated system for gene expression regulation , '' _ nucleic acids research _ , vol .28 , no . 1316319 , 2000 .s. m. gallo , d. t. gerrard , d. miner , m. simich , b. des soye , c. m. bergman , and m. s. halfon , `` redfly v3 . 0 : toward a comprehensive database of transcriptional regulatory elements in drosophila , '' _ nucleic acids research _suppl 1 , pp .d118d123 , 2011 .m. i. barrasa , p. vaglio , f. cavasino , l. jacotot , and a. j. walhout , `` edgedb : a transcription factor - dna interaction database for the analysis of c. elegans differential gene expression , '' _ bmc genomics _ ,8 , no . 1 , p. 21, 2007 .a. p. boyle , c. l. araya , c. brdlik , p. cayting , c. cheng , y. cheng , k. gardner , l. w. hillier , j. janette , l. jiang _ et al ._ , `` comparative analysis of regulatory information and circuits across distant species , '' _ nature _ , vol .7515 , pp . 453456 , 2014 .d. thomas , v. wood , c. j. mungall , s. e. lewis , j. a. blake , g. o. consortium __ , `` on the use of gene ontology annotations to assess functional similarity among orthologs and paralogs : a short report , '' _ plos computational biology _8 , no . 2 , p. e1002386 , 2012 .y. benjamini and y. hochberg , `` controlling the false discovery rate : a practical and powerful approach to multiple testing , '' _ journal of the royal statistical society .series b ( methodological ) _ , pp . 289300 , 1995 .b. zhou , j. pei , and w. luk , `` a brief survey on anonymization techniques for privacy preserving publishing of social network data , '' _ acm sigkdd explorations newsletter _ , vol .10 , no . 2 ,pp . 
12-22, 2008. M. F. Schwartz, A. R. Brecher, J. Whyte, and M. G. Klein, ``A patient registry for cognitive rehabilitation research: a strategy for balancing patients' privacy rights with researchers' need for access,'' _Archives of Physical Medicine and Rehabilitation_, vol. 86, no. 9, pp. 1807-1814, 2005.
network alignment refers to the problem of finding a bijective mapping across vertices of two graphs to maximize the number of overlapping edges and/or to minimize the number of mismatched interactions across networks . this problem arises in many fields such as computational biology , social sciences and computer vision and is often cast as an expensive quadratic assignment problem ( qap ) . although spectral methods have received significant attention in different network science problems such as network clustering , the use of spectral techniques in the network alignment problem has been limited partially owing to the lack of principled connections between spectral methods and relaxations of the network alignment optimization . in this paper , we propose a network alignment framework that uses an orthogonal relaxation of the underlying qap in a maximum weight bipartite matching optimization . our method takes into account the ellipsoidal level sets of the quadratic objective function by exploiting eigenvalues and eigenvectors of ( transformations of ) adjacency graphs . our framework not only can be employed to provide a theoretical justification for existing heuristic spectral network alignment methods , but it also leads to a new scalable network alignment algorithm which outperforms existing ones over various synthetic and real networks . moreover , we generalize the objective function of the network alignment problem to consider both matched and mismatched interactions in a standard qap formulation . this can be critical in applications where networks have low similarity and therefore we expect more mismatches than matches . we assess the effectiveness of our proposed method theoretically for certain classes of networks , through simulations over various synthetic network models , and in two real - data applications ; in comparative analysis of gene regulatory networks across human , fly and worm , and in user de - anonymization over twitter follower subgraphs .
there are many combinatorial problems that can be represented as a constrained satisfaction problem ( csp ) in which we are to satisfy a number of constrains defined over a set of discrete variables .an interesting example is the low density parity check code in information theory . here a code word consists of variables that satisfy parity - check constraints .each constraint acts on a few variables and is satisfied if sum of the variables module is zero .another example is finding the fixed points of a random boolean network .again we have boolean variables represented by the nodes of a directed network .the state of a node at a given time step is a logical function of the state of its incoming neighbors in the previous time step .thus a fixed point of the problem is one that satisfies constraints , one for each variable , where a constraint enforces the variable taking the outcome of the logical function .+ from a physical point of view there exist a close relation between these problems with frustrated systems exhibiting glassy behavior , such as spin glasses .the methods and concepts developed in the study of these systems enable us to get a better understanding of the above problems .+ random satisfiability problem is a typical csp that allows us to study combinatorial csp s in a simple framework .it is the first problem whose np - completeness has been proven . the problem is defined over logical variables that are to satisfy logical constraints or clauses .each clause interacts with some randomly selected variables that can appear negated or as such with equal probability .the clause is satisfied if at least one of the variables are true . herethe interest is in the satisfiability of the problem and finding the solutions or ground state configurations that result to the minimum number of violated clauses . for small number of clauses per variable , a typical instance of the problem is satisfiable ,that is there is at least one configuration of variables that satisfies all the clauses . on the other hand , for large a typical instance ofthe problem is unsatisfiable with probability one .we have a sharp transition at that separates sat and unsat phases of the problem .+ the interaction pattern of clauses with variables make a graph that is called the factor graph .notice that larger number of interactions lead to much frustration and thus make the problem harder both in checking its satisfiability and finding its solutions .therefore , one way to make the problem easier is to reduce it to some smaller subproblems with smaller number of interactions .then we could utilize some local search algorithms ( like walksat and its generalizations ) to solve the smaller subproblem .however , for a given number of variables and clauses the chance to find a solution decreases as we remove the interactions from the factor graph . moreover , the number of subproblems with a given number of interactions is exponentially large .these facts make the above reduction procedure inefficient unless we find a way to get around them .+ survey propagation algorithm is a powerful massage passing algorithm that helps us to check the satisfiability of the problem and find its solutions . in ref . we showed that as long as we are in the sat phase we can modify this algorithm to find the satisfiable spanning trees .there , we also showed that there is a correspondence between the set of solutions in the original problem and those of the satisfiable spanning trees . 
indeedthe modified algorithm enabled us to remove some interactions from the problem such that the obtained subproblem is still satisfiable .+ in this paper we are going to investigate the modified algorithm in more details , by studding its performance for different classes of subproblems .there is a free parameter in the algorithm that allows us to control the number of interactions in the subproblems . in this way we can construct ensembles of satisfiable subproblems with different average number of interactions .the largest subproblem is the original problem and the smallest one is a subproblem in which each clause interacts with just one variable .the latter satisfiable subproblems , which we call minimal satisfiable subproblems , result directly to the solutions of the original problem .we will show how the number of solutions ( in replica symmetric approximation ) and the complexity ( in one - step replica symmetric approximation ) varies for different subproblems close to the sat - unsat transition .+ the paper is organized in this manner : first we define more precisely the random k - satisfiability problem and its known features . in section[ 3 ] we briefly introduce belief and survey propagation algorithms that play an essential role in the remaining parts of the paper .section [ 4 ] has been divide to four subsections that deal with satisfiable subproblems .we start by some general arguments and then represent numerical results for different satisfiable subproblems .finally section [ 5 ] is devoted to our conclusion remarks .a random satisfiability problem is defined as follows : we take logical variables .then we construct a formula of clauses joined to each other by logical and .each clause contains a number of randomly selected logical variables . in the random k - sat problemeach clause has a fixed number of variables .these variables , which join to each other by logical or , are negated with probability , otherwise appear as such . for example is a 2-sat formula with clauses and logical variables .a solution of is a configuration of logical variables that satisfy all the clauses .the problem is satisfiable if there is at least one solution or satisfying configuration of variables for the formula .given an instance of the problem , then we are interested to know if it is satisfiable or not . a more difficult problem is to find the ground state configurations which lead to the minimum number of violated clauses . + the relevant parameter that determines the satisfiability of is . in the thermodynamic limit( and ) is satisfied with probability one as long as .moreover , it has been found that for the problem is in the hard - sat phase . at have a dynamical phase transition associated with the break down of replica symmetry .assuming one - step replica symmetry breaking , one obtains and for random 3-sat problems .although this approximation seems to be exact near the sat - unsat transition but it fails close to the dynamical transition where higher order replica symmetry breaking solutions are to be used . + a useful tool in the study of csp s is the factor graph which is a bipartite graph of variable nodes and function nodes ( clauses ) .the structure of this graph is completely determined by a matrix with elements ; if clause contains , it is equal to if appears in and otherwise . in a graph representation , we add an edges between function node and variable node if . 
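To make the factor-graph bookkeeping above concrete, the short sketch below (with an arbitrary sign convention and toy problem sizes that are my own choices, not taken from the paper) draws a random 3-SAT formula and evaluates the cost function, i.e. the number of violated clauses, for a spin configuration.

```python
import random

def random_ksat(n, m, k=3, seed=0):
    """Random k-SAT: each clause picks k distinct variables, each negated with
    probability 1/2.  sign = +1 if the variable appears plain in the clause,
    -1 if it appears negated."""
    rng = random.Random(seed)
    return [{i: rng.choice((-1, +1)) for i in rng.sample(range(n), k)}
            for _ in range(m)]

def energy(clauses, s):
    """Cost function: number of violated clauses for the spin assignment s,
    with s[i] = +1 meaning TRUE.  A clause is violated iff every one of its
    literals evaluates to FALSE, i.e. sign * s[i] == -1 for all its variables."""
    return sum(all(sign * s[i] == -1 for i, sign in c.items()) for c in clauses)

if __name__ == "__main__":
    n, alpha = 50, 3.0
    formula = random_ksat(n, int(alpha * n))
    s = [random.choice((-1, +1)) for _ in range(n)]
    print("violated clauses:", energy(formula, s), "out of", len(formula))
```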
the edges will be shown by a filled line if and by a dashed line if .+ we also define an energy ( or cost function ) for the problem which is the number of violated clauses for a given configuration of variables \equiv \sum_{a=1}^m \prod_{j=1}^k \left(\frac{1-j_{a , i_j^a}s_{i_j^a}}{2}\right).\ ] ] here we introduced spin variables and is the index of variable in clause .a solution of the problem is a configuration of zero energy and the ground states are those configuration having the minimum energy . note that the presence of two variables in the same clause results to direct interactions between the corresponding spin variables .in this section we give a brief description of some massage passing algorithms which help us to get some insights about the solution space of the problem . these algorithm have an iterative nature and can give information for single instances of the problem . for more details about the algorithms and their originsee .+ in the following we restrict ourselves in the sat phase where there are some solutions that satisfy the problem .these solutions are represented by points in the -dimensional configuration space of the variables .if the number of interactions is low enough we can assume a replica symmetric structure for the organization of the solutions in this space .it means that the set of solutions make a single cluster ( or pure state ) in which any two solutions can be connected to each other by a path of finite steps when approaches to infinity .belief propagation algorithm enables us to find the solutions and their number ( the cluster s size or entropy of the pure state ) in this case .consider the set of solutions with members .each member is defined by values for the variables .we consider the probability space made by all the solutions with equal probability .then let us define the warnings as the probability that all variables in clause , except , are in a state that violate . assuming a tree - like structure for the factor graph ( i.e. ignoring the correlations between neighboring variables ) , can be written as where is the probability that variable dose not satisfy clause .we also denote by the set of variables belong to clause and by the set of clauses that variable contributes in . in belief propagation algorithm is given by where here denotes to the set of clauses in that variable appears in them as it appears in clause , see fig .[ f1 ] .the remaining set of clauses are denoted by .starting from initial random values for s , one can update them iteratively according to eqs .[ eta0 ] , [ pauj ] and [ pii ] .if the factor graph is spars enough and the problem is satisfiable then the iteration may converge with no contradictory warnings .utilizing these warnings one can use the following relations to find the entropy of the pure state where ,\\ \nonumber s_i=\log[\pi_i^-+\pi_i^+],\end{aligned}\ ] ] and in these equations are the set of function nodes in with and is the number of clauses in .+ it has been shown that the above algorithm gives exact results for tree - like factor graphs .when we have one - step replica symmetry breaking , the set of solutions organize in a number of well separated clusters with their own internal entropies .suppose there are of such clusters .in a coarse grained picture , we can assign a state to each cluster of the solution space . for a given cluster variable has the same value in all the solutions belong to the cluster . 
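The message-passing equations summarized above can be illustrated with a minimal, from-scratch implementation. The sketch below is a schematic version of plain belief propagation for K-SAT; the clause representation, the damping-free update order, and the stopping rule are illustrative assumptions, and it does not reproduce the paper's exact notation or the survey-propagation (joker-state) extension.

```python
import random
from collections import defaultdict

def bp_sat(clauses, iters=200, tol=1e-6, seed=0):
    """Plain belief propagation for K-SAT in the single-cluster (replica
    symmetric) picture.  clauses[a] is a dict {i: sign}, sign = +1 for a plain
    literal and -1 for a negated one; a spin s[i] = +1 means TRUE.
    eta[(a, i)] estimates the probability that all variables of clause a
    other than i fail to satisfy a."""
    rng = random.Random(seed)
    occ = defaultdict(list)                 # variable -> [(clause, sign), ...]
    for a, c in enumerate(clauses):
        for i, sg in c.items():
            occ[i].append((a, sg))
    eta = {(a, i): rng.random() for a, c in enumerate(clauses) for i in c}
    for _ in range(iters):
        delta = 0.0
        for a, c in enumerate(clauses):
            for i in c:
                prod = 1.0
                for j, sg_j in c.items():
                    if j == i:
                        continue
                    # pu / ps: cavity weights for j not satisfying / satisfying a
                    pu = ps = 1.0
                    for b, sg_b in occ[j]:
                        if b == a:
                            continue
                        if sg_b == sg_j:    # same orientation as in clause a
                            pu *= 1.0 - eta[(b, j)]
                        else:               # opposite orientation
                            ps *= 1.0 - eta[(b, j)]
                    prod *= pu / (pu + ps)
                delta = max(delta, abs(prod - eta[(a, i)]))
                eta[(a, i)] = prod
        if delta < tol:
            break
    return eta

if __name__ == "__main__":
    # toy formula: (x0 or x1) and (not x0 or x2) and (not x1 or not x2)
    clauses = [{0: +1, 1: +1}, {0: -1, 2: +1}, {1: -1, 2: -1}]
    for (a, i), val in sorted(bp_sat(clauses).items()):
        print("eta[clause %d -> x%d] = %.3f" % (a, i, val))
```

Belief propagation of this kind is exact only on tree factor graphs; on loopy instances the fixed point is an approximation, which is why the entropy estimates above are taken within the replica-symmetric picture.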
otherwise , that is if variable is not frozen and alternates between and , .again we can define a probability space in which all the clusters have the same probability . as before is the probability ( in new space ) that all variables in clause , except , are in states that violate clause .notice that we have to take into account the extra state , which is called the joker state , in the calculations . generalizing the belief propagation relations one obtains where but now \prod_{b\in v_a^s(j)}\left(1-\eta_{b\rightarrow j}\right),\\ \nonumber \pi_{j\rightarrow a}^s=[1-\prod_{b\in v_a^s(j)}\left(1-\eta_{b\rightarrow j}\right)]\prod_{b\in v_a^u(j)}\left(1-\eta_{b\rightarrow j}\right).\end{aligned}\ ] ] the above equations can be solved iteratively for s . as long as we are in the sat phase , the above algorithm may converge with no contradictory warnings .then the configurational entropy or complexity of the problem reads where ,\\ \nonumber \sigma_i=\log[\pi_i^-+\pi_i^0+\pi_i^+],\end{aligned}\ ] ] and \prod_{a\in v_+(i)}(1-\eta_{a\rightarrow i}),\\ \nonumber \pi_{i}^+=[1-\prod_{a\in v_+(i)}(1-\eta_{a\rightarrow i})]\prod_{a\in v_-(i)}(1-\eta_{a\rightarrow i}),\\ \nonumber \pi_{i}^0=\prod_{a\in v(i)}(1-\eta_{a\rightarrow i}).\end{aligned}\ ] ] to find a solution of the problem we can follow a simple survey inspired decimation algorithm that works with the biases a variable experience .let us define the probability for variable to be frozen in state in a randomly selected cluster of solutions .similarly we define and . then according to the above definitions we have after a run of survey propagation algorithm we have the above biases and fix the most biased variable , i.e. one with largest . then we can simplify the problem and again run survey propagation algorithm .we repeat the above process until we reach a simple problem with all warnings equal to zero . this problemthen can be solved by a local search algorithm .consider a satisfiable random ksat problem and the associated factor graph with variable nodes , function nodes and edges .all the function nodes have the same degree and a variable node has degree which , in the thermodynamic limit , follows a poisson distribution of mean .if is a solution of the problem , then any function node in the factor graph has at least one neighboring variable node that satisfies it .it means that for any solution we can remove some of the edges in the factor graph while the obtained subproblem is still satisfiable and for all function nodes .obviously we can do this until each function node is only connected to one variable node , the one that satisfies the corresponding clause .so it is clear that for a satisfiable problem there exist many subproblems ranging from the original one , with edges ( or interactions ) , to a minimal one with edges in its factor graph . in generalwe define as the ensemble of satisfiable subproblems defined by the parameter . for example is the ensemble of satisfiable subproblems with edges . + an interesting point is the presence of a correspondence between the solutions of the original problem and solutions of an ensemble of subproblems with edges .obviously any solution of the subproblems in is also a solution of the original problem .moreover , as described above , for any solution we can remove some of the edges until we obtain a subproblem of exactly edges . 
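The edge-removal argument above is easy to state operationally: given a solution, keep for each clause a single edge to one variable that satisfies it. The following sketch does exactly that on a toy formula (the clause encoding is the same hypothetical one used in the earlier sketches).

```python
import random

def minimal_subproblem_from_solution(clauses, s, seed=0):
    """Given clauses[a] = {i: sign} and a satisfying assignment s (s[i] in
    {-1, +1}, +1 = TRUE), keep one edge per clause pointing to a variable that
    satisfies it.  The resulting minimal subproblem is satisfiable by
    construction, since s still satisfies every kept literal."""
    rng = random.Random(seed)
    kept_edges = []
    for a, c in enumerate(clauses):
        satisfiers = [i for i, sign in c.items() if sign * s[i] == 1]
        if not satisfiers:
            raise ValueError("s does not satisfy clause %d" % a)
        kept_edges.append((a, rng.choice(satisfiers)))
    return kept_edges

if __name__ == "__main__":
    clauses = [{0: +1, 1: +1, 2: -1}, {0: -1, 2: +1, 3: +1}, {1: -1, 3: -1, 2: +1}]
    s = [+1, -1, +1, -1]        # one satisfying assignment of this toy formula
    print(minimal_subproblem_from_solution(clauses, s))
```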
in showed that this correspondence holds also for the set of spanning trees which is a subset of .these correspondence relations will allow us to construct the ensembles and to find the solutions of the original problem by solving a subproblem .+ notice that as the number of interactions in a problem decreases we have to pay less computational cost to solve it .in fact , tree - like factor graphs can easily be solved by efficient local search algorithms . and if someone could give the ensemble of minimal subproblems , the whole set of solutions would be available . nowthe main questions are : how can we construct these satisfiable subproblems and what can be said about the properties of these subproblems ? in the following we try to answer these questions by a simple modification of survey propagation algorithm , introduced in .+ for a given ensemble of subproblems we would have members . in a given ensemble ,we assign weight to edge as a measure of its appearance frequency in the ensemble , that is , where if the edge appears in and otherwise .let be a measure defined on the space of all subgraphs with equal probability for all subgraphs that belong to and zero otherwise .this probability can be written in terms of s it is then easy to show that and .suppose that we have obtained s for the ensemble from another way . as an estimate of we write .\ ] ] then we expect that ,\ ] ] gives a good estimate of .+ suppose that we have obtained all the members in ensemble .assuming replica symmetry , we could run belief propagation on each member of the ensemble and obtain its entropy .then we could define , the average of entropy taken over ensemble .similarly we could run survey propagation algorithm and define as the average complexity of subproblems in .actually we will not follow the above procedure and get around the difficult problem of finding all the ensemble members .let us describe our procedure for the case of survey propagation algorithm .generalization to the belief propagation algorithm would be straightforward . + to obtain s for an ensemble we go through a self - consistency approach .we run survey propagation algorithm on the original factor graph but at the same time we take into account the fact that each edge has its own probability of appearing in the ensemble . nowthe survey along edge is updated according to the following rule ,\ ] ] where as before is given by eq.[pauj1 ] with \prod_{b\in v_a^s(j)}\left(1-w_{b , j}\eta_{b\rightarrow j}\right),\\ \nonumber \pi_{j\rightarrow a}^s=[1-\prod_{b\in v_a^s(j)}\left(1-w_{b , j}\eta_{b\rightarrow j}\right)]\prod_{b\in v_a^u(j)}\left(1-w_{b , j}\eta_{b\rightarrow j}\right).\end{aligned}\ ] ] an essential step here is the determination of s in a given ensemble .remember that a given ensemble is a set of satisfiable subproblems which completely define the probabilities along the edges of the factor graph .thus , if with a given set of s we find a large warning sent from to , we expect a high probability for the presence of that edge in the ensemble . herewe make a crucial assumption and use the following ansatz ^{\mu}.\ ] ] that incorporates the above fact .we take as a free parameter and denote the resulted ensembles by . for a given we would have an ensemble of satisfiable subproblems with different number of edges . 
because of the functional form of the above ansatz , the average number of edges in the ensemble decreases by increasing .therefore , to obtain smaller satisfiable subproblems we will need to run the algorithm for larger values of . + starting from initially random s and s we iterate the above equations until ( i ) it converges to some fixed point , ( ii ) or results to contradictory warnings ( iii ) or dose not converge in a predefined limit for the number of iterations . we think that as long as the original problem is satisfiable the algorithm will converge in a finite fraction of times that we run it .+ if the algorithm converges then we can utilize our definition for s and construct satisfiable subproblems . to construct a subproblem in we go through all the edges and select them with probabilities s .we hope that such a subproblem be satisfiable with a considerable probability .moreover , it is reasonable that we pay more computational cost to find smaller satisfiable subproblems which are closer to the solutions of the original problem . in the followingwe will study some properties of satisfiable subproblems including the spanning trees of the original factor graph and the minimal subproblems .+ we start from initially random values of for all the edges . then in each iteration of the algorithm we update and for all the edges according to eqs .[ eta2 ] , [ pii2 ] and [ weta ] .the edges are selected sequentially in a random way .the algorithm converges if for all the edges the differences between new and old values of are less than .we bound the number of iterations from above to and if the algorithm dose not converge in this limit , we say that it diverges .in the following we will work with and .moreover , we consider -sat problems where each clause in the original problem has just variables . + close to the sat - unsat transition .number of variables is and statistical errors are of order .,width=302 ] .statistical errors are of order .,width=302 ] let us first study the convergence properties of the modified algorithm . to this endwe repeat the algorithm for a number of times and define as the fraction of times in which the algorithm converges . in fig .[ f2 ] we display for the modified survey propagation algorithm . it is observed that decreases by increasing . moreover , diminishes more rapidly for larger .it is reasonable because the removal of edges becomes harder as we get closer to the sat - unsat transition . what happens if we increase the problem size ?figure [ f3 ] shows the finite size effects on convergence probability .these effects are significant due to the small problem sizes studied here . moreover ,as expected , the probability decreases more rapidly as increases .statistical errors are about the point s sizes . ,width=302 ] to see how the number of edges changes with we obtained the average weight of an edge , , and its standard deviation , , in converged cases .the average number of edges is given by .[ f4 ] shows how these quantities behave with .we found that as gets larger decreases and finally ( not shown in the figure ) approaches to , the minimum possible value to have a satisfiable subproblem when . + .the results are for and statistical errors are of order ., width=302 ] using our arguments in previous subsection we can obtain an estimate of the number of members in the ensemble , . in fig .[ f5 ] we show how changes with .here we have displayed the results just for small s where we are interested in . for larger s, decreases to its value for . 
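A possible reading of the ensemble-sampling step described above is sketched below: every edge of the factor graph is kept independently with its weight, and satisfiability of the resulting subproblem is then checked by exhaustive enumeration, which is only feasible at toy sizes. The weights used here are placeholders standing in for the converged values returned by the modified algorithm.

```python
import itertools
import random

def sample_subproblem(clauses, w, seed=0):
    """Keep edge (a, i) independently with probability w[(a, i)].  A clause
    that loses all of its edges becomes empty and can no longer be satisfied,
    so useful ensembles keep at least one edge per clause."""
    rng = random.Random(seed)
    return [{i: sign for i, sign in c.items() if rng.random() < w[(a, i)]}
            for a, c in enumerate(clauses)]

def satisfiable(clauses, n):
    """Brute-force satisfiability check; toy sizes only."""
    for bits in itertools.product((-1, +1), repeat=n):
        if all(any(sign * bits[i] == 1 for i, sign in c.items()) for c in clauses):
            return True
    return False

if __name__ == "__main__":
    clauses = [{0: +1, 1: +1, 2: -1}, {0: -1, 2: +1, 3: +1}, {1: -1, 3: -1, 2: +1}]
    # placeholder weights standing in for the converged w_{a,i}
    w = {(a, i): 0.7 for a, c in enumerate(clauses) for i in c}
    sub = sample_subproblem(clauses, w)
    print("subproblem:", sub, "satisfiable:", satisfiable(sub, n=4))
```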
+ for .statistical errors are about the point s sizes.,width=302 ] for .statistical errors are about the point s sizes.,width=302 ] as described in the previous section we can obtain the average entropy of a typical subproblem in by running belief propagation on it .the results have been displayed in fig .similarly the average complexity of a subproblem is obtained by running survey propagation algorithm .figure [ f7 ] shows this quantity for some values of .as the figures shows both and diminish with and ; removing edges from the factor graph and approaching the sat - unsat transition both decrease the number of solutions and complexity . notice that for a fixed value of we can define the threshold where the complexity vanishes .it is a decreasing function of and we know already that .+ suppose that the algorithm converges and returns the weights s for all the edges of the factor graph .it is not difficult to guess that maximum spanning trees have a larger probability to be a satisfiable spanning tree .a maximum spanning tree is a spanning tree of the factor graph with maximum weight . for a given and a converged case we can construct maximum spanning trees in the following way : we start from a randomly selected node in the original factor graph and find the maximum weight among the edges that connect it to the other nodes. then we list the edges having a weight in the -neighborhood of the maximum one and add randomly one of them to the new factor graph .if we repeat the addition of edges times we obtain a spanning tree factor graph which has the maximum weight on its edges . notice that taking a nonzero interval to define the edges of maximum weight at each step , along with the randomness in choosing one of them , allow to construct a large number of maximum spanning trees . in this waywe define as the probability that a maximum spanning tree be satisfiable if the algorithm converges . to find out the satisfiability of the subproblem we use a local search algorithm ( focused metropolis search ) introduced in . .the problem size is and statistical errors are of order .,width=302 ] figure [ f8 ] displays this quantity versus for some values of .the probability to find a satisfiable spanning tree becomes considerable even for a very small and finally approaches to .for instance , if then at almost half the maximum spanning trees are satisfiable .for these parameters the fraction of converged cases is nearly ( see fig .although the algorithm provides a simple way of constructing satisfiable spanning trees but in general finding them is not an easy task .for example for a satisfiable problem with parameters , we found no satisfiable spanning tree among randomly constructed ones ..,width=302 ] figure [ f9 ] shows the satisfiability of maximum spanning trees for some larger problem sizes at .hopefully , by increasing the satisfiability probability enhances for smaller values of and gets more rapidly its saturation value .we hope that this behavior of compensate the decrease in for larger problem sizes .a look at figs .[ f3 ] and [ f9 ] shows that for , and at we have and .it means that of runs we can extract on average satisfiable spanning trees .having a satisfiable spanning tree then we can find its solutions ( which are also the solutions of the original problem ) by any local search algorithm .this , besides the other methods , provides another way of finding the solutions of the original problem . 
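The randomised maximum-spanning-tree construction described above is essentially Prim's algorithm with ties broken inside an epsilon-neighbourhood of the current maximum weight. The sketch below implements that reading on a toy bipartite factor graph; node labels, epsilon, and the edge weights are illustrative assumptions.

```python
import random

def random_max_spanning_tree(nodes, edges, w, eps=1e-3, seed=0):
    """Prim-style randomised maximum spanning tree: grow a tree from a random
    node and, at each step, pick uniformly among the frontier edges whose
    weight lies within eps of the current maximum.  nodes mixes variable and
    function nodes; w maps (u, v) -> weight."""
    rng = random.Random(seed)
    nodes = list(nodes)
    adj = {u: [] for u in nodes}
    for (u, v) in edges:
        adj[u].append(v)
        adj[v].append(u)
        w[(v, u)] = w[(u, v)]               # make the weight lookup symmetric
    in_tree = {rng.choice(nodes)}
    tree = []
    while len(in_tree) < len(nodes):
        frontier = [(u, v) for u in in_tree for v in adj[u] if v not in in_tree]
        if not frontier:
            raise ValueError("factor graph is disconnected")
        wmax = max(w[e] for e in frontier)
        pick = rng.choice([e for e in frontier if w[e] >= wmax - eps])
        tree.append(pick)
        in_tree.add(pick[1])
    return tree

if __name__ == "__main__":
    # toy bipartite factor graph: clause nodes c0..c2, variable nodes x0..x3
    edges = [("c0", "x0"), ("c0", "x1"), ("c1", "x1"), ("c1", "x2"),
             ("c2", "x2"), ("c2", "x3"), ("c2", "x0")]
    rng = random.Random(1)
    w = {e: rng.random() for e in edges}    # placeholder edge weights
    print(random_max_spanning_tree(["x0", "x1", "x2", "x3", "c0", "c1", "c2"],
                                   edges, w))
```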
in fig .[ f10 ] we obtained the entropy of typical satisfiable spanning trees by running belief propagation on them .as the figure shows this entropy decreases linearly with . and . , width=302 ] , and .statistical errors are about the point sizes.,width=302 ] it will be interesting to compare the structural properties of satisfiable spanning trees with those of randomly constructed ones .to this end we obtained the degree distribution of variable and function nodes in the corresponding spanning trees . in fig .[ f11 ] we compare the degree distributions of variables . for function nodes we found no significant difference between the two kinds of spanning trees . however , the degree distribution of variable nodes is slightly broader for the satisfiable spanning trees .there are more low and high degree nodes in these spanning trees .another feature of satisfiable spanning trees is their low diameter compared to the random ones ; take the node having maximum degree as the center of spanning tree .the distance of a node from the center is defined as the number of links in the shortest path connecting the center to the node .we define the largest distance in the network as its diameter .the diameter of the two sets of spanning trees has been compared in fig .satisfiable spanning trees have a diameter which is almost half the diameter of the random spanning trees . + and . , width=302 ] a minimal subproblem has the minimum possible number of edges where each function node is connected to at most one variable node .having such a subproblem it is easy to check its satisfiability .the solutions of a minimal satisfiable subproblem will be the solutions of the original problem .moreover for any solution of the original problem there is at least one minimal satisfiable subproblem .the total number of minimal subproblems is that makes the exhaustive search among them for satisfiable ones an intractable task .+ suppose that the algorithm for a given has been converged and returned the weights s for all the edges .among the edges emanating from function node we choose the one with maximum weight .if there are more than one edge of maximum weight then we select one of them randomly .notice that we treat all the edges in the -neighborhood of the maximum weight in the same manner .for all the function nodes we do the above choice to construct a minimal subproblem .then we check the satisfiability of the subproblem and repeat the process for a large number of minimal subproblems obtained from converged runs of the algorithm .we define as the probability that a minimal subproblem be a satisfiable one .this quantity has been displayed in fig .[ f13 ] . .number of variables is and statistical errors are of order .,width=302 ] again we see that even for very small , is close to .when the parameters are this happens at . according to fig .[ f2 ] , at these parameters we have to run the algorithm on average times to find a converged case . in fig .[ f14 ] we compare for two different problem sizes . as the figure shows there is no significant difference between the two results . + .statistical errors are of order .,width=302 ] having a minimal satisfiable subproblem we will be able to find the solutions directly .any variable node that has at least one emanating edge is frozen in the obtained set of solutions . in fig .[ f15 ] we have showed the fraction of free variables versus . 
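The minimal-subproblem construction and the split into frozen and free variables can be made explicit as follows. In this sketch (same hypothetical clause encoding as before, with placeholder weights), each clause keeps its maximum-weight edge; a conflict between forced values signals an unsatisfiable minimal subproblem, and otherwise the forced assignment already satisfies every original clause, so each of the 2**(number of free variables) completions is a solution of the original problem.

```python
import random

def minimal_subproblem(clauses, w, eps=1e-3, seed=0):
    """For every clause keep the maximum-weight edge (ties within eps broken
    at random).  Returns the kept (clause, variable) pairs."""
    rng = random.Random(seed)
    kept = []
    for a, c in enumerate(clauses):
        wmax = max(w[(a, i)] for i in c)
        ties = [i for i in c if w[(a, i)] >= wmax - eps]
        kept.append((a, rng.choice(ties)))
    return kept

def solve_minimal(clauses, kept, n):
    """A minimal subproblem is satisfiable iff no variable is forced to two
    conflicting values.  The forced assignment satisfies every original
    clause through its kept literal, so any completion of the free variables
    is a solution of the original formula."""
    forced = {}
    for a, i in kept:
        val = clauses[a][i]             # literal satisfied  <=>  s[i] = sign
        if forced.get(i, val) != val:
            return None                 # conflict: not satisfiable
        forced[i] = val
    free = [i for i in range(n) if i not in forced]
    return forced, free

if __name__ == "__main__":
    clauses = [{0: +1, 1: +1, 2: -1}, {0: -1, 2: +1, 3: +1}, {1: -1, 3: -1, 2: +1}]
    rng = random.Random(2)
    w = {(a, i): rng.random() for a, c in enumerate(clauses) for i in c}
    kept = minimal_subproblem(clauses, w)
    print("kept edges:", kept, "->", solve_minimal(clauses, kept, n=4))
```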
notice that is the fraction of frozen variables and gives the number of solutions in a typical satisfiable subproblem .as expected the number of frozen variables increases as we get closer to the sat - unsat transition . andstatistical errors are of order .,width=302 ] finally we look at the degree distribution of variable nodes in the minimal satisfiable subproblems . in fig .[ f16 ] we compare the degree distribution at with the random case in which the edges have been distributed randomly while the other parameters are the same .we observe that the real distribution is broader than the random one .low and high degree nodes have more contribution in the minimal satisfiable subproblems .we encountered the same phenomenon in fig .[ f11 ] that compares degree distribution of satisfiable spanning trees with the random ones . and statistical errors are of order .,width=302 ]in summary we showed that there is a way to reduce a random k - satisfiability problem to some simpler subproblems their solutions are also the solutions of the original problem . to achieve this we modified the known message passing algorithms by assigning some weights to the edges of the factor graph .finding satisfiable subproblems allowed us to compute the expected value of their entropy and complexity . in the case ofsatisfiable spanning trees we could compare their structural properties with those of random spanning trees .we could also construct the minimal satisfiable subproblems and study some interesting features of their factor graph .+ the modified algorithm studied in this paper can be used , besides the the present algorithms , to find the solutions of a constrained satisfaction problem in the sat phase . moreover , it provides a way to find the satisfiable subproblems which is not an easy task .comparing satisfiable subproblems with equivalent random ones might provide some insights about the nature of satisfiable problems and so their solutions .+ due to the computational limitations , the results have been restricted to small problem sizes of order . in this paperwe tried to show the trend by studding different problem sizes .
How can we remove some interactions in a constraint satisfaction problem (CSP) so that it remains satisfiable? In this paper we study a modified survey propagation algorithm that addresses this question for a prototypical CSP, the random K-satisfiability problem. The average number of removed interactions is controlled by a tuning parameter of the algorithm. If the original problem is satisfiable, we are able to construct satisfiable subproblems ranging from the original one down to a minimal one with the minimum possible number of interactions. The minimal satisfiable subproblems directly provide solutions of the original problem.
it is a known fact that every linear homogeneous system of first - order differential equations admits a _ linear superposition rule _ , namely , its general solution can be written in terms of a linear combination of a family of linearly independent particular solutions and a set of constants to be related to initial conditions .nevertheless , it is not so well - known that these systems can be viewed as a particular example of a larger class of first - order systems , the so - called _ lie systems _ , that admit a ` more general ' type of superposition rule which , for instance , need not be a linear combination of particular solutions . additionally , a more general type of superposition rule has been found for the denominated _ quasi - lie systems _ , which include , as particular cases , lie systems .taking now into account that linear homogeneous systems of second - order differential equations also admit a certain type of linear superposition rule , it is natural to ask ourselves what kind of systems of second - order differential equations admit their general solution to be obtained , in a more general way , in terms of certain families of particular solutions and sets of constants .as witnessed by some previous works , the analysis of this question may lead to finding new insights and results in the study of many problems of mathematical and physical interest .motivated by the above facts , the main aim of this work is to present some results concerning the study of superposition rules for second - order differential equations .more specifically , we next provide several definitions of these new types of superposition rules along with some results describing special kinds of second - order differential equations admitting them .as a direct consequence of our results , it turns out that the theories of lie and quasi - lie systems can be applied to analyse second - order differential equations .finally , we carry out various applications of our results to the analysis of a number of second - order differential equations appearing in the physics and mathematical literature , paying special attention to second - order riccati equations .let us here briefly recall some fundamental notions to be used throughout this work . for a detailed description of these and other related topics , see .a _ superposition rule _ for a system of differential equations defined on is a map such that the general solution , , of the system can be written , at least locally , as in terms of any ` generic ' set of particular solutions and a set of constants to be related to the initial conditions of the system .the characterization of those systems of the form ( [ sys ] ) admitting a superposition rule is due to the norwegian mathematician sophus lie . in modern geometric terms , lie s characterization ( the today called _ lie theorem _ ) states that a system of the form ( [ sys ] ) admits a superposition rule , i.e. it is a _ lie system _ , if and only if its associated -dependent vector field , i.e. , can be cast into the form where are a set of vector fields on spanning a finite - dimensional lie algebra of vector fields , the so - called _ vessiot - guldberg lie algebra_. in a more general picture , the -dependent vector field associated with a system of the form ( [ sys ] ) might be cast into the form where the vector fields do not need to close a lie algebra , but they must span a finite - dimensional vector space admitting a subspace such that \subset w ] . 
in such a case , the pair is said to form a _ quasi - lie scheme _ and there exists a group of -dependent transformations , the so - called _ group of the scheme _ , , whose elements transform our initial system ( [ sys ] ) into new ones related to -dependent vector fields of the form , see . moreover , if for a certain -dependent transformation of , the -dependent vector field can be cast into a form similar to ( [ decliesys ] ) , the initial system is said to be a _ quasi - lie system with respect to the scheme . in this case, it can be proved that the general solution of the initial system can be written as in terms of any ` generic ' set of particular solutions and a set of constants . in other words ,system ( [ sys ] ) is said to admit a _-dependent superposition rule _ .motivated by the recent works studying second - order differential equations from the point of view of the theory of lie systems , it turns out that the appropriate definition of superposition rule for second - order differential equations must be as follows . given a system of second - order differential equations on , we say that it admits a _ superposition rule _ if there exists a mapping such that the general solution , , of the system can be written , at least locally , as in terms of any ` generic ' set of particular solutions , their derivatives with respect to the independent variable , and a set of constants to be related to the initial conditions .a useful concept in order to recognize second - order differential equations admitting a superposition rule is the sode lie system notion .let us define this concept .given a second - order differential equation ( [ secor ] ) , we say that it is a _sode lie system _ if its associated first - order system is a lie system .the interest on the sode lie system concept is motivated by the following result , whose demonstration follows straightforwardly from ( * ? ? ?* proposition 1 ) . * proposition 1 . * _ every sode lie system of the form ( [ secor ] ) admits a superposition rule , where is a superposition rule for ( [ asofir ] ) and is the projection map related to the tangent bundle . _recently , the theory of quasi - lie schemes introduced a new type of superposition rule for systems of first - order differential equations that we next define .a system ( [ secor ] ) is said to admit a _-dependent superposition rule _ if there exists a map such that its general solution can be cast into the form in terms of any ` generic ' set of particular solutions , their derivatives with respect to the independent variable , and a set of constants to be related to the initial conditions . in a similar wayas proposition 1 shows the existence of superposition rules for sode lie systems , it can be proved the following result that ensures the existence of -dependent superposition rules for a more general class of systems of second - order differential equations .* proposition 2 . *_ every second - order system ( [ secor ] ) , whose associated first - order system ( [ asofir ] ) is a quasi - lie system with respect to some quasi - lie scheme , admits a -dependent superposition rule of the form , where is a -dependent superposition rule associated with the quasi - lie system ( [ asofir ] ) . 
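As a point of reference for the definitions above, the classical first-order example is the Riccati equation, whose general solution is obtained from any three particular solutions and one constant through a constant cross-ratio. The sketch below verifies this numerically; it is standard Lie-system material used only to illustrate what a nonlinear superposition rule looks like, and the coefficient functions and initial data are arbitrary choices, not an equation studied in this work.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Riccati equation dx/dt = a(t) + b(t)*x + c(t)*x**2: the textbook example of a
# nonlinear superposition rule.  Any solution x is recovered from three
# particular solutions x1, x2, x3 and one constant k through the constant
# cross-ratio ((x - x1)(x2 - x3)) / ((x - x3)(x2 - x1)) = k.
a = lambda t: np.sin(t)
b = lambda t: -0.5
c = lambda t: 0.3 * np.cos(t)
rhs = lambda t, x: a(t) + b(t) * x + c(t) * x ** 2

t_eval = np.linspace(0.0, 1.5, 50)
x0s = [0.0, 0.5, -0.7, 0.2]     # initial values for x1, x2, x3 and a test solution
x1, x2, x3, x = [solve_ivp(rhs, (0.0, 1.5), [v], t_eval=t_eval,
                           rtol=1e-10, atol=1e-12).y[0] for v in x0s]

k = (x - x1) * (x2 - x3) / ((x - x3) * (x2 - x1))
print("cross-ratio spread along t:", k.max() - k.min())   # ~0 up to solver error
```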
_let us briefly explain the application of the previous theoretical results to the analysis of some second - order differential equations appearing in the mathematical and physics literature .our first aim is concerned with analysing the second - order differential equation related to the study of bcklund transformations for the sawada - kotera pde and appearing in the study of the so - called riccati chains .more specifically , we pretend to prove that equation ( [ pi ] ) is a sode lie system and to describe one of its associated superposition rules by means of proposition 1 . in order to do so , note that equation ( [ pi ] ) is such that its associated first - order system ( [ asofir ] ) describes the integral curves of the -dependent vector field , where and .it can be proved that the vector fields and span , along with the vector fields an eight - dimensional lie algebra of vector fields isomorphic to , see .it follows that equation ( [ pi ] ) is a sode lie system and , in virtue of proposition 1 , it admits a superposition rule . following a method to obtain superposition rules described in , it can be proved that the general solution for system ( [ pi ] ) can be written as where and , with , are certain functions depending only on the particular solutions and their derivatives , and are real constants . for a detailed description of this result ,we refer the reader to . apart from system ( [ pi ] ) , many other second - order differential equations are sode lie systems that , when transformed into first - order ones , are related to lie systems whose associated -dependent vector fields are described by linear combinations of the form ( [ decliesys ] ) of vector fields in . among these sode lie systems, we can single out the equations appearing , for instance , in the study of differential equations with maximal number of lie symmetries . as a consequence of proposition 1, all the members of this family admit the same superposition rule as equation ( [ pi ] ) and therefore their general solutions can also be cast into the form ( [ supfirstricc ] ) .let us sketch now how proposition 2 can be used to derive a -dependent superposition rule for the second - order riccati equations of the form with , , , and .for a full description of the following techniques , see . in order to apply proposition 2 to equation ( [ nle ] ) , it is necessary to prove that the system obtained by adding the variable to the equation ( [ nle ] ) , is a quasi - lie system . consider the following set of vector fields a linear space of vector fields and define .the linear space is a two - dimensional abelian lie algebra of vector fields and it can be proved that \subset v_2 $ ] . hence , the pair forms a quasi - lie scheme . note that system ( [ fosor ] ) describes the integral curves of the -dependent vector field of the form ( [ decqualiesys ] ) in terms of the vector fields of .consequently , the so - called group of the scheme associated with can be used to transform the system determined by into a new system determined by a -dependent vector field taking values in , see ( * ? ? 
?* proposition 1 ) .in particular , among the elements of , it is easy to find that the -dependent change of variables , converts system ( [ fosor ] ) into the lie system describing the integral curves of the -dependent vector field in consequence system ( [ fosor ] ) is a quasi - lie system with respect to the scheme and proposition 2 ensures the existence of a -dependent superposition rule for every member of the family ( [ nle ] ) .more specifically , the superposition rule for system gives rise to a -dependent superposition rule for the system associated with , by inverting the previous -dependent change of variables ( * ? ? ?* section 4 ) . from this superposition rule, it follows straightforwardly ( cf . ) that the general solution for every member of the family ( [ nle ] ) can be cast into the form where and , with , are certain -dependent functions depending on any generic family of particular solutions and their derivatives , and are two real constants .several types of superposition rules for systems of second - order differential equations have been introduced , and some classes of these systems have been proved to admit these new superposition rules . as an application ,we have derived -dependent and -independent superposition rules for various second - order differential equations appearing in the physics and mathematical literature .the definitions and methods showed here seem to be generalizable to the setting of higher - order systems of differential equations , where they could provide , for instance , a method to study the differential equations of the so - called _ riccati hierarchy _ appearing in the study of bcklund transformations of several pdes .we aim to investigate these and other related topics in future works. 99 j.f .cariena , j. grabowski and g. marmo , _ lie scheffers systems : a geometric approach , _ bibliopolis , naples , 2000 .cariena , j. grabowski , and g. marmo , _ rep .phys . _ * 60 * , 237258 ( 2007 ) .cariena , j. grabowski , and j. de lucas , _ j. phys .a _ * 42 * , 335206 ( 2009 ) . c. rogers , w.k .schief , and p. winternitz , _ j. math .* 216 * , 246264 ( 1997 ) .cariena and j. de lucas , __ * 3 * , 122 ( 2011 ) .cariena , j. de lucas , and m.f .raada , _ sigma symmetry integrability geom .methods appl . _* 4 * , 031 ( 2008 ) .cariena , j. de lucas , and m.f .raada , `` nonlinear superpositions and ermakov systems '' , in _ differential geometric methods in mechanics and field theory _ , edited by f. cantrijn , m. crampin , and b. langerock , academia press , prague , 2007 , pp .1533 . j.f .cariena , m.f .raada , and m. santander , _ j. math .phys . _ * 46 * , 062703 ( 2005 ) .a.m. grundland and d. levi , _a _ * 32 * , 39313937 ( 1999 ) .
The main purpose of this work is to introduce and analyse some generalizations of diverse superposition rules for first-order differential equations to the setting of second-order differential equations. As a result, we find a way to apply the theories of Lie and quasi-Lie systems to analyse second-order differential equations. In order to illustrate our results, several second-order differential equations appearing in the physics and mathematical literature are analysed, and some superposition rules for these equations are derived by means of our methods. address = Departamento de Física Teórica, Universidad de Zaragoza, Pedro Cerbuna 12, 50009 Zaragoza, Spain address = Institute of Mathematics, Polish Academy of Sciences, ul. Śniadeckich 8, P.O. Box 21, 00-956 Warszawa, Poland
in this paper , we consider the following multi - block structured convex optimization model where the variables and are naturally partitioned into and blocks respectively , and are block matrices , s and s are some closed convex sets , and are smooth convex functions , and s and s are proper closed convex ( possibly nonsmooth ) functions .optimization problems in the form of have many emerging applications from various fields .for example , the constrained lasso ( classo ) problem that was first studied by james _ as a generalization of the lasso problem , can be formulated as where , are the observed data , and , are the predefined data matrix and vector .many widely used statistical models can be viewed as special cases of , including the monotone curve estimation , fused lasso , generalized lasso , and so on . by partitioning the variable into blocks as where as well as other matrices and vectors in correspondingly , and introducing another slack variable , the classo problem can be transformed to which is in the form of .another interesting example is the extended linear - quadratic programming that can be formulated as where and are symmetric positive semidefinite matrices , and is a polyhedral set .apparently , includes quadratic programming as a special case . in general , its objective is a piece - wise linear - quadratic convex function . let , where denotes the indicator function of . then where denotes the convex conjugate of . replacing by and introducing slack variable , we can equivalently write into the form of : for which one can further partition the -variable into a number of disjoint blocks .many other interesting applications in various areas can be formulated as optimization problems in the form of , including those arising from signal processing , image processing , machine learning and statistical learning ; see and the references therein . finally , we mention that computing a point on the central path for a generic convex programming in block variables : boils down to where and indicates the sum of the logarithm of all the components of .this model is again in the form of .our work relates to two recently very popular topics : the _ alternating direction method of multipliers _ ( admm ) for multi - block structured problems and the first - order primal - dual method for bilinear saddle - point problems .below we review the two methods and their convergence results .more complete discussion on their connections to our method will be provided after presenting our algorithm .one well - known approach for solving a linear constrained problem in the form of is the augmented lagrangian method , which iteratively updates the primal variable by minimizing the augmented lagrangian function in and then the multiplier through dual gradient ascent .however , the linear constraint couples and all together , it can be very expensive to minimize the augmented lagrangian function simultaneously with respect to all block variables . utilizing the multi - block structure of the problem, the multi - block admm updates the block variables sequentially , one at a time with the others fixed to their most recent values , followed by the update of multiplier .specifically , it performs the following updates iteratively ( by assuming the absence of the coupled functions and ) : where the augmented lagrangian function is defined as : when there are only two blocks , i.e. 
, , the update scheme in reduces to the classic 2-block admm .the convergence properties of the admm for solving 2-block separable convex problems have been studied extensively .since the 2-block admm can be viewed as a manifestation of some kind of operator splitting , its convergence follows from that of the so - called douglas - rachford operator splitting method ; see .moreover , the convergence rate of the 2-block admm has been established recently by many authors ; see e.g. .although the multi - block admm scheme in performs very well for many instances encountered in practice ( e.g. ) , it may fail to converge for some instances if there are more than 2 block variables , i.e. , .in particular , an example was presented in to show that the admm may even diverge with 3 blocks of variables , when solving a linear system of equations .thus , some additional assumptions or modifications will have to be in place to ensure convergence of the multi - block admm .in fact , by incorporating some extra correction steps or changing the gauss - seidel updating rule , show that the convergence can still be achieved for the multi - block admm .moreover , if some part of the objective function is strongly convex or the objective has certain regularity property , then it can be shown that the convergence holds under various conditions ; see . using some other conditions including the error bound condition and taking small dual stepsizes , or by adding some perturbations to the original problem , authors of establish the rate of convergence results even without strong convexity .not only for the problem with linear constraint , in multi - block admm are extended to solve convex linear / quadratic conic programming problems . in a very recent work , sun , luo and ye propose a randomly permuted admm ( rp - admm ) that basically chooses a random permutation of the block indices and performs the admm update according to the order of indices in that permutation , and they show that the rp - admm converges in expectation for solving non - singular square linear system of equations . in , the authors propose a block successive upper bound minimization method of multipliers ( bsumm ) to solve problem without variable .essentially , at every iteration , the bsumm replaces the nonseparable part by an upper - bound function and works on that modified function in an admm manner . under some error bound conditions and a diminishing dual stepsize assumption , the authors are able to show that the iterates produced by the bsumm algorithm converge to the set of primal - dual optimal solutions . along a similar direction ,cui et al . introduces a quadratic upper - bound function for the nonseparable function to solve 2-block problems ; they show that their algorithm has an convergence rate , where is the number of total iterations .very recently , has proposed a set of variants of the admm by adding some proximal terms into the algorithm ; the authors have managed to prove convergence rate for the 2-block case , and the same results applied for general multi - block case under some strong convexity assumptions .moreover , shows the convergence of the admm for 2-block problems by imposing quadratic structure on the coupled function and also the convergence of rp - admm for multi - block case where all separable functions vanish ( i.e. 
) .recently , the work generalizes the first - order primal - dual method in to a randomized method for solving a class of saddle - point problems in the following form : where and .let and .then it is easy to see that is a saddle - point reformulation of the multi - block structured optimization problem which is a special case of without variable or the coupled function . at each iteration ,the algorithm in chooses one block of -variable uniformly at random and performs a proximal update to it , followed by another proximal update to the -variable .more precisely , it iteratively performs the updates : [ alg : r1st - pd ] where is a randomly selected block , and and are certain parameters .when there is only one block of -variable , i.e. , , the scheme in becomes exactly the primal - dual method in .assuming the boundedness of the constraint sets and , shows that under weak convexity , convergence rate result of the scheme can be established by choosing appropriate parameters , and if s are all strongly convex , the scheme can be accelerated to have convergence rate by adapting the parameters . *we propose a randomized primal - dual coordinate update algorithm to solve problems in the form of .the key feature is to introduce randomization as done in to the multi - block admm framework . unlike the random permutation scheme as previously investigated in ,we simply choose a subset of blocks of variables based on the uniform distribution .in addition , we perform a proximal update to that selected subset of variables . with appropriate proximal terms ( e.g. , the setting in ) , the selected block variables can be decoupled , and thus the updates can be done in parallel . *more general than , we can accommodate coupled terms in the objective function in our algorithm by linearizing such terms . by imposing lipschitz continuity condition on the partial gradient of the coupled functions and and using proximal terms ,we show that our method has an expected convergence rate for solving problem under mere convexity assumption .* we show that our algorithm includes several existing methods as special cases such as the scheme in and the proximal jacobian admm in .our result indicates that the convergence rate of the scheme in can be shown without assuming boundedness of the constraint sets .in addition , the same order of convergence rate of the proximal jacobian admm can be established in terms of a better measure . *furthermore , the linearization scheme allows us to deal with stochastic objective function , for instance , when the function is given in a form of expectation ] denotes the set .we use and as index sets , while is also used to denote the identity matrix ; we believe that the intention is evident in the context . given ,we denote : * block - indexed variable : ; * block - indexed set : ; * block - indexed function : ; * block - indexed gradient : ; * block - indexed matrix : ] with and ] and ] and ] with and any subset of ] , the algorithm is _ not _ the jacobian admm as discussed in since the block variables are still coupled in the augmented lagrangian function . to make it parallelizable , a proximal term is needed . 
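To convey the flavour of the randomized primal-dual coordinate updates discussed above, the sketch below runs a pick-one-block-at-random proximal step followed by a dual step on a small lasso-type instance of the linearly constrained model. The problem data, the stepsize, and the damped dual update are illustrative assumptions; this is not the exact algorithm nor the parameter conditions required by the convergence results of this paper.

```python
import numpy as np

def soft_threshold(v, tau):
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def rand_primal_dual(C, d, A, b, blocks, rho=1.0, eta=None, lam_g=0.1,
                     iters=5000, seed=0):
    """Illustrative randomized primal-dual coordinate update for
        min 0.5*||Cx - d||^2 + lam_g*||x||_1   s.t.  A x = b,
    a simple instance of the block-structured model.  Each iteration picks one
    block uniformly at random, takes a proximal gradient step on the augmented
    Lagrangian (the l1 part handled by its prox), then performs a damped dual
    ascent step.  Stepsizes here are heuristic placeholders."""
    rng = np.random.default_rng(seed)
    n = C.shape[1]
    x = np.zeros(n)
    lam = np.zeros(A.shape[0])
    if eta is None:
        eta = 1.0 / (np.linalg.norm(C, 2) ** 2 + rho * np.linalg.norm(A, 2) ** 2)
    for _ in range(iters):
        i = blocks[rng.integers(len(blocks))]          # indices of the chosen block
        r = A @ x - b
        grad_i = C[:, i].T @ (C @ x - d) + A[:, i].T @ lam + rho * A[:, i].T @ r
        x[i] = soft_threshold(x[i] - eta * grad_i, eta * lam_g)
        lam += rho * (A @ x - b) / len(blocks)         # damped dual step
    return x, lam

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    C, d = rng.standard_normal((30, 12)), rng.standard_normal(30)
    A, b = rng.standard_normal((3, 12)), rng.standard_normal(3)
    blocks = [np.arange(j, j + 3) for j in range(0, 12, 3)]   # four 3-dim blocks
    x, lam = rand_primal_dual(C, d, A, b, blocks)
    print("feasibility ||Ax - b|| =", np.linalg.norm(A @ x - b))
```

Running the example prints the feasibility residual, which should shrink as the iteration count grows; tightening it further is mainly a matter of tuning rho, the primal stepsize, and the dual damping.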
then our result recovers the convergence of the proximal jacobian admm introduced in .in fact , the above theorem strengthens the convergence result in by establishing an rate of convergence in terms of the feasibility measure and the objective value .when the -variable is simple to update , it could be beneficial to renew the whole of it at every iteration , such as the problem . in this subsection , we consider the case that there are multiple -blocks but a single -block ( or equivalently , ) , and we establish a sublinear convergence rate result with a different technique of dealing with the -variable .[ thm : rate-1yw ] let be the sequence generated from algorithm [ alg : rpdc ] with and , where assume let where then , under assumptions [ assump1 ] , [ assump2 ] and [ assump3 ] , we have \right|,\,{\mathbb{e}}\|a\hat{x}^t+b\hat{y}^t - b\|\right\}\\ & \le & \frac{1}{1+\theta t}\left[(1-\theta)\left(\phi({x}^0,{y}^0)-\phi({x}^*,{y}^*)+\frac{\rho_x}{2}\|{r}^{0}\|^2\right)+\frac{1}{2}\|x^0-x^*\|_{\hat{p}}^2+\frac{1}{2}\|y^0-y^*\|_{\theta\tilde{q}+\rho_x b^\top b}^2\right.\notag\\ & & \hspace{1.5 cm } + \left.\frac{\max\{(1+\|\lambda^*\|)^2 , 4\|\lambda^*\|^2\}}{2\rho_x}\right ] \notag \ ] ] where , and is an arbitrary primal - dual solution .it is easy to see that if , the result in theorem [ thm : rate-1yw ] becomes exactly the same as that in theorem [ thm : rate - cvx ] below . in general , they are different because the conditions in on and are different from those in . in this subsection , we consider the most general case where both and have multi - block structure .assuming , we can still have the convergence rate .the assumption can be made without losing generality , e.g. , by adding zero components if necessary ( which is essentially equivalent to varying the probabilities of the variable selection ) .[ thm : rate - cvx ]let be the sequence generated from algorithm [ alg : rpdc ] with the parameters satisfying assume and satisfy one of the following conditions [ matpqhat ] let then , under assumptions [ assump1 ] , [ assump2 ] and [ assump3 ] , we have \right|,\,{\mathbb{e}}\|a\hat{x}^t+b\hat{y}^t - b\|\right\}\\ & \le & \frac{1}{1+\theta t}\left[(1-\theta)\left(\phi({x}^0,{y}^0)-\phi({x}^*,{y}^*)+\rho_x\|r^0\|^2\right ) +\frac{1}{2}\|x^0-x^*\|_{\tilde{p}}^2+\frac{1}{2}\|y^0-y^*\|_{\hat{q}}^2\right.\nonumber\\ & & \hspace{1.5cm}\left.+\frac{\max\{(1+\|\lambda^*\|)^2 , 4\|\lambda^*\|^2\}}{2\rho_x}\right ] \nonumber\end{aligned}\ ] ] where , and is an arbitrary primal - dual solution .when , the two conditions in become the same . however , in general , neither of the two conditions in implies the other one . roughly speaking , for the case of and , the one in can be weaker , and for the case of and , the one in is more likely weaker .in addition , provides an explicit way to choose block diagonal and by simply setting and s to the lower bounds there .in this section , we extend our method to solve a stochastic optimization problem where the objective function involves an expectation . specifically , we assume the coupled function to be in the form of where is a random vector . for simplicitywe assume , namely , we consider the following problem one can easily extend our analysis to the case where and is also stochastic .an example of is the penalized and constrained regression problem that includes as a special case . 
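As discussed next, for objectives given in expectation form only a stochastic approximation of the gradient is assumed to be available. The snippet below sketches a mini-batch oracle of that kind: it is unbiased, and its error has a bounded second moment whenever each sample gradient does. The callables `grad_f_sample` and `draw_xi` are hypothetical user-supplied components, not part of the paper.

```python
import numpy as np

def stochastic_gradient(grad_f_sample, x, draw_xi, batch=1):
    """Mini-batch estimator of the gradient of f(x) = E_xi[f(x, xi)].
    grad_f_sample(x, xi) returns the gradient of one sample function at x,
    and draw_xi() draws one realization of the random vector xi; both are
    assumed oracles. Averaging over the batch keeps the estimate unbiased
    and reduces its variance."""
    g = np.zeros_like(x)
    for _ in range(batch):
        g += grad_f_sample(x, draw_xi())
    return g / batch
```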
due to the expectation form of ,it is natural that the exact gradient of is not available or very expensive to compute .instead , we assume that its stochastic gradient is readily accessible . by some slight abuse of the notation ,we denote ,\quad h({w})=\left[\begin{array}{c}-{a}^\top{\lambda}\\ { a}{x}-{b}\end{array}\right].\ ] ] a point is a solution to _ if and only if _ there exists such that [ 1st - opt - s ] modifying algorithm [ alg : rpdc ] to , we present the stochastic primal - dual coordinate update method of multipliers , summarized in algorithm [ alg : srpdc ] , where is a stochastic approximation of .the strategy of block coordinate update with stochastic gradient information was first proposed in , which considered problems without linear constraint .we make the following assumption on the stochastic gradient .[ assump - error ] let .there exists a constant such that for all , [ ass - error1 ] ={0},\label{ass - error11}\\ & { \mathbb{e}}\|{\delta}^k\|^2\le\sigma^2.\label{ass - error12}\end{aligned}\ ] ] following the proof of lemma [ lem:1step ] and also noting ={\mathbb{e}}_{i_k}({x}^k-{x}^{k+1})^\top{\delta}^k,\ ] ] we immediately have the following result .let be the sequence generated from algorithm [ alg : srpdc ] where is given in with .then \cr & & + { \mathbb{e}}_{i_k}({x}^{k+1}-{x})^\top\left(\hat{{p}}-\rho a^\top a+\frac{{i}}{\alpha_k}\right)({x}^{k+1}-{x}^k ) -\frac{l_f}{2}{\mathbb{e}}_{i_k}\|{x}^k-{x}^{k+1}\|^2+{\mathbb{e}}_{i_k}({x}^{k+1}-{x}^k)^\top{\delta}^k\cr & \le & \left(1-\frac{n}{n}\right)\big[f({x}^k)-f({x})+({x}^{k}-{x})^\top(-{a}^\top{\lambda}^k)+ \rho({x}^{k}-{x})^\top{a}^\top{r}^{k}\big].\end{aligned}\ ] ] the following theorem is a key result , from which we can choose appropriate to obtain the convergence rate .[ thm - s - vx ] let be the sequence generated from algorithm [ alg : srpdc ] .let and denote assume is nonincreasing , and [ bd - x - s ] let then , under assumptions [ assump1 ] , [ assump2 ] , [ assump3 ] and [ assump - error ] , we have \cr & \le & ( 1-\theta)\alpha_0\left[f({x}^0)-f({x}^*)\right]+\frac{\alpha_0}{2}\|{x}^{0}-{x}^*\|_{\hat{{p}}-\rho a^\top a}^2+\frac{1}{2}\|{x}^0-{x}^*\|^2\cr & & + \left|\frac{\alpha_0\beta_1}{2\alpha_1}-\frac{(1-\theta)\beta_1}{2}\right|\gamma^2+\sum_{k=0}^t\frac{\alpha_k^2}{2}{\mathbb{e}}\|{\delta}^k\|^2.\end{aligned}\ ] ] the following proposition gives sublinear convergence rate of algorithm [ alg : srpdc ] by specifying the values of its parameters .the choice of depends on whether we fix the total number of iterations .[ prop : rate - s ] let be the sequence generated from algorithm [ alg : srpdc ] with given in , satisfying , and the initial point satisfying and .let be +\frac{1}{2}\|x^0-x^*\|_{d_x}^2+\frac{\alpha_0}{2\rho}\max\{(1+\|\lambda^*\|)^2 , 4\|\lambda^*\|^2\},\end{aligned}\ ] ] where is a primal - dual solution , and . 1 .if for a certain , then for , \right|,\ , { \mathbb{e}}\|a\hat{x}^t - b\|\right\}\le \frac{c_0}{\theta\alpha_0\sqrt{t}}+\frac{\alpha_0(\log t+2)\sigma^2}{2\theta\sqrt{t}}.\ ] ] 2 . 
if the number of maximum number of iteration is fixed a priori , then by choosing with any given , we have \right|,\ , { \mathbb{e}}\|a\hat{x}^t - b\|\right\}\le\frac{c_0}{\theta\alpha_0\sqrt{t}}+\frac{\alpha_0\sigma^2}{\theta\sqrt{t}}.\ ] ] when , we can show that and hold for ; see appendix [ app : bd - x - s ] .hence , the result in follows from , the convexity of , lemma [ equiv - rate ] with , and the inequalities when is a constant , the terms on the left hand side of and on the right hand side of are both zero , so they are satisfied . hence , the result in immediately follows by noting and .the sublinear convergence result of algorithm [ alg : srpdc ] can also be shown if is nondifferentiable convex and lipschitz continuous . indeed , if is lipschtiz continuous with constant , i.e. , then , where is a subgradient of at .hence , +{\mathbb{e}}_{i_k}({x}^k-{x}^{k+1})^\top\big(\tilde{\nabla}f({x}^k)-\tilde{\nabla } f({x}^{k+1})\big)\cr & = & \frac{n - n}{n}(f({x})-f({x}^k))+{\mathbb{e}}_{i_k}[f({x})-f({x}^{k+1})]+{\mathbb{e}}_{i_k}({x}^k-{x}^{k+1})^\top\big(\tilde{\nabla}f({x}^k)-\tilde{\nabla } f({x}^{k+1})\big).\end{aligned}\ ] ] now following the proof of lemma [ lem:1step ] , we can have a result similar to , and then through the same arguments as those in the proof of theorem [ thm - s - vx ] , we can establish sublinear convergence rate of .in this section , we test the proposed randomized primal - dual method on solving the nonnegativity constrained quadratic programming ( ncqp ) : where , and is a symmetric positive semidefinite ( psd ) matrix . there is no -variable , and it falls into the case in theorem [ thm : rate-3x ] .we perform two experiments on a macbook pro with 4 cores .the first experiment demonstrates the parallelization performance of the proposed method , and the second one compares it to other methods .* parallelization .* this test is to illustrate the power unleashed in our new method , which is flexible in terms of parallel and distributive computing .we set and generate , where the components of follow the standard gaussian distribution .the matrix and vectors are also randomly generated .we treat every component of as one block , and at every iteration we select and update blocks , where is the number of used cores .figure [ parallel - comp ] shows the running time by using 1 , 2 , and 4 cores , where the optimal value is obtained by calling matlab function ` quadprog ` with tolerance . from the figure, we see that our proposed method achieves nearly linear speed - up .* comparison to other methods . * in this experiment , we compare the proposed method to the linearized alm and the cyclic linearized admm methods .we set and generate , where the components of follow standard gaussian distribution .note that is singular , and thus is not strongly convex .we partition the variable into 100 blocks , each with 50 components . at each iteration of our method ,we randomly select one block variable to update .figure [ random - qp ] shows the performance by the three compared methods , where one epoch is equivalent to updating 100 blocks once . 
from the figure, we see that our proposed method is comparable to the cyclic linearized admm and significantly better than the linearized alm .although the cyclic admm performs well on this example , in general it can diverge if the problem has more than two blocks ; see .in this section , we discuss how algorithms [ alg : rpdc ] and [ alg : srpdc ] are related to several existing methods in the literature , and we also compare their convergence results .it turns out that the proposed algorithms specialize to several known methods or their variants in the literature under various specific conditions .therefore , our convergence analysis recovers some existing results as special cases , as well as provides new convergence results for certain existing algorithms such as the jacobian proximal parallel admm and the primal - dual scheme in .the randomized proximal coordinate descent ( rpcd ) was proposed in , where smooth convex optimization problems are considered .it was then extended in to deal with nonsmooth problems that can be formulated as where . toward solving , at each iteration ,the rpcd method first randomly selects one block and then performs the update : where is the lipschitz continuity constant of the partial gradient . with more than oneblocks selected every time , has been further extended into parallel coordinate descent in .when there is no linear constraint and no -variable in , then algorithm [ alg : rpdc ] reduces to the scheme in if , i.e. , only one block is chosen , and , and to the parallel coordinate descent in if and .although the convergence rate results in are non - ergodic , we can easily strengthen our result to a non - ergodic one by noticing that implies nonincreasing monotonicity of the objective if algorithm [ alg : rpdc ] is applied to . for solving the problem with a stochastic , proposes a stochastic block proximal gradient ( sbpg ) method , which iteratively performs the update in with replaced by a stochastic approximation . if is lipschitz differentiable , then an ergodic convergence rate was shown . setting , we reduce algorithm [ alg : srpdc ] to the sbpg method , and thus our convergence results in proposition [ prop : rate - s ] recover that in . without coupled functions or proximal terms , algorithm [ alg : rpdc ]can be regarded as a randomized variant of the multi - block admm scheme in .while multi - block admm can diverge if the problem has three or more blocks , our result in theorem [ thm : rate-3x ] shows that convergence rate is guaranteed if at each iteration , one randomly selected block is updated , followed by an update to the multiplier . note that in the case of no coupled function and , indicates that we can choose , i.e. without proximal term .hence , randomization is a key to convergence .when there are only two blocks , admm has been shown ( e.g. , ) to have an ergodic convergence rate .if there are no coupled functions , and both indicate that we can choose if , i.e. , all and blocks are selected . thus according to, we can set , in which case algorithm [ alg : rpdc ] reduces to the classic 2-block admm .hence , our results in theorems [ thm : rate-1yw ] and [ thm : rate - cvx ] both recover the ergodic convergence rate of admm for two - block convex optimization problems . 
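For reference, the classic 2-block ADMM recovered in this special case can be sketched as follows. The subproblem oracles `argmin_x` and `argmin_y` (minimizing the augmented Lagrangian over one block with the other block and the multiplier fixed) are assumed to be available, and the multiplier update sign matches a Lagrangian containing a -λᵀ(Ax+By-b) term; these conventions are assumptions of the sketch.

```python
import numpy as np

def admm_two_block(argmin_x, argmin_y, A, B, b, rho=1.0, iters=100):
    """Minimal sketch of the classic 2-block ADMM for
    min f(x) + g(y)  s.t.  A x + B y = b,
    with user-supplied augmented-Lagrangian subproblem oracles."""
    x = np.zeros(A.shape[1])
    y = np.zeros(B.shape[1])
    lam = np.zeros(b.shape[0])
    for _ in range(iters):
        x = argmin_x(y, lam)                   # first block (Gauss-Seidel order)
        y = argmin_y(x, lam)                   # second block
        lam = lam - rho * (A @ x + B @ y - b)  # multiplier update
    return x, y, lam
```

With more than two blocks, the same Gauss-Seidel sweep may diverge, as recalled earlier, which is what motivates the randomized block-selection rule analyzed in this paper.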
in , the proximal jacobian parallel admm ( prox - jadmm ) was proposed to solve the linearly constrained multi - block separable convex optimization model at each iteration , the prox - jadmm method performs the updates for in parallel : and then updates the multiplier by where and is a damping parameter . by choosing approapriate parameters , established convergence rate of order based on norm square of the difference of two consecutive iterates .if there is no -variable or the coupled function in , setting , p^k=\mathrm{blkdiag}(\rho_x a_1^\top a_1+p_1,\cdots,\rho_x a_n^\top a_n+p_n)-\rho_x a^\top a\succeq 0,\,\forall k ] , and using lemmas [ lem : xy - rate ] and [ equiv - rate ] with , we obtain the result . using and , applying to the cross terms , and also noting the definition of and in, we have \cr & & + \sum_{k=0}^t\rho_x{\mathbb{e}}({x}^{k+1}-{x})^\top{a}^\top{b}({y}^{k+1}-{y}^k ) + ( 1-\theta)\sum_{k=0}^t\rho_y{\mathbb{e}}({y}^{k}-{y})^\top{b}^\top{a}({x}^{k+1}-{x}^k)\cr & & -\sum_{k=0}^t{\mathbb{e}}({x}^{k+1}-{x})^\top\tilde{{p}}({x}^{k+1}-{x}^k)+\frac{l_f}{2}\sum_{k=0}^t{\mathbb{e}}\|{x}^k-{x}^{k+1}\|^2\cr & & -\sum_{k=0}^t{\mathbb{e}}({y}^{k+1}-{y})^\top\tilde{{q}}({y}^{k+1}-{y}^k)+\frac{l_g}{2}\sum_{k=0}^t{\mathbb{e}}\|{y}^k-{y}^{k+1}\|^2\cr & = & -\frac{\theta}{2\rho}{\mathbb{e}}\left[\|\tilde{{\lambda}}^{t+1}-{\lambda}\|^2-\|{\lambda}^0-{\lambda}\|^2+\sum_{k=0}^{t-1}\|{\lambda}^{k+1}-{\lambda}^k\|^2+\|\tilde{{\lambda}}^{t+1}-{\lambda}^t\|^2\right]\cr & & + \frac{\rho_x}{\rho}\sum_{k=0}^t { \mathbb{e}}({\lambda}^k-{\lambda}^{k+1})^\top{b}({y}^{k+1}-{y}^k)+\frac{(1-\theta)\rho_y}{\rho}\sum_{k=0}^t{\mathbb{e}}({\lambda}^{k-1}-{\lambda}^{k})^\top{a}({x}^{k+1}-{x}^k)\cr & & -\frac{\theta\rho_y}{2}{\mathbb{e}}\left(\|{x}^0-{x}\|_{{a}^\top{a}}^2-\|{x}^{t+1}-{x}\|_{{a}^\top{a}}^2\right)+\frac{(2-\theta)\rho_y}{2}\sum_{k=0}^t{\mathbb{e}}\|{x}^{k+1}-{x}^k\|_{{a}^\top{a}}^2\cr & & -\frac{1}{2}{\mathbb{e}}\left(\|{x}^{t+1}-{x}\|_{\hat{{p}}}^2-\|{x}^0-{x}\|_{\hat{{p}}}^2 + \sum_{k=0}^t\|{x}^{k+1}-{x}^k\|_{\hat{{p}}}^2\right)+\frac{l_f}{2}\sum_{k=0}^t{\mathbb{e}}\|{x}^k-{x}^{k+1}\|^2\cr & & -\frac{1}{2}{\mathbb{e}}\left(\|{y}^{t+1}-{y}\|_{\hat{{q}}}^2-\|{y}^0-{y}\|_{\hat{{q}}}^2 + \sum_{k=0}^t\|{y}^{k+1}-{y}^k\|_{\hat{{q}}}^2\right)+\frac{l_g}{2}\sum_{k=0}^t{\mathbb{e}}\|{y}^k-{y}^{k+1}\|^2,\end{aligned}\ ] ] where we have used the conditions in . 
by young s inequality, we have that for , and for , plugging and and also noting , we can upper bound the right hand side of by -\frac{\theta\rho_y}{2}{\mathbb{e}}\left(\|{x}^0-{x}\|_{{a}^\top{a}}^2-\|{x}^{t+1}-{x}\|_{{a}^\top{a}}^2\right)\cr & & + \left(\frac{(1-\theta)(2-\theta)}{2\theta^2}\rho_x+\frac{(2-\theta)\rho_y}{2}\right)\sum_{k=0}^t { \mathbb{e}}\|{x}^{k+1}-{x}^k\|_{{a}^\top{a}}^2+\frac{(2-\theta)\rho_y}{2\theta^2}\sum_{k=0}^t{\mathbb{e}}\|{y}^{k+1}-{y}^k\|_{{b}^\top{b}}^2\cr & & -\frac{1}{2}{\mathbb{e}}\left(\|{x}^{t+1}-{x}\|_{\hat{{p}}}^2-\|{x}^0-{x}\|_{\hat{{p}}}^2 + \sum_{k=0}^t\|{x}^{k+1}-{x}^k\|_{\hat{{p}}}^2\right)+\frac{l_f}{2}\sum_{k=0}^t{\mathbb{e}}\|{x}^k-{x}^{k+1}\|^2\cr & & -\frac{1}{2}{\mathbb{e}}\left(\|{y}^{t+1}-{y}\|_{\hat{{q}}}^2-\|{y}^0-{y}\|_{\hat{{q}}}^2 + \sum_{k=0}^t\|{y}^{k+1}-{y}^k\|_{\hat{{q}}}^2\right)+\frac{l_g}{2}\sum_{k=0}^t{\mathbb{e}}\|{y}^k-{y}^{k+1}\|^2\cr & \overset{\eqref{para - mat}}\le & \frac{1}{2}\left(\|{x}^0-{x}\|_{\hat{{p}}-\theta\rho_xa^\top a}^2+\|{y}^0-{y}\|_{\hat{{q}}}^2\right)+\frac{\theta}{2\rho}{\mathbb{e}}\|\lambda^0-\lambda\|^2.\label{ineq - k4-w - sub2}\end{aligned}\ ] ] in addition , note that hence , if and satisfy , then also holds . combining , and yields \cr & & + \theta\sum_{k=0}^{t-1}{\mathbb{e}}\left[\phi({x}^{k+1},{y}^{k+1})-\phi({x},{y})+({w}^{k+1}-{w})^\top h({w}^{k+1})\right]\cr & \le & ( 1-\theta)\left[\phi({x}^0,{y}^0)-\phi({x},{y})\right]\cr & & + ( 1-\theta)\left[({x}^{0}-{x})^\top(-{a}^\top{\lambda}^0)+ \rho_x({x}^{0}-{x})^\top{a}^\top{r}^{0}+({y}^{0}-{y})^\top(-{b}^\top{\lambda}^0)+ \rho_y({y}^{0}-{y})^\top{b}^\top{r}^{0}\right]\cr & & + \frac{1}{2}\left(\|{x}^0-{x}\|_{\hat{{p}}-\theta\rho_xa^\top a}^2+\|{y}^0-{y}\|_{\hat{{q}}}^2\right)+\frac{\theta}{2\rho}{\mathbb{e}}\|\lambda^0-\lambda\|^2 . \ ] ] applying the convexity of and the properties of , we have \cr & \overset{\eqref{equivhw}}= & ( 1+\theta t){\mathbb{e}}\left[\phi(\hat{{x}}^t,\hat{{y}}^t)-\phi({x},{y})+(\hat{{w}}^{t+1}-{w})^\top h(\hat{{w}}^{t+1})\right]\cr & \overset{\eqref{prop - mas - h}}\leq&{\mathbb{e}}\left[\phi({x}^{t+1},{y}^{t+1})-\phi({x},{y})+(\tilde{{w}}^{t+1}-{w})^\top h(\tilde{{w}}^{t+1})\right]\cr & & + \theta\sum_{k=0}^{t-1}{\mathbb{e}}\left[\phi({x}^{k+1},{y}^{k+1})-\phi({x},{y})+({w}^{k+1}-{w})^\top h({w}^{k+1})\right].\end{aligned}\ ] ] now combining and , we have \cr & \le & ( 1-\theta)\left[\phi({x}^0,{y}^0)-\phi({x},{y})\right]\cr & & + ( 1-\theta)\left[({x}^{0}-{x})^\top(-{a}^\top{\lambda}^0)+ \rho_x({x}^{0}-{x})^\top{a}^\top{r}^{0}+({y}^{0}-{y})^\top(-{b}^\top{\lambda}^0)+ \rho_y({y}^{0}-{y})^\top{b}^\top{r}^{0}\right]\cr & & + \frac{1}{2}\left(\|{x}^0-{x}\|_{\hat{{p}}-\theta\rho_xa^\top a}^2+\|{y}^0-{y}\|_{\hat{{q}}}^2\right)+\frac{\theta}{2\rho}{\mathbb{e}}\|\lambda^0-\lambda\|^2 . 
\ ] ] assume .it holds that \notag\\ & & -\sum_{k=0}^{t-1}\frac{\alpha_k\beta_{k+1}}{2\alpha_{k+1}}\left[\|{\lambda}^{k+1}-{\lambda}\|^2-\|{\lambda}^{k}-{\lambda}\|^2+\|{\lambda}^{k+1}-{\lambda}^k\|^2\right]\nonumber\\ & \le&-\sum_{k=0}^{t-1}\frac{\beta_{k+1}}{2}\|{\lambda}^{k+1}-{\lambda}^k\|^2 + \sum_{k=1}^t\frac{(1-\theta)\beta_k}{2}\|{\lambda}^{k}-{\lambda}^{k-1}\|^2 \nonumber \\ & & -\sum_{k=0}^{t-1}\frac{\alpha_k\beta_{k+1}}{2\alpha_{k+1}}\|{\lambda}^{k+1}-{\lambda}\|^2 -\sum_{k=1}^t\frac{(1-\theta)\beta_k}{2}\|{\lambda}^{k-1}-{\lambda}\|^2\nonumber\\ & & + \frac{\alpha_0\beta_1}{2\alpha_1}\|{\lambda}^0-{\lambda}\|^2+\sum_{k=1}^{t-1}\frac{\alpha_k\beta_{k+1}}{2\alpha_{k+1}}\|{\lambda}^{k}-{\lambda}\|^2 + \sum_{k=1}^t\frac{(1-\theta)\beta_k}{2}\|{\lambda}^{k}-{\lambda}\|^2\nonumber\\ & = & -\sum_{k=0}^{t-1}\frac{\theta\beta_{k+1}}{2}\|{\lambda}^{k+1}-{\lambda}^k\|^2 + \left(\frac{\alpha_0\beta_1}{2\alpha_1}-\frac{(1-\theta)\beta_1}{2}\right)\|{\lambda}^0-{\lambda}\|^2 -\left(\frac{\alpha_{t-1}\beta_t}{2\alpha_t}-\frac{(1-\theta)\beta_t}{2}\right)\|{\lambda}^{t}-{\lambda}\|^2\nonumber\\ & & -\sum_{k=1}^{t-1}\left(\frac{\alpha_{k-1}\beta_k}{2\alpha_k } + \frac{(1-\theta)\beta_{k+1}}{2}-\frac{\alpha_k\beta_{k+1}}{2\alpha_{k+1}}-\frac{(1-\theta)\beta_k}{2}\right)\|{\lambda}^{k}-{\lambda}\|^2.\end{aligned}\ ] ] by the update formula of in , we have from that \cr & & + { \mathbb{e}}\left[\frac{({\lambda}^{k+1}-{\lambda})^\top({\lambda}^{k+1}-{\lambda}^{k})}{\left(1-\frac{(1-\theta)\alpha_{k+1}}{\alpha_k } \right)\rho } + \frac{(1-\theta)\alpha_{k+1}}{\alpha_k}\rho({x}^{k+1}-{x})^\top{a}^\top{r}^{k+1}\right]\cr & & + { \mathbb{e}}({x}^{k+1}-{x})^\top\left(\tilde{{p}}+\frac{{i}}{\alpha_k}\right)({x}^{k+1}-{x}^k ) -\frac{l_f}{2}{\mathbb{e}}\|{x}^k-{x}^{k+1}\|^2+{\mathbb{e}}({x}^{k+1}-{x}^k)^\top{\delta}^k\cr & \le & ( 1-\theta){\mathbb{e}}\left[f({x}^k)-f({x})+({x}^{k}-{x})^\top(-{a}^\top{\lambda}^k)+({\lambda}^{k}-{\lambda})^\top { r}^{k}+\frac{({\lambda}^{k}-{\lambda})^\top({\lambda}^k-{\lambda}^{k-1})}{\left(1-\frac{(1-\theta)\alpha_{k}}{\alpha_{k-1}}\right)\rho } \right]\cr & & + ( 1-\theta)\rho{\mathbb{e}}({x}^{k}-{x})^\top{a}^\top{r}^{k},\end{aligned}\ ] ] where similar to , we have defined . 
multiplying to both sides of and using and , we have \cr & & + \frac{\alpha_k\beta_{k+1}}{2\alpha_{k+1}}{\mathbb{e}}\left[\|{\lambda}^{k+1}-{\lambda}\|^2-\|{\lambda}^{k}-{\lambda}\|^2+\|{\lambda}^{k+1}-{\lambda}^k\|^2\right ] + { \mathbb{e}}\left[(1-\theta)\alpha_{k+1}\rho({x}^{k+1}-{x})^\top{a}^\top{r}^{k+1}\right]\cr & & + \frac{\alpha_k}{2}{\mathbb{e}}\big[\|{x}^{k+1}-{x}\|_{\tilde{{p}}}^2-\|{x}^{k}-{x}\|_{\tilde{{p}}}^2+\|{x}^{k+1}-{x}^k\|_{\tilde{{p}}}^2\big ] + \frac{1}{2}{\mathbb{e}}\left[\|{x}^{k+1}-{x}\|^2-\|{x}^{k}-{x}\|^2+\|{x}^{k+1}-{x}^k\|^2\right]\cr & & -\frac{\alpha_kl_f}{2}{\mathbb{e}}\|{x}^k-{x}^{k+1}\|^2+\alpha_k{\mathbb{e}}({x}^{k+1}-{x}^k)^\top{\delta}^k\cr & \le & ( 1-\theta)\alpha_k{\mathbb{e}}\left[f({x}^k)-f({x})+({w}^{k}-{w})^\top h({w}^k)\right]\cr & & + \frac{(1-\theta)\beta_k}{2}{\mathbb{e}}\left[\|{\lambda}^{k}-{\lambda}\|^2-\|{\lambda}^{k-1}-{\lambda}\|^2+\|{\lambda}^{k}-{\lambda}^{k-1}\|^2\right]+\alpha_k(1-\theta)\rho{\mathbb{e}}({x}^{k}-{x})^\top{a}^\top{r}^{k}.\end{aligned}\ ] ] denote then for , it is easy to see that becomes \cr & & + \frac{\alpha_t}{2\rho}{\mathbb{e}}\left[\|\tilde{{\lambda}}^{t+1}-{\lambda}\|^2-\|{\lambda}^{t}-{\lambda}\|^2+\|\tilde{{\lambda}}^{t+1}-{\lambda}^t\|^2\right]\cr & & + \frac{\alpha_t}{2}{\mathbb{e}}\left[\|{x}^{t+1}-{x}\|_{\tilde{{p}}}^2-\|{x}^{t}-{x}\|_{\tilde{{p}}}^2+\|{x}^{t+1}-{x}^t\|_{\tilde{{p}}}^2\right ] + \frac{1}{2}{\mathbb{e}}\left[\|{x}^{t+1}-{x}\|^2-\|{x}^{t}-{x}\|^2+\|{x}^{t+1}-{x}^t\|^2\right]\cr & & -\frac{\alpha_tl_f}{2}{\mathbb{e}}\|{x}^t-{x}^{t+1}\|^2+\alpha_t{\mathbb{e}}({x}^{t+1}-{x}^t)^\top{\delta}^t\cr & \le & ( 1-\theta)\alpha_t{\mathbb{e}}\left[f({x}^t)-f({x})+({w}^{t}-{w})^\top h({w}^t)\right]\cr & & + \frac{(1-\theta)\beta_t}{2}{\mathbb{e}}\left[\|{\lambda}^{t}-{\lambda}\|^2-\|{\lambda}^{t-1}-{\lambda}\|^2+\|{\lambda}^{t}-{\lambda}^{t-1}\|^2\right]+\alpha_t(1-\theta){\mathbb{e}}\rho({x}^{t}-{x})^\top{a}^\top{r}^{t}.\end{aligned}\ ] ] by the nonincreasing monotonicity of , summing from through and and plugging gives +\theta\alpha_{k+1}\sum_{k=0}^{t-1}{\mathbb{e}}\left[f({x}^{k+1})-f({x})+({w}^{k+1}-{w})^\top h({w}^{k+1})\right]\nonumber\\ & & + \frac{\alpha_t}{2\rho}{\mathbb{e}}\left[\|\tilde{{\lambda}}^{t+1}-{\lambda}\|^2-\|{\lambda}^{t}-{\lambda}\|^2+\|\tilde{{\lambda}}^{t+1}-{\lambda}^t\|^2\right]\nonumber\\ & & + \frac{\alpha_{t+1}}{2}{\mathbb{e}}\|{x}^{t+1}-{x}\|_{\tilde{{p}}}^2 + \sum_{k=0}^t\frac{\alpha_k}{2}{\mathbb{e}}\|{x}^{k+1}-{x}^k\|_{\tilde{{p}}}^2+\frac{1}{2}{\mathbb{e}}\big[\|{x}^{t+1}-{x}\|^2-\|{x}^{0}-{x}\|^2+\sum_{k=0}^t\|{x}^{k+1}-{x}^k\|^2\big]\nonumber\\ & & -\sum_{k=0}^t\frac{\alpha_kl_f}{2}{\mathbb{e}}\|{x}^k-{x}^{k+1}\|^2+\sum_{k=0}^t\alpha_k{\mathbb{e}}({x}^{k+1}-{x}^k)^\top{\delta}^k\nonumber\\ & \le & ( 1-\theta)\alpha_0{\mathbb{e}}\left[f({x}^0)-f({x})+({w}^{0}-{w})^\top h({w}^0)\right]+\alpha_0(1-\theta)\rho({x}^{0}-{x})^\top{a}^\top{r}^{0}+\frac{\alpha_0}{2}\|{x}^{0}-{x}\|_{\tilde{{p}}}^2\nonumber\\ & & -\sum_{k=0}^{t-1}\frac{\theta\beta_{k+1}}{2}{\mathbb{e}}\|{\lambda}^{k+1}-{\lambda}^k\|^2 + \left(\frac{\alpha_0\beta_1}{2\alpha_1}-\frac{(1-\theta)\beta_1}{2}\right){\mathbb{e}}\|{\lambda}^0-{\lambda}\|^2 -\left(\frac{\alpha_{t-1}\beta_t}{2\alpha_t}-\frac{(1-\theta)\beta_t}{2}\right){\mathbb{e}}\|{\lambda}^{t}-{\lambda}\|^2\nonumber\\ & & -\sum_{k=1}^{t-1}\left(\frac{\alpha_{k-1}\beta_k}{2\alpha_k}+\frac{(1-\theta)\beta_{k+1}}{2 } -\frac{\alpha_k\beta_{k+1}}{2\alpha_{k+1}}-\frac{(1-\theta)\beta_k}{2}\right){\mathbb{e}}\|{\lambda}^{k}-{\lambda}\|^2 .\end{aligned}\ ] ] 
from , we have \ge-\left(\frac{\alpha_{t-1}\beta_t}{2\alpha_t}-\frac{(1-\theta)\beta_t}{2}\right)\|{\lambda}^{t}-{\lambda}\|^2.\ ] ] in addition , from young s inequality , it holds that hence , dropping negative terms on the right hand side of , from the convexity of and , we have \cr & & \alpha_t{\mathbb{e}}\left[f({x}^{t+1})-f({x})+(\tilde{{w}}^{t+1}-{w})^\top h(\tilde{{w}}^{t+1})\right]+\theta\alpha_{k+1}\sum_{k=0}^{t-1}{\mathbb{e}}\left[f({x}^{k+1})-f({x})+({w}^{k+1}-{w})^\top h({w}^{k+1})\right]\cr & \le & ( 1-\theta)\alpha_0\left[f({x}^0)-f({x})+({w}^{0}-{w})^\top h({w}^0)\right ] + ( 1-\theta)\alpha_0\rho({x}^{0}-{x})^\top{a}^\top{r}^{0}+\frac{\alpha_0}{2}\|{x}^{0}-{x}\|_{\tilde{{p}}}^2+\frac{1}{2}\|{x}^0-{x}\|^2\cr & & + \left(\frac{\alpha_0\beta_1}{2\alpha_1}-\frac{(1-\theta)\beta_1}{2}\right){\mathbb{e}}\|{\lambda}^0-{\lambda}\|^2+\sum_{k=0}^t\frac{\alpha_k^2}{2}{\mathbb{e}}\|{\delta}^k\|^2.\end{aligned}\ ] ] using lemma [ lem : xy - rate ] and the properties of , we derive the desired result .let denote the proximal mapping of at .then the update in can be written to define as that in .then hence , using the fact that the conjugate of is and the moreau s identity for any convex function , we have therefore , holds , and thus from it follows substituting the formula of into , we have for , which is exactly .hence , we complete the proof .
In this paper we propose a randomized primal-dual proximal block coordinate updating framework for a general multi-block convex optimization model with a coupled objective function and linear constraints. Assuming mere convexity, we establish its sublinear convergence rate in terms of the objective value and the feasibility measure. The framework includes several existing algorithms as special cases, such as a primal-dual method for bilinear saddle-point problems (PD-S), the proximal Jacobian ADMM (Prox-JADMM), and a randomized variant of the ADMM for multi-block convex optimization. Our analysis recovers and/or strengthens the convergence properties of several existing algorithms. For example, for PD-S our result leads to the same order of convergence rate without the previously assumed boundedness condition on the constraint sets, and for Prox-JADMM the new result provides a convergence rate in terms of the objective value and the feasibility violation. It is well known that the original ADMM may fail to converge when the number of blocks exceeds two. Our result shows that if an appropriate randomization procedure is invoked to select the updating blocks, then a sublinear rate of convergence in expectation can be guaranteed for multi-block ADMM, without assuming any strong convexity. The new approach is also extended to solve problems where only a stochastic approximation of the (sub-)gradient of the objective is available, and we establish a sublinear convergence rate of the extended approach for solving stochastic programs.

*keywords:* primal-dual method, alternating direction method of multipliers (ADMM), randomized algorithm, iteration complexity, first-order stochastic approximation.

*mathematics subject classification:* 90C25, 95C06, 68W20.
how should we understand the origin of biological irreversibility ? as an empirical fact , we know that the direction from the alive to the dead is irreversible . at a more specific level , we know that , in a multicellular - organism with a developmental process , there is a definite temporal flow ; through the developmental process , the multipotency , i.e. , the ability to create a different type of cells , decreases .initially , the embryonic stem cell has totipotency , and has the potentiality to create all types of cells in the organism .then a stem cell can create a limited variety of cells , having multipotency .this hierarchical loss of multipotency terminates at a determined cell , which can only replicate its own type , in the normal developmental process .the degree of determination increases in the normal course of development .how can one understand such irreversibility ? of coursethis question is not easy to answer .however , it should be pointed that \1 ) it is very difficult to imagine that this irreversibility is caused by a set of specific genes .the present irreversibility is too universal to be attributed to characteristics of a few molecules .\2 ) it is also impossible to simply attribute this irreversibility to the second law of thermodynamics .one can hardly imagine that the entropy , even if it were possible to be defined , suddenly increases at the death , or successively increases at the cell differentiation process .furthermore , it should be generally very difficult to define a thermodynamic entropy to a highly nonequilibrium system such as a cell . then what strategy should we choose ?a biological system contains always sufficient degrees of freedom , say , a set of chemical concentrations in a cell , which change in time .then , one promising strategy for the study of a biological system lies in the use of dynamical systems . by setting a class of dynamical systems ,we search for universal characteristics that are robust against microscopic and macroscopic fluctuations .a biological unit , such as a cell , has always some internal structure that can change in time . as a simple representation, the unit can be represented by a dynamical system .for example , consider a representation of a cell by a set of chemical concentrations .a cell , however , is not separated from the outside world completely .for example , isolation by a biomembrane is flexible and incomplete . in this way, the units , represented by dynamical systems , interact with each other through the external environment .hence , we need a model consisting of the interplay between inter - unit and intra - unit dynamics . for example , the complex chemical reaction dynamics in each unit ( cell ) is affected by the interaction with other cells , which provides an interesting example of intra - inter dynamics " . in the ` intra - inter dynamics ' ,elements having internal dynamics interact with each other . this type of intra - inter dynamics is not necessarily represented only by the perturbation of the internal dynamics by the interaction with other units , nor is it merely a perturbation of the interaction by adding some internal dynamics . as a specific example of the scheme of intra - inter dynamics, we will mainly discuss the developmental process of a cell society accompanied by cell differentiation . 
here, the intra - inter dynamics consists of several biochemical reaction processes .the cells interact through the diffusion of chemicals or their active signal transmission .if cells with degrees of freedom exist , the total dynamics is represented by an -dimensional dynamical system ( in addition to the degrees of freedom of the environment ) .furthermore , the number of cells is not fixed in time , but they are born by division ( and die ) in time .after the division of a cell , if two cells remained identical , another set of variables would not be necessary .if the dynamical system for chemical state of a cell has orbital instability ( such as chaos ) , however , the orbits of chemical dynamics of the ( two ) daughters will diverge .hence , the number of degrees of freedom , , changes in time .this increase in the number of variables is tightly connected with the internal dynamics .it should also be noted that in the developmental process , in general , the initial condition of the cell states is chosen so that their reproduction continues .thus , a suitable initial condition for the internal degrees of freedom is selected through interaction .now , to study a biological system in terms of dynamical systems theory , it is first necessary to understand the behavior of a system with internal degrees of freedom and interaction , this is the main reason why i started a model called coupled map lattice ( and later globally coupled map ) about 18 years ago . indeed , several discoveries in gcm seem to be relevant to understand some basic features in a biological system .gcm has provided us some novel concepts for non - trivial dynamics between microscopic and macroscopic levels , while the dynamic complementarity between a part and the whole is important to study biological organization . in the present paper ,we briefly review the behaviors of gcm in 2 , and discuss some recent advances in 3 - 5 , about dominance of milnor attractors , chaotic itinerancy , and collective dynamics .then we will switch to the topic of development and differentiation in an interacting cell system . after presenting our model based on dynamical systems in 6, we give a basic scenario discovered in the model , and interpret cell differentiation in terms of dynamical systems .then , the origin of biological irreversibility is discussed in 9 .discussion towards the construction of phenomenology theory of development is given in 10 .the simplest case of global interaction is studied as the globally coupled map " ( gcm ) of chaotic elements .a standard example is given by where is a discrete time step and is the index of an element ( = system size ) , and .the model is just a mean - field - theory - type extension of coupled map lattices ( cml) . through the interaction ,elements are tended to oscillate synchronously , while chaotic instability leads to destruction of the coherence .when the former tendency wins , all elements oscillate coherently , while elements are completely desynchronized in the limit of strong chaotic instability . between these cases ,elements split into clusters in which they oscillate coherently . 
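The coupling equation itself is garbled in the extracted text above; the sketch below therefore uses the standard form of the globally coupled logistic map usually studied in this setting, x_{n+1}(i) = (1-ε) f(x_n(i)) + (ε/N) Σ_j f(x_n(j)) with f(x) = 1 - a x², which should be read as an assumption, and the parameter values are purely illustrative.

```python
import numpy as np

def gcm_step(x, a=1.8, eps=0.1):
    """One step of a globally coupled map of logistic elements (assumed
    standard form): every element is iterated by f(x) = 1 - a*x**2 and then
    coupled to the mean field of all iterated elements."""
    fx = 1.0 - a * x**2
    return (1.0 - eps) * fx + eps * fx.mean()

rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, size=100)   # N = 100 elements, random initial condition
for _ in range(10000):                 # iterate past the transient
    x = gcm_step(x)
```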
herea cluster is defined as a set of elements in which .attractors in gcm are classified by the number of synchronized clusters and the number of elements for each cluster .each attractor is coded by the clustering condition ] .the clustering here is typically inhomogeneous : the partition ] .in this case , the difference of the values of is also hierarchical .the difference between the values of decreases as the above process of partition is iterated .although the above partition is too much simplified , such hierarchical structure in partition and in the phase space is typically observed in the po phase .the partition is organized as an inhomogeneous tree structure , as in the spin glass model .we have also measured the fluctuation of the partitions , using the probability that two elements fall on the same cluster . in the po phase, this value fluctuates by initial conditions , and the fluctuation remains finite even if the size goes to infinity .it is noted that such remnant fluctuation of partitions is also seen in spin glass models .in the partially ordered ( po ) phase , there coexist a variety of attractors depending on the partition .to study the stability of an attractor against perturbation , we introduce the return probability , defined as follows : take an orbit point of an attractor in an -dimensional phase space , and perturb the point to , where is a random number taken from $ ] , uncorrelated for all elements .check if this perturbed point returns to the original attractor via the original deterministic dynamics ( 1 ) . by sampling over random perturbations and the time of the application of perturbation, the return probability is defined as ( # of returns ) ( # of perturbation trials ) . as a simple index for robustness of an attractor, it is useful to define as the largest such that .this index measures what we call the _ strength _ of an attractor .the strength gives a minimum distance between the orbit of an attractor and its basin boundary .in contrast with our naive expectation from the concept of an attractor , we have often observed ` attractors ' with , i.e. , .if holds for a given state , it can not be an attractor " in the sense with asymptotic stability , since some tiny perturbations kick the orbit out of the attractor " .the attractors with are called milnor attractors . in other words ,milnor attractor is defined as an attractor that is unstable by some perturbations of arbitrarily small size , but globally attracts orbital points .the basin of attraction has a positive lebesgue measure .( the basin is riddled here . ) since it is not asymptotically stable , one might , at first sight , think that it is rather special , and appears only at a critical point like the crisis in the logistic map . 
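Numerically, the clustering and the strength of an attractor can be estimated along the following lines; identifying the return of a perturbed orbit by its clustering pattern, rather than by the exact orbit, is a simplifying assumption of this sketch, which reuses gcm_step from the snippet above.

```python
import numpy as np

def count_clusters(x, tol=1e-6):
    """Number of synchronized clusters: elements whose values coincide up to
    tol are counted as one cluster (a numerical stand-in for exact equality)."""
    vals = np.sort(x)
    return 1 + int(np.sum(np.diff(vals) >= tol))

def return_probability(x_attr, sigma, a=1.8, eps=0.1, steps=2000, trials=200,
                       tol=1e-6, seed=0):
    """Estimate the return probability for perturbations of amplitude sigma:
    kick every element by an independent uniform amount in [-sigma, sigma],
    iterate, and count how often the original clustering pattern is recovered.
    The largest sigma with return probability 1 then estimates the strength of
    the attractor; a strength tending to zero signals a Milnor attractor."""
    rng = np.random.default_rng(seed)
    target = count_clusters(x_attr, tol)
    returned = 0
    for _ in range(trials):
        x = x_attr + rng.uniform(-sigma, sigma, size=x_attr.shape)
        for _ in range(steps):
            x = gcm_step(x, a, eps)   # gcm_step as sketched above
        returned += int(count_clusters(x, tol) == target)
    return returned / trials
```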
to our surprise ,the milnor attractors are rather commonly observed around the po phase in our gcm .the strength and basin volume of attractors are not necessarily correlated .attractors with often have a large basin volume .still , one might suspect that such milnor attractors must be weak against noise .indeed , by a very weak noise with the amplitude , an orbit at a milnor attractor is kicked away , and if the orbit is reached to one of attractors with , it never comes back to the milnor attractor .rather , an orbit kicked out from a milnor attractor is often found to stay in the vicinity of it .the orbit comes back to the original milnor attractor before it is kicked away to other attractors with .furthermore , by a larger noise , orbits sometimes are more attracted to milnor attractors .such attraction is possible , since milnor attractors here have global attraction in the phase space , in spite of their local instability .dominance of milnor attractors gives us to suspect the computability of our system .once the digits of two variable agree down to the lowest bit , the values never split again , even though the state with the synchronization of the two elements may be unstable .as long as digital computation is adopted , it is always possible that an orbit is trapped to such unstable state . in this sensea serious problem is cast in numerical computation of gcm in general .existence of milnor attractors may lead us to suspect the correspondence between a ( robust ) attractor and memory , often adopted in neuroscience ( and theoretical cell biology ) .it should be mentioned that milnor attractors can provide dynamic memory allowing for interface between outside and inside , external inputs and internal representation .besides the above _ static _ complexity , _ dynamic _ complexity is more interesting at the po phase . hereorbits make itinerancy over ordered states with partial synchronization of elements , via highly chaotic states .this dynamics , called chaotic itinerancy ( ci ) , is a novel universal class in high - dimensional dynamical systems .our ci consists of a quasi - stationary high - dimensional state , exits to attractor - ruins " with low effective degrees of freedom , residence therein , and chaotic exits from them . in the ci ,an orbit successively itinerates over such attractor - ruins " , ordered motion with some coherence among elements .the motion at attractor - ruins " is quasistationary .for example , if the effective degrees of freedom is two , the elements split into two groups , in each of which elements oscillate almost coherently .the system is in the vicinity of a two - clustered state , which , however , is not a stable attractor , but keeps attraction to its vicinity globally within the phase space . after staying at an attractor - ruin , an orbit exits from it due to chaotic instability , and shows a high - dimensional chaotic motion without clear coherence .this high - dimensional state is again quasistationary , although there are some holes connecting to the attractor - ruins from it .once the orbit is trapped at a hole , it is suddenly attracted to one of attractor ruins , i.e. , ordered states with low - dimensional dynamics .this ci dynamics has independently been found in a model of neural dynamics by tsuda , optical turbulence , and in gcm .it provides an example of successive changes of relationships among elements . 
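A simple way to make this itinerancy visible is to record the number of (near-)synchronized clusters along a single long run: residence near an attractor-ruin shows up as an extended stretch with few clusters, interrupted by bursts with many clusters during the high-dimensional episodes. The snippet reuses gcm_step and count_clusters from the sketches above; the parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(-1.0, 1.0, size=50)       # a modest number of elements
history = []
for n in range(50000):
    x = gcm_step(x, a=1.71, eps=0.1)      # parameters meant to lie near the PO phase (assumption)
    history.append(count_clusters(x, tol=1e-4))
# long plateaus of small values in `history` correspond to visits to ordered
# attractor-ruins; irregular bursts of large values to desynchronized episodes.
```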
notethat the milnor attractors satisfy the condition of the above ordered states constituting chaotic itinerancy .some milnor attractors we have found keep global attraction , which is consistent with the observation that the attraction to ordered states in chaotic itinerancy occurs globally from a high - dimensional chaotic state .attraction of an orbit to precisely a given attractor requires infinite time , and before the orbit is really settled to a given milnor attractor , it may be kicked away .when milnor attractors that lose the stability ( ) keep global attraction , the total dynamics can be constructed as the successive alternations to the attraction to , and escapes from , them .if the attraction to robust attractors from a given milnor attractor is not possible , the long - term dynamics with the noise strength is represented by successive transitions over milnor attractors .then the dynamics is represented by transition matrix over among milnor attractors .this matrix is generally asymmetric : often , there is a connection from a milnor attractor a to a milnor attractor b , but not from b to a. the total dynamics is represented by the motion over a network , given by a set of directed graphs over milnor attractors . in general , the ` ordered states ' in ci may not be exactly milnor attractors but can be weakly destabilized states from milnor attractorsstill , the attribution of ci to milnor attractor network dynamics is expected to work as one ideal limit . as already discussed about the milnor attractor , computability of the switching over milnor attractor networkshas a serious problem . in each event of switching , which milnor attractor is visited next after the departure from a milnor attractor may depend on the precision . in this sense, the order of visits to milnor attractors in chaotic itinerancy may not be undecidable in a digital computer .in other words , motion at a macroscopic level may not be decidable from a microscopic level . with this respect, it may be interesting to note that there are similar statistical features between ( milnor attractor ) dynamics with a riddled basin and undecidable dynamics of a universal turing - machine .if the coupling strength is small enough , oscillation of each element has no mutual synchronization . in this turbulent phase, takes almost random values almost independently , and the number of degrees of freedom is proportional to the number of elements , i , e . , the lyapunov dimension increases in proportion to .there remains some coherence among elements .even in such case , the macroscopic motion shows some coherent motion distinguishable from noise , and there remains some coherence among elements , even in the limit of . as a macroscopic variable we adopt the mean field , in almost all the parameter values , the mean field motion shows some dynamics that is distinguishable from noise , ranging from torus - like to higher dimensional motion .this motion remains even in the thermodynamic limit .this remnant variation means that the collective dynamics keeps some structure .one possibility is that the dynamics is low - dimensional .indeed in some system with a global coupling , the collective motion is shown to be low - dimensional in the limit of ( see . 
) in the gcm eq.(1 ) , with the logistic or tent map , low - dimensional motion is not detected generally , although there remains some collective motion in the limit of .the mean field motion in gcm is regarded to be infinite dimensional , even when the torus - like motion is observed .then it is important to clarify the nature of this mean - field dynamics .it is not so easy to examine the infinite dimensional dynamics , directly .instead , shibata , chawanya and the author have first made the motion low - dimensional by adding noise , and then studied the limit of noise . to study this effect of noise, we have simulated the model where is a white noise generated by an uncorrelated random number homogeneously distributed over [ -1,1 ] .the addition of noise can destroy the above coherence among elements .in fact , the microscopic external noise leads the variance of the mean field distribution to decrease with .this result also implies decrease of the mean field fluctuation by external noise .behavior of the above equation in the thermodynamic limit is represented by the evolution of the one - body distribution function at time step directly .since the mean field value is independent of each element , the evolution of obeys the perron - frobenius equation given by , with by analyzing the above perron - frobenius equation , it is shown that the dimension of the collective motion increases as , with as the noise strength . hence in the limit of , the dimension of the mean field motionis expected to be infinite .note that the mean field dynamics ( at ) is completely deterministic , even under the external noise . with the addition of noise ,high - dimensional structures in the mean - field dynamics are destroyed successively , and the bifurcation from high - dimensional to low - dimensional chaos , and then to torus proceeds with the increase of the noise amplitude . with a further increase of noise to ,the mean field goes to a fixed point through hopf bifurcation .this destruction of the hidden coherence leads to a strange conclusion .take a globally coupled system with a desynchronized and highly chaotic state , and add noise to the system .then the dimension of the mean field motion gets lower with the increase of noise . the appearance of low - dimensional ` order ' through the destruction of small - scale structure in chaos is also found in noise - induced order .note however that in a conventional noise - induced transition , the ordered motion is still stochastic , since the noise is added into a low - dimensional dynamical system . on the other hand ,the noise - induced transition in the collective dynamics occurs after the thermodynamic limit is taken .hence the low - dimensional dynamics induced by noise is truly low - dimensional .when we say a torus , the poincare map shows a curve without thickness by the noise , since the thermodynamic limit smears out the fluctuation around the tours . also , it is interesting to note that a similar mechanism of the destruction of hidden coherence is observed in quantum chaos .this noise - induced low - dimensional collective dynamics can be used to distinguish high - dimensional chaos from random noise .if the irregular behavior is originated in random noise , ( further ) addition of noise will result in an increase of the fluctuations . 
if the external application of noise leads to the decrease of fluctuations in some experiment , it is natural to assume that the irregular dynamics there is due to high - dimensional chaos with a global coupling of many nonlinear modes or elements .now we come back to the problem of cell differentiation and development .a cell is separated from environment by a membrane , whose separation , however , is not complete .some chemicals pass through the membrane , and through this transport , cells interact with each other .when a cell is represented by a dynamical system the cells interact with each other and with the external environment .hence , we need a model consisting of the interplay between inter - unit and intra - unit dynamics . herewe will mainly discuss the developmental process of a cell society accompanied by cell differentiation , where the intra - inter dynamics consist of several biochemical reaction processes .cells interact through the diffusion of chemicals or their active signal transmission , while they divide into two when some condition is satisfied with the chemical reaction process in it .( see fig.1 for schematic representation of our model ) .we have studied several models with ( a ) internal ( chemical ) dynamics of several degrees of freedom , ( b ) cell - cell interaction type through the medium , and ( c ) the division to change the number of cells . as for the internal dynamics , auto - catalytic reaction among chemicalsis chosen .such auto - catalytic reactions are necessary to produce chemicals in a cell , required for reproduction .auto - catalytic reactions often lead to nonlinear oscillation in chemicals . herewe assume the possibility of such oscillation in the intra - cellular dynamics . as the interaction mechanism, the diffusion of chemicals between a cell and its surroundings is chosen . to be specific, we mainly consider the following model here .first , the state of a cell is assumed to be characterized by the cell volume and a set of functions representing the concentrations of chemicals denoted by .the concentrations of chemicals change as a result of internal biochemical reaction dynamics within each cell and cell - cell interactions communicated through the surrounding medium . for the internal chemical reaction dynamics ,we choose a catalytic network among the chemicals .the network is defined by a collection of triplets ( ,, ) representing the reaction from chemical to catalyzed by .the rate of increase of ( and decrease of ) through this reaction is given by , where is the degree of catalyzation ( in the simulations considered presently ) .each chemical has several paths to other chemicals , and thus a complex reaction network is formed .the change in the chemical concentrations through all such reactions , thus , is determined by the set of all terms of the above type for a given network .( these reactions can include genetic processes ) .cells interact with each other through the transport of chemicals out of and into the surrounding medium . as a minimal case, we consider only indirect cell - cell interactions through diffusion of chemicals via the medium .the transport rate of chemicals into a cell is proportional to the difference in chemical concentrations between the inside and the outside of the cell , and is given by , where denotes the diffusion constant , and is the concentration of the chemical at the medium . the diffusion of a chemical species through cell membrane should depend on the properties of this species . 
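The intracellular part of this model (catalytic reactions plus membrane transport) can be sketched as below. Which chemicals penetrate the membrane, the degree of catalyzation used in the simulations, and the handling of volume growth, dilution, and cell division are specified in the following paragraphs and are therefore left as free parameters here; all numerical values are illustrative assumptions.

```python
import numpy as np

def cell_chemical_step(x, X_env, network, D, penetrable, e=1.0, alpha=1, dt=0.01):
    """One Euler step of the intracellular chemical dynamics of a single cell.
    Each triplet (m, l, j) in `network` converts chemical m into chemical l,
    catalyzed by chemical j, at rate e * x[m] * x[j]**alpha (alpha is the
    degree of catalyzation; its value in the paper's simulations is not
    recoverable from the text, so it is a parameter here). Chemicals flagged
    in the boolean mask `penetrable` additionally diffuse through the membrane
    at a rate proportional to the concentration difference with the medium."""
    dx = np.zeros_like(x)
    for m, l, j in network:
        rate = e * x[m] * x[j]**alpha
        dx[m] -= rate                       # substrate m is consumed
        dx[l] += rate                       # product l is produced
    dx += D * (X_env - x) * penetrable      # transport through the membrane
    return x + dt * dx
```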
in this model ,we consider the simple case in which there are two types of chemicals , one that can penetrate the membrane and one that can not . for simplicity, we assume that all the chemicals capable of penetrating the membrane have the same diffusion coefficient , . with this type of interaction , corresponding chemicals in the mediumare consumed . to maintain the growth of the organism , the system is immersed in a bath of chemicals through which ( nutritive ) chemicals are supplied to the cells . as chemicals flow out of and into the environment , the cell volume changes .the volume is assumed to be proportional to the sum of the quantities of chemicals in the cell , and thus is a dynamical variable .accordingly , chemicals are diluted as a result of the increase of the cell volume .in general , a cell divides according to its internal state , for example , as some products , such as dna or the membrane , are synthesized , accompanied by an increase in cell volume . again , considering only a simple situation , we assume that a cell divides into two when the cell volume becomes double the original . at each division , all chemicals are almost equally divided , with random fluctuations .of course , each result of simulation depends on the specific choice of the reaction network .however , the basic feature of the process to be discussed does not depend on the details of the choice , as long as the network allows for the oscillatory intra - cellular dynamics leading to the growth in the number of cells .note that the network is not constructed to imitate an existing biochemical network .rather , we try to demonstrate that important features in a biological system are a natural consequence of a system with internal dynamics , interaction , and reproduction . from the studywe try to extract a universal logic underlying a class of biological systems .from several simulations of the models starting from a single cell initial condition , we have shown that cells undergo spontaneous differentiation as the number is increased .( see fig.2 for schematic representation ) : the first differentiation starts with the clustering of the phase of the oscillations , as discussed in globally coupled maps ( see fig.2a ) .then , the differentiation comes to the stage that the average concentrations of the biochemicals over the cell cycle become different .the composition of biochemicals as well as the rates of catalytic reactions and transport of the biochemicals become different for each group . after the formation of cell types , the chemical compositions of each group are inherited by their daughter cells . in other words , chemical compositions of cells are recursive over divisions .the biochemical properties of a cell are inherited by its progeny , or in other words , the properties of the differentiated cells are stable , fixed or determined over the generations ( see fig .after several divisions , such initial condition of units is chosen to give the next generation of the same type as its mother cell .the most interesting example here is the formation of stem cells , schematically shown given in fig.2c .this cell type , denoted as ` s ' here , either reproduces the same type or forms different cell types , denoted for example as type a and type b. 
then after division events occur .depending on the adopted chemical networks , the types a and b replicate , or switch to different types .for example is observed in some network .this hierarchical organization is often observed when the internal dynamics have some complexity , such as chaos .the differentiation here is stochastic " , arising from chaotic intra - cellular chemical dynamics .the choice for a stem cell either to replicate or to differentiate looks like stochastic as far as the cell type is concerned .since such stochasticity is not due to external fluctuation but is a result of the internal state , the probability of differentiation can be regulated by the intra - cellular state .this stochastic branching is accompanied by a regulative mechanism . when some cells are removed externally during the developmental process, the rate of differentiation changes so that the final cell distribution is recovered . in some biological systems such as the hematopoietic system , stem cells either replicate or differentiate into different cell type(s ) .this differentiation rule is often hierarchical .the probability of differentiation to one of the several blood cell types is expected to depend on the interaction . otherwise , it is hard to explain why the developmental process is robust .for example , when the number of some terminal cells decreases , there should be some mechanism to increase the rate of differentiation from the stem cell to the differentiated cells .this suggests the existence of interaction - dependent regulation of the differentiation ratio , as demonstrated in our results ._ microscopic stability _the developmental process is stable against molecular fluctuations .first , intra - cellular dynamics of each cell type are stable against such perturbations . then , one might think that this selection of each cell type is nothing more than a choice among basins of attraction for a multiple attractor system .if the interaction were neglected , a different type of dynamics would be interpreted as a different attractor . in our case , this is not true , and cell - cell interactions are necessary to stabilize cell types .given cell - to - cell interactions , the cell state is stable against perturbations on the level of each intra - cellular dynamics .next , the number distribution of cell types is stable against fluctuations .indeed , we have carried out simulations of our model , by adding a noise term , considering finiteness in the number of molecules .the obtained cell type as well as the number distribution is hardly affected by the noise as long as the noise amplitude is not too large ._ macroscopic stability _each cellular state is also stable against perturbations of the interaction term .if the cell type number distribution is changed within some range , each cellular dynamics keeps its type .hence discrete , stable types are formed through the interplay between intra - cellular dynamics and interaction . the recursive production is attained through the selection of initial conditions of the intra - cellular dynamics of each cell , so that it is rather robust against the change of interaction terms as well . the macroscopic stability is clearly shown in the spontaneous regulation of differentiation ratio .how is this interaction - dependent rule formed ?note that depending on the distribution of the other cell types , the orbit of internal cell state is slightly deformed . for a stem cell case , the rate of the differentiation or the replication ( e.g. 
, the rate to select an arrow among ) depends on the cell - type distribution . for example , when the number of a " type cells is reduced , the orbit of an s-"type cell is shifted towards the orbits of a " , with which the rate of switch to a " is enhanced .the information of the cell - type distribution is represented by the internal dynamics of s "- type cells , and it is essential to the regulation of differentiation rate . it should be stressed that our dynamical differentiation process is always accompanied by this kind of regulation process , without any sophisticated programs implemented in advance .this autonomous robustness provides a novel viewpoint to the stability of the cell society in multicellular organisms .since each cell state is realized as a balance between internal dynamics and interaction , one can discuss which part is more relevant to determine the stability of each state . in one limiting case, the state is an attractor as internal dynamics , which is sufficiently stable and not destabilized by cell - cell interaction . in this case , the cell state is called ` determined ' , according to the terminology in cell biology . in the other limiting case, the state is totally governed by the interaction , and by changing the states of other cells , the cell state in concern is destabilized . in this case , each cell state is highly dependent on the environment or other cells .each cell type in our simulation generally lies between these two limiting cases . to see such intra - inter nature of the determination explicitly , one effective method is a transplantation experiment .numerically , such experiment is carried out by choosing determined cells ( obtained from the normal differentiation process ) and putting them into a different set of surrounding cells , to set distribution of cells so that it does not appear through the normal course of development . when a differentiated and recursive cell is transplanted to another cell society , the offspring of the cellkeep the same type , unless the cell - type distribution of the society is strongly biased .when a cell is transplanted into a biased society , differentiation from a ` determined ' cell occurs .for example , a homogeneous society consisting only of one determined cell type is unstable , and some cells start to switch to a different type . hence , the cell memory is preserved mainly in each individual cell , but suitable inter - cellular interactions are also necessary to keep it .since each differentiated state is not attractor , but is stabilized through the interaction , we propose to define _ partial attractor _ , to address attraction restricted to the internal cellular dynamics .tentative definition of this partial attractor is as follows ; \(1 ) [ internal stability ] once the cell - cell interaction is specified ( i.e. , the dynamics of other cells ) , the state is an attractor of the internal dynamics . in other words, it is an attractor when the dynamics is restricted only to the variable of a given cell .\(2 ) [ interaction stability ] the state is stable against change of interaction term , up to some finite degree . 
with a change of the interaction term of the order , the change in the dynamics remains of the order of . ( 3 ) [ self - consistency ] for some distribution of units of cellular states satisfying ( 1 ) and ( 2 ) , the interaction term continues to satisfy conditions ( 1 ) and ( 2 ) . we tentatively call a state satisfying ( 1)-(3 ) a partial attractor . each determined cell type we found can be regarded as a partial attractor . to define the dynamics of the stem cell in our model , however , we have to slightly modify condition ( 2 ) to a ` milnor - attractor ' type . here , a small perturbation to the interaction term ( by the increase of the cell number ) may lead the state to switch to a differentiated state . hence , instead of ( 2 ) , we set the condition : ( 2 ' ) for some change of interaction with a finite measure , some orbits remain to be attracted to the state . so far we have discussed the stability of a state by fixing the number of cells . in some cases , the condition ( 3 ) may not be satisfied when the system is developed from a single cell following the cell division rule . as for the developmental process , the condition has to be satisfied for the restricted range of cell distributions realized by the evolution from a single cell . then we need to add the condition : ( 4 ) [ accessibility ] a distribution satisfying ( 3 ) is reached from an initial condition of a single cell , as the number of cells increases . cell types with determined differentiation observed in our model are regarded as states satisfying ( 1)(2)(3)(4 ) , while the stem cell type is regarded as a state satisfying ( 1)(2')(3)(4 ) . in fact , as the number is increased , some perturbation to the interaction term is introduced . in our model , the stem - cell state satisfies ( 2 ) up to some number , but with the further increase of the number , the condition ( 2 ) is no longer satisfied and is replaced by ( 2 ' ) . perturbation to the interaction term due to the cell number increase is sufficient to bring about a switch from a given stem - cell dynamics to a differentiated cell . note again that the stem - cell type state with weak stability has a large basin volume when started from a single cell . in the normal development of cells , there is clear irreversibility , resulting from the successive loss of multipotency . in our model simulations , this loss of multipotency occurs irreversibly . the stem - cell type can differentiate to other types , while the determined types that appear later only replicate themselves . in a real organism , there is a hierarchy in determination , and a stem cell often lies above a progenitor , which covers only a limited range of cell types . in other words , the degree of determination is also hierarchical . in our model , we have also found such a hierarchical structure . so far , we have found only up to the second layer of hierarchy in our model with the number of chemicals . here , before the loss of multipotency , the dynamics of a stem - type cell exhibit irregular oscillations with orbital instability and involve a variety of chemicals . stem cells with these complex dynamics have a potential to differentiate into several distinct cell types .
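the distinction drawn here between the irregular , chaotic dynamics of stem - type cells and the simpler dynamics of determined types can be made quantitative through the largest lyapunov exponent of the intra - cellular dynamics ( the same quantity that underlies the subspace ks entropy discussed below ) . the following sketch is only an illustration of that diagnostic : it uses the one - dimensional logistic map as a stand - in for the intra - cellular dynamics , since the actual model is a many - chemical catalytic reaction network coupled through the medium ; the parameter values and function names are chosen here purely for illustration .

```python
import numpy as np

def largest_lyapunov_logistic(r, x0=0.4, n_transient=1000, n_steps=20000):
    """estimate the largest lyapunov exponent of the logistic map x -> r x (1 - x)
    by averaging log |f'(x_t)| along the orbit."""
    x = x0
    for _ in range(n_transient):           # discard the transient
        x = r * x * (1.0 - x)
    acc = 0.0
    for _ in range(n_steps):
        x = r * x * (1.0 - x)
        acc += np.log(abs(r * (1.0 - 2.0 * x)) + 1e-300)
    return acc / n_steps

# a chaotic ("stem-like") and a regular ("differentiated-like") parameter choice
for label, r in [("stem-like (chaotic)", 3.9), ("differentiated-like (fixed point)", 2.8)]:
    lam = largest_lyapunov_logistic(r)
    print(f"{label:34s}  r = {r:.2f}  largest lyapunov exponent ~ {lam:+.3f}")
```

a positive exponent corresponds to the orbital instability of the stem - like regime , a negative one to the regular dynamics of a determined type .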
generally , the differentiated cells always possess simpler cellular dynamics than the stem cells , for example , fixed - point dynamics and regular oscillations . although we have not yet succeeded in formulating the irreversible loss of multipotency in terms of a single fundamental quantity ( analogous to thermodynamic entropy ) , we have heuristically derived a general law describing the change of the following quantities in all of our numerical experiments , using a variety of reaction networks . as cell differentiation progresses through development , ( i ) the stability of intra - cellular dynamics increases ; ( ii ) the diversity of chemicals in a cell decreases ; ( iii ) temporal variations of chemical concentrations decrease , realizing less chaotic motion . the degree of ( i ) could be determined by the minimum change in the interaction needed to switch a cell state , by properly extending the ` attractor strength ' in 3 . initial undifferentiated cells spontaneously change their state even without a change of the interaction term , while stem cells can be switched by a tiny change in the interaction term . the degree of determination is roughly measured as the minimum perturbation strength required for a switch to a different state . the diversity of chemicals ( ii ) can be measured , for example , by , with , with as temporal average . loss of multipotency in our model is accompanied by a decrease in the diversity of chemicals and is represented by the decrease of this diversity . the tendency ( iii ) is numerically confirmed by the subspace kolmogorov - sinai ( ks ) entropy of the internal dynamics of each cell . here , this subspace ks entropy is measured as the sum of positive lyapunov exponents , in the tangent space restricted only to the intracellular dynamics of a given cell . again , this exponent decreases through the development . in the present paper , we have first surveyed some recent progress in coupled dynamical systems , in particular globally coupled maps . then we discussed some of our recent studies on cell differentiation and development , based on coupled dynamical systems with internal degrees of freedom and the potentiality to increase the number of units ( cells ) . stability and irreversibility of the developmental process are demonstrated by the model , and are discussed in terms of dynamical systems . of course , results based on a class of models are not sufficient to establish a theory to understand the stability and irreversibility in the development of multicellular organisms . we need to unveil the logic that underlies such models and real development universally . although a mathematical formulation is not yet established , support is given to the following conjecture . _ assume a cell with an internal chemical reaction network whose number of degrees of freedom is large enough , and let such cells interact with each other through the environment . some chemicals are transported from the environment and converted to other chemicals within a cell . through this process the cell volume increases and the cell is divided . then , for some chemical networks , each chemical state of a cell remains at a fixed point . in this case , cells remain identical , the competition for chemical resources is higher , and the increase of the cell number is suppressed .
on the other hand , for some reaction networks , cells differentiate and the increase in the cell number is not suppressed . the differentiation of cell types follows a hierarchical rule . the initial cell types have large chemical diversity and show irregular temporal change of chemical concentrations . as the number of cells increases and the differentiation progresses , irreversible loss of multipotency is observed . this differentiation process is triggered by the instability of some states under cell - cell interaction , while the realized states of cell types and the number distribution of such cell types are stable against perturbations , following the spontaneous regulation of the differentiation ratio ._ when we recall the history of physics , the most successful phenomenological theory is nothing but thermodynamics . to construct a phenomenological theory for development , or more generally a theory for biological irreversibility , comparison with thermodynamics should be relevant . some similarities between the phenomenology of development and thermodynamics are summarized in table 1 . as mentioned , both thermodynamics and the phenomenology of development have stability against perturbations . indeed , the spontaneous regulation in a stem cell system found in our model is a clear demonstration of stability against perturbations , which it has in common with the le chatelier - braun principle . the irreversibility in thermodynamics is defined by suitably restricting the possible operations , as formulated through adiabatic processes . similarly , the irreversibility in a multicellular organism has to be suitably defined by introducing an ideal developmental process . note that in some experiments , such as cloning from somatic cells in animals , the irreversibility of normal development can be reversed . the last question that should be addressed here is the search for macroscopic quantities to characterize each ( differentiated ) cellular state . although thermodynamics is established by cutting the macroscopic level out of the microscopic levels , in a cell system it is not yet certain whether such macroscopic quantities can be defined by separating a macroscopic state from the microscopic level . at the present stage , there is no definite answer . here , however , it is interesting to recall recent experiments in tissue engineering . by changing the concentrations of only three control chemicals , asashima and coworkers succeeded in constructing all tissues from xenopus undifferentiated cells ( animal cap ) . hence there may be some hope that a reduction to a few variables characterizing macroscopic ` states ' may be possible . the construction of a phenomenology for development characterizing its stability and irreversibility is still at the stage of ` waiting for carnot ' , but following our results based on coupled dynamical systems models and some recent experiments , i hope that such a phenomenological theory will be realized in the near future . the author is grateful to t. yomo , c. furusawa , and t. shibata for discussions . the work is partially supported by grants - in - aid for scientific research from the ministry of education , science , and culture of japan . k. kaneko and i. tsuda , _ complex systems : chaos and beyond a constructive approach with applications in life sciences _ ( springer , 2000 ) ( based on k. kaneko and i. tsuda , _ chaos scenario for complex systems _ ( asakura , 1996 ) , in japanese )
in the first half of the paper , some recent advances in coupled dynamical systems , in particular , a globally coupled map are surveyed . first , dominance of milnor attractors in partially ordered phase is demonstrated . second , chaotic itinerancy in high - dimensional dynamical systems is briefly reviewed , with discussion on a possible connection with a milnor attractor network . third , infinite - dimensional collective dynamics is studied , in the thermodynamic limit of the globally coupled map , where bifurcation to lower - dimensional attractors by the addition of noise is briefly reviewed . following the study of coupled dynamical systems , a scenario for developmental process of cell society is proposed , based on numerical studies of a system with interacting units with internal dynamics and reproduction . differentiation of cell types is found as a natural consequence of such a system . stem cells " that either proliferate or differentiate to different types generally appear in the system , where irreversible loss of multipotency is demonstrated . robustness of the developmental process against microscopic and macroscopic perturbations is found and explained , while irreversibility in developmental process is analyzed in terms of the gain of stability , loss of diversity and chaotic instability . construction of a phenomenology theory for development is discussed in comparison with the thermodynamics .
recently , the authors investigated the following coding problem . consider a coding system composed of one encoder and decoders .the encoder observes the sequence generated by a memoryless source with generic variable .then , the encoder broadcasts the codeword to the decoders over the noiseless channel with capacity . the purpose of the -th decoder is to estimate the value of the target source as accurately as possible by using the side information and the codeword sent by the encoder , where and may be correlated with .accuracy of the estimation of the -th decoder is evaluated by some distortion measure and it is required that the expected distortion is not greater than the given value .[ fig1 ] depicts the coding system where . ).,width=211 ] in , we proposed a coding scheme which is _ universal _ in the sense that it attains the optimal rate - distortion tradeoff even if the probability distribution of the source is unknown , while the side informations and the targets are assumed to be generated from via a known memoryless channel . in , we considered only stationary and memoryless sources . in this paper , we extend the result of to the case where sources are stationary and ergodic sources .as mentioned in , our coding problem described above includes various problems as special cases .for example , the wyner - ziv problem , i.e. the rate - distortion problem with side information at the decoder , is a special case of our problem , where and .a variation of the wyner - ziv problem , where the side information may fail to reach the decoder , is also included as a special case , where , , and ( see fig . [ fig3 ] ) .moreover , our coding system can be considered as a generalization of the complementary delivery .in fact , a simple complementary delivery problem depicted in fig . [ fig2 ] is the case where , , , , and ( ) .further , our coding problem includes also the problem considered in ( depicted in fig .[ fig4 ] ) as a special case , where , , ( ) , and .at first , we introduce some notations . we denote by the set of positive integers . for a set and an integer , denotes the -th cartesian product of . for a finite set , denotes the cardinality of . throughout this paper, we will take all and to the base 2 .let be a stationary and ergodic source with finite alphabet .for each , denotes the first variables of and the distribution of is denoted by .fix .we consider random variables ( resp . ) taking values in sets ( resp . ) where ranges over the index set .we assume that , for each , and are finite sets .we write and let be a _transition probability_. in the followings , we assume that is fixed and available as prior knowledge . for each , let be the -th extension of , that is , for any sequences then , by a source and a transition probability , sources and are induced is stationary and _ memoryless _ , while the source is stationary and ergodic .further , we assume that is _ known _ both to the encoder and decoders , while is unknown .universal wyner - ziv coding in a setting similar to ours is considered in . ] . in other words , ( resp . ) is a random variable on ( resp . ) such that for any , , and .for each , ( resp . ) is called the -th component of ( resp . 
) .note that the joint distribution of , , and is given as a marginal distribution of , that is , for any , , and , where the summation is over all such that the -th component is .further , for each , let be a finite set .then , the formal definition of a code for our coding system is given as follows .an _ -length block code _ is defined by mappings and is called the _ encoder _ and is called the _-th decoder_. the performance of a code is evaluated by the coding rate and the distortion attained by .the _ coding rate _ of is defined by , where is the number of the codewords of . for each , let \ ] ] be a _ distortion measure _ , where .then , for each , the distortion between the output of the -th decoder and the sequence to be estimated is evaluated by a pair of a rate and a -tuple of distortions is said to be _ achievable _ for a source if the following condition holds : for any and sufficiently large there exists a code satisfying and , for any , \leq\delta_j+\epsilon\ ] ] where denotes the expectation with respect to the distribution .now , we state our main result .the theorem clarifies that , whenever is achievable , is also achievable universally .[ maintheorem ] for given and , there exists a sequence of codes which is universally optimal in the following sense : for any source for which is achievable there exists such that , for any , satisfies and \leq\delta_j+\delta\ ] ] for any .the proof of the theorem will be given in the next section .let and be given .fix satisfying where for each , let .let be the set of all -length block codes such that .then , let be the set of -tuple of decoders such that for some . note that for , for each , a sequence , and a code , let it should be noted that , by using , the average distortion attained by the code can be written as }\nonumber\\ & = \sum_{x^n}p_{x^n}(x^n)\sum_{y_j^n , z_j^n}\biggl\{p_{y_j^nz_j^n|x^n}(y_j^n , z_j^n|x^n)\nonumber\\ & \qquad\times d_n^{(j)}\left(\psi_n^{(j)}(\phi_n(x^n),y_j^n),z_j^n\right)\biggr\}\nonumber\\ & = \sum_{x^n}p_{x^n}(x^n){\bar{d}}_n^{(j)}(x^n , c_n).\label{eq : property_of_bd}\end{aligned}\ ] ] for each , let be the set of all sequences satisfying the following condition : there are an integer ( ) and a code such that for some integer ( ) , * _ encoder : _ the encoder encodes a given sequence as follows . 1 . if , then choose integers and a code satisfying . if then error is declared .2 . send and by using bits .send the index of decoders by using bits .send the codewords of blocks ( ) encoded by . *_ decoder : _ the -the decoder decodes the received codeword as follows . 1 .decode the first bits of the received codeword and obtain and .2 . decode the first bits of the remaining part of the received codeword and obtain the decoders chosen by the encoder .3 . decode the remaining part of the received codeword by using and the side information .then , the blocks ( ) are obtained .+ the remaining part of the output , i.e. and , is defined arbitrarily .note that the total length of and is at most . by the fact that and satisfies , it is easy to see that the coding rate of satisfies for sufficiently large . hence , to show the optimality of the code , it is sufficient to bound the distortion attained by . 
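as a purely illustrative aside , the operational quantities defined above , the coding rate and the per - letter distortion attained by a code , can be computed for a toy block code . the sketch below is not the universal scheme constructed in this section ; it uses a trivial prefix - only code , ignores the side information , takes the target to be the source itself with hamming distortion , and all names and parameter values are hypothetical .

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(block, k):
    """toy encoder: describe a length-n block by its first k symbols only."""
    return tuple(int(b) for b in block[:k])

def decode(codeword, n):
    """toy decoder: reproduce the transmitted prefix and guess 0 elsewhere
    (a real decoder would also exploit its side information y_j)."""
    out = np.zeros(n, dtype=int)
    out[: len(codeword)] = codeword
    return out

n, k = 16, 8
x = rng.integers(0, 2, size=(1000, n))      # 1000 blocks from a bernoulli(1/2) source
rate = np.log2(2 ** k) / n                  # (1/n) log2 |codebook|, here simply k/n
distortion = np.mean([np.mean(decode(encode(b, k), n) != b) for b in x])  # hamming, target z = x
print(f"coding rate r = {rate:.3f} bits/symbol, empirical distortion ~ {distortion:.3f}")  # ~ 0.25
```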
for each ,let be a function on such that then , implies that \leq 0.\ ] ] hence , the _ ergodic theorem _ guarantees the following fact : there exists such that for any there exists a set satisfying ( i ) and ( ii ) for any , where is the _ overlapping empirical distribution _ of defined as now , let be the set of all such that then , by the markov lemma , for and .further , let be the set of all such that holds for at least one . then , for and , we have thus , for and , there exists at least one such that on the other hand , we can choose such that for any , . then , for any and , we have in other words , if is so large that and then for any we can choose , , and satisfying and .this means that .hence , we have this completes the proof of the lemma .s. kuzuoka , a. kimura , and t. uyematsu , `` universal source coding for multiple decoders with side information , '' in _ proc .of 2010 ieee international symposium on information theory ( isit2010 ) _ , to appear .e. perron , s. diggavi , and i. telatar , `` on the role of encoder side - information in source coding for multiple decoders , '' in _ proc .of 2006 ieee international symposium on information theory _ , jul .2006 , pp . 331335 .
a multiterminal lossy coding problem , which includes various problems such as the wyner - ziv problem and the complementary delivery problem as special cases , is considered . it is shown that any point in the achievable rate - distortion region can be attained even if the source statistics are not known .
the development of a method to stabilize mode - lock lasers has lead to significant advances in time and frequency metrology .the resulting frequency comb obtained from such a stabilized mode - locked system has subsequently found many applications .these applications include some that are related to quantum information technology .such quantum information applications have closed the circle , by being applied to synchronization in time and frequency metrology . to this end , an understanding of the quantum properties of frequency combs can benefit time and frequency metrology in the improvement of accuracy . for this reason we stife here to provide a better description of the quantum state of frequency combs .our main focus here is to consider the effects of uncertainties in the frequency parameters of a frequency comb on its quantum state . while the quantum state is often consider as a pure state ( see for example ) , the uncertainties in the parameters imply that the effective quantum state is a mixed state . these uncertainties are associated with the temporal degrees of freedom of the state .both the particle - number degrees of freedom , which govern the photon statistics of the frequency comb source , and the temporal degrees of freedom are necessary to provide an accurate representation of the quantum state of a frequency comb laser source . the popular way to represent quantum states that consist of multiple photons is to use continuous variables , leading to the wigner , husimi and glauber - sudarshan representations . however , these representations only address the particle - number degree of freedom .when the quantum state also involves other degrees of freedom , such as the frequency , as in this case , the representation of the quantum state becomes more involved . for a pure state, one can generalize the notion of a coherent state to incorporate the extra degree of freedom and use this to formulate the pure state in terms of such a representations ( see for instance ) .however , for a mixed state that includes another continuous degree of freedom , in addition to the particle - number degree of freedom , these generalizations fail . as a result , the wigner , husimi and glauber - sudarshan representations are not suitable for mixed states that need to be specified in terms of another degree of freedom , in addition to the particle - number degree of freedom . for this reason ,we first need to develop a quantum formalism in terms of which we can express the general mixed multi - photon quantum state of a frequency comb laser . herewe assume that , in the absense of any uncertainties in the frequency parameters of the frequency comb , the quantum state of the laser can be expressed as a generalized coherent state .the generalization incorporates the temporal degrees of freedom into the definitions of the fock states in terms of which the coherent state is defined .the uncertainties in the frequency parameters of the frequency comb turns the generalized coherent state ino a mixed state .we show that the temporal degrees of freedom in this mixed state are represented in the form of the power spectral density of the frequency comb .a mixed state is a convex sum over pure states where , so that are interpreted as probabilities . 
in a more general formalism , one can replace the summation with an integral in which is a continuous random variable that parameterizes the elements of the ensemble and is a probability density function , such that to incorporate the frequency degree of freedom , we define the pure single - photon states by where is the ( stochastic ) frequency spectrum of the state .it is a function of the frequency , which is related to the angular frequency by , as well as a random variable , which labels the elements of the ensemble .the bra- and ket - vectors and represent a one - dimensional frequency basis , which obeys the orthogonality condition . using these definitions, we obtain the density operator for a single - photon where is interpreted as an ensemble average that gives a two - point correlation function in fourier space .although is shown here as a single random variable , the expressions can be generalized to an arbitrary number of random variables .one can convert the density operator into a density ` matrix ' ( density ` function ' ) in the fourier basis by operating on both sides with the frequency basis states the result indicates that the density matrix for the mixed single - photon state in the fourier basis is precisely the two - point correlation function in the fourier domain , obtained from the ensemble average .the trace of the density operator represents the normalization condition for the stochastic spectra . if we assume that for all elements of the ensemble ( all values of ) , then it leads to ( [ pkon ] ) which satisfies the normalization requirement .the spectra are related to real - valued stochastic time signals via the fourier transform usually , the time signals are of infinite duration .however , this can lead to divergences in the calculations , because time signals of infinite duration are not finite energy signals . due to parseval s theorem, the spectra would therefore also not be finite energy functions . 
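for spectra of finite duration ( finite energy ) all of these objects are well defined , and the construction of the mixed single - photon density matrix as an ensemble - averaged two - point correlation can be checked numerically on a discretized frequency grid ; the infinite - duration subtlety raised in the preceding sentence is taken up immediately below . the following sketch is an illustration only , with an arbitrarily chosen ensemble of normalized gaussian spectra ( a random centre shift and overall phase standing in for the random variable of the text ) .

```python
import numpy as np

rng = np.random.default_rng(1)

n_freq, n_members = 64, 5000
nu = np.linspace(-2.0, 2.0, n_freq)          # discretized frequency grid (arbitrary units)
dnu = nu[1] - nu[0]

# ensemble of spectra g_s(nu): gaussian line with random centre shift and random phase
shifts = 0.3 * rng.standard_normal(n_members)
phases = rng.uniform(0.0, 2.0 * np.pi, n_members)
g = np.exp(-(nu[None, :] - shifts[:, None]) ** 2 + 1j * phases[:, None])
g /= np.sqrt(np.sum(np.abs(g) ** 2, axis=1, keepdims=True) * dnu)   # int |g_s|^2 dnu = 1

# density matrix in the fourier basis: rho(nu, nu') = < g_s(nu) g_s*(nu') >
rho = (g.T @ g.conj()) / n_members           # diagonal = ensemble-averaged power spectrum

print("hermitian:", np.allclose(rho, rho.conj().T))
print("trace, i.e. int rho(nu, nu) dnu:", float(np.real(np.trace(rho)) * dnu))  # ~ 1.0
```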
as a consequence ,the quantum states would not be normalizable ; the trace of the density operator would diverge .it also means that the fourier transforms of such functions , as given in ( [ ft ] ) , are not well defined .it turns out that , if the system is stationary , in that its statistical charateristics do not change with time , then the quantum states that are associated with time signals of infinite duration ( and the two - point functions on which their definition is based ) can be calculated in a well - defined manner .first , we define the time integrals through a limit process where we introduce a simplified notation to denote this limit process .the two - point function now becomes \ { \rm d}t_1\ { \rm d}t_2 .\end{aligned}\ ] ] here is the autocorrelation function .if the random process is stationary ( shift invariant ) , the autocorrelation function will only depend on the difference between the variables .let s redefine one of the variables by .then the expression becomes separable \ { \rm d}\tau\ { \rm d}t_2 \nonumber \\ & = & \int_t \exp[-{{\rm i}}2\pi ( \nu-\nu ' ) t_2]\ { \rm d}t_2\ \int_t r_g(\tau ) \exp[-{{\rm i}}2\pi \nu \tau]\{ \rm d}\tau \nonumber \\ & = & \epsilon(\nu-\nu ' ) s(\nu ) , \label{korfdef}\end{aligned}\ ] ] where \ { \rm d}\tau \label{psddef}\ ] ] is the power spectral density , which ( thanks to the wiener khintchine theorem ) is given by the fourier transform of the autocorrelation function , and \ { \rm d}t , \label{epsdef}\ ] ] is a special function , as defined through a limit process .the properties of are discussed in [ epsfunk ] .it then follows that the density matrix is diagonal in the fourier basis the lack of off - diagonal elements indicates that , as expected , there is no mutual coherence between different frequency components .the density operator is given by ( note that a density operator given by , does not have a well - defined trace . ) the single - photon quantum state of the frequency comb can now be obtained by substituting the power spectral density into ( [ eenmeng2 ] ) .however , before we compute the power spectral density of the frequency comb , we first consider the multi - photon case .a fock state can be expressed as a single - photon state , raised to a given integer power . using the definition of the general pure single - photon state given in ( [ intoes ] ), we express the general fock state by where the combinatoric factor in front is required for normalization .generalizing it to mixed states , we proceed as before by assuming that the fourier domain wave function depends on a random variable .the mixed -photon state is then defined in an analogous way as before , by in terms of the definitions of the single - photon state in ( [ intoes ] ) and the fock state in ( [ genfock ] ) , the expression for the mixed -photon state becomes where is a higher order correlation function .if we assume that is a normal distribution , then the ensemble average in ( [ npuntdef ] ) breaks up into a product of two - point functions .if we further assume that the only nonzero two - point functions are those that contain with its complex conjugate , then there would be different ways to combine the with their complex conjugates .this gives a numerical factor of , which cancels the factor of in ( [ mixdefn0 ] ) .all the two - point functions are equal and given by ( [ korfdef ] ) . 
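the factorization of the 2n - point ensemble average into n ! equal products of two - point functions is the standard gaussian - moment ( isserlis / wick ) property . it can be checked numerically for a single circularly symmetric complex gaussian amplitude , as in the sketch below ; the sample size and names are illustrative only .

```python
import numpy as np
from math import factorial

rng = np.random.default_rng(2)
m = 2_000_000
# circularly symmetric complex gaussian samples (independent real and imaginary parts)
g = (rng.standard_normal(m) + 1j * rng.standard_normal(m)) / np.sqrt(2.0)

two_point = np.mean(np.abs(g) ** 2)            # <g g*>, equal to 1 for these samples
for n in range(1, 5):
    lhs = np.mean(np.abs(g) ** (2 * n))        # <(g g*)^n>, a 2n-point function
    rhs = factorial(n) * two_point ** n        # n! pairings of g with g*
    print(f"n={n}:  <|g|^{2*n}> = {lhs:7.3f}   n! <|g|^2>^n = {rhs:7.3f}")
```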
in the end ,the expression becomes ^{\otimes n } = \left ( \hat{\rho}_1 \right)^{\otimes n } .\label{rhondef}\ ] ] so , under these circumstances , the density operator for the general -photon mixed state is given by the tensor product of mixed single - photon state density operators .it is perhaps instructive to check that the trace of this density operator is 1 , which requires that each factor in the tensor product is traced independently .one can show that , due to the properties of , the trace over products of the single - photon density operator will give zero .( see [ epsfunk ] . )one may well ask , why should the -parameter for the different single - photon factors in an -photon state all be the same , as we tacitly assumed in ( [ mixdefn ] ) ? it could have been expressed more generally as where for .the resulting density matrix would then be much more complicated , given by where is a -dimensional joint probability density function , such that here one can consider different scenarios . in the first scenario the different s are all statistically independent . in this case, the -dimensional joint probability density function becomes the product of separate one - dimensional probability density functions . as a result ,the integral breaks up into separate integrals , each representing a two - point correlation function .however , since each spectrum is only associated with one particular complex conjugate spectrum in this case , the combinatorics will not produce a factor of to cancel the factor of . by implication ,the result would be suppressed by a factor of .an alternative scenario , which is actually more natural , is where the different s are perfectly correlated . in this case , the joint probability density function would be zero unless all the s have the same value .this can be represented as a one - dimensional probability density function , multiplied by dirac delta functions that set all the s equal to each other .the result is precisely the case that we considered in the previous section with the -point function depicted by ( [ npuntdef ] ) .the reason why this is more natural , is because photons tend to exist in the same state due to their bosonic nature .this is supported by the enhancement that this scenario receives due to the cancellation of the -factor .we therefore assume that we have the latter scenario . in terms of general fock states ,a coherent states can be expressed by again for the mixed case we assume that , leading to in terms of the expression for the coherent states . assuming , as before , that the only nonzero two - point functions are those that contain with its complex conjugate , we then have where is given in ( [ rhondef ] ) .hence , only the diagonal terms survive .the complete state then becomes in the end , one can write the density operator as an exponentiated operator . 
if we substitute ( [ rhondef ] ) into ( [ mixdefa ] ) , we obtain ^{\otimes n } \nonumber \\ & = & \exp \left(-|\alpha|^2\right ) \exp_{\otimes } \left [ |\alpha|^2 \int\!\!\!\int { \left|{\nu}\right\rangle } \epsilon(\nu-\nu ' ) s(\nu ) { \left\langle{\nu'}\right|}\ { \rm d}\nu\ { \rm d}\nu ' \right ] \nonumber \\ & = & \exp \left(-|\alpha|^2\right ) \exp_{\otimes } \left ( |\alpha|^2 \hat{\rho}_1 \right ) , \label{kankwa}\end{aligned}\ ] ] where is defined such that all the operator products in its expansion are tensor products , so these operators do nt operator on each other .the quantum state of the frequency comb follows from the expression for the density operator in ( [ kankwa ] ) by substituting the expression of the power spectral density of the frequency comb into it .next , we compute power spectral density of the frequency comb .to calculate the power spectral density of a frequency comb , we start by considering the mechanism by which the laser light of a frequency comb is generated . for this purposewe consider the kerr - lens mode - locking process .the mechanism for the kerr - lens mode - locking is based on the principle that a laser cavity is designed such that loss in the cavity is minimized for high - intensity pulses that produce a kerr - lensing effect .the different cavity modes add in - phase at a particular point in the cavity and the difference in frequency of different cavity modes is therefore an integer multiple of the pulse repetition frequency .the latter places some requirements on the dispersion in the cavity , as determined by the wavenumber as a function of frequency . using a taylor series expansion of the wavenumber about the carrier frequency , one can distinguish among the different types of contributions , respectively associated with the phase velocity , the group velocity , the group velocity dispersion , and so forth . to have a constant mode separation , the group velocity dispersion and all higher order terms need to be zero .for kerr - lens mode - locking , a special subsystem ( using for example , a pair of prisms ) is used to compensate for the group velocity dispersion .the effect of the remaining undesired terms could be reduced due to the effect of injection mode locking . herewe ll simply assume that all these undesired terms are zero .the phase velocity and group velocity determine the mode spacing .they are also in part responsible for an offset between the lowest harmonic and zero that is not an integer multiple of .the spectrum that is thus produced can be expressed by , \end{aligned}\ ] ] where is the envelop function representing the shape of the overall spectrum centered around the carrier frequency in both halves of the spectrum , is the carrier - envelop offset - frequency , which represents the offset between the comb frequencies and the harmonic grid frequencies , defined such that .here we also express the spectrum in terms of a comb - function given by and a convolution process denoted by .note that is a one - sided spectrum .the full spectrum is given by .\label{volspek}\ ] ] on both sides , one can let the summation run from to ( to get the comb - function ) , because the additional dirac delta functions will fall outside the region where is nonzero and thus wo nt contribute .since the time - signal associated with the full spectrum is a real valued function , we have that . to obtain the time - signal in terms of a pulse train , we can perform an inverse fourier transform on the spectrum . 
however , here we are only interested in the spectrum .the power spectral density is the modulus square of the spectrum . for this purposewe need to add both sides of the spectrum as in ( [ volspek ] ) . however , to compute the power spectral density , we need to treat the dirac delta functions with care .one can assume that the pulse train is multiplied by an overall envelop function that limits the time duration of the pulse train on the time domain .this will convolve the comb spectrum with a narrow function , converting the dirac delta functions into narrow spectral component functions .we also assume that this envelop function and thus also its spectrum are finite energy functions . after taking the modulus square of the spectrum, one can convert the squares of the narrow functions back into dirac delta functions , with the understanding that one would in the process pick up a factor of an extra dimenson parameter .this dimension parameter can be absorbed into so that we do nt need to show it explicitly .the result can be expressed as \nonumber \\ & = & \frac{1}{4 } |p(\nu-\nu_c)|^2 \sum_{m=0}^{\infty } \delta(\nu - \nu_{\rm ceo } - m \nu_{\rm rep } ) \nonumber \\ & & + \frac{1}{4 } |p(\nu+\nu_c)|^2 \sum_{m=0}^{\infty } \delta(\nu + \nu_{\rm ceo } + m \nu_{\rm rep } ) \nonumber \\ & = & \frac{1}{4 } \left [ { \cal s}(\nu ) + { \cal s}(-\nu ) \right ] . \label{volspek0}\end{aligned}\ ] ] note that the cross terms fall away , because they do nt overlap on the frequency domain .information about the coherence of a laser source is contained in its power spectral density .in fact , the inverse fourier transform of the power spectral density is the mutual coherence function .the coherence of the frequency comb laser light is affected by the statistical properties of the carrier - envelop offset frequency and the pulse - repetition frequency . to take the statistical properties of these quantities into account ,we need to treat them as random variables and evaluate the power spectral density as an ensemble average .we ll do this for the positive frequency term where represents the ensemble average . expressing the dirac delta function in terms of its fourier transform, we obtain \ { \rm d}\xi \right\rangle \nonumber \\ & = & |p(\nu-\nu_c)|^2 \sum_{m=0}^{\infty } \int \exp({{\rm i}}2\pi \xi f ) \left\langle \exp[-{{\rm i}}2\pi \xi(\nu_{\rm ceo } + m \nu_{\rm rep } ) ] \right\rangle\ { \rm d}\xi , \nonumber \\\end{aligned}\ ] ] where is an auxiliary integration variable .since the random variables only appear in the exponential function under the integral , one can restrict the ensemble averaging to this exponential function . by assuming that the two random variables are statistically independent , we can separate the ensemble average into the product of two ensemble averages \rangle = \langle \exp ( - { { \rm i}}2\pi \xi \nu_{\rm ceo } ) \rangle \langle \exp ( - { { \rm i}}2\pi \xi m \nu_{\rm rep } ) \rangle .\ ] ] we also assume that the random variables are normally distributed , so that one can express their probability density functions by , \ ] ] where is the mean of the distribution and is its standard deviation . 
the subscript can either denote ` ceo ' or ` rep ' to represent the carrier - envelop offset frequency or the pulse - repetition frequency , respectively .one can now evaluate the ensemble averages of the exponential functions .first , we redefine the random variable by replacing .the new random variable has a zero mean .this leads to where denotes the remaining quantities and constants in the argument of the exponential function . by expanding the remaining exponential under the ensemble average as a taylor series and evaluating the individual moments, one can show that , for a normally distributed random variable with a zero mean , one obtains hence , when this is applied to the ensemble average , we find \rangle & = & \exp[-{{\rm i}}2\pi \xi(\mu_{\rm ceo}+ m \mu_{\rm rep})]\nonumber \\ & & \times \exp[-2\pi^2 \xi^2(\sigma_{\rm ceo}^2 + m^2 \sigma_{\rm rep}^2 ) ] .\end{aligned}\ ] ] the integral over then leads to \rangle\ { \rm d}\xi \nonumber \\ & = & \frac{1}{\sqrt{2\pi ( \sigma_{\rm ceo}^2 + m^2 \sigma_{\rm rep}^2 ) } } \exp \left [ \frac { -(\nu - \mu_{\rm ceo } - m \mu_{\rm rep})^2}{2 ( \sigma_{\rm ceo}^2 + m^2 \sigma_{\rm rep}^2 ) } \right ] .\end{aligned}\ ] ] the power spectral density for the positive side then becomes .\end{aligned}\ ] ] it consists of gaussian components that become progressively broader as increases .note that we can extend the summation to , because those component that are thus added will fall outside the envelop function and would therefore not contribute .the same applies for the negative side of the spectrum .the full power spectral density , according to ( [ volspek0 ] ) , is given by \nonumber \\ & & + \frac{1}{4 } \sum_{m=-\infty}^{\infty } \frac{|p(\nu+\nu_c)|^2}{\sqrt{2\pi ( \sigma_{\rm ceo}^2 + m^2 \sigma_{\rm rep}^2 ) } } \exp \left [ \frac { -(\nu+\mu_{\rm ceo}+m\mu_{\rm rep})^2}{2 ( \sigma_{\rm ceo}^2 + m^2 \sigma_{\rm rep}^2 ) } \right ] .\label{kampsd}\end{aligned}\ ] ] if we substitute ( [ kampsd ] ) into ( [ eenmeng2 ] ) , we obtain an expression for the single - photon quantum state of a frequency comb . upon subsituting this then into ( [ kankwa ] ), one would obtain the expression for the multi - photon quantum state of a frequency comb . , , and in arbitrary units . for comparison , the gaussian envelop functionis shown ( blue dashed line ) with and in arbitrary units . ] to see what such a power spectral density looks like , we provide a curve in fig .[ psd ] , where we selected values for the parameters that , although perhaps not realistic , would demonstrate their effect on the curve . for this purposewe model the envelop function as a gaussian function , \label{modelenv}\ ] ] where and in arbitrary units .the remaining parameters are chosen as , , and in the same arbitrary units .one can identify the individual frequency components in the frequency comb , broadened by the incertainties in and .the uncertainty in causes the broadening to increase for higher frequency components . 
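a sketch of this broadened comb structure is easy to generate numerically from the positive - frequency half of ( [ kampsd ] ) with a gaussian envelope . the parameter values below are purely illustrative ( the values actually used for the figure are quoted only in arbitrary units ) , and the suppression of the higher - frequency peaks discussed next is visible in the resulting curve .

```python
import numpy as np

def comb_psd(nu, nu_c, sigma_env, mu_ceo, mu_rep, s_ceo, s_rep, m_max=200):
    """gaussian-envelope frequency-comb power spectral density with comb lines
    broadened by the uncertainties in the carrier-envelope offset and repetition
    frequencies (positive-frequency half only)."""
    envelope2 = np.exp(-((nu - nu_c) ** 2) / (2.0 * sigma_env ** 2))   # |p(nu - nu_c)|^2
    lines = np.zeros_like(nu)
    for m in range(m_max + 1):
        var = s_ceo ** 2 + m ** 2 * s_rep ** 2                         # broadening grows with m
        lines += np.exp(-((nu - mu_ceo - m * mu_rep) ** 2) / (2.0 * var)) / np.sqrt(2.0 * np.pi * var)
    return 0.25 * envelope2 * lines

# illustrative (not the paper's) parameter values, in arbitrary units
nu = np.linspace(0.0, 20.0, 20000)
psd_pos = comb_psd(nu, nu_c=10.0, sigma_env=3.0, mu_ceo=0.25,
                   mu_rep=1.0, s_ceo=0.02, s_rep=0.01)
# the full psd adds the mirrored negative-frequency half, negligible for nu > 0 here
print("peak of the positive-frequency psd:", float(psd_pos.max()))
```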
at the same timethey are suppressed relative to the components at lower frequencies , as can be seen by comparing the peak amplitudes of these components to the shape of the envelop function , shown as the blue dashed curve in fig .an expressed is derived for the quantum state of a frequency comb , in terms of its power spectral density .we specifically considered the effect of uncertainties in and to compute the power spectral density .these uncertainties give rise to mixing in the quantum state with the result that the frequency comb need to be expressed as a mixed state .to express such a mixed quantum state that depends on frequency as a degree of freedom in addition to the particle - number degree of freedom , we develop a specific quantum representation of the density matrix that incorporates both these degrees of freedom . to obtain the expression for the power spectral densitywe start by considerating of the kerr - mode locking cavity .although the basic expression for the spectrum thus obtained is the same as found in literature , our derivation provides a complete self - contained picture underlying the expressions .the -function , which is introduced in ( [ korfdef ] ) , is defined as without the -factor , the limit process would produce a dirac delta function , but the properies of the -function differs from that of the dirac delta function .for the -function is also zero , just like the dirac delta function however , for , one can easily show that hence , in the limit the -function becomes the -function is a function of measure zero , as can be readily shown more generally , for any function that has a finite function value , one finds \ { \rm d}\nu\ { \rm d}t \nonumber \\ & = & \lim_{t\rightarrow\infty } \frac{1}{t } \int_{-t/2}^{t/2 } g(t ) \exp(-{{\rm i}}2\pi \nu ' t)\ { \rm d}t = 0 .\end{aligned}\ ] ] the functions that we consider here are often unbounded ( have infinite energy ) , so that without the , these functions would diverge in the limit where . using ( [ epsfunc ] ) , one can readily derive a number of properties of the -function .for instance , it directly follows that , and .another important aspect is the trace of multiple factors of the single - photon density matrix .consider for instance the result produces two possible ways in which the bra - vectors can be contracted onto the ket - vectors \nonumber \\ & & \times \epsilon(\nu_1-\nu_2 ) s(\nu_1 ) \epsilon(\nu_3-\nu_4 ) s(\nu_3)\ { \rm d}\nu_1 ... { \rm d}\nu_4 \nonumber \\ & = & \int\!\!\!\int \epsilon(0 ) s(\nu_1 ) \epsilon(0 ) s(\nu_3)\ { \rm d}\nu_1\ { \rm d}\nu_3 \nonumber \\ & & + \int\!\!\!\int \epsilon(\nu_1-\nu_3 ) s(\nu_1 ) s(\nu_3)\ { \rm d}\nu_1\ { \rm d}\nu_3 \nonumber \\ & = & \left [ \int s(\nu)\ { \rm d}\nu \right]^2 = \left({\rm tr } \{\hat{\rho}_1\ } \right)^2 .\end{aligned}\ ] ] so we see that the trace of a product of single - photon density matrices always reduce to the product of the traces of the individual single - photon density matrices .the research was done with the partial support of a grant from the national research foundation ( nrf ) .10 d. hayes , d. n. matsukevich , p. maunz , d. hucul , q. quraishi , s. olmschenk , w. campbell , j. mizrahi , c. senko , and c. monroe .entanglement of atomic qubits using an optical frequency comb ., 104:140501 , 2010 .neil sinclair , erhan saglamyurek , hassan mallahzadeh , joshua a. slater , mathew george , raimund ricken , morgan p. 
hedges , daniel oblak , christoph simon , wolfgang sohler , and wolfgang tittel .spectral multiplexing for scalable quantum photonics using an atomic frequency comb quantum memory and feed - forward control ., 113:053603 , 2014 .runai quan , yiwei zhai , mengmeng wang , feiyan hou , shaofeng wang , xiao xiang , tao liu , shougang zhang , and ruifang dong .demonstration of quantum synchronization based on second - order quantum coherence of entangled photons ., 6:30453 , 2016 .tai hyun yoon , adela marian , john l. hall , and jun ye .phase - coherent multilevel two - photon transitions in cold rb atoms : ultrahigh - resolution spectroscopy via frequency - stabilized femtosecond laser . , 63:011402 , 2000 .
uncertainties in the frequency parameters of a frequency comb laser , causes it to represent a mixed quantum state . the formulation of such a quantum state is compounded by the fact that it contains both particle - number degrees of freedom and temporal degrees of freedom . here we develop a formalism in terms of which such a mixed quantum state can be expressed . for this purpose we also need to compute the expression for the power spectral density of the frequency comb laser . we do so by taking the uncertainties in the frequency parameters into account .
when the behavior of each individual in a group is dependent on their interactions with others around them , the collective behavior of the group as a whole can be surprisingly different from what would be expected by simply extrapolating off that of the individual .in particular , people think and behave differently in crowds than in small scale settings , and this crowd behavior can occasionally lead to tragic events and even human stampedes .individuals tend to form groups spontaneously and engage in collective decision - making outside of such dramatic events as well , but the nature of this type of herding and the extent to which it happens depends on how outnumbered the group is compared to the reference population .for example , friendship networks of adolescents demonstrate greater social homophily if they are in the minority , whereas majority members do not share this preference .this phenomenon is in line with the description by simmel who argued that individuals `` resist being leveled '' in a crowd .if , however , the minority group is too small to form an independent community , it is possible for the minority to show heterophily rather than homophily .this finding highlights the importance of the surrounding social context , in particular the relative size of the group .social homophily can also lead to spatial homophily and thereby give rise to segregation . while the term homophily is used to mean different things , we use it here to refer to the tendency for people who are similar to be associated with one another regardless of the mechanism that causes this association .this use of the term is distinct from quantifying homophily by the frequency of associations among similar people , since people in the majority will have a greater frequency of associations with others in the majority simply due to having more opportunities for forming them .while several studies have investigated homophily of racial groups on smaller scales , we explore how such homophilous tendencies might persist on a much larger macroscopic scale .the behavior of individuals in a classroom can not be used to extrapolate onto the behavior of those packed into a crowd of millions .the kumbh mela is a religious hindu festival that has been celebrated for hundreds of years , and the 2013 kumbh mela , organized in allahabad , stands out from all others today and throughout history due to its magnitude .as it is infeasible to collect demographic data from millions of participants , we turned to call detail records ( cdrs ) that have been used to investigate social networks , mobility patterns , and other massive events .cell phone operators routinely maintain records of communication events , mainly phone calls and text messages , for billing and research purposes .these communication metadata , at minimum , keep track of who contacts whom , when , and for how long ( voice calls only ) . 
using these call detail records ( cdrs ) , we first estimate the attendance of each of 23 states of india at the event before investigating the relationship between a state s attendance and the degree of both social homophily and spatial homophily amongst its attendees .we had access to cdrs for one indian operator for the period from january 1 to march 31 , 2013 .this dataset contains records of 146 million ( 145,736,764 ) texts and 245 million ( 245,252,102 ) calls for a total of 390 million ( 390,988,866 ) communication events .given the logistical impossibility of collecting demographic , linguistic , or cultural attributes of kumbh participants at scale , we based our investigation of homophily on a marker that acts as a proxy for these covariates , namely , cell phone area codes .the area codes correspond to different states of india , and as a result of india s states reorganization act of 1956 these divisions summarize demographic variability along linguistic origin , ethnic agglomeration , and preexisting social bonds and boundaries .while cdrs readily lend themselves to studying social networks and social homophily , to investigate spatial homophily we additionally acquired access to the cell tower ids at the kumbh venue .combined with the latitude and longitude of each of the 207 towers at the site , we were able to infer the caller s location ( at the time of phone - based communication ) with relatively high spatial resolution .the grid that divides the kumbh site into regions around each cell tower , called the voronoi tessellation , groups all points on the map closest to each cell tower .the birds - eye view of allahabad in fig .1 shows the estimated attendance on one of the busiest and most favorable days for ritual bathing in the ganges river .* figure 1 .cell phone usage around the cell towers at the kumbh during its busiest day . *the heat map polygons represent the voronoi tessellation around the cell towers that occupied the site of the kumbh mela event in allahabad , india .cell towers with no activity are removed from the analysis and their voronoi cells are merged into neighboring active cell towers .map data used to produce the river traces : google , digitalglobe .extrapolating population measures from cdrs has become feasible in recent years due to the rapid increase in the prevalence of cell phones .while cdrs provide raw counts of cell phone users , to estimate attendance , these numbers need to be adjusted by ( i ) overall prevalence of cell phones in india , ( ii ) the state - specific market shares of our provider , ( iii ) the probability of daily use for a person known to be present at the venue , and ( iv ) the probability of phone non - use during a person s entire stay at the venue .first , regarding overall phone prevalence , of people in india had a wireless subscription in 2013 .second , regarding market share , the number of unique handsets are counted on a daily basis for each of 23 distinct states of india ( table s1 ) , as defined by the service provider , and each count is extrapolated from the service provider s market share in the given states . 
the service provider s market share varies widely state by state ( range , ) .it is important to use state - specific market share , because if average market share is used instead , the state - specific attendance counts can be off by more than a factor of .these handset counts are added together for each day before extrapolating to the general population .third , regarding daily use , it is likely that many kumbh attendees who use their phone at least once do not use their phone every day while at the festival .if not addressed , this would bias our population estimate downwards . by tracking phone activity , length of staycan be estimated based on the time period a person s phone is active while at the kumbh .based on this , we estimate the percentage of customers who use their phone on any given day during their stay conditional on them using their phone at least once during their stay to be .( note that this quantity applies to daily estimates , not to cumulative estimates .fourth , regarding non - use , the probability of a person not using his or her phone during the entire stay at the venue is difficult to account for ; these individuals are not visible in the observed data , and yet the proportion of non - users could potentially be substantial given that many visitors from outside regions would have to pay roaming fees , which likely leads them to minimize their phone use .to overcome this difficulty , we first examine four available daily population projections , each for a different day , and calibrate the proportion of non - users such that our resulting daily estimate for that same day is most consistent with the four daily projections .we obtain an estimate of for non - use ( coincidentally similar to obtained above for daily use ) and we use this estimate to adjust both cumulative and daily attendance . a social network is constructed between customers who used their phone at the kumbh .a network edge is assumed between any two people who communicated with one another at any point over the course of the kumbh .to study how a state s extent of social homophily is related to its level of representation , defined as the number of people present from the state divided by the total kumbh attendance , we select a measure that results in consistent estimates of homophily regardless of state representation .the measure of social homophily considered in refs . applied to our setting would define homophily for any given state as the proportion of ties that involve two participants from that state , but due to measuring absolute differences instead of relative differences , the homophily for states with small representation would be biased downwards due to their small proportions .a standard stochastic block model ( sbm ) approach applied to our setting would assume an equal likelihood of forming network ties between any two participants from the same state .however , if this model is misspecified and there exist additional social structure within each state ( within each block ) , as is almost certainly the case , then this approach is likely biased in the opposite direction and overestimates the social homophily in states with lower representation .the biases of both these methods are discussed in further detail in . to circumvent these problems , we shift our focus from dyads to same - state connected triples , sets of three nodes from the same state that are connected either by two edges , resulting in an open triple , or three edges , resulting in a closed triple . 
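as a concrete illustration of this triple - based measure ( the rationale for the choice is given next ) , the proportion of closed same - state triples is simply the transitivity of each state - induced subgraph , which can be computed directly with standard network tools . the sketch below uses a small hypothetical network and hypothetical state labels ; the real input is the kumbh communication network described above .

```python
import networkx as nx

def state_transitivity(g, state_of):
    """proportion of same-state connected triples that are closed, computed as the
    transitivity of each state-induced subgraph."""
    result = {}
    for s in set(state_of.values()):
        nodes = [v for v, st in state_of.items() if st == s]
        result[s] = nx.transitivity(g.subgraph(nodes))   # 3 * triangles / connected triples
    return result

# hypothetical toy network and state labels
g = nx.Graph([(1, 2), (2, 3), (1, 3), (3, 4), (4, 5), (5, 6), (4, 6), (2, 5), (4, 7)])
state_of = {1: "a", 2: "a", 3: "a", 4: "b", 5: "b", 6: "b", 7: "b"}
print(state_transitivity(g, state_of))   # {'a': 1.0, 'b': 0.6} for this toy example
```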
the rationale behindthis choice is that the three nodes in a connected triple can be assumed to belong to the same social group whether the triple is open or closed . by considering the propensity for same - state connected triples to be closed , we can gain insight intohow densely connected the social groups are in which these triples are embedded .this approach is a way of sampling pairs of nodes from the same social group even when the social groups themselves are unobserved .the proportion of triples that are closed provides a natural measure of social homophily ( see fig .this measure is commonly referred to as the global clustering coefficient or the transitivity index calculated over each state - specific network . ignoring residents from the local state whose phone use is likely different from all other states , there are 1,630,553 connected triples in the full kumbh social network . * figure 2 .schematics of homophily measures ( a ) and call detail records ( b ) . * for homophily measures ( * a * ) , the three dotted lines represent spatial boundaries for the voronoi tessellation around the cell towers , separating the shaded region into three voronoi cells , in two ( a low and high homophily ) examples .the solid lines denote which nodes are in communication in the social network , either through voice call or text message . in the context of spatial homophily ,two nodes are considered nearby if and only if they both are in the same spatial region ( voronoi cell ) on the same day .the size of voronoi cells range from as small as a to as large as . for the call detail records ( * b * ) , analysis of spatial homophily uses all pairwise communication events involving at least one customer of our operator who is present at the kumbh , whereas analysis of social homophily only considers the ties between customers of our operator . letting the triple is closed and if it is open , and let be the state of the three nodes in the triple , with as the proportion of the total cumulative kumbh population by march 31 , 2013 , that belongs to state . across the 22 non - local states , ranges from to , thus varying over 2.5 orders of magnitude .we fit the following regression model over all connected triples : model requires independence between observations for accurate inference , and because the same individual can be involved in multiple triples , this independence does not hold .the estimate from is still unbiased , but its standard error and the -value for the two - sided test of the null hypothesis will not be correct if this dependence is ignored . taking advantage of the large sample size ,for accurate inference we select a random subset of triples where we do not allow the same individual to appear in more than one triple .let be the number of customers near cell tower from state on day of the kumbh , and let be the total number of customers from state at the kumbh on day , where the sum is taken over all cell towers . 
to avoid double - counting ,if a person uses multiple cell towers on the same day , only the first cell tower is recorded .the probability that any two given individuals from the state are nearby on the day is : here two people are defined to be `` nearby '' on a particular day when they are both located in the same voronoi cell on that day , using the cell tower designation mentioned above .the intuition behind equation [ nearbyprob ] is that , given the location of one person , the probability a different randomly selected person from their state is in the same voronoi cell is .the probability in equation [ nearbyprob ] has the desirable property of not scaling with state representation if spatial homophily is kept constant .. if we then increase the number of people present at the kumbh from that state , will stay essentially unchanged with a negligible increase , because for any . ]this property is essential if we wish to evaluate the relationship between spatial homophily and state attendance / representation . finally , let be the probability that any two given individuals from state are nearby averaged over all 90 days . to evaluate busy , or high volume , days , we consider the three days with the highest attendance .we grouped each of these three days together along with the two days that preceded each and the two days that followed each , leading to a set of 15 days we labeled as high volume days .the remaining 75 days were grouped together to form the set of low volume days .we let be the average of the over the high volume days , be the average of the over the low volume days , and we defined to be the ratio of spatial homophily when comparing high volume days to low volume days .since the extent of homophily for any given group can depend on the relative size of that group compared to others , we first estimate daily and cumulative attendance for participants from each state which can then simply be added up to obtain overall attendance estimates .existing estimates of the kumbh s attendance vary widely and most are obtained with heavy extrapolation based on rough head counts combined with the rate of flow at high traffic points leading to the kumbh venue .these estimates have the limitation that they only look at the primary entrances into the kumbh and ignore traffic flow from secondary entrances . and while daily estimates can be inferred from traffic flow or satellite images , cumulative attendance is more difficult to obtain , because a satellite image can not tell if the same people are present for many weeks , or if people stay only a short time before leaving to be replaced by newcomers . our estimates for the total daily and cumulative attendance are shown in fig .they clearly show a spike of attendance on each of the kumbh s three primary bathing days .these days hold special religious significance and bathing on these days is seen to be particularly auspicious .based on the above numbers , we estimate the peak daily attendance of the 2013 kumbh on february 10th to be million , and the total cumulative attendance from january 1 to march 31 to be million , which suggests that the event was the largest recorded gathering in humanity s history . a sensitivity analysis in fig .3 shows the cumulative attendance if the percent of customers that are non - users is varied from the estimated . 
for example , if the percent of customers that are non - users is , then the cumulative attendance sinks to million , whereas if the percent of customers that are non - users is , then the cumulative attendance rises to million .* figure 3 .estimates for daily and cumulative attendance at the kumbh . * the cumulative ( * a * ) and daily ( * b * ) attendance at the kumbh is estimated from january 1st , 2013 , to march 30th , 2013 .daily estimates are the number of unique handsets used extrapolated by the ( i ) the national prevalence of cell phones , ( ii ) state - specific market share of the service provider , ( iii ) the likelihood of inactivity on a daily basis , and ( iv ) the proportion of individuals who never use their phone ( non - users ) .cumulative estimates are extrapolated only by ( i ) , ( ii ) , and ( iv ) , which accounts for the apparent difference between daily and cumulative counts on january 1st .the sensitivity of total cumulative attendance to changes in ( iv ) shows the importance of accounting for this form of censoring in the data * ( c)*. the curve plotted is , where .we investigate social homophily among the residents of the 23 states , using state - specific attendance estimates , by constructing a social network of kumbh attendees .the network nodes correspond to people and edges correspond to one or more pairwise communication events between people .note that only communication events involving the service provider s customers present at the kumbh venue are observed ( see fig .2 ) , and both parties must be customers of the provider to be included in the network so that their state of residence can be ascertained .the resulting network contains 2,130,463 nodes and 8,204,602 ties .the network is constructed using the full three month period using both text and call information combined because otherwise the network would become too sparse if segmented .when there is strong social homophily in a state , the connected triples in the social network among attendees from that state will have an increased likelihood of being closed .after fitting model we find that there is strong negative association between social homophily and state representation .the model fit has an estimate of , ci , implying that a ten - fold increase in corresponds to an decrease in the expected proportion of closed triples .the analysis restricted to a subset of independent triples yields a -value less than and this significance remains robust to the subset selected .this analysis reduces sample size and sacrifices some statistical power by looking only at a subset of independent triples in order to allow for accurate statistical inference . 
even then, the -value remains very significant , providing strong evidence that minority states at the kumbh tend to show significantly greater social homophily as compared to well represented states .does the finding of heavily outnumbered states being more tightly - knit in their social networks apply to spatial homophily as well ?we use our knowledge of which cell tower is used by a caller to approximate caller location .let be the probability that any two given individuals from state are physically nearby averaged over all 90 days of the kumbh .the and their confidence intervals are illustrated in fig .4 , with ranging between and , reflecting over a 7-fold difference in the propensity for spatial homophily across states , with a mean value of .states with low representation tend to be more spatially homophilous than states with high representation .in contrast , the local people from the eastern uttar pradesh , where the kumbh mela takes place , alone make up a majority at the kumbh , and they show significantly less spatial homophily .overall , there is a strong negative correlation ( pearson s ) between spatial homophily ( ) and average logarithmic daily representation at the kumbh .* figure 4 .the spatial homophily and representation of the 23 mainland states of india at the kumbh . *the point estimates and confidence intervals for , the probability that any two given customers from state are physically close to one another , ( * a * ) and , the relative increase of state s spatial homophily on busy days compared to normal days , ( * b * ) , both demonstrate an inverse relationship with state representation .the states have been ranked first by representation at the kumbh ( * c * ) and then by degree of spatial homophily ( * d * ) ( see for the list of state names ) .the heat map colors correspond to the rankings .the yellow star is the city of allahabad , the location of the 2013 kumbh mela .the near inversion of colors when comparing the two panels demonstrates a clear negative association between state representation and spatial homophily .the average spatial homophily above was computed over the full three - month period , but it is conceivable that spatial homophily is a dynamic characteristic that varies from day to day , reflecting the changing compositions of different social groups .we conjectured that the extent of spatial homophily might be different on the three primary bathing days of february 10 , february 15 , and march 10 as compared to the other less crowded days . to test this ,we define to be the ratio of spatial homophily on crowded , high volume , days relative to spatial homophily on lower attendance days for state .4 shows that states with low representation tend to have a greater increase in spatial homophily on the high volume days .participants from these underrepresented states appear particularly sensitive to increase in crowds , and they seem to group together more closely as the crowds build up. 
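a sketch of how the daily nearby - probability and the busy - day ratio described above could be computed from tower - level counts is given below . the exact expression behind equation [ nearbyprob ] is not reproduced in the text , so the form used here , which sums over towers the number of same - state pairs sharing a voronoi cell and divides by the number of same - state pairs overall , is an assumption consistent with the stated intuition ; the tower counts are placeholders .

```python
import numpy as np

def nearby_prob(counts_by_tower):
    """P(two randomly chosen same-state attendees share a Voronoi cell on a given day),
    assuming the form sum_j n_j*(n_j-1) / (n*(n-1)); counts_by_tower holds the state's
    head count at each tower on that day."""
    n_j = np.asarray(counts_by_tower, dtype=float)
    n = n_j.sum()
    if n < 2:
        return np.nan
    return float((n_j * (n_j - 1)).sum() / (n * (n - 1)))

def busy_day_ratio(daily_tower_counts, busy_days):
    """Mean nearby-probability on busy days divided by the mean on the remaining days."""
    q = {day: nearby_prob(counts) for day, counts in daily_tower_counts.items()}
    busy = np.nanmean([v for d, v in q.items() if d in busy_days])
    rest = np.nanmean([v for d, v in q.items() if d not in busy_days])
    return busy / rest

# toy check: concentrating the same 100 people on fewer towers raises the probability
print(nearby_prob([25, 25, 25, 25]))   # ~0.24
print(nearby_prob([70, 10, 10, 10]))   # ~0.52
```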
some of the states with high representation are more robust to changes in the crowd size .in fact , there were seven states that had the opposite effect ( though these effects were quite mild in comparison ) .there is a gap between the top four most represented states at the kumbh ( uttar pradesh east , madhya pradesh , bihar , and delhi ) and the remaining states .these four well - represented states all showed less spatial homophily on the busier days .overall there is moderate negative correlation ( pearson s ) between and average logarithmic daily representation at the kumbh .we used cdrs to estimate daily and cumulative attendance at the 2013 kumbh mela which , according to our analyses , represents the largest gathering of people in recorded history . while participants from all states demonstrated social and spatial homophily , these phenomena were stronger for the states with low representation at the event and were further amplified on especially crowded days .given that a person may not use their phone immediately upon arriving or before leaving the kumbh , it is likely that the duration of stay as estimated by their phone usage is truncated . to account for this censoring , a model for daily phone usageis required that can estimate the amount of censoring .we chose the simple model that assumed that each person had some independent probability of using their phone on each day .while this model is intuitive and provides suitable estimates for the amount of censoring , it may be the case that phone usage is captured better by a more complicated and involved model . though we consider the proportion of connected triples that are closed in the kumbh social network as a way of measuring the homophilous tendencies of attendees from each state , we draw a distinction between this measure and what is more commonly known as triadic closure . in the social network context, triadic closure is the mechanism by which connections are formed through a mutual acquaintance .however , since we do not observe when the original network ties are formed , we can not comment on triadic closure as a causal mechanism for tie formation .our observations avoid a causal connotation and focus instead on observed associative measures .our finding on spatial homophily is compatible with the phenomenon of `` associative homophily , '' which states that at a social gathering a person is more likely to join or continue engagement with a group as long as that group contains at least one other person who is similar to her . because every group is likely to have at least one person from the majority , associative homophily plays a relatively weak role for someone in the majority as she will be comfortable in almost every group . on the other hand ,a person in the minority may have to actively find a group that contains another person similar to him , inflating the minority group s apparent homophily .this framework offers one possible explanation for the tighter cohesion of the states at the kumbh with low representation . in conclusion ,whether at the individual , group , or state level , it appears that no one likes to be outnumbered .we all seek safety in numbers .* supplementary text .* extended discussion of how some measures of homophily can be susceptible to confounding with the size of the subgroup .the names and corresponding market shares of the 23 mainland states of india is listed . 
some intuition for how censoring takes effect is also included .* figure s1 .stochastic block model edge probabilities by state .* the represent the probability that any two random nodes in state share an edge , assuming this probability is the same for all pairs of nodes in state .the strong association between this probability and state representation is heavily biased under model misspecification , as is likely the case here , exaggerating the result .the baseline probability is calculated assuming no block structure , i.e. all nodes have the same probability of being connected to one another regardless of state membership .* figure s2 .simple illustration of the bias produced by the stochastic block model under model misspecification . * social groups are displayed in blue , and are assumed to all be of equal size .the probability that two people in the same social group share an edge is .the probability that two people in different social groups share an edge is .states a and b are constructed to have identical homophily , i.e. the probability of an edge between two people in the same social group is the same for both states .the average edge probability displayed takes the average over all possible pairs of nodes in the state .* figure s3 .schematic for estimation of the probability of phone usage on any given day . *each square represents a different day , and it is assumed that a person arrives at and departs from the kumbh only once .the estimated proportion of days a phone is used is calculated as the total number of days a phone is used summed across all customers , divided by the length of stay summed across all customers .* table s1 .state acronyms and operator market share . *the acronyms for the twenty - three telecommunications states in india used by the operator are listed .in addition , the market share of the operator , as measured by the percentage of the total number of people in the state with some form of subscription to a phone plan , is given for the month of january 2013 .* supplementary information . * network data taken over the full duration of the kumbh mela . daily handset count data , stratified by state .ib and jpo are supported by harvard t.h .chan school of public health career incubator award to jpo .tk is supported by the hbs division of research .the authors declare no conflict of interest .the authors would like to thank gautam ahuja , clare evans , gokul madhavan , daniel malter , and peter sloot for contributing their helpful comments , suggestions , critiques and discussion , and would like to thank the operator for providing access to their data . a special thanks to the operator for both providing us access to their data and accommodating us on their campus grounds as we worked on the analysis . in particular , we wish to express our thanks to employees rohit dev and vikas singhal for their assistance .
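to make the bias sketched in figure s2 above concrete , the toy computation below compares the block - level average edge probability for a small state ( one social group ) and a large state ( many equally sized groups ) with identical within - group homophily and no between - group ties ; a one - block - per - state model assigns the small state a much higher edge probability even though the underlying homophily is the same . the group sizes and probabilities are illustrative only .

```python
def block_average_edge_probability(n_groups, group_size, p_in, p_out=0.0):
    """Average edge probability over all pairs in a 'state' made up of several
    equally sized social groups -- the quantity a one-block-per-state model estimates."""
    n = n_groups * group_size
    pairs_total = n * (n - 1) / 2
    pairs_within = n_groups * group_size * (group_size - 1) / 2
    pairs_between = pairs_total - pairs_within
    return (pairs_within * p_in + pairs_between * p_out) / pairs_total

# identical within-group homophily (p_in = 0.5), no between-group ties (illustrative numbers)
print(block_average_edge_probability(n_groups=1, group_size=10, p_in=0.5))   # small state: 0.50
print(block_average_edge_probability(n_groups=50, group_size=10, p_in=0.5))  # large state: ~0.009
```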
macroscopic behavior of scientific and societal systems results from the aggregation of microscopic behaviors of their constituent elements , but connecting the macroscopic with the microscopic in human behavior has traditionally been difficult . manifestations of homophily , the notion that individuals tend to interact with others who resemble them , have been observed in many small and intermediate size settings . however , whether this behavior translates to truly macroscopic levels , and what its consequences may be , remains unknown . here , we use call detail records ( cdrs ) to examine the population dynamics and manifestations of social and spatial homophily at a macroscopic level among the residents of 23 states of india at the kumbh mela , a 3-month - long hindu festival . we estimate that the festival was attended by 61 million people , making it the largest gathering in the history of humanity . while we find strong overall evidence for both types of homophily for residents of different states , participants from low - representation states show considerably stronger propensity for both social and spatial homophily than those from high - representation states . these manifestations of homophily are amplified on crowded days , such as the peak day of the festival , which we estimate was attended by 25 million people . our findings confirm that homophily , which here likely arises from social influence , permeates all scales of human behavior .
isps and application service providers have a strong interest in understanding network and application performance to make sure that their customers are satisfied .in addition to passive traffic monitoring inside the network , performing active measurements at the endpoints is gaining importance as a tool for observing long - term network behavior as well as for investigating and diagnosing network failures .measurement endpoints include infrastructure nodes such as access routers and set - top boxes as well as user devices such as personal computers , smartphones , and tablets .typical metrics , e.g. , as defined by the ip performance metrics ( ippm ) working group are round trip delay , one way delay , ip packet delay variation , average tcp / udp throughput , average fractional loss , dns latency , among others . aggregating performance metrics from many measurement points by an internet service provider ( isp ) , or a measurement service ( e.g. , ripe atlas , samknows , netradar , speedtest , etc . ) allows characterizing the network performance geo - spatially and over time , diagnose outages and observe the impact of the outage , and lastly the collected information helps regulators develop better public policy for the internet . currently , video is the dominant traffic on the internet , in both fixed and wireless networks . in 2012 , 51% of mobiletraffic was video , hence , measuring the performance of video streaming applications is crucial for isps .the video quality at an endpoint is affected by path capacity ( e.g. , media bit rate is higher than the available end - to - end capacity ) , burstiness of video ( e.g. , high motion in the video causes temporary increase in media bit rate , which appears as a traffic burst on the network ) , network packet loss and re - ordering .therefore , transport layer metrics provide valuable input to measuring a viewer s users experience . performinglarge - scale passive measurements raises privacy concerns , because end - users do not want the isp or the measurement service to monitor their traffic .furthermore , metrics from a passive measurement are hard to correlate across measurement points because there might be varying amounts of cross - traffic , which would be difficult to reconcile during analysis . the motivation for collecting the datasets presented in this paper is to explore the characteristics of internet video for the design of active measurement techniques at the endpoint , which is suitable for large scale measurements ( as defined by the ietf lmap working group ) . in order to reflect user experience ,the measurements are based on actual online videos that are popular amongst users , instead of using a single predefined video .however , the diversity in the duration , types and formats of videos available on the internet makes it hard to select an appropriate video for benchmarking user s quality of experience .furthermore , results from tests conducted on different videos can not be compared directly with each other . 
in this paper , we present the analysis of datasets of youtube s popular videos collected between july 2013 to april 2014 .we choose youtube for two reasons : ease of access without logging in and its popularity .the datasets contain information collected for over _ 130000 _ videos from 58 locations using youtube s location - based charts .this paper makes the following four contributions : 1 .we describe the video trends in terms of categories , duration , formats , resolutions , media bit rates and the variation in instantaneous bit rates of the video ( burstiness ) in the current internet .these results can be used for selecting appropriate videos for conducting active measurements .we show that video lengths on youtube follow a lognormal distribution . additionally , the file sizes of different file formats ( webm and mp4 ) and resolutions ( 360p , 720p and 1080p ) also follow lognormal distribution .we observe that the average bit rate and the burstiness of a video when calculated for the first 3 minutes is comparable to the entire duration of the video ( typically at least 10 minutes long ) .since , the time taken and the traffic generated by active tests need to be minimized to avoid any effect on real user traffic , we can use 3 minutes as the cut - off time for our measurements .we show correlation of videos across different resolutions , arguing that it is possible to generate traffic for a higher resolution stream from a lower one or vice versa , by appropriate upscaling or downscaling , respectively .the rest of this paper is organized as follows .we describe related work and the novelty of our work in section [ sec_relatedwork ] .section [ dataset ] describes our datasets and the methodology used for the collection .results and analysis are divided into section [ analysis1 ] and [ analysis2 ] followed by a discussion on the application of our results and future work in section [ discussion ] .we present a model for active measurements that is based on our current setup in section [ lmapmodel ] and conclude with a brief summary and hints at future work in section [ conclusion ] .there are several other studies that characterize youtube videos but the datasets are from 2007 - 08 .initially in 2007 , youtube had a size limit of 100 mb for its videos , which has since been increased to 20 gb .our datasets were collected in 2013 and 2014 and contains full hd ( 1080p ) content as well as files with the webm format , which to our knowledge has not been studied before . in ,the authors use over 20 million randomly selected youtube videos to show that the popularity of videos is constrained by geographical locations .our methodology is in line with this as we gathered all available location - based charts from youtube , giving our dataset regional representation .furthermore , our proposal to lmap for testing video streaming also recommends using location - based charts for measuring user experience .a crowdsourcing study in shows that the qoe for tcp video streaming is directly related to the number and duration of stalls during a video playout . 
in ,the authors build a qoe model based on stalling events for youtube .research has also shown that actively measuring stall events ( with the pytomo tool ) in different isps helps predicting the user experience .the proposals in this paper can complement such a tool ( like pytomo ) by selecting and categorizing videos for active measurements .a more recent study was done for the characterization of an adult video streaming website : the authors findings about the video durations is similar to what we observe in our dataset ; however , we offer an additional in - depth analysis of formats , resolutions and variations in the instantaneous bit rate .since youtube dominates video traffic , our findings can serve as a good comparison point for similar studies on other video streaming services . in ,the researchers study how youtube s block - sending flow control can lead to tcp packet losses .the impact of location , devices and access technologies on user behavior and experience is discussed in .distribution of youtube s cache servers and their selection process was studied in .our work aims at active measurements and is thus relevant to the lmap and ippm wgs .lmap provides a framework for large scale measurements .the model we propose for large scale video performance measurements in section [ lmapmodel ] builds upon the lmap framework .the testing is to be carried out from an lmap measurement agent ( ma ) .regular ( long - term ) active measurements add additional traffic on the network and should run during idle or low user activity periods , so that they do not interfere with other traffic .therefore , both the traffic generated and the extra traffic lasts should be minimized .this implies that we can not run active measurements for tens of minutes or hours to measure performance of a long video . emphasizes the need for stronger descriptions for test streams because of the indeterministic nature of the internet .since we are proposing active measurements using online services , we will be using a variety of test streams , and it is important that they are characterized . in this paper , we propose possible methods to achieve this .we present results from three measurement activities that we did for youtube during 2013 - 2014 .the datasets constitute the list of video urls extracted from the youtube s chart pages for 58 different locations and with popularity defined for differing time periods ( today , last week , last month and all time ) .our first measurements were based on the charts of july 5 , 2013 and we collected the description of the videos available on the youtube page , including the title , category , number of views , likes and dislikes , available formats , resolutions and file sizes .we collected the charts again for september 11 , 2013 and , for this set , in addition to the descriptions as above , we also gathered the date of uploads and file sizes for some selected formats and resolutions . after removing redundancies , there are over 75,000 videos in each of these datasets and over 130,000 unique videos altogether . about 28% of the videos of the july dataset are also present in the dataset of september , of which 85% are from the all time charts . 
at the time, youtube did not provide any support for dynamic adaptive http streaming ( dash ) and the data collected was for non - adaptive videos .youtube introduced the dash format in october 2013 .the dash implementation uses a fragmented mp4 file in which the stream is divided into subsegments for easy switching between different resolutions .currently , the subsegment duration used by youtube as per our dataset is 5 seconds for video tracks .while youtube provides audio and video tracks in a single file for progressive download , the dash streams provides them in separate files . to characterize the variation in instantaneous bit rate ( burstiness ) of video , we collected information about the frame sizes and timestamps for each video into a separate dataset ( _ frame - logs _ ) , which was collected in april 2014 and so includes both dash and non - adaptive video streams .we collected logs for mp4 videos in 360p and 720p resolutions , and dash mp4 videos with 360p , 480p , 720p and 1080p resolutions .table [ tab_ds ] shows a summary of the sizes of these datasets .the data were collected at aalto university using our youtube client , designed to run active measurements for youtube videos .the client is designed to measure youtube performance for end - users using the _ samknows _ whitebox .it uses ` libcurl ` for fetching videos and extracts youtube s video metadata using regular expressions by finding keywords in the first http response .the scope of this paper is to characterize videos to aid in designing better measurement systems and not to actually measure network performance .therefore , we present no results about the quality of the download .youtube uses numeric identifiers called itags for identifying the formats and resolutions of the video .the itags used during this study are listed in table [ tab_itag ] .when collecting _ frame - logs _ , the client is run without a rate adaptation algorithm to collect complete frame information for a single bit rate stream .in this section , we present the analysis of the datasets . the datasets cover all the categories of youtube fairly well , it gives a good idea of the different types of popular videos available on youtube .figure [ fig_cats ] shows the distribution of the video categories and also the cumulative views for each category .the _ music _ category has the highest number of views despite that less than 2% of the videos in the datasets belong to this category .illegally shared videos are quickly removed , hence most of the music videos on youtube are shared by music companies through syndication hubs .currently , the most viewed video on youtube also belongs to the ` music ' category .the lower number of music videos also indicates that , unlike other categories , many of the same music videos are popular across multiple countries , resulting in common results for various locations .internet video is viewed on a number of different devices and hence a range of different resolutions are supported for compatibility reasons .furthermore , new resolutions appear and old ones are discarded .currently , youtube uses a 16:9 aspect ratio for wide screen displays , and provides videos in 7 different resolutions . in our datasetswe observed resolutions as low as 144x176 ( qcif ) to as high as 3072x4096 ( 4k ) , however youtube keeps changing the offered resolutions based on technological needs or internal reasons .we observed that the default resolution of 360p is available for over 99% videos in the datasets . 
if a video is available in mp4 for a particular resolution , it is also available in webm for the same resolution .less than 1% of videos are available in only one of the two formats .the overall availability of the mp4 format is slightly higher in comparison to webm in july , but this gap is not seen anymore in the september dataset ( see figure [ fig_formats ] ) . when youtube introduced dash in oct 2013 , it stopped providing non - adaptive streams for full hd videos .consequently , the dash format uses only the fragmented mp4 format and support for webm was no longer available .the 360p and 480p versions of flv have been discontinued as well , however 240p is still available .the date of upload is available only for videos collected in september , where over 72% of the videos were uploaded in 2013 .the popularity of videos in reference to how long they have been available is shown in figure [ fig_year ] .it shows the per - year distribution of videos and the boxplots for the number of views .the graphs are based only on the worldwide charts and , for the sake of clarity , outliers with more than 500 m views are not shown .the longest youtube video in the datasets is over 11 hours long .since youtube allows some users to upload 10 hour long videos , there are a number of videos that last for more than an hour .the average duration of the videos in the complete dataset is 441 s and the median is 181 s. the video length fits a lognormal distribution but the tail is heavily skewed , with only 15% of the videos having durations longer than 600 s. 50% of the videos in the inter - quartile range ( iqr ) have durations between 71 and 387 seconds .we suggest that this is a good range for active measurements , as the videos are long enough to gather interesting results and yet not so long that the extra traffic starts interfering with user traffic . figures [ figure_histvdolen ] and [ figure_ecdfvdolen ] illustrate the suitability of the lognormal fit with an empirical cumulative distribution function ( cdf ) plot and a histogram - density plot , respectively . for videos that are available in both mp4 and webm formats , the sizes of the files in each format for a particular resolution are comparable with some exceptions .figure [ fig_corrsizes ] shows the correlation for all three resolutions ; we observe a correlation of 0.99 .the best fit for the file sizes is a lognormal distribution .table [ table_fs ] shows a summary of distribution fits for each resolution and format for the september measurements .the table also shows the sample size used for the fitting , depending on the number of videos available for that format and resolution .the actual data has a long skewed tail due to some videos with duration well over an hour , which causes it to deviate from lognormal .figures [ fig_ecdfvdos ] and [ fig_histvdos ] illustrate the suitability of the lognormal fit with a histogram - density graph and a cdf . webm files are generally larger , and hence the bit rates for webm are also higher than those for mp4 .
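a minimal sketch of the lognormal fitting used for the duration and file - size distributions might look as follows ; scipy is assumed , the location parameter is fixed at zero , and the sample is synthetic with a median chosen near the reported 181 s rather than taken from the actual datasets .

```python
import numpy as np
from scipy import stats

def fit_lognormal(samples):
    """Fit a lognormal (location fixed at zero) and report the log-space parameters
    plus a Kolmogorov-Smirnov goodness-of-fit statistic."""
    x = np.asarray(samples, dtype=float)
    x = x[x > 0]
    shape, loc, scale = stats.lognorm.fit(x, floc=0)   # sigma = shape, mu = log(scale)
    ks = stats.kstest(x, "lognorm", args=(shape, loc, scale))
    return {"mu": float(np.log(scale)), "sigma": float(shape),
            "median": float(scale), "ks_stat": ks.statistic, "ks_pvalue": ks.pvalue}

# synthetic durations with a median near the reported 181 s and a heavy right tail
rng = np.random.default_rng(1)
durations = rng.lognormal(mean=np.log(181), sigma=1.0, size=5000)
print(fit_lognormal(durations))
```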
]media is encoded using variable encoding bit rates , which means the bit rates can spike to values much higher than the advertised bit rate for a media stream .furthermore , depending on the content , different video or audio streams encoded using similar encoding parameters may have different resulting average bit rates .we calculated the average bit rates for videos from the september dataset by dividing the file sizes by the duration of the corresponding video .the results are shown in figure [ fig_ds12bitrates ] .webm files are generally larger as we showed in section [ sec_filesizes ] , and hence webm bit rates are higher than mp4 for similar resolutions as well . for the dash videos in our dataset , we had frame level information for up to three minutes of the video ( audio files are separate for dash , these calculations are based on video frames only ) .we calculated average bit rates for different resolutions by summing the frame sizes and dividing by the total duration .figure [ fig_fragmean ] shows the summary of the bit rates .we found the adaptive streams of 360p and 720p to have lower average bit rates in comparison to the non - adaptive streams as shown in figure [ fig_fragvsunfragbitrate ] .youtube stopped serving non - adaptive file formats for 480p and 1080p when it introduced dash .our dash analysis takes only 3 minutes of video into account ; we will show later in section [ sec_3min ] that this results in little loss of information , if any .a video with sudden traffic bursts is more likely to cause a freeze in the playout than one that has a more consistent rate .we measured the instantaneous bit rates as the bit rate observed during one second of playout .the burstiness of the video can be related to the standard deviation in these instantaneous bit rates .we can use the mean bit rate and this value of burstiness to classify internet video into groups .this can aid in comparing results of active measurements done over a range of different videos .both these parameters are easy to measure at a measurement agent during the test and can be recorded along with other results . gathering this informationwill also help in keeping up - to - date information about the characteristics of internet video as a side - effect of measuring performance .figure [ fig_fragsd ] shows the relative standard deviation for different resolutions of dash video for up to 3 minutes of video .we calculated the average bit rate of non - adaptive mp4 videos for the first 3 minutes and compared it to the average bit rate of the entire video . 
as expected , the values are comparable and the distribution of average bit rates remains the same ; illustrated in figure [ fig_unfragmeans ] .we did the same exercise with the standard deviation in per - second instantaneous bit rates of the videos and found similar results with minor changes in the shape of the histogram as shown in figure [ fig_unfragsd ] .hence , active measurements that span for only 3 minutes can be enough for measuring performance to a good degree of accuracy , at least for short videos like youtube .it is worth mentioning , that the test does not have to run for the entire playout duration .once the 3 minutes of video is downloaded , the test can calculate whether or not all frames have arrived before playout time and terminate .this result may not apply to long videos such as movies .we discuss this in more detail with suggestions on how to handle exceptions in section [ discussion ] .we observed that the correlation in the per - second instantaneous bit rates of videos is very strong , as shown in figure [ fig_ecdfcor ] , so upscaling or downscaling a video to a higher or lower resolution can be done with a good level of accuracy .this kind of simulation has two use cases : 1 ) in active measurement for performance testing from a user s own test server ; we can store different files in a single resolution and simulate traffic for other resolutions .2 ) for generating traffic for testing dash algorithms .such a scheme is useful for saving storage space , since high resolution videos are in the range of gigabytes and it may not be worth storing so much data when none of it is actually viewed at the other end .it is also useful in cases where you simply might not have access to higher resolution videos to be used for testing .we used both dash and non - adaptive mp4 videos for testing this hypothesis .we explore three scenarios based on three different use cases : 1 . full hd - we simulate 1080p from a 480p video and vice versa using 8045 dash streams 2 .mobile - we simulate 720p from 360p and vice versa using 22,600 dash streams 3 .non - adaptive - we simulate 720p from 360p and vice versa using 30,871 non adaptive streams our analysis for dash uses a cut - off value of 3 minutes for the videos .the cut - off would only introduce minimal bias in our results because the correlation coefficients remain almost the same as we saw in section [ sec_3min ] .the instantaneous bit rate of a simulated resolution with average bit rate at i - th second , can be calculated from a different resolution with average bit rate and instantaneous bit rate using the formula : the value of i is a multiple of the measurement interval we take for measuring instantaneous bit rates .we used 1 second intervals . 
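the per - second instantaneous bit rates and the scaling step described above can be sketched as follows ; the frame log and the target average bit rate are placeholders , and binning frames into one - second intervals is assumed to match the one - second measurement interval used in the text .

```python
import numpy as np

def per_second_bitrates(frame_sizes_bytes, frame_times_s):
    """Instantaneous bit rate trace (bits per one-second bin) from a frame log."""
    t = np.asarray(frame_times_s, dtype=float)
    bits = np.asarray(frame_sizes_bytes, dtype=float) * 8
    bins = np.arange(0.0, np.ceil(t.max()) + 2.0)      # one-second bins
    rate, _ = np.histogram(t, bins=bins, weights=bits)
    return rate

def scale_trace(src_trace, src_avg_bps, target_avg_bps):
    """Simulate another resolution's trace by scaling with the ratio of average bit rates,
    i.e. b_sim(i) = (B_target / B_src) * b_src(i), as described in the text."""
    return src_trace * (target_avg_bps / src_avg_bps)

# placeholder frame log: a 30 s, 24 fps clip with constant 3000-byte frames
src = per_second_bitrates([3000] * 720, np.arange(720) / 24.0)
sim_720p = scale_trace(src, src_avg_bps=src.mean(), target_avg_bps=2.5e6)  # assumed 720p average
```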
to compute the error in a simulated video , we used mean average percentage error ( mape ) calculated using the following formula , where n is the total number of instantaneous bit rate values : table [ tab_mape ] shows the mean and 95th percentile of the mapes for all videos used for mobile , full hd and non - adaptive case .the slightly higher values when downscaling are a consequence of the unsymmetric nature of mape .figures [ fig_ecdfhd ] and [ fig_ecdfunfrag ] show sample cdf comparisons for dash hd and non - adaptive case respectively , showing 4 videos for each case .the videos are picked to represent different durations and correlation coefficients ( c ) .note that we specifically picked some worst case scenarios to demonstrate insights into the behavior , and generally the fits are much better .we observed that if c is high , the cdfs fit well even with large mape values .the videos we observed for such cases had few high motion peaks .upscaling resulted in higher peaks than actual and downscaling resulted in lower peaks than actual .the cases with the highest mape values in both figures [ fig_ecdfhd ] and [ fig_ecdfunfrag ] represent such videos .we noted that the many of these videos were slide shows of images , commonly seen on youtube with musical tracks .this is also observed for videos with a stationary background image , because in that case too , there is a spike at segment boundaries to allow seeking and rate shifting . when the correlation coefficients are low , even with lower mape values , the cdf fit is not so good .videos with low c tend to have low mape when the video is not bursty , and even though the peaks do not line up , the difference in peaks and troughs is so small that the simulation error is small .figure [ fig_timing ] shows the timing graphs of two videos , one with high correlation and high mape and the other with low correlation and low mape .we looked at the characteristics of internet video in light of large scale active measurements to measure internet video performance at the endpoint .since the true user experience should be based on user behavior , active measurements should be based on videos that are popular amongst users .youtube provides a good use case as it is widely used , accessible without requiring a login and provides localized charts .previous studies show that user experience for internet video based on tcp streams directly depends on the number and duration of stalls during playout .this is understandable , since deterioration due to packet loss does not occur in this case like it would for say rtp video .so a good approach for measuring performance is to measure stall duration when downloading popular video content and because popularity differs based on geo - location , tests conducted in different locations should use the locally popular videos .this helps in scalability as well , as we do not want millions of active measurement agents flooding measurement servers for a single video .but videos are inherently very different in terms of duration and burstiness , and it is hard to directly compare results based on stalling duration without taking into account these characteristics .a bursty video with high bit rate is more likely to cause stalls than a consistently low bit rate video .furthermore , an initial buffering of 2 seconds of playout for a 5 second video is likely to prevent stalling when the media bit rate is bordering the capacity of the end - to - end path , while a longer video may still experience stalls . 
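to make the stall argument concrete , the toy model below counts freezes for a progressive download over a fixed - capacity link with a two - second startup buffer ; it is a simplification ( constant capacity , whole - second media units ) rather than a description of any particular player , but it shows how a clip whose average rate sits just under the link capacity can still stall on its bursts .

```python
import numpy as np

def stall_events(per_second_bits, capacity_bps, startup_s=2):
    """Toy stall model for progressive download: media second i becomes playable once
    its bits have arrived over a fixed-capacity link; playout starts after `startup_s`
    seconds are buffered and freezes whenever the next second has not yet arrived."""
    bits = np.asarray(per_second_bits, dtype=float)
    arrival = np.cumsum(bits) / capacity_bps            # arrival time of each media second
    clock = arrival[min(startup_s, len(bits)) - 1]      # playout start time
    stalls, stall_time = 0, 0.0
    for i in range(len(bits)):
        if arrival[i] > clock:                          # buffer ran dry: freeze
            stalls += 1
            stall_time += arrival[i] - clock
            clock = arrival[i]
        clock += 1.0                                    # one second of media plays out
    return stalls, stall_time

# a clip averaging ~1.09 Mbit/s over a 1.1 Mbit/s link still stalls on its burst
trace = [8e5] * 12 + [2.5e6] * 5 + [8e5] * 12           # bits per media second
print(stall_events(trace, capacity_bps=1.1e6))          # a few stalls during the peak
```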
in dashthe player can switch to a higher or lower bit rates representation of the same stream based on the observed path characteristics .this switching can directly affect the user experience if it is done too often and especially if it changes quality by a large magnitude . hence to measure dash performance , we need to run tests for a reasonable amount of time and check the stability of the network ; here too the burstiness of the video plays a part . in the following subsections ,we discuss various aspects of our results , their implications and use in the design of active measurements and also their applicability in other fields .active measurements must be conducted during idle time , when user traffic is not detected , and must finish as soon as possible so as to avoid affecting any new user - generated traffic .while we want to capture the user experience , we have to do so within these constraints .video tests , unlike other measurements are bound to run for longer durations depending on the length of the video .we looked at popular videos from 58 different locations and all available categories and found that over 50% videos have a duration between 1 and 7 minutes .these durations are good enough for active measurements as it gives sufficient time to gauge network performance and are not too long that they start interfering with regular user traffic .hence , a measurement agent can specifically pick videos that lie within this duration .a different approach for limiting the duration of the video is to simply cut - off the test at a fixed time .we provide results to prove our hypothesis that testing for the first 3 minutes adequately represents the entire video and so testing can be limited to the first 3 minutes .this also falls in line with the user behavior of aborting videos without watching them to the end .however , we must not ignore the fact that these results are based on youtube videos , and while longer videos are present in the dataset , majority videos are under 10 minutes duration . for movies on netflix or other movies on demand services , the first 3 minutes would be mostly credits having comparatively less motion and hence lower bit rates. additionally , users are less likely to abort the video in the first few minutes .we will have to run similar tests for such videos to know what their behavior is . even if the first 3 minutes are not adequate , it is still not feasible to run active tests for entire video durations for long videos. it may be more appropriate , however , to pick a 3 minute or 5 minute segment from the middle of the movie rather than the beginning .since the same video is available in a number of formats , a measurement agent should be aware of which format to download for performing active measurements .the choice should portray user behavior , and we can relate it to the behavior of players in popular browsers or applications . at the time of writing this paper ,webm is supported in the latest versions of opera , google chrome and mozilla firefox , and on internet explorer with an additional component .however , safari still has no support for webm .similarly , mpeg dash support is also being added to browsers .it is not in the scope of our paper to study the usage trends of these browser or other video applications and to comment on most commonly used formats . 
however , from our data we observe that both webm and mp4 files have comparable file sizes and bit rates , but mp4 has slightly smaller media bit rates .the observed difference is small and we expect the results from either format to be comparable to the other because the compression efficiency depends on encoding parameters of the respective codec and the efficiency can be adequately adjusted by appropriately tweaking the codec parameters .we noticed that the popularity of music videos is one that is least affected by geographic locations . while , there are music videos that are popular only in a specific geographic location , a large number of videos were common to charts in various different locations .this also shows up in the incredibly high view counts for a relatively small number of videos collected in our dataset . a measurement scheme that uses music chartsonly can generate more consistent and comparable results as most music videos have similar lengths and content .80% of the music videos in our own dataset had durations between 2 and 6 minutes .this scheme may suffer on grounds of scalability , because a large number of mas from different locations would be measuring the same video and hence may affect popularity indices .also , testing a specific category may limit diversity as well , as not all categories of videos watched by users will be covered , and this would make the results biased if a service treats traffic for different categories differently . in some cases , it may be useful to use our own test servers for testing video performance e.g. for troubleshooting service or network issues .while this would not directly reflect upon the performance of a particular service and hence real user experience , it still shows the isps internal network performance . if bad performance is measured by a measurement agent , testing with a network local server can help narrow down on the source of the problem .we propose a simple traffic generator for different resolutions using a copy of the video in only one resolution and the average bit rates for the rest . in spite of some shortcomings ,this is a good step towards traffic generation and can have applications other than performance measurements .for example , dash servers typically require multiple representations of the same video in different resolutions , further depending on the use - case ( live streaming or video on demand ) it may also serve the same video in different chunk or segment sizes ( 1s , 2s , 5s , 10s , etc . ) , this leads to a high number of valid combinations .the streaming server may need to store and handle all the valid combinations of the media stream which creates unnecessary complexity for evaluating congestion control algorithms . in this case , the server can instead store a single file of a particular video stream containing the frame sizes and the timestamps .eventually up / down scaling the frame sizes to the requested bit rate ( as described in section [ analysis2 ] ) .similarly , the media traffic generator can also be applied for evaluating congestion control algorithm in web - based real - time communication ( webrtc ) by using a single representation of the video as the baseline and changing the frame size based on the bit rate computed by the congestion control algorithm .internet video is constantly evolving to meet new technological advances and user trends . 
during the course of our 10 month study we observed several changes on youtube ( removal of flv for 360p , removal of non - adaptive streams for full hd , appearance of dash for full hd and 4k video ) .furthermore , youtube now offers longer and larger videos than it did 4 years ago .if we use location - based popularity charts for selecting videos when testing with live services , lmap can automatically keep up with the trends . in addition , we can use a feedback loop from our measurements into our video selection process and traffic generator model .it may not be feasible to gather frame - level information for all measurement agents , as it would require first storing this large amount of data and then sending it to a centralized server for processing .but since the measurement agents in one location will be cycling through the same set of popular videos , we can collect detailed frame - level stats at one central point .needless to say , without a method that ensures evolution , we can not keep the testing system relevant for long .the lmap ietf is working on standardizing large scale performance measurements for access devices .the main components of an lmap system for active video measurements are shown in figure [ fig_lmap ] .the measurement agent ( ma ) is located at the end - user s premises and is responsible for conducting active tests with youtube or other video servers .the subscriber database contains information about the subscriber line and the repository is the database of the results .we define the following parameters for our testing model * m : number of measurement agents * minlength : minimum length of test videos , value = 72 sec * cutoff : cut - off duration for tests , value = 180 seconds * n : number of videos to get from charts , value depends on m and frequency of tests note that minlength is the first quartile of the video durations and should be less than cutoff . the value of n determines how many videos will eventually be used for testing , and the number should be selected with two considerations 1 ) it should be much smaller than m so we can get a good number of results from different mas for the same video .2 ) it should not be so small that active measurements start impacting popularity indices of videos .there are 3 types of instructions that a controller can request from a ma , which we are already using with our youtube tests on samknows probes : 1 .collect charts and do random test : the mas queries top n videos for its location from youtube , discard videos with duration < minlength and randomly tests for 1 video .do testing for video url the url can be a youtube video or it can be a link to the traffic generator , but currently we do not have a traffic generator within our test system , so only youtube videos are used . in the presence of a traffic generator ,the testing methodology consists of the following steps .* once a week the mas collect charts and submit results for 1 random video with duration > minlength , to the repository via the collector . *the data analysis tools assign categories to these videos based on duration and bit rates and submits the final list to the controller and the traffic generator . 
*the traffic generator downloads low - bit rate formats for the selected videos and saves frame logs , it further assigns burstiness values to the videos based on the standard deviation in instantaneous bit rates .it submits burstiness results to the collector which are then incorporated in the categorization done by the data analysis tools .* for the remainder of the week the controller randomly assigns videos to different mas for testing and collect results . *the data analysis tools keep the value of minlength updated based on duration of videos used for testing .if minlength becomes larger than cutoff , it is ignored while collecting charts . the traffic generator is used for cases when a controlled video server is needed , for instance , when troubleshooting a problem .it uses frame logs of lowest bit rate streams to generate dummy traffic for testing and uses upscaling for generating traffic for higher resolutions . for upscaling, it needs the average bit rate of high resolution videos .this information can easily be extracted using just the headers from the containers that are present in the beginning of the files , so the whole file is not needed .while there are many challenges involved in designing active measurements for internet video , the one tricky question that faces measurement designers is what video to download .we need the measurements at different end points to somehow correlate with each other , but downloading the same video would defeat the purpose as it would not reflect true user experience . downloading popular ( most viewed ) videosis the obvious choice for measuring user experience , but then how do we correlate the performance of a 1 minute video with a 1 hour video , or a 2mbps video with an 8mbps video . for this reason ,it is necessary to know what is a sane value for duration and media bit rate that effectively represents the majority of the popular videos on the internet . in light of our analysis, we conclude that 3 minutes , which is about the median of the video durations , is a good cut - off duration for our active measurements . as active measurements are conducted at the end - user , preferably in the absence of cross - traffic generated from within the user s network , the length of a single test must be short enough to be conducted conveniently without disrupting the user .the 3 minute duration covers a wide range of online videos without being so long that downloading it on a cross - traffic free line becomes a challenge on its own .furthermore , we now know that majority of 1080p videos have bit rates no higher than 5 to 7mbps , and though videos with higher bandwidth requirements exist they are very few in number. hence , it might be sufficient to measure for videos that have bit rates in this range .using these guidelines and picking videos from a set of current popular videos for the location of the measurement device , would also make the measurements scalable .given we pick from a large enough dataset , the chances of increasing the popularity of a single video just because we are running active measurements on it is reduced . the correlation in the file sizes for webm and mp4 also indicates that the average media bit rate for both formats is comparable and hence the performance of either of the two formats should be enough to gauge the user experience . a next step to consider in this direction would be to correlate media bit rate variations for the different codecs as well. 
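a minimal sketch of the video - selection step of the measurement model described above is given below , using the minlength ( 72 s ) and cutoff ( 180 s ) values from the text ; the chart entries and their fields are hypothetical , and chart retrieval itself is not shown .

```python
import random

MINLENGTH_S = 72    # first quartile of chart-video durations, as used in the model above
CUTOFF_S = 180      # active tests are truncated at three minutes of media

def pick_test_video(chart, rng=random):
    """Pick one chart entry that is long enough to be informative; the actual test
    would then download at most CUTOFF_S seconds of the stream."""
    eligible = [v for v in chart if v["duration_s"] >= MINLENGTH_S]
    if not eligible:
        return None
    video = rng.choice(eligible)
    return dict(video, test_duration_s=min(video["duration_s"], CUTOFF_S))

# hypothetical chart entries; only the url and duration fields are assumed here
chart = [{"url": "https://example.invalid/v1", "duration_s": 45},
         {"url": "https://example.invalid/v2", "duration_s": 212},
         {"url": "https://example.invalid/v3", "duration_s": 3600}]
print(pick_test_video(chart))
```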
finally , looking at the standard deviation in the instantaneous media bit rates gives a very simple method for defining burstiness in the video .this paper just provides a preliminary study of this behavior , but if the standard deviation can be successfully correlated with performance under restricted network bandwidth , we can use it to further categorize videos for reconciling measurement results . furthermore ,such information can help optimize mpeg dash clients , for example , shifting to a higher bit rate for chunks that have high burstiness only when there is a certain duration of pre - buffered video .there are many aspects of the collected data that have not been explored by us .the file sizes and durations can be used for calculating the average bit rates for the various formats , which may be used for modeling internet video .the data can be used to explore social aspects for instance to see the changes in popularity ( number of views ) over the two month period in the common set of videos .this work was supported by the european community s seventh framework programme ( fp7/2007 - 2013 ) grant no .317647 ( leone ) .
the availability of high definition video content on the web has brought about a significant change in the characteristics of internet video , but not many studies on characterizing video have been done after this change . video characteristics such as video length , format , target bit rate , and resolution provide valuable input to design adaptive bit rate ( abr ) algorithms , sizing playout buffers in dynamic adaptive http streaming ( dash ) players , model the variability in video frame sizes , etc . this paper presents datasets collected in 2013 and 2014 that contains over 130,000 videos from youtube s most viewed ( or most popular ) video charts in 58 countries . we describe the basic characteristics of the videos on youtube for each category , format , video length , file size , and data rate variation , observing that video length and file size fit a log normal distribution . we show that three minutes of a video suffice to represent its instant data rate fluctuation and that we can infer data rate characteristics of different video resolutions from a single given one . based on our findings , we design active measurements for measuring the performance of internet video .
we consider the following initial value problem for an inviscid burgers - hilbert equation for : , \\ & u(0,x;{\epsilon } ) = u_0(x ) .\end{split}\end{aligned}\ ] ] in ( [ bheq ] ) , is the spatial hilbert transform , is a small parameter , and is given smooth initial data .this burgers - hilbert equation is a model equation for nonlinear waves with constant frequency , and it provides an effective equation for the motion of a vorticity discontinuity in a two - dimensional flow of an inviscid , incompressible fluid .moreover , as shown in , even though ( [ bheq ] ) is quadratically nonlinear it provides a formal asymptotic approximation for the small - amplitude motion of a planar vorticity discontinuity located at over cubically nonlinear time - scales .we assume for simplicity that , in which case the hilbert transform is given by (t , x;{\epsilon } ) = \mathrm{p.v . }\frac{1}{\pi } \int\frac{u(t , y;{\epsilon})}{x - y}\ , dy.\ ] ] we will show that smooth solutions of ( [ bheq ] ) exist for times of the order as . explicitly , if denotes the standard sobolev space of functions with weak -derivatives , we prove the following result : [ th : main ] suppose that .there are constants and , depending only on , such that for every with there exists a solution of ( [ bheq ] ) defined on the time - interval ]is given by , or ,\ ] ] as may be verified by use of the identity .this solution oscillates with frequency one between the initial data and its hilbert transform , and the effect of the nonlinear forcing term on the linearized equation averages to zero because it contains no fourier component in time whose frequency is equal to one .alternatively , one can view the averaging of the nonlinearity as a consequence of the fact that the nonlinear steepening of the profile in one phase of the oscillation is canceled by its expansion in the other phase .this phenomenon is illustrated by numerical results from , which are reproduced in figure [ u_sing ] .the transition from an lifespan for large to an lifespan for small is remarkably rapid : once a singularity fails to form over the first oscillation in time , a smooth solution typically persists over many oscillations . 
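for reference, the initial value problem, the linearized oscillation and the enhanced lifespan discussed above can be written out compactly. the latex below is a reconstruction from the surrounding text; the exact placement of the small parameter in the original displays did not survive extraction, so the small-amplitude normalization shown here (the parameter multiplying the quadratic flux) is an assumption.

```latex
% burgers-hilbert initial value problem (assumed normalization of \epsilon)
u_t + \epsilon\left(\tfrac{1}{2}u^2\right)_x = \mathbf{H}[u], \qquad
u(0,x;\epsilon) = u_0(x), \qquad
\mathbf{H}[u](t,x) = \mathrm{p.v.}\,\frac{1}{\pi}\int \frac{u(t,y;\epsilon)}{x-y}\,dy .

% linearized equation (\epsilon = 0): using \mathbf{H}^2 = -\mathrm{I},
% the solution oscillates with frequency one between u_0 and its hilbert transform
u_t = \mathbf{H}[u], \qquad
u(t,x) = u_0(x)\cos t + \mathbf{H}[u_0](x)\sin t .

% theorem [th:main] as described above (reconstructed): for u_0 bounded in H^2,
% with k and \epsilon_0 depending only on \|u_0\|_{H^2}, a smooth solution exists
% on the cubically nonlinear time-scale
|t| \le \frac{k}{\epsilon^{2}} .
```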
for the burgers - hilbert equation ( [ bheq ] ) versus the logarithm of for fixed initial data .numerical solutions are shown by diamonds .the steeper line is a formal asymptotic prediction from for , which gives .the shallower line is the singularity formation time for the inviscid burgers equation , which gives .( see for further details.),width=384 ] in the context of the motion of a vorticity discontinuity , the formation of a singularity in a solution of ( [ bheq ] ) corresponds to the filamentation of the discontinuity .the result proved here corresponds to an enhanced lifespan before nonlinear ` breaking ' of the discontinuity leads to the formation of a filament .there are three main difficulties in the proof of theorem [ th : main ] .the first is that the presence of a quadratically nonlinear term in ( [ bheq ] ) means that straightforward energy estimates prove the existence of smooth solutions only on time - scales of the order .following the idea introduced by shatah in the context of pdes , and used subsequently by other authors , we remove the quadratically nonlinear term of the order by a normal form or near - identity transformation , replacing it by a cubically nonlinear term of the order .the second difficulty is that a standard normal form transformation of the dependent variable , of the type used by shatah , leads to a loss of spatial derivatives because we are using a lower - order linear term ] , we find that satisfies the equation -h\mathbf{h}\left [ h_{x}\right ] -\mathbf{h}\left [ h\right]h_{x}\right\rbrace = \mathbf{h}\left [ h\right].\ ] ] we will make the change of variables ( [ nearid_trans ] ) in ( [ htrans ] ) , so first we discuss ( [ nearid_trans ] ) .the map is smoothly invertible if , which holds by sobolev embedding if is sufficiently small .specifically , we have the gagliardo - nirenberg - moser inequality where we can take , for example , we assume throughout this section that which ensures that by the chain rule , thus , if ( [ cinftysmall ] ) holds , then hence , since is an isometry on , and -estimates for imply -estimates for .conversely , one can use the contraction mapping theorem on , the space of continuous functions that decay to zero at infinity , to show that if and then there exists a function such that the function is smooth if is smooth , and if .thus , we can obtain initial data for from the initial data for . 
from ( [ nearid_trans ] ) , we have = \mathrm{p.v.}\frac{1}{\pi}\int_{{\mathbb{r}}}\left [ \frac{1-{\epsilon}\tilde{g}_{\tilde{\xi}}}{\xi-\tilde{\xi}-{\epsilon}(g-\tilde{g})}\right]\tilde{g } \ , d\tilde{\xi}\ ] ] where we use the notation using these expressions , together with ( [ hx ] ) , in ( [ htrans ] ) and simplifying the result , we find that satisfies the following nonlinear integro - differential equation : subtracting off the leading order term in from the integrand , we may write this equation as + \frac{1}{\pi}{\epsilon}^2 \int_{{\mathbb{r}}}\left(\frac{g-\tilde{g}}{x-\tilde{x}}\right)\left\ { \left(\frac{\tilde{g}}{\xi-\tilde{\xi}}\right)\left[\frac{g-\tilde{g}}{\xi-\tilde{\xi } } - \tilde{g}_{\tilde{\xi}}\right ] + \tilde{g}_{\tilde{\xi}}\left[\frac{g-\tilde{g}}{\xi-\tilde{\xi } } - g_\xi\right]\right\}\ , d\tilde{\xi}\ ] ] where (t,\xi;{\epsilon } ) = \mathrm{p.v.}\frac{1}{\pi } \int_{{\mathbb{r } } } \frac{g(t,\tilde{\xi};{\epsilon})}{\xi - \tilde{\xi } } \ , d\tilde{\xi}\ ] ] denotes the hilbert transform of with respect to and the integral of the order in ( [ f ] ) is not a principal value integral since the integrand is a smooth function of .finally , we observe that this equation can be put in the form ( [ g - eq ] ) .[ lemma : geq ] an equivalent form of equation ( [ f ] ) is given by -\frac{1}{\pi}{\epsilon}^2 \partial_{\xi}\int_{{\mathbb{r}}}(\xi-\tilde{\xi})\tilde{g}_{\tilde{\xi}}\ , \phi \left ( \frac{g-\tilde{g}}{\xi-\tilde{\xi}};{\epsilon}\right ) \, d\tilde{\xi},\ ] ] where first , we check that ( [ finalfinaleq ] ) is well - defined . abusing notation slightly , we write from ( [ defphi ] ) , so when , which is implied by ( [ cinftysmall ] ) . in that case we use in the right hand side of this inequality and apply the cauchy - schwartz inequality to get where ^{1/2}\ ] ] denotes the -norm of with respect to , which is a function of .temporarily suppressing the -variables and denoting the derivative of with respect to by , we have from the taylor integral formula that and the cauchy - schwartz inequality implies that thus , using this estimate in ( [ tempint ] ) , we get thus , the -integral in ( [ finalfinaleq ] ) converges when and is , in fact , a uniformly bounded function of .to verify that ( [ finalfinaleq ] ) agrees with ( [ f ] ) , we take the -derivative under the integral in ( [ finalfinaleq ] ) , use ( [ defphic ] ) which implies that and integrate by parts in the result .this gives + \frac{1}{\pi}{\epsilon}^2 \int_{{\mathbb{r } } } \left(\frac{g-\tilde{g}}{x-\tilde{x}}\right ) \left[\tilde{g } c_{\tilde{\xi } } - ( \xi-\tilde{\xi } ) \tilde{g}_{\tilde{\xi } } c_\xi \right ] \, d\tilde{\xi}. \label{tempgeq}\ ] ] using the equations in ( [ tempgeq ] ) and comparing the result with ( [ f ] ) proves the lemma . multiplying ( [ finalfinaleq ] ) by , integrating the result with respect to , and integrating by parts with respect to , we find that the right - hand side vanishes by skew - symmetry in so that the conservation of is consistent with the conservation of from ( [ bheq ] ) . hence , from ( [ normest ] ) , we have . differentiating ( [ finalfinaleq ] )twice with respect to , multiplying the result by , integrating with respect to , and integrating by parts with respect to , we get where \ , d\xi d\tilde{\xi}. 
\label{tempi}\ ] ] the following lemma estimates in terms of the -norm of .[ lemma : est ] suppose that is given by ( [ tempi ] ) where is defined in ( [ defphi ] ) , and is defined in ( [ defc ] ) .there exists a numerical constant such that whenever satisfies ( [ c2small ] ) .we first convert the -derivative in the expression ( [ tempi ] ) for to a -derivative .let where a prime on and related functions denotes a derivative with respect to .it follows from ( [ defcxi ] ) that \phi(c;{\epsilon } ) \\ & = ( \xi -\tilde{\xi } ) c\phi(c;{\epsilon } ) - ( \xi-\tilde{\xi})^2 \phi_{\tilde{\xi}}(c;{\epsilon}).\end{aligned}\ ] ] we use this equation in ( [ tempi ] ) and integrate by parts with respect to in the term involving .since is independent of , this gives \, d\xi d\tilde{\xi } \label{defi}\ ] ] where expanding the derivatives with respect to in ( [ defi ] ) , using ( [ defc ] ) to express in terms of , and integrating by parts with respect to in the result to remove the third - order derivative of , we find that can be expressed as where the functions , are given explicitly by in particular , if , which is the case if satisfies ( [ c2small ] ) , then we will estimate the terms in ( [ defij ] ) separately . _estimating : _ using ( [ m ] ) in ( [ defij ] ) , we get that \left(\int_{{\mathbb{r}}}g_{\xi\xi}^2\ , d{\xi}\right ) \\ & \leq 4 \sup_{\xi\in { \mathbb{r } } } \left[\left(\int_{{\mathbb{r}}}c^2\ , d\tilde{\xi}\right)^{1/2 } \left(\int_{{\mathbb{r}}}c_\xi^2\ , d\tilde{\xi}\right)^{1/2}\right ] \left(\int_{{\mathbb{r}}}g_{\xi\xi}^2\ , d\xi\right ) .\end{split } \label{tempi1}\end{aligned}\ ] ] by a similar argument to the proof of ( [ cl2 ] ) , using taylor s theorem with integral remainder and the cauchy - schwartz inequality , we have from ( [ defc ] ) and ( [ defcxi ] ) that ^ 2\ , d\tilde{\xi } \\ & = \int_0 ^ 1 \int_0 ^ 1 \int_{{\mathbb{r } } } ( 1-r)(1-s ) g^\prime\left(\tilde{\xi } + r(\xi-\tilde{\xi})\right ) g^\prime\left(\tilde{\xi } + s(\xi-\tilde{\xi})\right)\ , d\tilde{\xi } dr ds \\ & \le \int_0 ^ 1 \int_0 ^ 1 \int_{{\mathbb{r } } } ( 1-r)(1-s ) \\ & \qquad \left(\int_{{\mathbb{r } } } g^{\prime2}\left(\tilde{\xi } + r(\xi-\tilde{\xi})\right)\ , d\tilde{\xi}\right)^{1/2 } \left(\int_{{\mathbb{r } } } g^{\prime2}\left(\tilde{\xi } + s(\xi-\tilde{\xi})\right)\ , d\tilde{\xi}\right)^{1/2 } dr ds \\ & \le \left(\int_0 ^ 1 \int_0 ^ 1 \frac{(1-r)(1-s)}{\sqrt{rs } } \ , dr ds\right ) \left(\int_{{\mathbb{r } } } g_\xi^2\left(\xi\right)\ , d{\xi}\right ) \\ & \le \frac{16}{9 } \|g_{\xi\xi}\|^2_{l^2}.\end{aligned}\ ] ] thus , using ( [ cl2 ] ) and ( [ l2cxi ] ) in ( [ tempi1 ] ) , we get that where is a numerical constant . _estimating : _ using ( [ m ] ) and ( [ l2cxi ] ) in ( [ defij ] ) , we get that suppressing the -variables , we observe from ( [ defc ] ) that where is the maximal function of , defined using intervals whose left or right endpoint is . using this inequality and the cauchy - schwartz inequality in ( [ tempi2 ] ) , we find that the maximal operator is bounded on , so there exists a numerical constant such that for example , from , we can take it follows that where . _ estimating : _ using ( [ defcxi ] ) in ( [ defij ] ) , we we can rewrite as splitting this integral into two terms , we get where using ( [ m ] ) , we have in exactly the same way as , which gives where . we estimate in a similar way to as \left(\int_{{\mathbb{r } } } |g_{\xi}g_{\xi \xi}|\ , d\xi\right),\ ] ] which by use of ( [ l2cxi ] ) and the cauchy - schwartz inequality gives where . 
combining these estimates ,we get ( [ iest ] ) with where is the maximal - function constant in ( [ max_con ] ) . using ( [ iest ] ) in ( [ h2eq ] ), we find that since and is conserved , we get provided that ( [ c2small ] ) holds .it follows from ( [ energyest ] ) and gronwall s inequality that if , where is sufficiently small , then remains finite and ( [ c2small ] ) holds in some time - interval , where the constants may be chosen to depend only on .the same estimates hold backward in time , so this completes the proof of theorem [ th : main ] . by solving the differential inequality ( [ energyest ] ) subject to the constraint ( [ c2small ] ) , we can obtain explicit expressions for and . let which is comparable to from ( [ normest ] ) .then we find that theorem [ th : main ] holds with is the constant in ( [ gag_nir ] ) and is the constant in ( [ acon ] ) .in this section , we relate the near - identity transformation of the independent variables used above to a more standard normal form transformation of the dependent variables , of the form introduced by shatah where is a bilinear form .we consider the normal form transformation given in : .\ ] ] here , denotes the derivative with respect to and . differentiating ( [ shortnft ] ) with respect to , using ( [ bheq ] ) to eliminate , and simplifying the result , we find that this transformation removes the nonresonant term of the order from the equation and gives = \mathbf{h}[v ] .\label{shortnfteq}\ ] ] the bilinear form in ( [ shortnft ] ) is not bounded on , but one can show that the normal form transformation ( [ shortnft ] ) is invertible on a bounded set in when is sufficiently small .we were not able , however , to obtain -estimates for from ( [ shortnfteq ] ) , because ( [ shortnfteq ] ) contains second - order derivatives , rather than first - order derivatives as in ( [ bheq ] ) , and there is a loss of derivatives in estimating the -norm of .in fact , for every power of that one gains through a normal form transformation of the dependent variable , one introduces an additional derivative .the appearance of additional derivatives is a consequence of using a zeroth - order linear term ] and taking the hilbert transform of ( [ shortnft ] ) , we get the ode we regard as a given function and use ( [ eq1 ] ) to determine the corresponding function .we may write ( [ eq1 ] ) as which agrees up to the order with an evolution equation in for : by the method of characteristics , the solution of ( [ heps ] ) is which is the transformation ( [ nearid_trans ] ) . since ( [ nearid_trans ] ) agrees to the order with a normal form transformation that removes the order term from ( [ bheq ] ) , this transformation must do so also , as we verified explicitly in section [ sec : proof ] .it is rather remarkable that the normal form transformation ( [ shortnft ] ) can be implemented by making a change of spatial coordinate in the equation for , but we do not have a good explanation for why this should be possible .
we consider an initial value problem for a quadratically nonlinear inviscid burgers-hilbert equation that models the motion of vorticity discontinuities. we use a normal form transformation, which is implemented by means of a near-identity coordinate change of the independent spatial variable, to prove the existence of small, smooth solutions over cubically nonlinear time-scales. for vorticity discontinuities, this result means that there is a cubically nonlinear time-scale before the onset of filamentation.

keywords: normal form transformations, nonlinear waves, inviscid burgers equation, vorticity discontinuities. msc: 37l65, 76b47.
the increasing availability of data sets about social relationships , such as friendship , collaboration , competition , and opinion formation , has recently spurred a renewed interest for the basic mechanisms underpinning human dynamics .aside with the classical studies in social sciences and social network analysis , some interesting contributions to the understanding of social dynamics have lately come from statistical physics , which has brought in the field new tools and analytical methods to study systems consisting of many interacting agents . in such wider context, much effort has been devoted to the study of the dynamics responsible for opinion formation in populations of interacting agents , and in particular to a more in - depth understanding of the elementary mechanism allowing the emergence of global consensus and of the role of endogenous and exogenous driving forces , including social pressure and mass media . as a result of this investigation , a plethora of models of opinion formation have been proposed and studied .although the majority of those models originally made the simplifying assumption of considering homogeneous interaction patterns ( basically , regular lattices ) , the rise of network science provided the tools to overcome this limitation , featuring more realistic interaction patterns .more recently , also the role of mass media in the formation of global consensus has attracted a lot of interest .an aspect of social relationships that has been mostly discarded in the study of the emergence of consensus is the fact that agents usually interact in a variety of different contexts , making the interaction pattern effectively multilayered and multi - faceted . as a matter of fact ,the urge to maintain a certain level of coherence among opinions on different but related subjects might actually play a crucial role in determining the reaction of each agent to external pressure and in facilitating ( or hindering ) the emergence of global consensus .moreover , the balance between the internal tendency towards coherence and the necessity to adequately respond to social pressure is naturally dependent on each person s attitude , thus implying a certain level of heterogeneity .some individuals may be more prone to align more closely to the opinions of their neighbors in each of the different contexts where they interact , putting little or no importance to the overall coherence of their profile .on the contrary , some other agents may indeed be more reluctant to change their opinion on a topic , in spite of being urged by other individuals or media , if such a change results in a contradiction with another of their opinions on a different but related subject . in this paperwe propose a model of opinion formation that takes into account _i ) _ the concurrent participation of agents to distinct yet connected interaction levels ( representing discussion topics or social spheres ) , _ ii ) _ the presence of social pressure and _ iii ) _ the exogenous action of mass media . our analysis can be naturally cast in the framework of multiplex networks , which has recently proven successful for a more realistic modeling of different social dynamics . 
according to this framework ,agents are represented by nodes connected by links of different nature , where links of the same kind belong to the same layer of the system .each layer thus represents the interaction pattern of individuals discussing a given topic .different layers are in general endowed with different topologies , to mimic multi - layer real - world social systems where distinct interaction patterns are present at different levels .peer social pressure occurs on each topic through intra - layer links .the opinions of an individual on the different topics are also driven towards a specific state by the tension towards internal agent s coherence , represented by a preferred configuration of opinions on different topics .mass media are introduced as fields acting uniformly on all the agents at the level of each single topic .the resulting model is a natural extension of the traditional ising model of magnetic interaction and of more recent variations introduced to take into account the effect of external forces on the emergence of consensus , in the spirit of less and more recent work connecting statistical mechanics of disordered systems and opinion dynamics .the key ingredient of heterogeneous distributed couplings between opinions lead to interesting equilibrium states , where agents can remain fully coherent while a variable level of global consensus is attained , depending on the strength of the pressure exerted by mass media .this clearly resembles the dynamics observed in real societies , thereby supporting the relevance of our approach .we consider a population of individuals interacting through different layers , representing different topics or subjects .the network of each layer represents the pattern of interactions among agents on a specific topic , which is in general distinct from those of the other layers , and is encoded by the adjacency matrix } } = \{a_{ij}{^{[\alpha]}}\}_{i , j=1,\ldots , n} ] only if agent and agent are neighbors on layer , and equal to zero otherwise .the structure of the overall interaction pattern is thus concisely represented by the vector of adjacency matrices } } , \ldots , a{^{[m]}}\} ] are in general distinct .each agent expresses a binary opinion } } = \pm 1 ] .we assume that agent opinions evolve over time due to two concurrent mechanisms . on the one hand , agents are subject to social pressure from their peers on each layer ( denoted by the red and blue links in fig . [ fig:1 ] ) , so that the opinion of agent on node will tend to remain aligned with the opinions of its neighbors on the same layer .this mechanism , based on the elimination of conflicting opinions on a microscopic scale , has been widely observed in many real - world social systems , and is responsible for the attainment of local consensus on each layer . on the other hand ,we assume that the opinions of agent at the different layers are not independent from each other but are instead interacting , so that for each agent there exists a preferred configuration of opinions at the different layers which is considered _ coherent_. 
for instance , the political orientation of a person is often related to his / her ideas about economy and welfare , so that the emergence of consensus with its neighbors on one subject should remain coherent with its current opinions on the other layers .moreover , we imagine that agents are exposed , on each layer , to the action of mass - media , a mean - field external force which preferentially drives their opinions towards either or .we formalize the interplay of these concurrent dynamics by defining the functional : } } = j \sum_{j=1}^n a_{ij}{^{[\alpha ] } } s_j{^{[\alpha ] } } + h{^{[\alpha]}}+\gamma\frac{\chi_i}{j } \sum_{\mathclap{\substack{\beta = 1 \\\beta \neq \alpha}}}^m s_i{^{[\beta ] } } \label{eq:1}\ ] ] for each agent and each topic .the first sum on the rhs of eq .( [ eq:1 ] ) represents the social pressure exerted on by its neighbors on layer , and is weighted by the coefficient , which models its intrinsic permeability to social pressure .the variables }} ] , is a measure of how much agent is flexible towards a change of one of its opinions , eventually leading to configurations which do not agree with what it would consider a coherent configuration of its spins . in other words ,agents for which assign less importance to internal coherence and more relevance to social pressure , while the opposite happens when . in our model ,the opinions of each agent evolve towards configurations which maximize the function } } = s_i{^{[\alpha ] } } f_i{^{[\alpha]}} ] .although being a somehow simplified model of real - life interactions , where not just binary but also intermediate opinions between two extremes are possible and agents might respond differently to social pressure and to the external effect of mass - media , this model turns out to be already general enough to investigate the elementary mechanisms driving interacting opinions .numerical implementation of this dynamical evolution is obtained through extensive monte carlo simulations , adopting an appropriately modified version of the glauber algorithm . in particular , at each step we update all the spins } } , \> i=1,\ldots , n , \>\alpha=1,2 , \ldots , m ] and accepting the flip only when the new configuration leads to a larger value of the function }} ]is also updated according to the new configuration .clearly , the form of }} ] correspond to preferred configurations for node .clearly , these rules imply a deterministic evolution of the opinions , which is not observed in real social systems .we then need to account for the presence of stochastic noise .its simulation is realized by introducing a parameter , which may be regarded as a social temperature in analogy with magnetic systems , induced by all those mechanisms which drive the system out of its deterministic dynamics , such as partial information or misunderstandings .we include such thermal noise in the dynamics of our model in a standard way : when , an agent may change its opinion on the topic even if it leads to configurations with a smaller }} ] , with }} ] due to the flip of thew spin }} ] , and }} ] , with } } } \right| ] indicating which of the opinion is prevalent among the population .we also define the average internal coherence of the agents as follows : } } s_i{^{[2]}}. 
\label{eq:3}\ ] ] notice that , when , if the two spins of each agent are coherent with their preferred configuration , while if they are incoherent for every agent .the opposite holds when .an interesting remark is that the global function }} ] can be interpreted as the magnetization of the different layers of the system .we discuss in this section the transition towards coherence and consensus and the equilibrium properties of the model , focusing on the dependence of the order parameters and }} ] . in details , we investigate in sec .[ subsec : a ] the case of and , i.e. a population of homogeneous agents in the absence of social noise . in sec .[ subsec : b ] we consider a population of heterogeneous agents ( not fixed ) , while keeping . finally , in sec .[ subsec : c ] we study the effect of social noise by investigating the dependence on .simulations of the glauber dynamics described in the previous section are realized by varying the global parameter adiabatically .the initial configuration is obtained by setting }}=1 ] .we let the system perform two complete hysteresis cycles before recording the resulting configurations .this procedure eliminates possible effects due to the specific initial conditions .the results presented here are obtained by simulating the dynamics on a multiplex of two uncorrelated barabasi - albert networks with the same average degree =6 .nevertheless , we remark that analogous qualitative results have been found for different interaction patterns , such as random graphs with the same density or systems with different values of inter - layer degree - correlation , suggesting that the only topological parameter playing a major role in the long - term behavior of the dynamics is the average degree of the networks at the two layers .we consider here the case of homogeneous agents , in the absence of social noise , i.e. , the case .the effects induced by the external forces , e.g. , the mass media , are studied by choosing fields with opposite signs and relative strength according to the two typical cases : }}| = |h{^{[2]}}| ] .we remark that the qualitative behaviour observed does not depend on the specific values of }}| ] .first , we study the transition in coherence as a function of : for fields of both equal and different intensity , we provide evidence of the existence of a sharp transition along with a hysteresis loop .we are also able to propose an empirical relation to estimate the transition points , given the intensity of the fields and the density of the layers .we note that the case of fields with equal signs is somehow trivial , since the opinions on both layers are pulled in the same direction and global consensus emerges easily .second , we find that a coherent population , i.e. 
in the regime , exhibits either states of full or null consensus and that states of partial consensus can not be attained in a population of homogenous agents .we show examples of the steep transitions that the system exhibits by plotting c as a function of in the top panels of fig .[ fig:2a](a1 ) for }}|>|h{^{[2]}}| ] .the behavior of the coherence is robust with respect to the relative strength of the external fields : we always observe a sharp transition from to characterised by a marked hyseresis loop .however , the actual values of }} ] deeply affect the corresponding level of consensus emerging in the population .this is shown in the bottom panels of figs .[ fig:2a](a - b ) , where we plot the corresponding value of }}} ] , we have }}}=0 ] when .as the transition is sharp , we can always infer the value of }}} ] .in fact , we respectively have }}=\pm m{^{[2]}} ] , the situation is radically different .we indeed find }}}=+1 ] when increases in the interval ] when decreases in ] . as increases ,the agents become more and more inflexible , thus favoring opinions of the same sign throughout the different topics . moreover , since }}| ] , states of non - vanishing consensus are favored . in particular ,one of the opinions ends up prevailing not just on layer but , through the internal agent coherence , also on the other layer .thus , the concurrent effect of these two mechanisms causes a steep transition towards a state of both full coherence and full consensus on a single opinion on both the topics , which is determined by the leading external field . the same dynamical explanation of the previous case can instead be given for decreasing values of beyond .as suggested before , these qualitative patterns are robust with respect to the strengths of the external fields , which only determine the exact transition points and , as shown in fig .[ fig:2b](a ) .we find that the transitions points and where the hysteresis loop starts and ends respectively are given by the following empirical non - linear relation : } } h{^{[2]}})\min\left(|h{^{[1]}}| , |h{^{[2]}}|\right ) , \label{eq : gamma}\ ] ] where }}_{ij} ] only determine a shift of the metastable region , whereas they do not modify the width of the hysteresis cycle .we support this conjecture by showing in fig .[ fig:2b](b ) the values of ( i.e. the center of the hysteresis cycle ) obtained from the simulations as a function of }} ] , confirming the validity of the relation expressed in eq . .we conclude that in the case of homogeneous agents the system always reaches configurations of full consensus on both layers , where the dominant opinion on each layer is determined by the sign of the strongest external field ( phase diagram in fig .[ fig:3 ] , top panel a ) . the only exception is given by the critical line where we find } } \approx 0 ] for , or equivalently , ( top panels ) and the plot of as a function of for a typical choice of the external fields ( }}=5 ] specifically ) for a few simple but explanatory cases .we first consider the simplest possible setup where half of the population is assigned , whereas the other one is assigned [ fig .[ fig:3](b ) ] , meaning respectively that of the population is flexible with respect to internal consensus ( ) while the remaining agents are intransigent ( ) . even if in this case the phase diagram looks similar to the one in fig .[ fig:3](a ) for a population of homogeneous agents , we can already observe the emergence of states of partial consensus close to the diagonal , i.e. 
, for }}|,|h{^{[2]}}|>2.5 ] . in this case, the qualitative behaviour of both }} ] with respect to the previous case .thus , we may expect to find even richer phase diagrams and smoother transitions in with respect to the cases presented before if we further increase the heterogeneity of the population .indeed , when is sampled uniformly in ] smoothly increases from to for increasing values of }} ] . furthermore , the consensus attained in the region }}|<2.5 ] is significantly smaller than in the other cases .these results suggest that one can smoothly tune the level of consensus on each topic by choosing the relative strength of the media acting on the two layers , and yet obtain states in which the majority of the agents are internally coherent .we also recall that in all the non - homogeneous cases ( fig .[ fig:3](b - d ) ) the system reaches full coherence , but the transition is not sharp .we conclude by highlighting that our model , even if simplified , is nevertheless able to generate non trivial states of partial consensus across the layers due to the driving effect of mass media , while at the same time ensuring that each agent will still find itself coherent .we here consider the case with social noise , i.e. . for simplicity , we investigate its effect in a population of homogeneous agents ( ) .we find that the system exhibits the same qualitative behaviour described in the case for all temperatures below a non - null critical temperature , whereas for it does have absorbing states for finite values of , thus lying in a paramagnetic phase dominated by noise .this is shown in fig .[ fig:4](a ) where we plot as a function of both and ( forward branch of the hysteresis cycle ) for an exemplary choice of the external fields }} ] with opposite signs .we note that different values of }}| ] do not change qualitatively the results presented .indeed , for the system exhibits steep transitions to states of full coherence and consensus .however , when increases , i.e. as the noise becomes stronger , the jump of the transition becomes less pronounced and the hysteresis cycle shrinks considerably , eventually disappearing at .for only states of partial coherence and consensus can be obtained , and for . only in the limit ,the population is able to recover full coherence .this scenario is confirmed by fig .[ fig:4](b ) , where we report projections of the phase diagram of fig .[ fig:4](a ) for different exemplary values of . for the hysteresis cycle is wide and the jump in goes from to . for ,slightly below , the hysteresis cycle has almost disappeared and the jump in is consistently reduced , though still present . for ,i.e. beyond the critical level of noise , the transition in becomes continuous and the hysteresis loop disappears .we note that the noise similarly affects the system in the case of a population of heterogeneous agents , such that a paramagnetic phase appears beyond also in this case .furthermore , we stress that depends non trivially on the set of parameters of the system .however , deriving such functional relation is beyond the scope of the present work .we conclude by recalling that opinion evolution in real social systems is inevitably affected by noise as already suggested by recent works on the subject ( see for instance ) . 
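the glauber dynamics used throughout this section can be reproduced with a compact monte carlo routine. the python sketch below illustrates the update rule based on the local functional f_i^[a] = J sum_j a_ij^[a] s_j^[a] + h^[a] + gamma (chi_i / J) sum_{b != a} s_i^[b] on two barabasi-albert layers; the networkx graph generator, the sweep order, the acceptance probability 1/(1 + exp(-dE/T)) for unfavourable flips and the example parameter values are standard choices assumed here rather than details taken from the authors' code.

```python
import numpy as np
import networkx as nx

def simulate(n=1000, m_links=3, J=1.0, gamma=0.5, h=(1.0, -0.5),
             chi=None, T=0.0, sweeps=200, seed=0):
    """Two-layer opinion model: spins s[alpha, i] = +/-1 coupled across layers
    by each agent's coherence coupling chi[i] and driven by the fields h[alpha]."""
    rng = np.random.default_rng(seed)
    layers = [nx.to_numpy_array(nx.barabasi_albert_graph(n, m_links, seed=seed + a))
              for a in range(2)]                       # one adjacency matrix per topic
    s = rng.choice([-1.0, 1.0], size=(2, n))
    chi = np.ones(n) if chi is None else np.asarray(chi, dtype=float)

    def local_field(a, i):
        other = 1 - a
        return (J * layers[a][i] @ s[a]               # peer pressure on topic a
                + h[a]                                # mass-media field on topic a
                + gamma * chi[i] / J * s[other, i])   # pull towards internal coherence

    for _ in range(sweeps):
        for i in rng.permutation(n):
            for a in (0, 1):
                dE = -2.0 * s[a, i] * local_field(a, i)   # change in s_i^a * f_i^a after a flip
                if dE > 0 or (T > 0 and rng.random() < 1.0 / (1.0 + np.exp(-dE / T))):
                    s[a, i] = -s[a, i]

    m = np.abs(s.mean(axis=1))             # consensus (absolute magnetization) per layer
    C = float(np.mean(chi * s[0] * s[1]))  # average internal coherence
    return m, C
```

sweeping gamma adiabatically with such a routine, and recording m and C along the way, is the natural way to mimic the hysteresis protocol described above.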
in this section, we have shown that the behavior of the system for does not change qualitatively in the presence of noise below some critical value for both a population of homogeneous agents and one of heterogeneous agents .this ultimately suggests that our finding that heterogeneity is necessary in population of coherent agents in order to exhibit realistic states of partial consensus , found for noise - free setups of our model , may still be relevant for real social systems .understanding the elementary mechanisms responsible for the emergence of consensus in social systems is a fascinating problem that has stimulated research in several different fields , from sociology to mathematics , from computer science to theoretical physics , for more than a few decades. nevertheless , traditional models used in the field to describe such systems are still far from capturing the essence of the dynamics of real societies . indeed, these models of opinion formation overall underestimate the importance of both ( i ) the existence of many different contexts where social dynamics may develop , and ( ii ) the variety of interaction patterns that naturally forms between individuals at each of these different aspects . in details ,these models are usually based on the simplifying assumption that the social interactions underpinning consensus are essentially homogeneous , whereas real - world societies are instead intrinsically multilayered and multifaceted , meaning that individuals normally interact with several different neighbourhoods in a number of different yet correlated contexts .such multilayered structure of social interactions also naturally imply that relationships among each individuals opinions on many different topics or subjects may exists , thus playing a major role in the formation of an agent s public profile .however , this issue has rarely been addressed in the literature to our knowledge .overall , these properties of real social systems , force agents to pursue a balanced trade - off between their internal tendency towards providing a coherent image of themselves , corresponding to a coherent set of opinions over the range of contexts in which their social activities develop , and the external pressure towards local homogenization that comes from their concurrent participation to different social circles . in this workwe address the issues ( i - ii ) thoroughly , and propose a novel , yet simple , model of social opinion dynamics which is capable to account for them all .our model is obtained by suitably readapting the framework of multilayer networks , which has been developed in the last years in different contexts .remarkably , the proposed model suggests that the delicate equilibrium between internal agent coherence and responsiveness to external social pressure in a multilayered social environment might indeed be one of the fundamental ingredients responsible for the appearance of non - trivial consensus patterns , such as states of partial consensus emerging from a population of coherent agents . 
despite being straightforward in its formulation and relying on rather simple assumptions ,the model we proposed allows to take appropriately into account the interplay between each agent s tendency towards coherence , the neighborhood s tendency towards local consensus and the pulling external forces represented by the persistent action of mass media .one of the most interesting findings of the present work is that the introduction of mild heterogeneity in the agents response to social pressure fosters the emergence of non - trivial states in which internal agent s coherence is always reached at the expenses of a lower level of global consensus .this picture is consistent with what is widely observed in structured societies , where a perfect global consensus is never stable while individuals tend to adhere to pre - defined sets of social values which they consider coherent .another remarkable effect reproduced by our model is the impact of mass media pressure , especially in the case where the population is heterogeneous .in particular , it is interesting to observe that by an appropriate tuning of the relative strength of the two external fields representing mass media one can indeed set any desired value of consensus on each layer , with the possibility of driving the population from incoherent to more coherent configurations in a continuous way .finally , the results of the study of the role played by the presence of noise are compatible with real - world scenarios , in which incomplete or inaccurate information about the state of peers is the norm and not an exception .we highlight that the model discussed in this work is limited to a specific setting , where both the social and mass - media pressure are considered only as a mean - field effect .these assumptions imply that the response of agents to both external fields and interactions with his / her neighbors is homogeneous , which is only a first - order approximation of the real effects of mass media and social pressure on a population of agents .a more realistic approach would require to consider each agent s adaptive response to such influence , i.e. , by both considering that the effect of external field on layer on each node is a random variable }}$ ] drawn from a certain distribution , and considering an agent - dependent response to interactions with other individuals , i.e. by replacing with an agent - dependent parameter .however , we purposedly decided to leave the investigation of these generalizations to a future work . in conclusion, we find it quite intriguing that by taking into account the presence of concurrent interactions on a variety of different topics we were able to provide a simple explanation for the formation of growing patterns of consensus , whose level appears to be dependent on the strength of mass media pressure , as long as the agents acknowledge different couplings between their opinions on the different topics .we believe that the results presented in this work will spur further research towards a better understanding of the implications of interconnected and multilayered interaction patterns on the spreading of opinions and emergence of consensus in real - world social systems ., v.n . and v.l .acknowledge support from the project lasagne , contract no.318132 ( strep ) , funded by the european commission .this research utilized queen mary s midplus computational facilities , supported by qmul research - it and funded by epsrc grant ep / k000128/1 .10 d. lazer , a. pentland , l. adamic , s. 
aral, a.-l. barabasi, d. brewer, n. christakis, n. contractor, j. fowler, m. gutmann, t. jebara, g. king, m. macy, d. roy, m. van alstyne, social science: computational social science, science 323 (5915) (2009) 721-723. s. boccaletti, g. bianconi, r. criado, c. del genio, j. gomez-gardenes, m. romance, i. sendina-nadal, z. wang, m. zanin, the structure and dynamics of multilayer networks, phys. rep. 544 (1) (2014) 1-122.
the formation of agents' opinions in a social system is the result of an intricate equilibrium among several driving forces. on the one hand, the social pressure exerted by peers favours the emergence of local consensus. on the other hand, the concurrent participation of agents in discussions on different topics induces each agent to develop a coherent set of opinions across all the topics in which he is active. moreover, the pervasive action of external stimuli, such as mass media, pulls the entire population towards a specific configuration of opinions on different topics. here we propose a model in which agents with interrelated opinions, interacting on several layers representing different topics, tend to spread their own ideas to their neighbourhood, strive to maintain internal coherence, due to the fact that each agent identifies meaningful relationships among its opinions on the different topics, and are at the same time subject to external fields, resembling the pressure of mass media. we show that the presence of heterogeneity in the internal coupling assigned by agents to their different opinions allows us to obtain states with mixed levels of consensus, while still ensuring that all the agents attain a coherent set of opinions. furthermore, we show that all the observed features of the model are preserved in the presence of thermal noise up to a critical temperature, after which global consensus is no longer attainable. this suggests the relevance of our results for real social systems, where noise is inevitably present in the form of information uncertainty and misunderstandings. the model also demonstrates how mass media can be effectively used to favour the propagation of a chosen set of opinions, thus polarising the consensus of an entire population.
in the competitive environment of semiconductor manufacturing , accurate reliability prediction results in significant time - to - market and profitability improvements .prediction quality depends on the manufacturer s ability to characterize process - related instabilities and defects in a given design .burn - in stresses are commonly performed on products to accelerate the fabrication process failure mechanism and to screen out design flaws . at sub - micron process technology nodes, it has been suggested that burn - in stress is likely to affect the negative bias temperature instability ( nbti ) , which in turn will affect the operational performance of circuits .the objective of the work described in this paper is to evaluate the effect of burn - in stress on nbti , with reference to the performance effect on analogue circuits .a digital - to - analogue converter ( dac ) module was selected as a case study . with device reliability models and circuit simulation, this paper analyses the effect of burn - in stress on the shift of key dac parameters such as the integral non - linearity ( inl ) , differential non - linearity ( dnl ) and gain error .since the advent of 90 nm cmos technology , nbti has become one of the top circuit reliability issues for both pmos and nmos devices , because it can severely impact product performance over time . compared with previous process generations ,nmos hot electron degradation is no longer of such concern . at 45 nm , positive bias temperature instability ( pbti )has an effect on nmos devices that is about half of that of nbti on pmos devices .several studies have reported on the impact of nbti on the performance of analogue and digital components .it was shown by kang _et al _ that the degradation in maximum circuit delay closely follows the trend of threshold - voltage ( ) degradation in a single pmos transistor .their finding was based on a detailed analysis of circuit performance with respect to nbti degradation , particularly focusing on the maximum delay degradation of random - logic circuits .et al _ confirmed the effect of nbti degradation under ac conditions .in addition , kufluoglu _ et al _ addressed both pmos - level measurement delay effects and real - time degradation and recovery by simulation . a study performed by bhardwaj _et al _ revealed that circuit - level nbti models can be further improved by considering various process technology - dependent parameters which lead to process variation effects .ball _ et al _ have explored the burn - in implications for sram circuits .their approach has demonstrated that the minimum operating voltage , , increases during burn - in as a result of nbti and is of the order of the nbti - induced shift .schroder and babcock have thoroughly studied the time to failure ( ) relationship to voltage and temperature effects . 
from their analysis , is affected as follows : * when the burn - in stress voltage , increases , decreases ; * when the difference between the nominal voltage , , and increases , decreases ; and * is inversely proportional to temperature .the worst case situation is when the system is operated at a high voltage most of the time .however , nbti degradation can also affect the minimum operating voltage , , as noted above .nbti degradation is less sensitive to than is nmos hot carrier ( nhc ) degradation .however , it is more sensitive to temperature and occurs even when the transistor is not switching , as long as it is in inversion .the following equation shows how the threshold voltage shift of a pmos transistor , as a function of the applied voltage and temperature , affects the . *c \right ) \label{eq3}\ ] ] ^\beta \label{eq4}\ ] ] where * is the scaled time to failure in seconds due to voltage and temperature scaling dependencies ; * is the mean time to failure at the selected fail criterion ( fc ) ; * is the geometry scaling function for ageing ; * is the electric field acceleration factor ; * is the electric field ( ) across the gate oxide , of thickness ; * is the junction temperature ; * is the thermal activation energy ; * is the shift , defined as the failure criterion for modelling ; * is a process - dependent variable. this effect can be simulated by applying a signal to the circuit of interest and summing the degradation from each time step . in this case , the effect of the shift in , the threshold voltage , for a time varying waveform , can be calculated by using the quasi - static time integral with time in equation ( [ eq5 ] ) . lee _ et al _ demonstrated the nbti effect on product reliability degradation .in addition , their simulator includes other reliability mechanisms such as hot carrier injection ( hci ) and time - domain - dielectric - breakdown ( tddb ) .the simulation demonstrates the validity of using a tddb degradation model to predict the failure rate of a complicated microprocessor .the model is derived using large discrete capacitor / device tddb data with various temperature , voltage and geometry considerations .it is noted that even though nbti degradation occurs under elevated voltage and temperature , the nbti phenomena show some relaxation .this occurs due to passivation of nbti - induced silicon dangling bonds by the hydrogen which has diffused from the gate oxide to the interface .there are two types of relaxation that need to be seriously considered for circuit reliability modelling .1 . fast relaxation : this relaxation occurs as soon as the stress is removed .it is responsible for reduced ac degradation even after accounting for the transistor ` on ' time .however , this relaxation mode is not covered in our reliability simulations .2 . extended relaxation : this relaxation occurs as the device is kept unbiased .our reliability analysis accounts for this relaxation mode .figure [ figure 12 ] illustrates the two relaxation modes . because relaxation lessens the effects of nbti , a device under continuous usage may suffer a higher degradation than the reliability simulation predicts .we approximate a complex integrated circuit with a series failure model .we also assume each failure mechanism has an exponential lifetime distribution . 
in this way ,the failure rate of each failure mechanism is treated as a constant .with these two assumptions , the reliability simulation models , which are often used to extrapolate failure rates , can be validated based on available data .for this work , intel s internal tool , relsim ( reliability simulation ) , is used to predict changes in device and circuit performance over a product s lifetime .it further allows simulation of post - degraded circuits to ensure circuit designs meet reliability requirements at end - of - life ( eol ) .the reliability simulation methodology used in this paper is shown in figure [ fig : anicepicture ] .the simulation has two modes of operation .the first mode , the stress mode , calculates the transistor shift .the second mode , the playback mode , simulates degraded circuit performance based on stress mode results .the simulation is conducted to cover elevated ranges of process , voltage and temperature ( pvt ) .the dac reliability simulation is run in a 3-step process in the design environment . 1 .simulate the non - degraded behaviour at the typical circuit operating condition ( and temperature ) .2 . in stress mode , calculate the amount of degradation on each transistor . this is done at a slightly higher voltage and temperature to get a more conservative estimate of the degradation .3 . in playback mode , simulate the degraded circuit , using the degradation calculated in the stress mode .the stress mode is used to report the degradation of a circuit at future times chosen by the user .the user provides the ageing time , the ageing method ( e.g. none , fixed , uniform , bias and temperature user parameters ) and a reference degradation value .also necessary is a degradation parameter file that contains parameters for mos device stress calculations . during the stress simulation ,a stress file is generated at each specified future time .the stress file contains the stressed ( degraded ) values of each mos device in the circuit .the degraded circuit values from the initial stress mode can be subsequently used in playback mode .the playback mode produces output signal waveforms for an aged circuit .the information from a stress file is read and a perturbation function is applied to the mos depending on the degradation model chosen in the stress mode . the reliability simulator has been used for transistor ageing modelling across major process technologies from 250 nm down to 14 nm .the models have been extensively calibrated against actual silicon test chip data to ensure accuracy .the simulator can be used to model the minimum ( ) degradation effects .it is able to find the worst case corners , and takes voltages on all nodes into account .the ac nbti modelling capability provides more accurate reliability performance predictions than static dc worst - case models .furthermore , it can be calibrated with the ac circuits to include nbti recovery , similar to that in .another key advantage of this reliability simulator is that the pmos degradation is modelled with threshold voltage shifts based on non - uniform i - v degradation .the simulator models the effect on the mos transistors i - v characteristics and the effect on the device parameters and applied voltages . 
under the pmos degradation model ,it is suitable for both digital and analogue simulations .for this case study a video dac has been used .the performance of the dac is critical for achieving excellent video quality .the required accuracy of the dac is based on the differential gain and phase distortion specifications for tv .the dac is designed as a current steering architecture to achieve high accuracy and low distortion of the analogue video signal .the signal range is between zero volts to the maximum nominal analogue video signal swing of 1.3v .the digital input to each dac is latched on the rising edge of each clock and is converted to an analogue current . for a given digital input , the current source outputs are summed together and directed to the output pin by the differential current switches .an analogue video voltage is created from the dac output current flowing into the termination resistors . to determine the required output current of the dac circuit , the video level specifications for the various video formats along with the effective load terminationare measured .the lsb output voltage , which ranges between 684 and 1.27 mv , is a function of the supported video format .given the circuit mismatch sensitivity of this circuit , paired devices are designed accordingly ; typically with greater lengths .the dac is composed of parallel current switches .this so - called crt dac is widely used in high speed applications specifically for its speed and linearity .the circuit is referenced to an analogue power supply which consists of an array of pmos current sources and differential current switches ( figure [ figure8 ] ) .this dac operates at 3.3v nominal voltage and implemented in 90 nm process technology .it has been shown that the 90 nm crt dac has sufficient headroom in terms of the circuit performance degradation throughout a 7-year lifetime .the degradation is calculated by scaling the gate voltages to the typical analogue operating voltages .the extrapolation is given by equation ( [ eq8 ] ) . where : * is the process related pre - factor ; * is the threshold voltage ; * is the activation energy in ev ( = 0.145 ev was chosen by experiment ) * is the transconductance parameter ( = 0.75 was chosen by experiment ) ; * is the gate to source voltage ; * is the boltzmann constant ; * is the temperature in kelvin ; * is the time in years ; and * is the voltage acceleration and exponent factor ( = 0.181 was chosen by experiment ) .for our case study , nbti analysis was performed on the dac circuit shown in figure [ figure6 ] .we simulated the nbti behaviour of the dac under normal and extreme conditions .the reliability simulation playback mode analysis was done under the typical corners , for pre - layout schematics with proper loading . for this analysis ,the circuit was aged for a 7-year lifetime to check the dac circuit functionality and the effect of nbti degradation under burn - in conditions .table [ table1 ] shows a comparison between three different conditions ..reliability simulation parameters across three different conditions [ cols="^,^,^,^",options="header " , ]the nbti degradation observed in the reliability simulation of a dac circuit revealed that under a severe stress condition such as a 40% increase in the nominal voltage supply , a significant voltage threshold mismatch , beyond the 2 mv limit , was recorded .a burn - in experiment on the dac circuit was performed to verify the simulation . 
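the voltage and temperature scaling behind the extrapolation of equation ([eq8]) can be sketched numerically. the python fragment below implements one common nbti ageing parameterization, dVth(t) = A * exp(beta * Vgs) * exp(-Ea / kT) * t^n; the assignment of the fitted constants quoted above (Ea = 0.145 ev, beta = 0.75, n = 0.181) to these roles, the pre-factor A and the function names are assumptions for illustration only, since the exact form of the original equation did not survive extraction.

```python
import math

K_B_EV = 8.617e-5  # Boltzmann constant in eV/K

def delta_vth(t_years, vgs, temp_k, a=1.0e-3, beta=0.75, e_a=0.145, n=0.181):
    """Hypothetical NBTI ageing model of the general form
    dVth = A * exp(beta*Vgs) * exp(-Ea/(k*T)) * t**n  (result in volts)."""
    return a * math.exp(beta * vgs) * math.exp(-e_a / (K_B_EV * temp_k)) * t_years ** n

def years_to_mismatch(limit_v, vgs, temp_k, **kw):
    """Invert the power law in time to estimate when dVth crosses a mismatch
    budget, e.g. the 2 mV pairing limit discussed for the DAC current sources."""
    one_year_shift = delta_vth(1.0, vgs, temp_k, **kw)
    n = kw.get("n", 0.181)
    return (limit_v / one_year_shift) ** (1.0 / n)
```

with such a sketch the nominal operating point can be compared against a burn-in condition (for example a supply overdrive at elevated temperature) simply by evaluating delta_vth at the two bias points; only the relative shift is meaningful here, since the pre-factor is an assumed value.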
a correlation between the simulation results and the burn - in behaviourwas observed , but the change in the gain error was significantly greater than predicted .d. k. schroder and j. a. babcock , `` negative bias temperature instability : road to cross in deep submicron silicon semiconductor manufacturing , '' _ journal of applied physics _ , vol .94 , no . 1 ,pp . 118 , 2003 .[ online ] .available : http://link.aip.org/link/?jap/94/1/1 k. kang , h. kufluoglu , k. roy , and m. ashraful alam , `` impact of negative - bias temperature instability in nanoscale sram array : modeling and analysis , '' _ computer - aided design of integrated circuits and systems , ieee transactions on _ , vol .26 , no .1770 1781 , oct . 2007 .s. kumar , k. kim , and s. sapatnekar , `` impact of nbti on sram read stability and design for reliability , '' in _ quality electronic design , 2006 .isqed 06 .7th international symposium on _ , march 2006 , pp .212 218 .h. kufluoglu , v. reddy , a. marshall , j. krick , t. ragheb , c. cirba , a. krishnan , and c. chancellor , `` an extensive and improved circuit simulation methodology for nbti recovery , '' in _ reliability physics symposium ( irps ) , 2010 ieee international _ , may 2010 , pp .670 675 .s. bhardwaj , w. wang , r. vattikonda , y. cao , and s. vrudhula , `` predictive modeling of the nbti effect for reliable design , '' in _ custom integrated circuits conference , 2006 .ieee _ , sept .2006 , pp .189 192 .m. ball , j. rosal , r. mckee , w. loh , t. houston , r. garcia , j. raval , d. li , r. hollingsworth , r. gury , r. eklund , j. vaccani , b. castellano , f. piacibello , s. ashburn , a. tsao , a. krishnan , j. ondrusek , and t. anderson , `` a screening methodology for vmin drift in sram arrays with application to sub-65 nm nodes , '' in _ electron devices meeting , 2006 .2006 , pp . 1 4 .n. kimizuka , k. yamaguchi , k. imai , t. iizuka , c. liu , r. keller , and t. horiuchi , `` nbti enhancement by nitrogen incorporation into ultrathin gate oxide for 0.10 m gate cmos generation , '' in _ vlsi technology , 2000 .digest of technical papers .2000 symposium on _ , 2000 , pp .92 93 .lee , n. mielke , b. sabi , s. stadler , r. nachman , and s. hu , `` effect of pmost bias - temperature instability on circuit reliability performance , '' in _ electron devices meeting , 2003 .iedm 03 technical digest .ieee international _ , vol .14.6 , dec .2003 , pp .m. latif , n. ali , and f. hussin , `` a case study for reliability - aware in soc analog circuit design , '' in _ intelligent and advanced systems ( icias ) , 2010 international conference on _ , june 2010 , pp .1 6 .m. agostinelli , s. lau , s. pae , p. marzolf , h. muthali , and s. jacobs , `` pmos nbti - induced circuit mismatch in advanced technologies , '' _ microelectronics reliability _ , vol .46 , no . 1 , pp . 63 68 , 2006 . [ online ] .available : http://www.sciencedirect.com/science/article/ pii / s0026271405000983[http://www.sciencedirect.com / science / article/ pii / s0026271405000983 ]
burn - in is accepted as a way to evaluate ageing effects in an accelerated manner . it has been suggested that burn - in stress may have a significant effect on the negative bias temperature instability ( nbti ) of subthreshold cmos circuits . this paper analyses the effect of burn - in on nbti in the context of a digital to analogue converter ( dac ) circuit . analogue circuits require matched device pairs ; nbti may cause mismatches and hence circuit failure . the nbti degradation observed in the simulation analysis indicates that under severe stress conditions , a significant voltage threshold mismatch in the dac beyond the design specification of 2 mv limit can result . experimental results confirm the sensitivity of the dac circuit design to nbti resulting from burn - in .
today the casimir effect is being actively investigated not only theoretically but also experimentally .historically the first measurement of the casimir force between metals was performed in 1958 and confirmed the existence of the force with an uncertainty of about 100% . in the following decadesthe experimental output was painfully low and only one experiment with metal test bodies was made ( see ref . for a review ) . in the last few yearsmany measurements of the casimir force have been performed using torsion pendulums , atomic force microscopes , micromechanical torsional oscillators , and other laboratory techniques [ 416 ] .most authors ( see refs . [ 114 ] ) have used the concept of the root - mean - square deviation between experiment and theory to quantify the precision of the measurements . however , for strongly nonlinear quantities , such as the casimir force which changes rapidly with separation distance, this method is not appropriate because it may lead to different results when applied in different ranges of separations .this was emphasized in ref . although no better method was suggested .the present paper contains the comparison analysis of the precision and accuracy in two recent experiments using rigorous methods of mathematical statistics .the distinctive feature of our approach is that both total experimental and total theoretical errors are determined independently of one another at some accepted confidence level .then , the absolute error of differences between calculated and measured values of the physical quantity is found at the same confidence as a function of separation , serving as a measure of the precision in the comparison of experiment and theory .in experiment the casimir pressure between two au coated parallel plates was determined dynamically by means of a microelectromechanical torsional oscillator within the separation region from 160 to 750 nm . in experiment the casimir force was measured between a si plate and a large au coated sphere using an atomic force microscope within the separations from 62.33 to 600.04 nm . in our error analysiswe use the notation which denotes either the measured casimir pressure or force as a function of separation between the test bodies .usually several sets of measurements , say , are taken within one separation region ( ) .this is done in order to decrease the random error and to narrow the confidence interval . in ref . , and in ref . .each set consists of pairs ] where . generally speaking , separations with fixed butdifferent may be different ( this was the case in ref . ) . for such measurement resultsit is reasonable to divide the entire separation range ( ) into partial subintervals of length , where is the absolute error in the measurement of separations equal to 0.6 nm and 0.8 nm in refs . , respectively .in so doing , each subinterval contains a group of points , ( in ref . ranges from 3 to 13 ) . inside each subintervalall points can be considered as equivalent , because within the interval of width the value of absolute separation is distributed uniformly .the mean and the variance of the mean of the physical quantity for the subinterval are defined as ^ 2 .\label{eq1}\ ] ] if all , i.e. , the same in different sets of measurement ( as in ref . ) , the mean and the variance of the mean at each point are obtained more simply ^ 2 .\label{eq2}\ ] ] direct calculation shows that the mean values are uniform , i.e. 
, change smoothly with the change of .the variances of the mean , , are , however , not uniform .to smooth them , we have used a special procedure developed in mathematical statistics . at each separation , in order to find the uniform variance of a mean , we consider not only one subinterval containing but also several neighboring subintervals from both sides of ( 4 or 5 in ref . ) or about 30 neighboring points in ref . .the number of neighboring subintervals or points is denoted by .then the smoothed variance of the mean at a point is given by , \label{eq3}\ ] ] where are the statistical weights .the maximum in eq .( [ eq3 ] ) is taken over two sets of coefficients , , and where the constants are determined from .note that max in eq .( [ eq3 ] ) leads to the most conservative result , i.e. , overestimates the random error .finally , the confidence interval at a confidence probability takes the form , \label{eq4}\ ] ] where the random absolute error in the measurement of the quantity at a separation is given by here the value of can be found in tables for the student s -distribution .for example , in the experiment , . thus ,for , we have and is independent of .the computational results for the relative random errors in the experiments at 95% confidence are shown in columns labeled ( a ) in table 1 , as the functions of separation . as is seen from column two in table 1 , in the experiment relative random error of the casimir pressure measurements is equal to 1.5% at nm , then it quickly decreases to 0.4% at nm , and then increases with further increase of separation .this is explained by the fact that the absolute random error in eq .( [ eq5 ] ) takes a maximum value at the shortest separation and monotonically decreases with the increase of separation until nm . at larger separations is practically constant and the increase of is explained by solely in terms of the decrease of the casimir pressure magnitude . in the experiment (column 8 in table 1 ) the absolute random error is only 0.78% at the shortest separation nm , and quickly increases with separation due to the decrease of the casimir force . & & + & & & + (nm)&(a)&(b)&(c)&(d ) & ( e)&(f)&&(a)&(b)&(c)&(d ) & ( e)&(f ) + 62.33&&&&&&&&0.78&0.31&0.87&0.55&3.5&4.0 + 70&&&&&&&&1.1&0.42&1.2&0.56&3.2&3.7 + 80&&&&&&&&1.6&0.60&1.7&0.56&2.8&3.7 + 90&&&&&&&&2.1&0.84&2.4&0.56&2.6&3.9 + 100&&&&&&&&2.9&1.1&3.2&0.56&2.4&4.4 + 120&&&&&&&&4.7&1.8&5.3&0.56&2.0&6.2 + 140&&&&&&&&7.3&2.8&8.1&0.57&1.8&9.1 + 160&1.4&0.15&1.4&0.56&1.6&2.4&&10&4.1&12&0.58&1.6&13 + 170&0.59&0.15&0.59&0.56&1.6&1.9&&12&4.9&14&0.58&1.6&15 + 180&0.57&0.15&0.57&0.57&1.5&1.8&&15&5.7&16&0.58&1.5&18 + 200&0.55&0.16&0.56&0.57&1.4&1.7&&20&7.7&22&0.59&1.4&23 + 250&0.48&0.20&0.54&0.58&1.2&1.5&&37&14&41&0.61&1.3&42 +300&0.44&0.31&0.59&0.59&1.1&1.4&&62&24&69&0.64&1.2&70 + 350&0.40&0.50&0.72&0.61&1.0&1.4&&96&37&107&0.67&1.1&108 + 400&0.56&0.80&1.1&0.62&0.98&1.6& & & & & & & + 500&1.3&1.80&2.5&0.66&0.91&2.9& & & & & & & + 600&2.9&3.80&5.4&0.70&0.88&5.4& & & & & & & + in each of the experiments there are several absolute systematic errors and respective relative systematic errors where .systematic errors are the random quantities characterized by a uniform distribution . because of this , the total systematic error is ^ 2}\right\ } , \label{eq6}\ ] ] where is the confidence probability , and is a tabulated coefficient . the same rule is also valid for the absolute systematic errors . 
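To make the random-error part of this procedure concrete, the following is a minimal illustrative sketch (not the authors' code) of the mean, the variance of the mean and the half-width of the confidence interval at 95% confidence for repeated measurements at a single separation, with the Student coefficient taken from scipy; the smoothing of the variances over neighbouring subintervals described above is omitted, and the sample readings are invented.

```python
# Minimal sketch of the random-error estimate: mean, variance of the mean and
# the 95% confidence half-width for n repeated measurements F_i(z) at one
# separation z.  The subinterval variance smoothing is not included here.
import numpy as np
from scipy import stats

def random_error(samples, confidence=0.95):
    samples = np.asarray(samples, dtype=float)
    n = samples.size
    mean = samples.mean()
    var_of_mean = samples.var(ddof=1) / n            # variance of the mean
    t_coeff = stats.t.ppf(0.5 * (1.0 + confidence), df=n - 1)
    delta_rand = t_coeff * np.sqrt(var_of_mean)      # absolute random error
    return mean, delta_rand

# purely illustrative: 33 pressure readings (mPa) in one separation subinterval
readings = 350.0 + np.random.default_rng(0).normal(0.0, 2.0, size=33)
mean, err = random_error(readings)
print(f"mean = {mean:.2f} mPa, 95% random error = {err:.2f} mPa "
      f"({100 * err / abs(mean):.2f} %)")
```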
in the experiment are main systematic errors : where m is the sphere radius , and are the resonant and natural angular frequencies of the oscillator , respectively ( the former is separation - dependent ) . was determined so precisely that its error does not contribute to the results , and the error of the resonant frequency is . using the value and utilizing eq .( [ eq6 ] ) one obtains the total systematic errors given in column 3 [ labeled ( b ) ] in table 1 . the experiment contains the following systematic errors : due to force calibration ; due to noise when the calibration voltage is applied to the cantilever ; due to the instrumental sensitivity ; and due to the restrictions on computer resolution of data . combining these errors using the analog of eq .( [ eq6 ] ) with , we obtain . the respective relative errors are shown in column 9 in table 1 .comparing columns labeled ( b ) in table 1 , we conclude that in both experiments the relative systematic error increases as the separation increases .the magnitudes of the systematic errors are smaller in the experiment of ref . . to find the total experimental error in the measurements of ,one should combine the random and systematic errors obtained above which are described by a normal ( or student ) distribution and a combination of uniform distributions , respectively . to be very conservative , we assume that the systematic error is described by a uniform distribution ( other assumptions lead to smaller total error ) .different methods for combining random and systematic errors are described in ref. .here we use one based on the value of the quantity . according to this method , at all where the contribution from the systematic error is negligible and at 95% confidence . if is valid , the random error is negligible and at 95% confidence . in the separation regionwhere , the combination of errors is performed using the rule , \label{eq8}\ ] ] where the coefficient with varies between 0.71 and 0.81 .being conservative , here we use in all calculations .table 1 [ columns 4 and 10 labeled ( c ) ] contain the total experimental error of the casimir pressure and force measurements in the experiments , respectively .as seen in column 4 of table 1 , in the experiment at nm the total experimental error is equal to 1.4% , but in a wide separation range from 170 to 300 nm , it is practically flat and within the range from 0.54 to 0.59% .even at nm it is equal to only 5.4% . in the experiment ( column 10 in table 1 ) the smallest total experimental error of 0.87% is achieved nm and increases up to 5.3% at nm .this is mainly due to the large contribution of the random errors .the theoretical values of ( both the pressure and force ) are computed using the lifshitz formula ( see , e.g. , ref . ) which takes into account the effects of finite conductivity and nonzero temperature .the lifshitz formula contains the reflection coefficients at imaginary matsubara frequencies . at zero matsubara frequencythese coefficients are expressed in terms of the drude dielectric function ( the drude model approach ) or in terms of the leontovich surface impedance ( the impedance approach ) . at nonzero matsubara frequenciesboth approaches use the tabulated optical data extrapolated to low frequencies by the imaginary part of the drude dielectric function . in refs . 
the reflection coefficients at all matsubara frequencies were expressed using the free electron plasma model ( the plasma model approach ) .one error in the theoretical computation arises from sample to sample variations of the optical data for the complex index of refraction .usually these data are not measured in each individual experiment , but are taken from tables . in ref . it was shown that variation of the optical data for typical samples leads to a relative theoretical error in the computed casimir pressure or force that is no larger than 0.5% .being conservative , we set % at all separations . strictly speaking, there may occur rare samples with up to 2% deviations in the casimir pressure or force at short separations .if this happens , the theoretical values come into conflict with the experimental data .such deviations must be considered not as an error ( they can only diminish the magnitudes of the pressure or force ) but as a correction .the validity of the hypothesis on the presence of such types of corrections can be easily verified statistically .another theoretical error is caused by the use of the proximity force theorem .( this is the name given by the authors of ref . ; some other authors , e.g. in ref . , prefer to use the name proximity force approximation " to underline the approximate character of the equality proposed in ref . . ) in the experiment it is applied to express the effective casimir pressure between two parallel plates through the derivative of the force acting between a sphere and a plate . in the experiment the basic result for the force is obtained using the proximity force theorem .the upper limit of error introduced by this is ( see also refs . where the same estimation was confirmed for the case of a massless scalar field ) .both errors are described by a uniform distribution and in this sense can be likened to systematic errors .they are combined by using eq .( [ eq6 ] ) with leading to the values presented in columns 5 and 11 in table 1 [ labeled ( d ) ] for the experiments , respectively .as is seen from these columns , the errors depend only slightly on separation and take similar values between 0.55 and 0.70% .in addition to the major theoretical errors , there exist other uncertainties in calculations which are not taken into account in the lifshitz formula .some of them were shown to be negligibly small ( like the contributions from patch potentials , nonlocal effects and finite sizes of the plates ) .as to the contribution from the surface roughness , it was calculated using the atomic force microscope images of the interacting surfaces and taken into account as a correction [ 1416 ] .this is why these factors do not contribute to the balance of theoretical errors .there is one more error which can be considered together with the theoretical errors if one is going to compare the experimental and theoretical values of .this arises from the fact that is determined experimentally with an error ( see sec . 2.1 ) , and this error results in the additional uncertainties in computations . bearing in mind the leading theoretical dependences of the pressure and force on separation , we obtain in ref . and in ref . . taking into account that the combined random quantity may be distributed nonuniformly , we combine it with using eq .( [ eq8 ] ) and obtain the total theoretical error at 95% confidence .the values of are presented in columns 6 and 12 in table 1 [ labeled ( e ) ] for the experiments , respectively . 
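As an illustration of the combination rules invoked above, the sketch below implements the generic rule for several uniformly distributed error contributions (the minimum of the linear sum and the coefficient-weighted quadrature sum) and the rule used when a random and a systematic part are comparable (a q-weighted sum). The coefficients k_p ~ 1.1 and q_p = 0.8 at 95% confidence are conservative values consistent with the description, not numbers quoted from the paper, and the sample inputs are purely illustrative.

```python
# Illustrative sketch of the two error-combination rules described in the text.
# combine_uniform: min( sum |d_j| , k_p * sqrt(sum d_j^2) ) for several
#   uniformly distributed errors (eq. (6)-type rule).
# combine_rand_syst: q_p * (d_rand + d_syst) when the random and systematic
#   parts are comparable (eq. (8)-type rule).
# k_p ~ 1.1 and q_p = 0.8 at 95% confidence are assumed conservative values.
import math

def combine_uniform(errors, k_p=1.1):
    linear_sum = sum(abs(e) for e in errors)
    quadrature = k_p * math.sqrt(sum(e * e for e in errors))
    return min(linear_sum, quadrature)

def combine_rand_syst(delta_rand, delta_syst, q_p=0.8):
    return q_p * (delta_rand + delta_syst)

# purely illustrative relative errors (percent):
theor = combine_uniform([0.5, 0.3])            # e.g. optical data + proximity force
total_theor = combine_rand_syst(0.9, theor)    # add a separation-induced part
print(f"combined uniform part: {theor:.2f} %, total: {total_theor:.2f} %")
```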
for both experimentsthey monotonically decrease with separation and take the largest values at the shortest separation .the significant increase of the total theoretical error in columns labeled ( e ) compared to those labeled ( d ) is due to the additional error .in secs . 2.3 and 3.2 we have obtained the total experimental and theoretical errors at 95% confidence for both the casimir pressure and force .now we consider the new random quantity [ or and determine the absolute error of this quantity , , at 95% confidence using the composition rule ( [ eq6 ] ) with ^ 2+\left [ \delta^{\!\rm tot}p^{\rm exp}(z)\right]^2}\right\ } \label{eq9}\ ] ] ( the same equation is valid for the force ) .note that in eq .( [ eq9 ] ) the conservative value of is used as for two uniform distributions ( otherwise it would be smaller ) .the confidence interval for the quantity at 95% confidence is given by $ ] and the mean values or must belong to this interval or its analog for the force with a 95% probability .the values of and are given in columns 7 and 13 in table 1 [ labeled ( f ) ] for the experiments , respectively .they characterize the sensitivity of the experiments to the differences between theory and experiment at 95% confidence . for example , in ref . theory is in agreement with experiment at a separation nm if does not exceed 1.6% of . & & & + & & & + (nm)&(a)&(b)&(c)&(d ) & ( e)&(f)&&(a)&(e ) + 62.33&&&&&&&&15.2&0.5 + 70&&&&&&&&10.4&3.0 + 80&&&&&&&&7.1&3.6 + 90&&&&&&&&5.4&1.0 + 100&&&&&&&&4.5&2.0 + 120&&&&&&&&3.9&0.15 + 140&&&&&&&&3.8&0.02 + 170 & 17.2&39.8&2.01&13.0&3.87&18.8&&3.7&0.82 + 180&13.4&31.0&0.74&7.54&1.24&14.4&&3.7&0.48 + 200&8.59&19.8&1.21&5.3&0.63&11.0&&3.7&0.31 + 250&3.34&7.72&0.31&1.3&0.93&7.09&&3.7&0.84 + 300&1.59&3.67&0.34&0.6&1.12&5.07&&3.7&0.46 + 350&0.89&2.06&0.38&0.39&0.80&3.58&&3.7&0.27 + 400&0.63&1.46&0.28&0.20&0.68&2.59 & & & + 500&0.49&1.13&0.11&0.05&0.32&1.37 & & & + 600&0.46&1.06&0.08&0.04&0.17&0.82 & & & + 700&0.46&1.06&0.02&0.01&0.08&0.51 & & & + experiment is rather sensitive and can be compared with different theoretical approaches to the calculation of the casimir pressure .the main results are presented in table 2 where the second and third columns labeled ( a ) , ( b ) contain the half - width of the confidence interval at 95 and 99% confidence , respectively . in columns47 labeled ( c ) , ( d ) , ( e ) and ( f ) the results for the mean differences are computed using the impedance and the plasma model approach at , the optical data in the lifshitz formula at , and the drude model approach at , respectively . to avoid confusion ,recall that in column ( c ) the zero - frequency contribution to the lifshitz formula is computed using the leontovich impedance in the region of infrared optics . at all other matsubara frequenciesthe impedance is obtained using the tabulated optical data . comparing columns 46 and columns 2,3, we conclude that the impedance approach , the plasma model approach and the lifshitz formula at are consistent with the measurement data . at the same time , by comparing columns 2,3 with column 7 we find that the drude model approach is excluded by experiment at 95% confidence within the separation range from 170 to 700 nm , and at 99% confidence from 300 to 500 nm .the physical reasons for the failure of the drude model approach and the advantages of the leontovich impedance are discussed in refs . 
.experiment is the first demonstration of the casimir force between a metal and a semiconductor performed at shorter separations than in experiment .for this reason it can not be used to discriminate among different theories . in column 8 in table 2 labeled( a ) the values of for the force at 95% confidence are given .column 9 in table 2 labeled ( e ) contains the values of computed using the lifshitz formula at and tabulated optical data for au and si .the comparison of these columns shows that the theory at is in a very good agreement with experiment .from the above , several conclusions can be reached : a new method for data processing and comparing theory with experiment for the casimir effect has been presented based on rigorous results of mathematical statistics with no recourse to the previously used root - mean - square deviation ; the distinguishing feature of this method is the independent determination of the total experimental and theoretical errors and of the confidence interval for differences between calculated and measured values at a chosen confidence probability ; the developed method is conservative and guaranties against underestimation of errors and uncertainties .it was applied to two recent experiments measuring the casimir pressure and force in different configurations ; we have demonstrated that the approaches based on the vanishing contribution of the transverse electric mode at zero frequency ( e.g. , the drude model approach ) are excluded by experiment at 99% confidence , whereas the three traditional approaches to the thermal casimir force are consistent with experiment .the work of glk , fc , um and vmm was supported by the nsf grant no phy0355092 and doe grant no de - fg02 - 04er46131 .ef was supported by doe grant no de - ac02 - 76er071428 .sparnaay m j 1958 _ physica _ * 24 * 751 van blokland p h g and overbeek j t g 1978 _ j. chem . soc .faraday trans . _ * 74 * 2637 bordag m , mohideen u and mostepanenko v m 2001 _ phys . rep . _ * 353 * 1 lamoreaux s k 1997 _ phys .* 78 * 5 u. mohideen u and roy a 1998 _ phys . rev .lett . _ * 81 * 4549 roy a and mohideen u 1999 _ phys .lett . _ * 82 * 4380 roy a , lin c - y and mohideen u 1999 _ phys ._ d * 60 * 111101(r ) harris b w , chen f and mohideen u 2000 _ phys .a * 62 * 052109 ederth t 2000 _ phys .rev . _ a * 62 * 062104 chan h b , aksyuk v a , kleiman r n , bishop d j and capasso f 2001 _ science _ * 291 * 1941 bressi g , carugno g , onofrio r and ruoso g 2002 _ phys ._ * 88 * 041804 chen f , mohideen u , klimchitskaya g l and mostepanenko v m 2002 _ phys .lett . _ * 88 * 101801 decca r s , fischbach e , klimchitskaya g l , krause d e , lpez d and mostepanenko v m 2003 _ phys . rev . _ d * 68 * 116003 chen f , mohideen u , klimchitskaya g l and mostepanenko v m 2004 _ phys .rev . _ a * 69 * 022117 decca r s , lpez d , fischbach e , klimchitskaya g l , krause d e and mostepanenko v m 2005 _ ann .n y _ * 318 * 37 chen f , mohideen u , klimchitskaya g l and mostepanenko v m 2005 _ phys . rev . _ a * 72 * 020101(r ) brownlee k a 1965 _ statistical theory and methodology in science and engineering _( new york : willey ) cochran w g 1954 _ biometrics _ * 10 * 101 rabinovich s g 2000 _ measurement errors and uncertainties _ ( new york : springer ) bostrm m and sernelius b e 2000 _ phys .lett . 
_ * 84 * 4757 brevik i , aarseth j b , hye j s and milton k a 2005 _ phys ._ e * 71 * 056101 geyer b , klimchitskaya g l and mostepanenko v m 2003 _ phys ._ a * 67 * 062102 bezerra v b , klimchitskaya g l , mostepanenko v m and romero c 2004 _ phys . rev . _ a * 69 * 022119 genet c , lambrecht a and reynaud s 2000 _ phys . rev . _ a * 62 * 012110 bordag m , geyer b , klimchitskaya g l and mostepanenko v m 2000 _ phys . rev ._ * 85 * 503 blocki j , randrup j , wiatecki w j and tsang c f 1977 _ ann .n y _ * 105 * 427 scardicchio a and jaffe r l 2005 _ nucl ._ b * 704 * 552 gies h , langfeld k and moyaerts l 2003 _ jhep _ * 0306 * 018 mostepanenko v m , bezerra v b , decca r s , fischbach e , geyer b , klimchitskaya g l , krause d e , lpez d and romero c 2006 _ j. phys . _ a , this issue
in most experiments on the casimir force the comparison between measurement data and theory was done using the concept of the root - mean - square deviation , a procedure that has been criticized in literature . here we propose a special statistical analysis which should be performed separately for the experimental data and for the results of the theoretical computations . in so doing , the random , systematic , and total experimental errors are found as functions of separation , taking into account the distribution laws for each error at 95% confidence . independently , all theoretical errors are combined to obtain the total theoretical error at the same confidence . finally , the confidence interval for the differences between theoretical and experimental values is obtained as a function of separation . this rigorous approach is applied to two recent experiments on the casimir effect .
the sharpening of blurred images is a standard problem in many imaging applications . a great variety of different approaches to this severely ill - posed inverse problemhave been developped over time which differ in the assumptions they make , and in their suitability for different application contexts .blur of an image is described by a _ point - spread function ( psf ) _ which describes the redistribution of light energy in the image domain . when blurring acts equally at all locations , one has a _ space - invariant psf _ which acts by convolution .accounting also for the impact of noise , a typical blur model ( with additive noise ) then reads where is the observed image , and the unknown sharp image . in the more general case of a space - variant blurone needs a point - spread function with two arguments , , and is replaced with the integral operator such that which subsumes the space - invariant case by setting .we denote by the adjoint of the point - spread function , which is given by .conservation of energy implies generally that , however , this condition may be violated near image boundaries due to blurring across the boundary . in deblurring , we want to obtain a restored image that approximates , with the degraded image and the psf as input .this is the case of _ non - blind _ deconvolution ( as opposed to blind deconvolution which aims at inferring the sharp image and the psf simultaneously from the degraded image ) .some approaches to the deconvolution problem are presented in more detail in section [ sec - exdcvm ] .[ [ our - contribution . ] ] our contribution .+ + + + + + + + + + + + + + + + + the main subject of this paper is to discuss a modification of the richardson - lucy ( rl ) deconvolution method by robust data terms .building on the known variational interpretation of rl deconvolution , we replace the asymmetric penaliser function in the data term in such a way that larger residual errors are penalised less than with the standard csiszr divergence term . using robust data terms together with a regulariser similar to weobtain a robust and regularised richardson - lucy variant that unites the high restoration quality of variational deconvolution methods with high efficiency that is not far from the original richardson - lucy iteration .this method has already been used for experiments on recovering information from diffuse reflections in and , combined with interpolation , for the enhancement of confocal microscopy images .it has also been used to achieve efficient deconvolution under real - time or almost - real - time conditions .we demonstrate that both robust data terms and regularisers contribute substantially to its performance. an earlier version of the present work is the technical report .[ [ related - work . ] ] related work .+ + + + + + + + + + + + + the omnipresence of deblurring problems has made researchers address this problem since long . from the abundant literature on this topic , the most relevant work in our present context includes richardson - lucy deconvolution , variational methods , and their interplay ; see also for another approach to combine richardson - lucy deconvolution with regularisation .fundamental theoretical results on existence and uniqueness of solutions of deconvolution problems can be found in the work of bertero et al . .robust data terms in deconvolution go back to zervakis et al . in statistical models , and have recently been used intensively in the variational context by bar et al . 
and welk et al .positivity constraints were studied in discrete iterative deconvolution by nagy and strako and in a variational framework by welk and nagy .the extension of variational approaches to multi - channel images has been studied in and more specifically in deconvolution in .[ [ structure - of - the - paper . ] ] structure of the paper .+ + + + + + + + + + + + + + + + + + + + + + + in section [ sec - exdcvm ] we recall approaches to image deconvolution which form the background for our approach , and discuss some aspects of noise models and robustness .section [ sec - rrrl ] recalls the embedding of richardson - lucy deconvolution into a variational context . exploiting this connection ,the rl algorithm can be modified in order to increase restoration quality and robustness with respect to noise and perturbations .an experimental comparison of the deconvolution techniques under consideration is provided in section [ sec - exp ] based on both synthetic and real - world data .conclusions in section [ sec - conc ] end the paper .we start by recalling selected deconvolution approaches from the literature which we will refer to later .richardson - lucy ( rl ) deconvolution is a nonlinear iterative method originally motivated from statistical considerations .it is based on the assumption of positive grey - values and poisson noise distribution .if the degraded and sharp images , and the point - spread function are smooth functions over with positive real values , one uses the iteration to generate a sequence of successively sharpened images from the initial image . in the absence of noise the sharp image is a fixed point of , as in this case the multiplier equals the constant function .the single parameter of the procedure is the number of iterations .while with increasing number of iterations greater sharpness is achieved , the degree of regularisation is reduced , which leads to amplification of artifacts that in the long run dominate the filtered image .this phenomenon is known by the name of _ semi - convergence_. variational methods address the deconvolution task by minimising a functional that consists of two parts : a _ data term _ that enforces the match between the sought image and the observed image via the blur model , and a _ smoothness term _ or _ regulariser _ that brings in regularity assumptions about the unknown sharp image .the strength of variational approaches lies in their great flexibility , and in the explicit way of expressing the assumptions made .they achieve often an excellent reconstruction quality , but their computational cost tends to be rather high . a general model for variational deconvolution of a grey - value image with known point - spread function is based on minimising the energy functional = \int\limits_\varomega \left ( \varphi\bigl((f - h\circledast u)^2\bigr ) + \alpha\ , \varpsi\bigl ( \lvert\nabla u\rvert^2 \bigr ) \right ) { \,\mathrm{d}}{\bm{x}}\ ] ] in which the data term penalises the reconstruction error or _ residual _ to suppress deviations from the blur model , while the smoothness term penalises roughness of the reconstructed image . the _ regularisation weight _ balances the influences of both terms . are increasing penalty functions . in the simplest case , is the identity , thus imposing a quadratic penalty on the data model .doing the same in the regulariser , one has whittaker - tikhonov regularisation with . 
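As a concrete illustration of the iteration just described, here is a minimal sketch of space-invariant Richardson-Lucy deconvolution for a two-dimensional grey-value image. The FFT-based convolution, the initialisation with the observed image and the small constant guarding the division are implementation choices, not part of the original formulation.

```python
# Minimal sketch of the classical Richardson-Lucy iteration
#   u^{k+1} = u^k * ( h_adj (conv) ( f / (h (conv) u^k) ) )
# for a space-invariant, energy-preserving PSF h (h_adj is the mirrored PSF).
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(f, psf, n_iter=50, eps=1e-12):
    f = np.asarray(f, dtype=float)
    psf = psf / psf.sum()                      # enforce energy conservation
    psf_adj = psf[::-1, ::-1]                  # adjoint PSF
    u = np.clip(f, eps, None).copy()           # positive initialisation
    for _ in range(n_iter):
        blurred = fftconvolve(u, psf, mode="same")
        u *= fftconvolve(f / np.maximum(blurred, eps), psf_adj, mode="same")
    return u
```

The number of iterations is the only parameter, in line with the semi-convergence behaviour discussed above: too many iterations amplify noise-driven artifacts.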
as this choice leads to a blurring that directly counteracts the desired sharpening , it is often avoided in favour of regularisers with edge - preserving properties .a popular representative is total variation deconvolution which uses .edge - enhancing regularisers ( e.g. of the perona - malik type ) were studied in . in ,nonlocal regularisation was proposed . for the actual minimisation , often gradient descent or lagged - diffusivity - typeminimisation schemes are employed .iterative schemes with advantageous convergence behaviour are derived from a half - quadratic approach in ( for the total variation regulariser ) and ( for certain non - convex regularisers ) .using in a penaliser with less - than - quadratic growth leads to _ robust data terms _ .for example , use the penaliser .the concept of robustness originates from statistics , and will be discussed in more detail below .spatially variant robust deconvolution models were investigated in .even richardson - lucy deconvolution can be interpreted as a fixed point iteration for an optimisation problem , using csiszr s _ information divergence _ as an ( asymmetric ) penaliser function . in a space - continuous formulation , one has the energy functional : = \int\limits_{\varomega } \left(h\circledast u - f - f\ln\frac{h\circledast u}{f}\right ) { \,\mathrm{d}}{\bm{x}}\;.\ ] ] no regularisation termis included , which is linked to the above - mentioned semi - convergence behaviour . nevertheless , the energy formulation permits to modify richardson - lucy deconvolution in the same flexible manner as the standard variational approach by introducing robust data terms and edge - preserving , or even edge - enhancing , regularisers . while an edge - preserving regulariser in combination with richardson - lucy deconvolution has been used by dey et al . ( in a space - invariant setting ) , the possibility of a robust data term has so far not been studied in detail , although it has been used successfully in applications .statistical considerations link particular data terms to specific noise models , in the sense that minimising the so constructed energy functional yields a maximum likelihood estimator under the corresponding type of noise . for example , quadratic penalisation pertains to gaussian noise , while the penaliser matches laplacian noise . the asymmetric penaliser in is related to poisson noise .this relation allows to use an optimally adapted energy model whenever one has full control over the imaging process and can therefore establish an accurate noise model . in practice, one does not always have perfect control and knowledge of the imaging process .this is where the concept of _ robustness _ comes into play .according to huber , `` robustness signifies insensitivity to small deviations from the assumptions '' . in particular , `` distributional robustness '' means that `` the shape of the underlying distribution deviates slightly from the assumed model '' .this obviously applies to imaging processes in which no exact noise model is known , but also further violations of model assumptions can be subsumed here , such as imprecise estimates of psf , or errors near the image boundary due to blurring across the boundary , see . to incorporate each single influence factor into a model is not always feasible .robust models are designed to cope with remaining deviations , and still produce usable results . 
in view of the uncertainty about the true distribution of noise, they are often based on data terms that match types of noise that are assumed to be `` worse '' than the real noise .a crucial point is to suppress the effect of outliers , which is achieved e.g. by data terms that penalise outliers less .models are then adapted to distributions with `` pessimistically '' heavy tails . to evaluate robustness experimentally ,it is not only legitimate but even necessary to test such models against severe , maybe even unrealistic , types of noise that do not exactly match the model .a frequently used test case is impulse noise , and it turns out that variational approaches actually designed for laplacian noise can cope with it practically well .of course , one can no longer expect to establish optimality in the maximum - likelihood sense with respect to the true noise .furthermore , any comparison between methods that are optimised for different noise models inevitably involves testing at least one of them with non - matching noise .for example , this happens already in any comparison of wiener filtering ( gaussian noise ) against richardson - lucy ( poisson noise ) .functionals of type are often minimised using gradient descent . alternatively, elliptic iteration schemes can be used . [[ standard - gradient - descent . ] ] standard gradient descent .+ + + + + + + + + + + + + + + + + + + + + + + + + + to derive a gradient descent equation for the energy from , one computes by the usual euler - lagrange formalism \right|_{\varepsilon=0} ] .analogous to the ordinary gteaux derivative above , a `` multiplicative gradient '' is then derived from the requirement .one finds , which constitutes a new derivation of the gradient descent without the substitution of .for given , , equation can be understood as a fixed point iteration associated to the minimisation of the functional , compare .this is the so - called _ information divergence _ introduced by csiszr .the asymmetric penaliser function is strictly convex for with its minimum at . as a necessary condition for to be a minimiser of, one can compute an euler - lagrange equation which in this case becomes particularly simple as no derivatives of are present in the integrand . in view of the positivity requirement for start by a multiplicative perturbation of with a test function , \right|_{\varepsilon=0 } & = \frac{\mathrm{d}}{\mathrm{d}\varepsilon } \int\limits_{\varomega } \biggl(h\circledast \bigl(u(1+\varepsilon v)\bigr ) -f- f\ln\frac{h\circledast \bigl(u(1+\varepsilon v)\bigr)}{f}\biggr ) { \,\mathrm{d}}{\bm{x}}\bigg|_{\varepsilon=0 } \notag \\ & = \int\limits_\omega \left(1-\frac{f}{h\circledast u}\right)\ , \bigl(h\circledast ( uv)\bigr ) { \,\mathrm{d}}{\bm{x}}\;.\end{aligned}\ ] ] with for the integral operator this becomes and , after changing the order of integration and rewriting into , requiring that this expression vanishes for all test functions yields the minimality condition because of the energy conservation property one sees that is a fixed point iteration for . in the presence of noisethe functional is not minimised by a smooth function ; in fact , the fixed - point iteration shows the above - mentioned semi - convergence behaviour and diverges for . from the variational viewpoint ,the functional needs to be regularised . 
in standard richardson - lucy deconvolution , this regularisation is provided implicitly by stopping the iteration after a finite number of steps .the earlier the iteration is stopped , the higher is the degree of regularisation .although this sort of regularisation is not represented in the functional , the variational picture is advantageous because can be modified in the same flexible way as standard variational approaches .the structure of the iterative minimisation procedure is preserved throughout these modifications , which leads to good computational efficiency .let us first note that by limiting the growth of high - frequency signal components , regularisation has a smoothing effect that in deconvolution problems acts contrary to the intended image sharpening .it is desirable to steer this effect in such a way that it interferes as little as possible with the enhancement of salient image structures , such as edges .implicit regularisation by stopping , however , allows little control over the way it affects image structures . for this reason, it makes sense to introduce a variational regularisation term into the objective functional .this yields the functional = \int\limits_{\omega } \left ( r_f(h\circledast u ) + \alpha\ , \varpsi(\lvert\nabla u\rvert^2 ) \right ) { \,\mathrm{d}}{\bm{x}}\ ] ] in which the richardson - lucy data term is complemented by a regulariser whose influence is weighted by the regularisation weight .concerning the penalisation function in the regulariser , our discussion from section [ sec - exdcvm ] applies analogously . with the total variation regulariser given by , the energy functional corresponds to the method proposed ( in space - invariant formulation ) by dey et al . ; compare also the more recent work for a similar approach .the euler - lagrange equation for under multiplicative perturbation is given by which combines with the same multiplicative gradient for the regulariser as in , compare . in converting this into a fixed point iteration, we evaluate the divergence expression with , yielding .dependent on whether the factor with which the divergence term in is multiplied is chosen as or , the right - hand side of either receives the additional summand , or is divided by .however , can have either sign , and a negative value in the numerator or denominator will lead to a violation of the positivity requirement . for this reason ,we choose the outer factor for as if , or if . using the abbreviations \pm:=\frac12(z\pm\lvert z\rvert)$ ]we can therefore write our final fixed point iteration as + } { 1 - \alpha \left[{\operatorname{div}}\bigl ( \varpsi'(\lvert\nabla u^k\rvert^2)\,\nabla u^k\bigr)\right]_- } u^k\;. \label{nrrl}\ ] ] we will refer to this method as _ regularised rl_. up to scaling and shifting , the asymmetric penaliser function equals the logarithmic density of a gamma distribution .minimisation of the integral thus corresponds to a bayesian estimation of the sharp image assuming a poisson distribution for the intensities , with the gamma distribution as conjugate prior . in variational deconvolution , it has turned out useful to replace quadratic data terms that mirror a gaussian noise model by robust data terms associated with `` heavy - tailed '' noise distributions . 
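The sign-dependent splitting just described can be sketched as follows for the total-variation regulariser. The finite differences via numpy.gradient and the small stabilisation constants are implementation choices, and the sketch is illustrative rather than the authors' implementation.

```python
# Sketch of one regularised-RL step: the divergence of psi'(|grad u|^2) grad u
# is split by sign, its positive part added to the numerator and its negative
# part to the denominator, which preserves positivity of u.
# TV regulariser: psi(s^2) ~ sqrt(s^2 + beta^2), so psi'(s^2) = 1/(2 sqrt(...)).
import numpy as np
from scipy.signal import fftconvolve

def tv_divergence(u, beta=1e-3):
    gy, gx = np.gradient(u)
    w = 0.5 / np.sqrt(gx * gx + gy * gy + beta * beta)    # psi'(|grad u|^2)
    return np.gradient(w * gx, axis=1) + np.gradient(w * gy, axis=0)

def regularised_rl_step(u, f, psf, alpha=0.01, eps=1e-12):
    psf_adj = psf[::-1, ::-1]
    ratio = f / np.maximum(fftconvolve(u, psf, mode="same"), eps)
    div = tv_divergence(u)
    num = fftconvolve(ratio, psf_adj, mode="same") + alpha * np.maximum(div, 0.0)
    den = 1.0 - alpha * np.minimum(div, 0.0)
    return u * num / np.maximum(den, eps)
```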
not only can the resulting model handle extreme noise but it can also cope with imprecisions in the blur model .following this idea , we replace the data term of by one that is adapted to a broader distribution on .to retain the structure of the fixed point iterations and , we keep in the data term , but apply a penaliser function that grows less than linear .our modified functional therefore reads = \int\limits_\omega \left ( \varphi\bigl(r_f(h\circledast u)\bigr ) + \alpha\ , \varpsi(\lvert\nabla u\rvert^2 ) \right ) { \,\mathrm{d}}{\bm{x}}\;.\kern-.7em\ ] ] by an analogous derivation as before , one obtains for the minimality condition that leads to the new fixed point iteration + } { h^*\circledast\varphi'(r_f(h\circledast u ) ) - \alpha \left[{\operatorname{div}}\bigl ( \varpsi'(\lvert\nabla u^k\rvert^2)\,\nabla u^k\bigr)\right]_- } \,\cdot\ , u^k \label{rrrl}\ ] ] which we call _ robust and regularised rl deconvolution _ ( rrrl ) . comparing to, computational cost grows by one more convolution and the evaluation of .clearly , contains regularised rl as special case ( ) .similarly , yields a non - regularised method which we will call _ robust rl deconvolution_. assume now that the blurred image is a multi - channel image with a channel index set , e.g. an rgb colour image , whose channels are uniformly blurred , i.e.the psf is equal for all channels . replacing the expressions and in the arguments of and with their sums over image channels , and , we obtain as multi - channel analog of the functional = \int\limits_\omega \bigl ( \varphi(r ) + \alpha\ , \varpsi(g ) \bigr ) { \,\mathrm{d}}{\bm{x}}\;.\ ] ] this yields as the iteration rule for multi - channel rrrl + } { h^*\circledast\varphi'(r ) - \alpha \left[{\operatorname{div}}\left ( \varpsi'(g)\ , \nabla u_j^k\right)\right]_- } \,\cdot\ , u_j^k\;. \label{rrrlmc}\ ] ] the same procedure works for the non - robust and/or non - regularised rl variants .note that in the case of standard rl this boils down to channel - wise application . the regularised rl model by dey et al . places the regularisation only in the denominator of the fixed point iteration rule .this imposes a tight bound on : as soon as exceeds , positivity is violated , and the iteration becomes unstable , in our iteration rule , sign - dependent distribution of divergence expressions to the enumerator and denominator prevents positivity violations , thus enabling larger values of .nevertheless , for substantially larger values of instabilities are observed which can be attributed to amplifying perturbations of high spatial frequency in almost homogeneous image regions . in a modification of the fixed point iteration to optimise has been proposed that guarantees stability for arbitrary .a detailed analysis of stability bounds on in the iteration will be given in a forthcoming publication .we turn now to evaluate the performance of our newly developped deconvolution methods , and comparing them to existing approaches .methods chosen for comparison include classical richardson - lucy deconvolution , variational gradient descent methods , and the methods from which are advocated for performant deblurring in several recent papers , see e.g. .we start our tests on grey - value images that are synthetically blurred by convolution . since in this case the correct sharp image is known , we can rate restoration quality by the signal - to - noise ratio here , and are the original sharp image and restored image , respectively . 
by denote the variance of the image .one should be aware that snr measurements do often not capture visual quality of deblurred images very well .the parameters in our experiments are optimised primarily for visual quality , not for snr .we remark also that in synthetically blurring images , we use convolution via the fourier domain , which involves treating the image domain as periodic .in contrast , we use convolution in the spatial domain in the variational deconvolution procedures as well as in the richardson - lucy method and its modifications .this discrepancy in the convolution procedure and boundary treatment is by purpose : it helps to prevent `` inverse crimes '' that could unduly embellish results .moreover , our implementations can thereby easily be adapted to spatially variant blurs .the methods from require computations in the fourier transforms by design . [ cols="^,^,^,^,^ " , ] in order to achieve its high restoration quality , rrrl required in both synthetic experiments significantly higher computational effort than standard rl . however , run times still remained by a factor below those for the robust variational model from literature .this is caused by the favourable structure of the minimality condition and the fixed point iteration obtained from it .in contrast , minimisation of the classical variational model becomes very slow when getting close to the optimum , thus requiring much more iterations .our last experiment ( figures [ f - rwc - sivp][f - rwc2 ] ) is based on real - world data . the colour photograph shown in figure [ f - rwc - sivp](a )was blurred during acquisition with an unknown point - spread function that is inferred approximately from the shape of a point light source . for restoration, we use the multi - channel versions of our methods .restoration by standard rl achieves a decent acuity at moderate computational cost , see the detail view , figure [ f - rwc2](a ) . increasingthe number of iterations quickly leads to ringing artifacts that are visible as shadows in the vicinity of all high - contrast image structures , see figure [ f - rwc2](b ) .variational deconvolution with a robust data term and perona - malik regulariser allows a visible improvement in acuity over rl deconvolution while reducing artifacts , see the detail view in figure [ f - rwc2](c ) .using the positivity - constrained gradient descent brings about a further significant improvement , see figure [ f - rwc2](f ) . due to the better suppression of ringing artifactsthe regularisation weight could be reduced by half here in contrast , unconstrained variational deconvolution with the same reduced creates much stronger artifacts , see figure [ f - rwc2](d ) . imposing the constraint but retaining the larger weight , see figure [ f - rwc2](e ) ,already improves acuity but still smoothes out more fine details than in figure [ f - rwc2](f ) .the excellent restoration quality of variational deconvolution , however , comes at the cost of significantly increased computation time needed in order to approximate the steady state .robust and regularised richardson - lucy deconvolution as shown in the last rows of figure [ f - rwc2 ] provides an attractive compromise between standard rl and the variational gradient descent .figure [ f - rwc2](g ) shows a rrrl result with tv regulariser which can be computed fairly fast . 
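For reference, one update of the robust and regularised iteration used in these experiments might be sketched as below, in a form consistent with the minimality condition derived above. The specific sub-linear penaliser, here phi(s) = 2*sqrt(s + eps_r) so that phi'(s) = 1/sqrt(s + eps_r), is an illustrative choice rather than the one fixed by the text, and the tv_divergence helper is the same as in the earlier sketch.

```python
# Sketch of one RRRL update consistent with the fixed-point form above:
#   u <- u * ( h_adj (conv) [ phi'(r) * f / (h (conv) u) ] + alpha*[div]_+ )
#            / ( h_adj (conv) phi'(r)                      - alpha*[div]_- )
# with r = r_f(h (conv) u) = h(conv)u - f - f*ln((h(conv)u)/f).
# phi(s) = 2*sqrt(s + eps_r) is an illustrative robust penaliser only.
import numpy as np
from scipy.signal import fftconvolve

def tv_divergence(u, beta=1e-3):
    gy, gx = np.gradient(u)
    w = 0.5 / np.sqrt(gx * gx + gy * gy + beta * beta)
    return np.gradient(w * gx, axis=1) + np.gradient(w * gy, axis=0)

def rrrl_step(u, f, psf, alpha=0.01, eps=1e-12, eps_r=1e-4):
    psf_adj = psf[::-1, ::-1]
    hu = np.maximum(fftconvolve(u, psf, mode="same"), eps)
    r = hu - f - f * np.log(np.maximum(hu / np.maximum(f, eps), eps))
    dphi = 1.0 / np.sqrt(np.maximum(r, 0.0) + eps_r)       # phi'(r)
    div = tv_divergence(u)
    num = fftconvolve(dphi * f / hu, psf_adj, mode="same") + alpha * np.maximum(div, 0.0)
    den = fftconvolve(dphi, psf_adj, mode="same") - alpha * np.minimum(div, 0.0)
    return u * num / np.maximum(den, eps)
```

Setting phi'(r) identically to one recovers the regularised-RL step sketched earlier, which is the special case mentioned above.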
with the perona - malik regulariser instead ,see figures [ f - rwc - sivp](b ) and [ f - rwc2](h ) , more iterations are required in order for the edge - enhancing properties of the regulariser to pay off , but still the computation time is lower than with the gradient descent algorithm , compare also table [ t - rwc ] . in terms of restoration quality ,both rrrl results range between the variational deconvolution without and with constraints .we remark that the test image consisting of large dark regions with few highlights makes the positivity constraint particularly valuable .in this paper , we have investigated richardson - lucy deconvolution from the variational viewpoint . based on the observation that the rl method can be understood as a fixed point iteration associated to the minimisation of the information divergence , it is embedded into the framework of variational methods .this allows in turn to apply to it the modifications that have made variational deconvolution the flexible and high - quality deconvolution tool that it is .besides regularisation that has been proposed before in , we have introduced robust data terms into the model . as a result, we have obtained a novel robust and regularised richardson - lucy deconvolution method that competes in quality with state - of - the - art variational methods , while in terms of numerical efficiency it moves considerably closer to richardson - lucy deconvolution .m. backes , t. chen , m. drmuth , h. lensch , and m. welk .tempest in a teapot : compromising reflections revisited . in _ proc .30th ieee symposium on security and privacy _ , pages 315327 , oakland , california , usa , 2009 .l. bar , a. brook , n. sochen , and n. kiryati .color image deblurring with impulsive noise . in n.paragios , o. faugeras , t. chan , and c. schnrr , editors , _ variational and level set methods in computer vision _ , volume 3752 of _ lecture notes in computer science _ , pages 4960 .springer , berlin , 2005 .l. bar , n. sochen , and n. kiryati .variational pairing of image segmentation and blind restoration . in t.pajdla and j. matas , editors , _ computer vision eccv 2004 , part ii _ ,volume 3022 of _ lecture notes in computer science _ , pages 166177 .springer , berlin , 2004 .l. bar , n. sochen , and n. kiryati .image deblurring in the presence of salt - and - pepper noise . in r.kimmel , n. sochen , and j. weickert , editors , _ scale space and pde methods in computer vision _, volume 3459 of _ lecture notes in computer science _ , pages 107118 .springer , berlin , 2005 .l. bar , n. sochen , and n. kiryati .restoration of images with piecewise space - variant blur . in f.sgallari , f. murli , and n. paragios , editors , _ scale space and variational methods in computer vision _ ,volume 4485 of _ lecture notes in computer science _ , pages 533544 .springer , berlin , 2007 .n. dey , l. blanc - fraud , c. zimmer , z. kam , j .- c .olivo - marin , and j. zerubia .a deconvolution method for confocal microscopy with total variation regularization . in _ proc .ieee international symposium on biomedical imaging ( isbi ) _ , april 2004 .n. dey , l. blanc - feraud , c. zimmer , p. roux , z. kam , j .- c .olivo - marin , and j. zerubia .ichardson - lucy algorithm with total variation regularization for 3d confocal microscope deconvolution ., 69:260266 , 2006 .a. elhayek , m. welk , and j. weickert . simultaneous interpolation and deconvolution model for the 3-d reconstruction of cell images . in r.mester and m. 
felsberg , editors , _ pattern recognition _ , volume 6835 of _ lecture notes in computer science _ , pages 316325 .springer , berlin , 2011 .m. jung and l. a. vese .nonlocal variational image deblurring models in the presence of gaussian or impulse noise . in x .-tai , k. mrken , m. lysaker , and k .- a .lie , editors , _ scale - space and variational methods in computer vision _ , volume 5567 of _ lecture notes in computer science _ ,pages 402413 .springer , berlin , 2009 .j. g. nagy and z. strako . enforcing nonnegativity in image reconstruction algorithms . in d. c. wilson , h. d. tagare , f. l. bookstein , f. j. preteux , and e. r. dougherty , editors , _ advanced signal processing algorithms , architectures , and implementations _ ,volume 4121 of _ proceedings of spie _ , pages 182190 .spie press , bellingham , 2000 .n. persch , a. elhayek , m. welk , a. bruhn , s. grewenig , k. bse , a. kraegeloh , and j. weickert .enhancing 3-d cell structures in confocal and sted microscopy : a joint model for interpolation , deblurring and anisotropic smoothing ., in press , 2013 .a. sawatzky and m. burger .edge - preserving regularization for the deconvolution of biological images in nanoscopy . in g.psihoyios , t. simos , and c. tsitouras , editors , _ proc .8th international conference of numerical analysis and applied mathematics _ , volume 1281 of _ conference proceedings _ , pages 19831986 .aip , september 2010 . m. welk and m. erler .algorithmic optimisations for iterative deconvolution methods . in j.piater and a. rodrguez - snchez , editors , _ proceedings of the 37th annual workshop of the austrian association for pattern recognition ( agm / aapr ) , 2013_. arxiv:1304.7211 [ cs.cv ] , 2013 .m. welk and j. g. nagy .variational deconvolution of multi - channel images with inequality constraints . in j.mart , j. m. bened , a. m. mendona , and j. serrat , editors , _ pattern recognition and image analysis _ ,volume 4477 of _ lecture notes in computer science _ , pages 386393 .springer , berlin , 2007 .m. welk , d. theis , t. brox , and j. weickert . -based deconvolution with forward - backward diffusivities and diffusion tensors . in r.kimmel , n. sochen , and j. weickert , editors , _ scale space and pde methods in computer vision _ , volume 3459 of _ lecture notes in computer science _, pages 585597 .springer , berlin , 2005 .m. welk , d. theis , and j. weickert .variational deblurring of images with uncertain and spatially variant blurs . in w.kropatsch , r. sablatnig , and a. hanbury , editors , _ pattern recognition _, volume 3663 of _ lecture notes in computer science _ , pages 485492 .springer , berlin , 2005 .
in this paper , an iterative method for robust deconvolution with positivity constraints is discussed . it is based on the known variational interpretation of the richardson - lucy iterative deconvolution as fixed - point iteration for the minimisation of an information divergence functional under a multiplicative perturbation model . the asymmetric penaliser function involved in this functional is then modified into a robust penaliser , and complemented with a regulariser . the resulting functional gives rise to a fixed point iteration that we call robust and regularised richardson - lucy deconvolution . it achieves an image restoration quality comparable to state - of - the - art robust variational deconvolution with a computational efficiency similar to that of the original richardson - lucy method . experiments on synthetic and real - world image data demonstrate the performance of the proposed method . * keywords : * non - blind deblurring richardson - lucy deconvolution regularization robust data term
although the macroscopic curvature of dna induced by adenine - tracts ( a - tracts ) was discovered almost two decades ago structural basis for this phenomenon remains unclear .a few models considered originally suggested that it is caused by intrinsic conformational preferences of certain sequences , but all these and similar theories failed to explain experimental data obtained later . calculations show that the b - dna duplex is mechanically anisotropic , that bending towards minor grooves of some a - tracts is strongly facilitated , and that the macroscopic curvature becomes energetically preferable once the characteristic a - tract structure is maintained by freezing or imposing constraints . however , the static curvature never appears spontaneously in calculations unbiased _ a priori _ and these results leave all doors open for the possible physical origin of the effect . in the recent yearsthe main attention has been shifted to specific interactions between dna and solvent counterions that can bend the double helix by specifically neutralizing some phosphate groups . the possibility of such mechanism is often evident in protein - dna complexes , and it has also been demonstrated by direct chemical modification of a duplex dna . in the case of the free dna in solution , however , the available experimental observations are controversial . molecular dynamics simulations of a b - dna in an explicit counterion shell could neither confirm nor disprove this hypothesis . here we report the first example where stable static curvature emerges spontaneously in molecular dynamics simulations .its direction is in striking agreement with expectations based upon experimental data .however , we use a minimal b - dna model without counterions , which strongly suggests that they hardly play a key role in this effect .figure [ ftj1 ] exhibits results of a 10 ns simulation of dynamics of a 25-mer b - dna fragment including three a - tracts separated by one helical turn .this sequence has been constructed after many preliminary tests with shorter sequence motives .our general strategy came out from the following considerations .although the a - tract sequences that induce the strongest bends are known from experiments , probably not all of them would work in simulations .there are natural limitations , such as the precision of the model , and , in addition , the limited duration of trajectories may be insufficient for some a - tracts to adopt their specific conformation . 
also , we can study only short dna fragments , therefore , it is preferable to place a - tracts at both ends in order to maximize the possible bend .there is , however , little experimental evidence of static curvature in short dna fragments , and one may well expect the specific a - tract structure to be unstable near the ends .that is why we did not simply take the strongest experimental `` benders '' , but looked for sequence motives that in calculations readily adopt the characteristic local structure , with a narrow minor groove profile and high propeller twist , both in the middle and near the ends of the duplex .the complementary duplex has been constructed by repeating and inverting one such motive .the upper trace in plate ( a ) shows the time dependence of rmsd from the canonical b - dna model .it fluctuates below 4 sometimes falling down to 2 , which is very low for the double helix of this length indicating that all helical parameters are well within the range of the b - dna family .the lower surface plot shows the time evolution of the minor dna groove .the surface is formed by 75 ps time - averaged successive minor groove profiles , with that on the front face corresponding to the final dna conformation .the groove width is evaluated by using space traces of c5 atoms as described elsewhere .its value is given in angstrms and the corresponding canonical b - dna level of 7.7 is marked by the straight dotted lines on the faces of the box .it is seen that the overall groove shape has established after 2 ns and remained stable later , with noticeable local fluctuations . in all a -tracts the groove strongly narrows towards 3 ends and widens significantly at the boundaries .there are two less significant relative narrowings inside non a - tract sequences as well .dynamics of backbone transitions are shown in plate ( b ) . the b and b conformations are distinguished by the values of two consecutive backbone torsions , and . in a transition they change concertedly from ( t , g ) to ( g,t ) .the difference is , therefore , positive in b state and negative in b , and it is used in fig .( d ) as a monitoring indicator , with the corresponding gray scale levels shown on the right .each base pair step is characterized by a column consisting of two sub - columns , with the left sub - columns referring to the sequence written at the top in 5-3 direction from left to right .the right sub - columns refer to the complementary sequence shown at the bottom .it is seen that , in a - tracts , the b conformation is preferably found in apa steps and that transitions in neighboring steps often occur concertedly so that along a single a - strand and conformations tend to alternate .the pattern of these transitions reveals rather slow dynamics and suggests that md trajectories in the 10 ns time scale are still not long enough to sample all relevant conformations .note , for instance , a very stable conformation in both strands at one of the gpg steps .plate ( c ) shows the time evolution of the overall shape of the helical axis .the optimal curved axes of all dna conformations saved during dynamics were rotated with the two ends fixed at the oz axis to put the middle point at the ox axis .the axis is next characterized by two perpendicular projections labeled x and y. 
any time section of the surfaces shown in the figure gives the corresponding axis projection averaged over a time window of 75 ps .the horizontal deviation is given in angstrms and , for clarity , its relative scale is two times increased with respect to the true dna length . shown on the rightare two perpendicular views of the last one - nanosecond - average conformation .its orientation is chosen to correspond approximately that of the helical axis in the surface plots .it is seen that the molecule maintained a planar bent shape during a substantial part of the trajectory , and that at the end the bending plane was passing through the three a - tracts .the x - surface clearly shows an increase in bending during the second half of the trajectory . in the perpendicular y - projection the helical axisis locally wound , but straight on average .the fluctuating pattern in y - projection sometimes reveals two local maxima between a - tracts , which corresponds to two independent bends with slightly divergent directions .one may note also that there were at least two relatively long periods when the axis was almost straight , namely , around 3 ns and during the fifth nanosecond .at the same time , straightening of only one of the two bending points is a more frequent event observed several times in the surface plots .finally , plate ( d ) shows the time fluctuations of the bending direction and angle .the bending direction is characterized by the angle between the x - projection plane in plate ( c ) and the plane of the local dna coordinate frame constructed in the center of the duplex . according to the cambridge convention the local direction points to the major dna groove along the short axis of the base - pair , while the local axis direction is adjacent to the optimal helicoidal axis .thus , a zero angle between the two planes corresponds to the overall bend to the minor groove exactly at the central base pair . in both plots ,short time scale fluctuations are smoothed by averaging with a window of 15 ps . the total angle measured between the opposite axisends fluctuates around 10 - 15 in the least bent states and raises to average 40 - 50 during periods of strong bending .the maximal instantaneous bend of 58 was observed at around 8 ns .the bending direction was much more stable during the last few nanoseconds , however , it fluctuated at a roughly constant value of 50 from the second nanosecond .this value means that the center of the observed planar bend is shifted by approximately two steps from the middle base pair so that its preferred direction is to the minor groove at the two att triplets , which is well distinguished in plate ( c ) as well , and corresponds to the local minima in the minor groove profiles in plate ( a ) . 
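to make the bend descriptors tracked in plate ( d ) concrete , the short sketch below computes an overall bend angle and the normal of the bending plane from points sampled along a ( time - averaged ) helical axis . this is a minimal illustration rather than the analysis code behind the figures : the toy axis , the number of points used to define each end direction , and the chord - based definition of those directions are all assumptions introduced here .

```python
import numpy as np

def bend_descriptors(axis_points, n_end=2):
    """overall bend angle (degrees) and bending-plane normal for an ordered set
    of points tracing the helical axis from one end of the duplex to the other.
    the end directions are taken as chords spanning n_end segments, a choice
    made only to keep the sketch short."""
    p = np.asarray(axis_points, dtype=float)
    v1 = p[n_end] - p[0]            # direction near the first end
    v2 = p[-1] - p[-1 - n_end]      # direction near the last end
    v1 /= np.linalg.norm(v1)
    v2 /= np.linalg.norm(v2)
    bend_angle = np.degrees(np.arccos(np.clip(np.dot(v1, v2), -1.0, 1.0)))
    normal = np.cross(v1, v2)
    norm = np.linalg.norm(normal)
    # the bending plane (and hence the bend direction) is undefined for a
    # straight axis, which is exactly the situation discussed in the text
    normal = normal / norm if norm > 1e-8 else np.full(3, np.nan)
    return bend_angle, normal

# toy example: 25 axis points on a circular arc whose tangent turns by 45 degrees
t = np.linspace(0.0, np.radians(45.0), 25)
radius = 100.0
arc = np.column_stack([radius * (1.0 - np.cos(t)), np.zeros_like(t), radius * np.sin(t)])
angle, normal = bend_descriptors(arc)
# prints roughly 41 degrees: the chord-based end directions slightly
# underestimate the full 45 degree turn of the arc
print(f"bend angle ~ {angle:.1f} degrees, bending-plane normal ~ {normal}")
```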
during the periods when the molecule straightened the bending direction strongly fluctuates .this effect is due to the fact that when the axis becomes straight the bending plane is not defined , which in our case appears when the central point of the curved axis passes close to the line between its ends .it is very interesting , however , that after the straightening , the bending is resumed in approximately the same direction .figure [ ftj2 ] exhibits similar data for another 10 ns trajectory of the same dna fragment , computed in order to check reproducibility of the results .a straight dna conformation was taken from the initial phase of the previous trajectory , energy minimized , and restarted with random initial velocities .it shows surprisingly similar results as regards the bending direction and dynamics in spite of a somewhat different minor groove profile and significantly different distribution of and conformers along the backbone .note that in this case the helical axis was initially s - shaped in x - projection , with one of the a - tracts exhibiting a completely opposite bending direction .fluctuations of the bending direction are reduced and are similar to the final part of the first trajectory , which apparently results from the additional re - equilibration . in this case the maximal instantaneous bend of 71 was observed at around 4 ns .comparison of traces in plates ( a ) and ( d ) in figs .[ ftj1 ] and [ ftj2 ] clearly shows that large scale slow fluctuations of rmsd are caused by bending .the rmsd drops down to 2 when the duplex is straight and raises beyond 6 in strongly bent conformations . in both trajectoriesthe molecule experienced many temporary transitions to straight conformations which usually are very short living .these observations suggest that the bent state is relatively more stable than the straight one and , therefore , the observed behavior corresponds to static curvature . in conformations averaged over successive one nanosecondintervals the overall bending angle is 35 - 45 except for a few periods in the first trajectory .figure [ fsnap ] shows a snapshot from around 8.5 ns of the second trajectory where the rmsd from the straight canonical b - dna reached its maximum of 6.5 .the strong smooth bent towards the minor grooves of the three a - tracts is evident , with the overall bending angle around 61 .all transformations exhibited in figs .[ ftj1 ] and [ ftj2 ] are isoenergetic , with the total energy fluctuating around the same level established during the first nanosecond already , and the same is true for the average helicoidal parameters. plates ( b ) , however , indicate that there are much slower motions in the system , and this observation precludes any conclusions concerning the global stability of the observed conformations .moreover , we have computed yet another trajectory for the same molecule starting from the canonical a - dna form . 
during 10 ns it converged to a similarly good b - dna structure with the same average total energy , butthe bending pattern was not reproduced .it appears , therefore , that the conformational space is divided into distinct domains , with transitions between them probably occurring in much longer time scales .however , the very fact that the stable curvature in good agreement with experimental data emerges in trajectories starting from a featureless straight canonical b - dna conformation strongly suggests that the true molecular mechanism of the a - tract induced bending is reproduced .therefore , it can not depend upon the components discarded in our calculations , notably , specific interactions with solvent counterions and long - range electrostatic effects .we are not yet ready to present a detailed molecular mechanism responsible for the observed curvature because even in this relatively small system it is difficult to distinguish the cause and the consequences .we believe , however , that all sorts of bending of the double helical dna , including that produced by ligands and that due to intrinsic sequence effects , have its limited , but high flexibility as a common origin .its own conformational energy has the global minimum in a straight form , but this minimum is very broad and flat , and dna responds by distinguishable bending to even small perturbations . the results reported here prove that in the case of a - tracts these perturbations are produced by dna - water interactions in the minor groove .neither long range phosphate repulsion nor counterions are essential .the curvature is certainly connected with the specific a - tract structure and modulations of the minor groove width , but it does not seem to be strictly bound to them . in dynamics , conformations ,both smoothly bent and kinked at the two insertions between the a - tracts , are observed periodically .note also , that the minor groove profile somewhat differs between the two trajectories and that it does not change when the molecule straightens .we strongly believe , however , the experimental data already available will finally allow one to solve this problem by theoretical means , including the approach described here , and we continue these attempts .molecular dynamics simulations have been performed with the internal coordinate method ( icmd ) including special technique for flexible sugar rings .the so - called `` minimal b - dna '' model was used which consists of a double helix with the minor groove filled with explicit water . unlike the more widely used models , it does not involve explicit counterions and damps long range electrostatic interactions in a semi - empirical way by using distance scaling of the electrostatic constant and reduction of phosphate charges .the dna model was same as in earlier reports , namely , all torsions were free as well as bond angles centered at sugar atoms , while other bonds and angles were fixed , and the bases held rigid .amber94 force field and atom parameters were used with tip3p water and no cut off schemes . with a time step of 10 fs , these simulation conditions require around 75 hours of cpu per nanosecond on a pentium ii-200 microprocessor .the initial conformations were prepared by vacuum energy minimization starting from the fiber b - dna model constructed from the published atom coordinates . 
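before turning to the hydration protocol , the following sketch illustrates the kind of semi - empirical damping of long - range electrostatics mentioned above , i.e. a coulomb sum with a distance - dependent dielectric and uniformly reduced phosphate charges . the linear scaling eps ( r ) = r and the 0.5 reduction factor are placeholders chosen for the example only ; the exact functional form and charge scaling used in the simulations are not restated in this text .

```python
import numpy as np

COULOMB_K = 332.0636  # kcal*angstrom/(mol*e^2), standard electrostatic conversion factor

def damped_coulomb_energy(coords, charges, is_phosphate,
                          phosphate_scale=0.5, eps_of_r=lambda r: r):
    """pairwise coulomb energy with a distance-dependent dielectric.

    coords       : (n, 3) positions in angstroms
    charges      : (n,) partial charges in units of e
    is_phosphate : (n,) boolean mask of phosphate-group atoms whose charges
                   are multiplied by phosphate_scale
    eps_of_r     : distance scaling of the dielectric; eps(r) = r is used here
                   purely as an example of such a scaling

    illustrative only: bonded-neighbour exclusions and the exact scaling and
    charge reduction used in the simulations are not reproduced.
    """
    q = np.where(is_phosphate, phosphate_scale * np.asarray(charges), charges)
    coords = np.asarray(coords, dtype=float)
    energy = 0.0
    for i in range(len(q) - 1):
        d = coords[i + 1:] - coords[i]
        r = np.sqrt((d * d).sum(axis=1))
        energy += COULOMB_K * np.sum(q[i] * q[i + 1:] / (eps_of_r(r) * r))
    return energy

# tiny toy system: two "phosphate" charges and one partial counter-charge
coords = [[0.0, 0.0, 0.0], [7.0, 0.0, 0.0], [3.5, 4.0, 0.0]]
charges = [-1.0, -1.0, 0.5]
is_phos = [True, True, False]
print(f"damped coulomb energy = {damped_coulomb_energy(coords, charges, is_phos):.2f} kcal/mol")
```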
the subsequent hydration protocol to fill up the minor groove normally adds around 16 water molecules per base pair .the heating and equilibration protocols were same as before . during the runs , after every 200 ps , water positions were checked in order to identify those penetrating into the major groove and those completely separated .these molecules , if found , were removed and next re - introduced in simulations by putting them with zero velocities at random positions around the hydrated duplex , so that they could readily re - join the core system .this procedure assures stable conditions , notably , a constant number of molecules in the minor groove hydration cloud and the absence of water in the major groove , which is necessary for fast sampling .the interval of 200 ps between the checks is small enough to assure that on average less then one molecule is repositioned and , therefore , the perturbation introduced is considered negligible .i thank r. lavery for useful discussions as well as critical comments and suggestions to the paper .10 marini , j. c. , levene , s. d. , crothers , d. m. & englund , p. t. , _ proc .usa _ * 79 * , 76647668 ( 1982 ) .crothers , d. m. , _ nature _ * 308 * , 509513 ( 1984 ) .trifonov , e. n. & sussman , j. l. , _ proc .* 77 * , 38163820 ( 1980 ) .levene , s. d. & crothers , d. m. , _ j. biomol .dyn . _ * 1 * , 429435 ( 1983 ) .calladine , c. r. , drew , h. r. & mccall , m. j. , _ j. mol . biol . _ * 201 * , 127137 ( 1988 ) .crothers , d. m. & shakked , z. , in _ oxford handbook of nucleic acid structure _ , edited by neidle , s. ( oxford university press , new york , 1999 ) , pp . 455470 .zhurkin , v. b. , lysov , y. p. & ivanov , v. i. , _ nucl .acids res . _* 6 * , 10811096 ( 1979 ) .sanghani , s. r. , zakrzewska , k. , harvey , s. c. & lavery , r. , _ nucl .acids res . _ * 24 * , 16321637 ( 1996 ) .von kitzing , e. & diekmann , s. , _ eur .j. _ * 14 * , 1326 ( 1987 ) .chuprina , v. p. & abagyan , r. a. , _ j. biomoldyn . _ * 1 * , 121138 ( 1988 ) .zhurkin , v. b. , ulyanov , n. b. , gorin , a. a. & jernigan , r. l. , _ proc .usa _ * 88 * , 70467050 ( 1991 ) .mirzabekov , a. d. & rich , a. , _ proc .usa _ * 76 * , 11181121 ( 1979 ) .levene , s. d. , wu , h .-crothers , d. m. , _ biochemistry _ * 25 * , 39883995 ( 1986 ) .strauss , j. k. & maher , l. j. , iii , _ science _ * 266 * , 18291834 ( 1994 ) .travers , a. , _ nature struct .* 2 * , 264265 ( 1995 ) .mcfail - isom , l. , sines , c. c. & williams , l. d. , _ curr .biol . _ * 9 * , 298304 ( 1999 ). chiu , t. k. , zaczor - grzeskowiak , m. & dickerson , r. e. , _ j. mol ._ * 292 * , 589608 ( 1999 ) . young , m. a. & beveridge , d. l. , _ j. mol .biol . _ * 281 * , 675687 ( 1998 ) .mazur , a. k. , _biol . _ * 290 * , 373377 ( 1999 ) .dickerson , r. e. _ et al ._ , _ j. mol ._ * 205 * , 787791 ( 1989 ) .mazur , a. k. & abagyan , r. a. , _ j. biomoldyn . _ * 6 * , 815832 ( 1989 ). mazur , a. k. , _ j. comput_ * 18 * , 13541364 ( 1997 ). mazur , a. k. , _* 111 * , 14071414 ( 1999 ) .mazur , a. k. , _soc . _ * 120 * , 1092810937 ( 1998 ) .mazur , a. k. , _ preprint _ * http : //physics/9907028 * , ( 1999 ) .cornell , w. d. _ et al ._ , _ j. am . chem ._ * 117 * , 51795197 ( 1995 ) .cheatham , t. e. , iii , cieplak , p. & kollman , p. a. , _j. biomol ._ * 16 * , 845862 ( 1999 ) .jorgensen , w. l. , _soc . _ * 103 * , 335340 ( 1981 ) .arnott , s. & hukins , d. w. l. , _ biochem .. communs ._ * 47 * , 15041509 ( 1972 ) .lavery , r. & sklenar , h. , _ j. 
biomol* 6 * , 6391 ( 1988 ) .this section contains comments from anonymous referees of peer - reviewed journals where the manuscript has been considered for publication , but rejected .mazur describes molecular dynamics simulations where a correct static curvature of dna with phased a - tracts emerges spontaneously in conditions where any role of counterions or long range electrostatic effects can be excluded .1 ) the observed curvature is dependent on the starting model .in fact the manuscript uses the phrase ` stable static curvature ' incorrectly to describe what is probably a trapped metastable state . the observed curve is neither stable nor static .2 ) the choice of dna sequence seems to be biased toward that which gives an altered structure in simulations , and is not that which gives the most pronounced bend in solution .i would suggest a comparison of ( caaaatttttg)n and ( cttttaaaag)n .prodin , f. , cocchione , s. , savino , m. , & tuffillaro , a. `` different interactions of spermine with a curved and a normal dna duplex - ( ca(4)t(4)g)(n ) and ( ct(4)a(4)g)(n ) - gel - electrophoresis and circular - dichroism studies '' ( 1992 ) biochemistry international 27 , 291 - 901 .brukner , i. , susic , s. , dlakic , m. , savic , a. , & pongor , s. `` physiological concentrations of magnesium ions induces a strong macroscopic curvature in gggccc - containing dna '' ( 1994 ) j. mol . biol . 236 , 26 - 32 .this manuscript describes the modeling of a 25-residue dna duplex using molecular dynamics simulations .the dna sequence in question contains 3 a / t tracts arranged in - phase with the helix screw and thus is expected to manifest intrinsic bending . unlike previous md studies of intrinsically bent dna sequences , these calculations omit explicit consideration of the role of counterions . because recent crystallographic studies of a - tract - like dna sequences have attributed intrinsic bending to the localization of counterions in the minor groove , the present finding that intrinsic bending occurs in the absence of explicit counterions is important for understanding the underlying basis of a - tract - dependent bending .overall , the md procedure appears sound and the calculations were carried out with obvious care and attention to detail .there are two specific issues raised by this study that should be addressed in revision , however .although the sequence chosen for this study was based on a canonical , intrinsically - bent motif consisting of three a - tracts , it is unclear to what extent intrinsic bending has been experimentally shown for this particular sequence .there are known sequence - context effects that modulate a - tract - dependent bending and thus the author should refer the reader to data in the literature or show experimentally that intrinsic bending of the expected magnitude occurs for this particular sequence .moreover , one a - tract is out - of - phase with respect to the others and it is therefore not clear how this contributes to the overall bend .the author is understandably concerned about end effects with short sequences ; this problem can be ameliorated by examining dna fragments that contain multiple copies of the chosen motif or by extending the ends of the motif with mixed - sequence dna .notwithstanding the author 's remark about separating the cause and the effects with respect to intrinsic bending , some comments about the underlying mechanism of bending seem appropriate .
it would be particularly useful to know whether average values of any specific conformational variables are unusual or whether strongly bent states are consistent with narrowing of the minor groove within a - tracts , for example .
the macroscopic curvature induced in double helical b - dna by regularly repeated adenine tracts ( a - tracts ) is a long known , but still unexplained phenomenon . this effect plays a key role in dna studies because it is unique in the amount and the variety of the available experimental information and , therefore , is likely to serve as a gate to the unknown general mechanisms of recognition and regulation of genome sequences . the dominating idea in the recent years was that , in general , macroscopic bends in dna are caused by long range electrostatic repulsion between phosphate groups when some of them are neutralized by proximal external charges . in the case of a - tracts this may be specifically bound solvent counterions . here we report about molecular dynamics simulations where a correct static curvature in a dna fragment with phased adenine tracts emerges spontaneously in conditions where any role of counterions or long range electrostatic effects can be excluded .
let be a convex closed smooth hyper - surface .we consider the following spherical radon transform of a function defined in here , is the sphere centered at of radius , and is its surface measure .this transform appears in several imaging modalities , such as thermo / photoacoustic tomography ( e.g. , ) , ultrasound imaging ( e.g. , ) , sonar ( e.g. , ) , and inverse elasticity ( e.g. , ) .for example , in thermo / photoacoustic tomography ( tat / pat ) , is the initial ultrasound pressure generated by the thermo / photo - elastic effect .it contains useful information about the inner structure of the tissue , which can be used , e.g. , for cancer detection . on the other hand ,the knowledge of can be extracted from the ultrasound signals collected by a transducer located at , which is called the * observation * surface .one , therefore , can concentrate on finding given .the same problem also arises in other aforementioned image modalities .it is commonly assumed that is supported inside the bounded domain whose boundary is .let us discuss an inversion formula under this assumption .let be the pseudo - differential operator defined by and be the back - projection type operator when is an -dimensional ellipsoid , one has the following inversion formula we note here that formula ( [ e : inversion ] ) was written in other forms in the above references .the above form , presented in , is convenient to analyze from the microlocal point of view .another advantage of the above form is that it can be implemented straight forwardly : 1 ) can be computed fast by using fast fourier transform ( fft ) and 2 ) only involves a simple integration on .[ fig : full - circle ] is the result of our implementation when and is the unit circle .the image size is pixels .the sampling data has the resolution of for the spatial ( angular ) variable and radial variable ] by then , vanishes to order at and , and for .we define the following canonical relations in and we notice that iff is in the boundary zone , corresponding to , and is obtained from by rotating around the corresponding boundary point .similar description holds for .[ p : wavefront ] we have here , is the twisted wave front set of : proposition [ p : wavefront ] was proved in when is a line segment . the proof carries naturally to the general curve without any major changes .we present it here for the sake of completeness and convenience in later discussion .due to the composition rule for wave front sets ( see theorem [ t : compose ] in appendix ) , we obtain is a pseudo - differential operator , it does not generate new wave front set elements .this well - known pseudo - locality property of a pseudo - differential operator is recalled in appendix [ a : pdo ] , see ( [ e : wave - front - pdo ] ) . ] where and are the schwartz kernel of and , respectively .let us now proceed to analyze the right hand side of the above inclusion .we note that is an fio with the phase function ( see , e.g. 
, ) for the sake of simplicity , for an each , we use the notation for .due to theorem [ t : wave - fio ] ( see appendix ) , we obtain , by letting also considering as a function of , we have applying the product rule for wave front sets ( see theorem [ t : product ] in appendix ) , we obtain where { \mathcal{c}}_a & = & \{\big({{\bf a}},r , \ , \tau \ , \left < x-{{\bf a } } , z'(a ) \right>+ \tau ' , - \tau \ , r ; \ , x,\tau \ , ( x-{{\bf a } } ) ) : |x-{{\bf a}}|=r,\,\tau ' \neq 0\ } , \\[4 pt ] { \mathcal{c}}_b & = & \{\big({{\bf b}},r , \ , \tau \ ,\left < x-{{\bf b } } , z'(b ) \right>+ \tau ' , - \tau \ , r ; \ , x,\tau \ , ( x-{{\bf b } } ) ) : |x-{{\bf b}}| = r , \ , \tau ' \neq 0\}.\end{aligned}\ ] ] on the other hand , we notice that is a fio with the same phase function as ( but the order of variables is switched ) , see e.g. , .therefore , where is the transpose relation of from ( [ e : wfmu ] ) , we arrive to we notice that therefore , [ r : wave ] assume that in a neighborhood of and such that . then , due to ( [ e : wf - pro ] ) , where is the right projection operator . from( [ e : wfmu ] ) , we obtain : this observation will be used later in the proof of theorem [ t : main1 ] a ) .let us now employ proposition [ p : wavefront ] to describe the geometric effects of on the wave front set of ( see also the discussion in ) .we first keep in mind the following inclusion , coming from theorem [ t : cal - wave ] : therefore , due to proposition [ p : wavefront ] , \cup \big[{\lambda}_{{\bf a}}\circ { \mbox{wf}}(f)\big ] \cup \big[{\lambda}_{{\bf b}}\circ { \mbox{wf}}(f ) \big].\ ] ] the first part on the right hand side contains all the singularities that may be possibly reconstructed by .the other two contain all the possible artifacts generated by .we now discuss the implications of ( [ e : poss ] ) in more details .* let be an invisible singularity .we observe that therefore , from inclusion ( [ e : poss ] ) , is not reconstructed and does not generate any artifacts .* let be a visible singularity . then, + from inclusion ( [ e : poss ] ) , may be reconstructed and does not generate any artifacts .* let be a boundary singularity pointing through , that is for some .then , from the inclusion ( [ e : poss ] ) , we observe that may generate artifacts by rotating around .conversely , assume that be an artifact . then , there is that generates .+ similar description holds for a boundary singularity pointing through . the strength of the reconstructed singularities , described in b ) , will be obtained by analyzing the singularities of near .this will be done by the standard theory of pseudo - differential operators . in order to analyze the strength of the artifacts , described in c ) , we will make use of a class of fios associated to a point , introduced in section [ s : fio - point ] . here is our main result of this section : [ t : main1 ] we have * microlocally on , we have .moreover , its principal symbol is ,\ ] ] where is the intersection of the ray with .* we can write such that : * * . moreover , microlocally near , . ** .moreover , microlocally near , .will present the proof of theorem [ t : main1 ] in section [ s : proof-2d ] .we now describe some of its consequences .* let be a visible singularity . then, due to theorem [ t : main1 ] b ) , microlocally near , is a fourier distribution of order zero with positive principal symbol . 
applying lemma [ l : pet ], we obtain if and only if .that is , all the visible singularities are reconstructed with the correct order .+ moreover , the formula provides the magnitude of the main part of reconstructed singularities .for example , if is a jump singularity across a curve with the jump equal to , then is also a jump singularity across with the jump equal to .this explains the difference in the magnitude of the reconstructed singularities , that is demonstrated in section [ s : num ] .* now , assume that is an artifact pointing through . then , each of its generating singularities satisfies then , due to theorem [ t : main1 ] b ) and lemma [ l : ho ] , at least one generating singularity satisfies .that is , all the artifacts are at least order(s ) smoother then their strongest generating singularities .+ if we assume further that has finitely many generating singularities , each of them is a conormal singularity of order along a curve whose order of contact with the circle is exactly ., since both curves are perpendicular to at . therefore , the condition on the contact order is quite generic . ] then , due to theorem [ t : main1 ] b ) and lemma [ l : spread ] , is a conormal singularity of order along the circle . that is , the artifacts are at least order(s ) smoother than the strongest generating singularity . in our numerical experiments in section [s : num ] , we will demonstrate this fact .let us now discuss the proof of theorem [ t : main1 ] .it is similar to that of ( * ? ? ?* theorem 2.2 ) .however , we need to employ more sophisticated microlocal arguments due to the generality of the geometry involved . as in ( * ? ? ?* theorem 2.2 ) , the proofs for a ) and b ) require two different oscillatory integral representations for . the idea is similar to the case of infinitely smooth presented in ( see also for more general framework ) .the main point here is to microlocalize the argument to stay away from .let be such that .we need to prove that there is such that and the principall symbol of at equals ] and define we have . moreover , * assume that near and vanishes to order at . then , on the set \mbox { such that } z(s ) \in { \operatorname{supp}}(h ) \},\ ] ] we have ^{(k + 1 ) } } \ , \big[1 + r(x , y,{\lambda})\big ] .\end{aligned}\ ] ] here , is the normal vector of at . *assume that near , and vanishes at to order .on the set ,~ z(s ) \in { \operatorname{supp}}(h ) \},\ ] ] we have ^{(k + 1 ) } } \ , \big[1 + r(x , y,{\lambda})\big].\end{aligned}\ ] ] here , is the normal vector of at . inboth a ) and b ) , is a symbol of order at most .the lemma is proved by successive integration by parts .it is very similar to ( * ? ? ?* lemma 2.3 ) .we skip it here for the sake of brevity .let us now consider .we assume that is a convex domain with the smooth boundary .we assume that is a connected and simply connected subset of with the smooth boundary .we will analyze when vanishes to a finite order on . 
with a slight abuse of notation , we arc - length parametrize by the function ( with , where is the length of ) .similarly to the case , we define we denote by the following canonical relation in : we notice that if and only if and are boundary elements corresponding to a common boundary point and they are obtained from the other by a rotation around the tangent line of at .the following result gives us a geometric description of the singularities of the schwartz kernel of .[ p : wavefront-3 ] we have the proof of proposition [ p : wavefront-3 ] is similar to that of proposition [ p : wavefront ] ( see also ( * ? ? ?* proposition 3.1 ) and ) .we skip it for the sake of brevity . similarlyto proposition [ p : wavefront ] we obtain the following implications of proposition [ p : wavefront-3 ] : * smoothens out all the visible singularities . * may reconstruct the visible singularities , and * may generate artifacts by rotating a boundary singularity around the tangent line of at the corresponding boundary point .the following result tells us the strength of the reconstructed singularities , explained in b ) , and artifacts , described in c ) : [ t : main2 ] the following statements hold : * microlocally on , we have with the principal symbol ,\ ] ] where is the intersection of the ray with . *microlocally on , we have .similarly to theorem [ t : main1 ] , we obtain the following implications of theorem [ t : main2 ] ( see also lemma [ l : ho3d ] for the discussion on artifacts ) : * if is a visible singularity , then is reconstructed with correct order . *the artifacts are at least order(s ) smoother than the strongest generating singularity .let us now proceed to prove theorem [ t : main2 ] .we will need the following two lemmas : [ l : f ] let be defined by where .then , .we first notice that the above formula does not directly define an fio , since the phase " function involves an extra variable , which is neither a variable of nor a phase variable . in , where is a line segment , the above result was proved by a change of variables .when is a general curve , such change of variable seems to be complicated .we , instead , introduce a simple idea of lifting up the space . for the notational ease ,let us denote and .then , and are smooth manifolds of dimensions and , respectively .let us define the operator by then , can be written in the following form that is , is an fio of order ( see ( [ e : ord ] ) in appendix [ a : fios ] ) is the dimension of the phase variable . ] with the canonical relation let us define by the formula then , is an fio of order ( see ( [ e : ord ] ) in appendix [ a : fios ] ) is the dimension of the phase variable .] with the canonical relation we observe that , and .therefore , see , the proof of a ) is similar to that of theorem [ t : main1 ] a ) .we skip it for the sake of brevity .we now proceed to prove b ) .let .that is and there is such that here , is the unit tangent vector of at .let be the unit normal vector of which is tangent to and points inward to .let be the metric on and be a small ( bounded ) neighborhood of such that for each there exists uniquely such that that is , each can be unique parametrized by . 
by narrowing down , if necessary , we may assume defines a smooth map from to ] are located on two open quarters of the circle : first ( upper right ) and third ( lower left ) .accordingly , the invisible singularities are located on the other two quarters of the circle , whereas the boundary singularities correspond to the boundary of the acquisition surface and , therefore , are located at the four points , .these characterizations are due to the analysis and explanations that we presented in section [ s:2d ] . to numerically verify those theoretical findings ,let us now examine the reconstruction in fig .[ fig : fig1 ] .our observations are as follows : * all visible singularities are reconstructed sharply .they visually appear to be of the same order as the original singularities ( jump from red to blue ) .* the invisible singularities are smoothed away and , hence , not present in the reconstruction .this can be seen from the fact that there are no sharp boundaries ( intensity jumps ) along invisible directions .* added singularities ( artifacts ) are generated along four circles , each of them touches the disc ( phantom ) tangentially .moreover , we observe that two circles are concentric and centered on the x - axis , and the other two are concentric and centered on the y - axis .more precisely , the added artifacts are located on circles that are centered at the boundary points of the acquisition surface ( which is illustrated by the green curve in fig .[ fig : fig1 ] ) and that are tangent to a singularity of the original phantom .that is , the artifacts are generated by the boundary singularities at , , , and . by further examining the artifacts in fig .[ fig : fig1 ] , we also observe that the jumps along the added artifact circles are not as sharp as in the case of visible singularities .this indicates that the added artifacts are weaker than original ( generating ) singularities .in fact , our theoretical analysis shows that they are -order weaker . summing up, this experiment shows that the above observations correspond to our theoretical findings stated in propostion [ p : wavefront ] and theorem [ t : main1 ] . in the next stepwe investigate performance of the modified ( artifact reduction ) reconstruction operator . to that end, we use the operator with the cutoff function defined in - ( cf . also fig .[ fig:1q - simple_smoothing_eps_var_power_var ] ) , and apply it to the limited view data .note that the cutoff function is smooth in the interior of and vanishes to an order at the end points of .according to theorem [ t : main1 ] , the reconstructions obtained through , will exhibit added artifacts that are orders smoother than the original singularities .therefore , the degree of artifact reduction is linked to the order and we expect the operator to mitigate artifacts more when the bigger the order is .in addition to that , we expect that the strength of artifacts is influenced by the parameter ( see and the definition of ) . to investigate the practical performance , we computed a series of artifact reduction reconstructions by varying the parameters and .the results are shown in fig .[ fig:1q - simple ] and [ fig:1q - order_varies ] .first , we observe that in all reconstructions shown in fig .[ fig:1q - simple ] and [ fig:1q - order_varies ] most of the visible singularities are well reconstructed . 
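before examining the individual parameter choices , it may help to see one concrete way such a cutoff can be realized numerically . the sketch below builds a function on a normalized arc parameter that is identically 1 well inside the acquisition arc and vanishes to order at least k at its endpoints ; the particular polynomial ramp is an assumption made for illustration and is not the cutoff actually used to produce the figures .

```python
import numpy as np

def smooth_cutoff(s, eps=0.1, k=2):
    """cutoff on the normalized arc parameter s in [0, 1]: identically 1 on
    [eps, 1 - eps], decaying to 0 at s = 0 and s = 1 with a zero of order
    at least k at both endpoints."""
    s = np.asarray(s, dtype=float)

    def ramp(t):                       # C^1 smoothstep, clamped outside [0, 1]
        t = np.clip(t, 0.0, 1.0)
        return 3.0 * t**2 - 2.0 * t**3

    left = ramp(s / eps)               # 0 at s = 0, equal to 1 for s >= eps
    right = ramp((1.0 - s) / eps)      # equal to 1 for s <= 1 - eps, 0 at s = 1
    return (left * right) ** k         # the power enforces the order of the zero

# weight the limited-view data g(r_i, s_j) before applying the backprojection
s_grid = np.linspace(0.0, 1.0, 400)    # detector positions along the acquisition arc
h = smooth_cutoff(s_grid, eps=0.15, k=3)
# g_weighted = h[np.newaxis, :] * g    # g sampled as (n_radii, n_positions)
```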
in fig .[ fig:1q - simple ] , we have displayed some reconstructions using smoothing order and varying the parameter .here we observe that for almost no artifact reduction happens .this is due to the fact that , in the discretization regime , changes very fast near the endpoints of and , hence , behaves like a discontinuous function .the artifact reduction gets clearer as we increase the value of .next , we consider the effect of varying the smoothing parameter for a fixed .the corresponding reconstructions are shown in fig .[ fig:1q - simple_smoothing_eps_var_power_var ] .as expected , the artifacts get weaker ( are better reduced ) as the order increases .this is in accordance with our theoretical characterizations in theorem [ t : main1 ] .[ [ experiment-2 . ] ] experiment 2 .+ + + + + + + + + + + + + our second experiment follows the lead of our first experiment . here, we only consider a larger angular range where we use limited view data collected on ( three quarters of the unit circle ) . again, we compute a series of reconstructions using the standard as well as the modified reconstruction operators , and , respectively . the results of this experiment are shown in fig .[ fig:3q - no_smoothing ] - [ fig:3q - order_original_smoothing ] .before we start , let us first remark that in this example all singularities of the phantom image are visible ( they are located on the circle centered at the origin of radius ) , and the locations of all boundary singularities are the same as in experiment 1 , namely and .in contrast to experiment 1 where all of the visible singularities were singly visible , we now have both types of visible singularities , doubly and singly visible ones .those on the first and third quarters are doubly visible , while those are on the second and fourth quarters are singly visible .see section [ s:2d ] for theoretical explanation . by examining the reconstructions using the standard reconstruction formula ( i.e. ) in fig .[ fig:3q - no_smoothing ] we easily observe that indeed all singularities of the phantom are reconstructed reliably . here , the doubly visible singularities have more contrast than the singly visible ones .this is due to the fact that for each doubly visible singularity there are two positions on the acquisition arc from which the singularity is visible , whereas there is only one position on this arc for a singly visible singularity .mathematically , this is reflected by the different values of the principal symbol of the reconstructions operator , cf .theorem [ t : main1 ] , where we can see that the principal symbol if is doubly visible and if is singly visible . in fig .[ fig:3q - simple_original_smoothing ] , we further observe that added artifacts are generated on circles that are centered at the boundary points of and tangent to the boundary singularities . these artifacts , however , are not as strong as the reconstructed singularities , which is again in accordance with our theoretical results , see theorem [ t : main1 ] and the discussion below .we again studied the performance of artifact reduction by using the modified reconstruction operator with the smoothing function for ( cf . - ) .the reconstruction results for varying parameters and for varying smoothing orders are shown in fig .[ fig:3q - simple_original_smoothing ] and fig .[ fig:3q - order_original_smoothing ] , respectively .not surprisingly , we observe here the same behavior as in experiment 1 .[ [ experiment-3 . 
] ] experiment 3 .+ + + + + + + + + + + + + in our last experiment we investigate how the choice of the smooth cutoff function for influences the artifact reduction . to that end, we consider the same limited view situation as in the experiment 2 where the data are collected on and define a new smoothing function which is equal to in the interior of and smoothly decreases to in transition regions of length at the boundary of . to that end, we let and 1 , \hspace{55pt } \epsilon < s < 1-\epsilon , \\[5pt ] h_0(-s+1 ) , \hspace{10pt } 1 -\epsilon \leq s \leq 1 .\end{array } \right.\ ] ] the parameter again controls how close the function is to the constant function . moreover , this function vanishes to order 1 at the endpoints of . to obtain higher order smoothness at the endpoints we again consider integer powers of and set a plot of the functions is depicted in fig .[ fig : function_h ] for different values of and .the corresponding limited view reconstructions are presented in fig .[ fig:3q-2sm_perc_varies ] and [ fig:3q-2sm_order_varies ] .the advantage of such a choice of the function lies in the fact that it is exactly ( not approximately ) equal to in the range .therefore , if is very small , most of the visible singularities are reconstructed up to the factor ( if it is doubly visible ) or ( if it is singly visible ) . however , the ( theoretical ) disadvantage of such a choice is due to singularities of at the interior points and . according to our analysis in section [ s:2d ] , this may lead to the generation of added artifacts ( located on circles that are centered around these points ) .however , since is order smoother at these inner points than at the endpoints , those new artifacts will be weaker than those rotating around the endpoints and .indeed , as can be seen in fig .[ fig:3q-2sm_perc_varies ] and [ fig:3q-2sm_order_varies ] , the new added artifacts are too weak to be recognized in the reconstructions. concerning the influence of parameters and , we arrive at similar observations as in experiments 1 and 2 . comparing the reconstructions that were computed with different smoothing functions in fig .[ fig:3q - order_original_smoothing ] and fig .[ fig:3q-2sm_order_varies ] we observe that the new smoothing function leads almost always to a significantly better artifact reduction .for example , in fig .[ fig:3q-2sm_order_varies ] , the artifacts almost completely vanish and the phantom as well as the background are reconstructed very well .this examples shows that the choice of the smoothing function might influence artifact reduction performance significantly .[ [ conclusion . ] ] conclusion .+ + + + + + + + + + + the above numerical experiments show that our theoretical results directly translate into practical observations .in particular , the proposed artifact reduction technique can lead to a significant improvement of the reconstruction quality if the smoothing function is chosen appropriately .we will explore more experiments and report in - depth results in a future publication .the work of l.l.b . was supported by the national science foundation major research instrumentation program , grant 1229766 .j. f. 
thanks eric todd quinto for many enlightening discussions about limited data tomography and microlocal analysis over the years as well as for the warm hospitality during several visits at tufts university .he also acknowledges support from the hc postdoc programme , co - funded by marie curie actions .l.v.n.s research is partially supported by the nsf grant # dms 1212125 .he is thankful to professor g. uhlmann for introducing him to the theory of pseudo - differential operators with singular symbols presented in , whose spirit inspires this and the previous works .he also thanks professor t. quinto for the encouragement and helpful comments / suggestions .the authors would like to thank g. ambartsoumian for sharing his codes in spherical radon transform , some of them are reused in this article s numerical implementations .they also thank professor p. stefanov for pointing out some missing references in the initial version of the article .let be an open set , and be the cotangent bundle of . for simplicity , we can consider as .we also denote let and be the standard spaces of test functions and distributions on . in this section , we briefly introduce some basic concepts in microlocal analysis , such as wave front set , pseudo - differential operators ( ) , and fourier integral operators ( fios ) .extensive presentations can be found in .the use of microlocal analysis in geometric integral transforms are pioneered in .its extensive uses in the studies of spherical mean transform can be found in many works , see , just to name a few .[ wave front set ] let and . then , is microlocally smooth at if there is a function satisfying and an open cone containing , such that is rapidly decreasing in .that is , for any there exists a constant such that the * wavefront set * of , denoted by , is the complement of the set of all where is microlocally smooth .an element indicates not only the location but also the * direction * of a singularity of .for example , if is the characteristic function of an open set with smooth boundary , then if and only if and is perpendicular to the tangent plane of at .detailed discussion can be found in and , more briefly , in .[ t : compose ] let and be linear transformations whose schwartz kernels are and .we assume that and .then , the schwartz kernel of satisfies : let and .then is in the space microlocally at if there is a function satisfying and a function homogeneous of degree zero and smooth on with , such that the * -wave front set * of , denoted by , is the complement of the set of all where is not microlocally in the space . one can use the sobolev orders to compare the singularities and , where are two distributions , not necessarily defined on the same set .for example , is stronger than if there is such that but .assume that is a smooth surface of co - dimension .let be a defining function for with on .the class consists of the distributions which locally can be written down as a finite sum of oscillatory integrals of the form where .let be an open set .the space consists of all functions such that for any multi - indices and , there is a positive constant such that the elements of are called symbols of order .one can use the order to compare two conormal singularities ( along the surface ) and ( along the surface ) , where are two distributions on .for example , is * weaker * than , if there is such that is of order while is not . since the above integralmay not converge in the classical sense , the expression in needs to be properly defined , see , e.g. 
, ( * ? ? ?* proposition 1.1.2 ) .given this proper definition , extends continuously to .in particular , it can be shown that a pseudo - differential operator does not generate new singularities .that is , ( * ? ?* page 131 ) moreover , if is in the space microlocally at then is in space microlocally at the same element , see .[ d : pdo ] let be a conic set that is open in the induced topology of .we say that * near , is microlocally in the space with the symbol * if the following holds : for each element there exist such that and the symbol of is equal to in a conic neighborhood if . [l : pet ] let be a linear operator whose schwartz kernel satisfies .assume that is microlocally in near with the symbol .let such that and in a conic neighborhood of assume further that then , for any , in this section , we introduce some special fourier distributions which are needed in this article .the reader is referred to , e.g. , for the general theory of the topic .let and be two manifolds of dimension and , respectively , and be a homogeneous canonical relation in .then , there is an associated class of fourier distributions of order , denoted by .each element of , called a fourier integral distribution of order , is a distribution such that it can be locally written down in the form here , of specified here is due to , e.g. , . ] and is a phase function associated to .that is , satisfies [ d : fio ] let be an open conic set in the induced topology of .we say that * near , is microlocally in the space * if the following holds : for each element there exists such that [ t : ho ] let and be a homogeneous canonical relation such that both of its left and right projections on have surjective differentials .assume that the differentials of the left and right projections have rank at least .then , every maps continuously from to .let such that .we define the following homogeneous canonical relation in that is , is defined by rotating , which pass through , around .[ l : ho ] let be a linear operator whose schwartz kernel satisfies .assume that is microlocally in near an open conic set .let such that if , then there is such that the following result is useful to analyze the artifacts when the original singularities are conormal .its proof is almost exactly the same as that of ( * ? ? ?* theorem 2.16 ) . we skip it for the sake of brevity .* there are at most finitely many such that * each such is a conormal singularity of order along a curve whose contact order with is exactly ., since both curves are perpendicular to at . therefore , the condition on the contact order is quite generic . ]let us consider .we introduce a class of fourier distributions , whose canonical relation is defined by the rotations around tangent lines of a smooth curve .this class of fourier distributions appears in the statement and proof of theorem [ t : main2 ] b ) .let be a closed smooth curve in parametrized by the parameter .assume that .we define the following homogeneous canonical relation in that is , is defined by rotating an element , that passes through , around the the tangent line of at . in section [ s:3d ] , we make use of this class . we state here some needed basic facts of this class .we note that : the following result is a microlocal version of the above result , which is used in section [ s:3d ] to analyze the strength of artifacts .its proof is almost exactly the same as that of ( * ? ? ?* corollary 2.15 ) . 
we skip it for the sake of brevity .[ l : ho3d ] let be a linear operator whose schwartz kernel satisfies . assume that is microlocally in near an open conic set .let such that if , then there is such that jos l. antoniano and gunther a. uhlmann . a functional calculus for a class of pseudodifferential operators with singular symbols . in _ pseudodifferential operators and applications ( notre dame , ind . , 1984) _ , volume 43 of _ proc .pure math ._ , pages 516 .soc . , providence , ri , 1985 .d. finch , i .-r . lan , and g. uhlmann .microlocal analysis of the x - ray transform with sources on a curve . in _inside out : inverse problems and applications _ , volume 47 of _ math ._ , pages 193218 .cambridge univ . press ,cambridge , 2003 .a. greenleaf and g. uhlmann .microlocal techniques in integral geometry . in _integral geometry and tomography ( arcata , ca , 1989 ) _ , volume 113 of _ contemp ._ , pages 121135 .soc . , providence , ri , 1990 .l. hrmander . , volume 256 of _ grundlehren der mathematischen wissenschaften [ fundamental principles of mathematical sciences]_. springer - verlag , berlin , 1983 .distribution theory and fourier analysis .
we study the limited data problem of the spherical radon transform in two and three dimensional spaces with general acquisition surfaces . in such situations , it is known that the application of filtered - backprojection reconstruction formulas might generate added artifacts and degrade the quality of reconstructions . in this article , we explicitly analyze a family of such inversion formulas , depending on a smoothing function that vanishes to order on the boundary of the acquisition surfaces . we show that the artifacts are orders smoother than their generating singularity . moreover , in two dimensional space , if the generating singularity is conormal satisfying a generic condition then the artifacts are even orders smoother than the generating singularity . our analysis for three dimensional space contains an important idea of lifting up a space . we also explore the theoretical findings in a series of numerical experiments . our experiments show that a good choice of the smoothing function might lead to a significant improvement of reconstruction quality .
in the last decades , quantum optics experiments based on intensity light measurements have been realized mainly with intense ( macroscopic ) fields or at single - photon level , while photon counting with few - photon light ( up to 100 photons ) is a rather unexplored measurement regime . in the first two regimes ( i.e. the macroscopic one and the single - photon level ) , several quantum optical applications have been derived , such as the ones related to quantum mechanics foundations investigation quantum communication , computation and metrology .this partially derives from the fact that the output of detectors operating in this regime are not considered trustworthy , also because this region has not yet been investigated from the metrological point of view in order to provide well established characterization techniques .in this paper we review the klyshko calibration method , based on the parametric down conversion ( pdc ) phenomenon , for single - photon detectors , as well as few extensions to photon - number - resolving detectors ( pnrd ) realized in our laboratories ; further works can be found in . on the other hand , applications of pdc light to analog regimes beyond the purpose of this review . in the following, the term single - photon detector " refers to detectors producing just a click " irrespective of the number of photons impinging on it , e.g. single - photon avalanche diodes ( spads ) operating in geiger mode . on the contrary , typical detectors able to observe more than one photon are , e.g. , high gain photomultiplier tubes , hybrid photodetectors , ccds and electron multiplying ccds ( emccds ) , silicon photomultipliers , the superconducting transition edge sensors ( tess ) , time - multiplexed single - photon detectors and single - photon detectors in tree configurations .no direct comparison upon common set of figures of merit was performed so far on these group of pnrds , also because at first it is necessary to establish a set of standardized definitions , connected with such figures of merit , as well as the corresponding characterization protocols . in the first part of this articlewe will focus on the detection quantum efficiency , ( where dut stands for device under test ) , defined as the overall probability of observing the presence of a single photon impinging on the dut . in the followingwe will see how to extend the klyshko s absolute measurement technique ( named klyshko two - photon technique , ktpt ) based on correlated photons obtained from pdc , originally developed for single - photon detectors , to pnrds . in order to characterize the detection behavior of pnrds , together with the estimation of it is fundamental to provide a theoretical model for their measurement process .usually in quantum optics it is assumed that the detection model of a non - ideal detector ( i.e. detector with ) can be described as an ideal detector with unity efficiency placed after a beam splitter of transmissivity . in practice , this implies that the detection process in the presence of more than one photon is described by the bernoulli distribution .this detection model is absolutely reasonable for typical analog " detectors for macroscopic " light . on the contrary ,detection models for pnrds may significantly differ from the bernoulli one , as in the case of time - multiplexed single - photon detectors , trees of single photon detectors , or silicon photomultipliers . 
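to make the difference between these detection models concrete , the sketch below compares the probability of registering m counts out of n incident photons for ( i ) a linear detector obeying the bernoulli ( binomial ) model and ( ii ) a balanced tree of two on / off single - photon detectors . the closed - form tree expressions follow from treating each photon independently ( detected in either branch with probability eta / 2 ) ; they are a textbook - style illustration , not formulas quoted from this paper .

```python
from math import comb

def p_binomial(m, n, eta):
    """bernoulli model: each of n photons is detected independently with probability eta."""
    if m > n:
        return 0.0
    return comb(n, m) * eta**m * (1.0 - eta) ** (n - m)

def p_tree2(m, n, eta):
    """tree of two on/off detectors behind a balanced splitter: each photon is
    independently registered in branch 1 or branch 2 with probability eta / 2,
    or lost with probability 1 - eta; at most 2 counts can be reported."""
    p_branch_silent = (1.0 - eta / 2.0) ** n   # a given branch sees no click
    p_all_silent = (1.0 - eta) ** n            # no click at all
    if m == 0:
        return p_all_silent
    if m == 1:
        return 2.0 * (p_branch_silent - p_all_silent)
    if m == 2:
        return 1.0 - 2.0 * p_branch_silent + p_all_silent
    return 0.0

eta, n = 0.6, 3
for m in range(n + 1):
    print(f"m={m}:  binomial {p_binomial(m, n, eta):.3f}   2-detector tree {p_tree2(m, n, eta):.3f}")
```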
in the following , two pnrds will be considered : the tes and a tree of two single - photon detectors .since the tes is essentially a superconductive microcalorimeter with a linear response , it is absolutely reasonable to expect for it a bernoulli detection model . in the case of the tree of spad detectors , the model is no longer linear and will be analysed in the last section . in section 2 we will present the ktpt applied to an avalanche photo diode ( apd ) based single photon counting module , showing the detection efficiency obtained with this absolute calibration technique .finally , in section 3 we will discuss the extension of the ktpt from the calibration of single - photon detectors to pnrds . in particular , we will discuss a ktpt - based calibration method that exploits the whole information from the output of the pnrd , without referring to some specific detection model .as already pointed out , pnrd characterization exploiting the above mentioned calibration techniques may not provide complete information regarding the detector behavior . on the contrary , at least in one case , the determination of the quantum efficiency is strongly dependent on the assumption of a specific detection model . for this reason , we also consider a few calibration techniques providing the full characterization of the detection process of pnrds . here , the detection process is considered as a quantum operation , thus the technique consists in realizing the tomography of the quantum operation .the method using pdc to calibrate detectors is a well - established technique .it has the peculiarity of being intrinsically absolute , as it does not rely on any externally calibrated radiometric standards .the pdc phenomenon was predicted in 1961 by louisell et al . , and the very first experiment to observe coincidences between downconverted photons in 1970 also included the first detector calibration using a pdc source .the method was not widely disseminated , however , and 7 years later klyshko independently proposed that pdc could be used to measure detection efficiency . in the early 1980s , a few groups pushed the technique from demonstrational experiments to more accurate calibrations . as a consequence the ktpt has been added to the toolbox of primary radiometric techniques for detector calibration , even if it has been deeply studied only in the case of single - photon detectors .the pdc process is used to create the correlated pair of photons allowing the absolute quantum efficiency measurement .the detection of one of the photons of a pair announces the presence of its mate , and any missed detection ( in absence of losses ) of the announced photon is due to the non - ideal quantum efficiency of the dut . [ caption of fig . [ schemasetup ] : detector a and detector b , with their respective efficiencies , collect the photons of the correlated arms ; counters and coincidence electronics allow one to obtain the number of signal counts , the number of idler counts and the number of photons arriving in coincidence at both detectors . ] the calibration scheme is depicted in fig . [ schemasetup ] . two correlated channels of pdc emission ( dubbed signal and idler ) are selected and directed to photo counters a and b respectively .
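before going through the measurement equations , a small monte - carlo sketch may help fix ideas : correlated pairs are generated , each photon is detected with its own efficiency , and the dut efficiency is estimated as coincidences over trigger counts . the pair number and efficiencies are arbitrary , and background , dark counts and accidental coincidences ( which the real protocol must correct for , as discussed next ) are deliberately left out .

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_ktpt(n_pairs, eta_trigger, eta_dut):
    """ideal (loss- and background-free) klyshko two-photon calibration run."""
    trig = rng.random(n_pairs) < eta_trigger    # trigger detector fires on its photon
    dut = rng.random(n_pairs) < eta_dut         # dut fires on the twin photon
    return (trig & dut).sum() / trig.sum()      # coincidences per trigger count

n_pairs = 2_000_000
for eta_trigger in (0.1, 0.3, 0.6):
    est = simulate_ktpt(n_pairs, eta_trigger, eta_dut=0.45)
    print(f"trigger efficiency {eta_trigger:.1f} -> estimated dut efficiency {est:.4f}")
# the estimate clusters around 0.45 regardless of the trigger efficiency,
# which is the property that makes the technique intrinsically absolute
```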
in the ideal situation ( no losses ) , the detection of one photon of the pair guarantees with certainty the presence of the second photon along the correlated direction .if is the total number of pairs emitted by the crystal in a given time interval , and are the average count rates recorded by detectors a and b during the same time interval , and is the coincidences count rate , we have the relations where and are the detection efficiencies of photodetectors a and b at specific wavelength and . due to the statistical independence of the detectors ,the number of coincidences is then , the detection efficiency can be found as anyway , in practice it is very difficult to guarantee that both detectors see only correlated photons , thus we have to associate each arm with a particular purpose : one detector is the device under test ( dut ) , while the other acts as a trigger , to indicate when a detection is expected in the dut .we underline that , since the determination of is independent of the trigger efficiency , losses in the trigger channel do not affect the calibration technique in this section we analyze some details of the experimental realization of ktpt .usually , the coincidence and counting electronics associated to these calibration experiments is like the one reported in fig .[ electronica ] .the output signal from the trigger detector is sent to the start input of the tac , and the dut output is delayed ( 6.5 ns ) and sent to the stop input of the tac . the tac output is sent simultaneously to a multichannel analyzer ( mca ) and to a single - channel analyzer ( sca ) . the mca records histograms of inter - arrival times of the dut and trigger events .the sca output is addressed to a counter in order to measure coincidence counts .correlated photon pairs are seen in the histogram as a peak on top of a flat background resulting from uncorrelated output pulses from the two detectors .true coincidences are found by counting the events within a fixed time window around this peak and subtracting the flat background level within the same time window ( referred to as accidental coincidences ) . to account for the presence of unwanted counts , eq .[ eficiencias ] has to be modified .in addition to the correlated photons , each detector suffers from background counts , due to unwanted external light ( e.g. stray light or unheralded pdc light ) , and spurious counts due to thermal fluctuation inside the detector or trapped carriers ( dark counts and afterpulses ) .thus , spurious coincidence counts are superimposed on the correlated pairs , leading to the above mentioned background counts and accidental coincidences . to correct for the unwanted detected light ,the measured quantum efficiency , , is estimated from where are the average coincidence counts measured by tac / sca , the average valid start counts , the average background counts on the valid start and are the accidental coincidence counts .concerning this last correction one has to detail a little more .the tac valid start output provides only the true trigger counts that are considered for conversion and give contribution to coincidences . 
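a minimal numerical sketch of this background - and accidental - corrected estimate follows ; the valid - start subtlety raised in the last sentence ( and refined right after this sketch ) is ignored here . since the equation itself is not reproduced in this text , the formula below is reconstructed from the verbal description of the quantities , and the count values are invented for illustration .

```python
def eta_measured(n_coinc, n_acc, n_valid_start, n_background):
    """background-corrected klyshko estimate, following the verbal description:
    (coincidence counts - accidentals) / (valid start counts - background starts)."""
    return (n_coinc - n_acc) / (n_valid_start - n_background)

def eta_dut(eta_meas, tau_optical):
    """remove the transmittance of the dut arm, as in the optical-loss
    correction discussed below."""
    return eta_meas / tau_optical

# invented example counts for one acquisition interval
n_c, n_acc, n_vs, n_bg = 12_500, 400, 31_000, 900
eta_m = eta_measured(n_c, n_acc, n_vs, n_bg)
print(f"measured efficiency : {eta_m:.4f}")
print(f"dut efficiency (losses removed) : {eta_dut(eta_m, tau_optical=0.92):.4f}")
```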
thanks to the valid start output, the tac dead time effect can be neglected. we should also note that the number of valid start counts able to produce an accidental coincidence changes drastically depending on whether the peak of coincidences is inside the tac window or not. because the accidental counts are evaluated by adding a delay to the dut output, in order to move the peak out of the measurement window, a correction should be added accounting for this valid start mismatch. a reasonable first-order correction rescales the accidental counts by the ratio between the average value of the valid start counts when the coincidence peak is in the tac window and the valid start average when the coincidence peak is not in the tac window. then, the measured quantum efficiency becomes the corrected estimate; if we also take into account a correction due to optical losses, we obtain the quantum efficiency of just the detector under calibration, $\eta_{det} = \eta_{meas}/\tau_{dut}$, where $\tau_{dut}$ is the total transmittance of the dut arm. a careful estimation of $\tau_{dut}$ is crucial in order to adequately evaluate the quantum efficiency of the device under calibration. this kind of protocol has been experimentally implemented many times (e.g. ref. ). according to ref.s , the statistical uncertainty associated with this two-photon measurement technique is deduced by applying the uncertainty propagation law to the model of eq. [effdut2], where the variance of each input quantity enters through its sensitivity coefficient. sensitivity coefficients are deduced by standard uncertainty propagation rules and the correlation coefficients are evaluated from repeated experimental data. recently, a new implementation of the ktpt protocol has been carried out, obtaining a detection-efficiency value with a relative uncertainty low enough (ref. ) to pave the way to applications in photometry and metrology. the extension of the ktpt to pnrd systems is quite straightforward. as in the classic ktpt, it is necessary to perform two separate measurements, in the presence and in the absence of the heralded photon. for each measurement, a data histogram analogous to the one reported in fig. [peaks] is obtained, from which it is possible to estimate the probabilities of observing $i$ photons per heralding count in the presence of a heralded photon ($\mathcal{P}_h(i)$) and in the absence of the heralded photon ($\mathcal{P}(i)$). the probability of observing 0 photons in the presence of the heralded photon is simply the product of the probability of non-detection of the heralded photon and the probability of having 0 accidental counts, $\mathcal{P}_h(0) = (1-\eta)\mathcal{P}(0)$, where $\eta$ is the quantum efficiency of the dut channel, including the optical losses, i.e. $\eta = \tau_{dut}\,\eta_{det}$. analogously, the probability of observing $i$ counts in the presence of a heralded photon can be written as $\mathcal{P}_h(i) = \eta\,\mathcal{P}(i-1) + (1-\eta)\mathcal{P}(i)$. from these equations, we can derive ways to evaluate the quantum efficiency of our pnrd: from eq. ([p0]) the efficiency can be estimated as $\eta = 1 - \mathcal{P}_h(0)/\mathcal{P}(0)$, while from eq. ([pi]) it follows that $\eta = [\mathcal{P}_h(i) - \mathcal{P}(i)]/[\mathcal{P}(i-1) - \mathcal{P}(i)]$. it is noteworthy that the set of hypotheses in the context of this calibration technique is exactly the same as for the proper ktpt, and the same measurements are necessary. in this case, for each value of $i$ (i.e. for each peak of fig. [peaks]) we obtain an estimate of $\eta$, which also allows a consistency test of the estimation model. furthermore, as the number of heralding counts appears both in the numerator and in the denominator of eq.s ([p02]) and ([pi2]), the correction for false heralding counts coming from stray light, dark counts and afterpulses can be neglected.
in ref. , a recent implementation of the proposed calibration technique applied to a tes is shown. for this purpose, a certain number of detector properties should be considered, e.g.: _tes jitter_: the poor temporal resolution of tess (time jitter larger than 100 ns) does not allow the use of small coincidence temporal windows. a reasonable solution is the exploitation of a heralded single-photon source and the measurement of the coincidences in the presence and in the absence of the heralding signal. the second measurement allows a trustworthy estimation of the accidental coincidences, since tess are not affected by afterpulsing. _tes deadtime_: as a single-photon detector, the tes behaves like a paralyzable detector with extending dead-time, such as a photomultiplier operating in geiger mode. to avoid deadtime distortion in the statistics of measured counts, it is necessary to use a pulsed heralded single-photon source with a period larger than the deadtime of the detector, in order to avoid unwanted photons impinging on the tes surface during the deadtime, whose only effect would be to extend the deadtime. _optical losses_: in the ktpt a careful estimation of the optical losses on the arm of the detector under test is crucial. in the specific case of the tes, where the detector input is the tip of the optical fiber, the optical losses should also account for the coupling efficiency into the optical fiber. our tes sensor consists of a superconducting ti film proximised by an au layer. this detector has been thermally and electrically characterised by impedance measurements. the transition temperature of the tes, voltage biased and mounted inside a dilution refrigerator at a bath temperature of 40 mk, is 121 mk. in fig. [tessetup] the experimental setup used in the tes calibration is shown. a single-mode optical fiber illuminates the tes active area (20 μm x 20 μm). a stereomicroscope is used to align the fiber on the tes, the distance between the fiber tip and the detector being approximately 150 μm. the read-out is based on a dc-squid array coupled to a digital oscilloscope for signal analysis. the energy resolution is 0.4 ev, with a response time of 10.4 μs. a heralded single-photon source is used to perform the calibration: a type-i bbo crystal is pumped by a pulsed laser at 406 nm generating degenerate non-collinear pdc. a heralding photon at 812 nm is detected by a single-photon detector (det1 in fig. [tessetup]) announcing the presence of the conjugated photon (812 nm) in the conjugated direction. the announced photon is sent to the tes detector by means of a single-mode optical fiber. because a low repetition rate is needed to avoid pile-up effects in the statistics of measured counts, the pump laser is electrically driven by a train of ns pulses with a repetition rate of 40 khz. it is easy to estimate the probabilities $\mathcal{P}_h(i)$ and $\mathcal{P}(i)$ by measuring the events seen by the tes in the presence or absence of the heralding signal. defining $\nu_h(i)$ and $\nu(i)$ as the photon events observed by the tes in the presence or absence of the heralding photon, respectively, the probabilities follow by normalizing these counts to the corresponding numbers of acquired events. tes events are detected by an oscilloscope; typical traces are reported in fig. [data]. the oscilloscope readout is triggered only when both the pump laser trigger and the heralding detector click are present.
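once the probabilities $\mathcal{P}_h(i)$ and $\mathcal{P}(i)$ have been extracted from the pulse-height histograms (as described next), the estimators of eq.s ([p02]) and ([pi2]) reduce to simple ratios. the following python sketch illustrates them under the idealized assumption of no false heralding counts (the correction for false heralding, introduced in the next paragraph through the parameter ξ, is omitted here); all numerical values are illustrative, not measured data.

```python
# minimal sketch of the heralded-source efficiency estimators (ideal case,
# no false heralding counts); the peak-area values below are illustrative only.
import numpy as np

def eta_from_p0(p_h, p):
    """Efficiency from the zero-photon peak: eta = 1 - P_h(0)/P(0)."""
    return 1.0 - p_h[0] / p[0]

def eta_from_pi(p_h, p, i):
    """Efficiency from the i-photon peak (i >= 1):
    eta = (P_h(i) - P(i)) / (P(i-1) - P(i))."""
    return (p_h[i] - p[i]) / (p[i - 1] - p[i])

# illustrative normalized peak areas for 0, 1 and 2 detected photons
p_heralded   = np.array([0.9306, 0.0686, 0.0008])   # heralding photon present
p_unheralded = np.array([0.9900, 0.0098, 0.0002])   # accidental counts only

estimates = [eta_from_p0(p_heralded, p_unheralded)] + \
            [eta_from_pi(p_heralded, p_unheralded, i) for i in (1, 2)]
print("consistency check, eta estimates:", np.round(estimates, 3))
```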
in order to measure the events both in the presence and in the absence of the heralded photon, the time base is set to show two consecutive laser pulses (corresponding to the left and right pulses in fig. [data], respectively). a histogram is generated by measuring the amplitude of the pulses recorded on the oscilloscope; each peak corresponds to a different number of detected photons. as an example, the right graph in fig. [data] shows the histogram of a heralded event, while the left graph reports the histogram of an event in the absence of the heralded photon. to estimate the parameters $\mathcal{P}_h(i)$ and $\mathcal{P}(i)$, the histograms are fitted with gaussian curves and the integral of each peak is calculated. in a more realistic model, we can re-write equations [p0] and [pi] considering the presence of false heralding counts due to dark counts and stray light arriving at the trigger detector. if we denote by $\xi$ the probability of having a true heralding count, then the probability in eq. [p0] of observing no photons on the pnr detector is $\mathcal{P}_h(0) = \xi\,(1-\eta)\mathcal{P}(0) + (1-\xi)\mathcal{P}(0) = (1-\xi\eta)\mathcal{P}(0)$ and the efficiency is $\eta = [1 - \mathcal{P}_h(0)/\mathcal{P}(0)]/\xi$, while the probability of observing $i$ counts is $\mathcal{P}_h(i) = \xi\,[\eta\mathcal{P}(i-1) + (1-\eta)\mathcal{P}(i)] + (1-\xi)\mathcal{P}(i)$ and the efficiency can be estimated as $\eta = [\mathcal{P}_h(i) - \mathcal{P}(i)]/\{\xi\,[\mathcal{P}(i-1) - \mathcal{P}(i)]\}$. six repeated measurements of five hours each were made. by measuring the number of events triggered by the laser pulses and detected by det1 both in the presence of pdc emission and in the absence of pdc light, the probability $\xi$ of having true heralding counts is calculated from these two quantities. the histograms of fig. [data] show that only the peaks corresponding to the detection of zero, one and two photons have enough counts to be identified, while peaks corresponding to the simultaneous arrival of three or more photons at the tes are negligible. thus, three different values of the quantum efficiency were estimated, one from each of the zero-, one- and two-photon peaks. an exhaustive uncertainty analysis is given in table [tabletes]. the three values obtained for $\eta$ are consistent with each other, since the tes has recently been proved to be a linear detector, as generally believed. the average of the three estimates is taken as the result for the efficiency of the tes channel. [table [tabletes] caption: uncertainty contributions of the different quantities (q) to the measurement of the three efficiency estimates; the uncertainties are calculated according to the guide to the expression of uncertainty in measurement.] to provide a precise estimation of the bare tes quantum efficiency, a careful estimation of the optical transmittance $\tau$ is needed, accounting for the coupling efficiency into the optical fiber and the optical losses in the non-linear crystal. according to the results of ref.s , this parameter can be estimated with a relative uncertainty better than 1%, and $\tau$ has been estimated following this procedure. concerning the tes sensor, on the basis of the material used the expected efficiency should be around 49%. geometrical and optical losses in the connection between the superconducting film and the outside of the refrigerator contribute to lower this value down to 7%. the most general measurement in quantum mechanics is described by a positive operator-valued measure (povm). the most general description of a pnrd is therefore provided by its povm. a complete description of tes detectors is crucial for several applications.
as stated before, tess are intrinsically phase-insensitive linear pnrds, with a detection process corresponding to a binomial convolution and with no dark counts. thus, we can assume that the elements of their povm are diagonal operators in the fock basis, satisfying the completeness relation. the probability of detecting $n$ photons with $m$ photons impinging on the tes is described by the matrix elements $\pi_{nm}$. by exploiting a technique based on recording the detector response for a known and suitably chosen set of input states, the characterization of the detector at the quantum level can be achieved. in order to carry out the tomography of the tes povm (i.e. to reconstruct the $\pi_{nm}$), we use an ensemble of coherent probes providing a sample of the husimi q-function of the povm elements. if we consider a set of coherent states of different amplitudes $|\alpha_j\rangle$, the probability of detecting $n$ photons with the $j$-th state as input is given by $p_{nj} = \sum_m \pi_{nm}\, q_{mj}$, where $q_{mj} = e^{-\mu_j}\mu_j^m/m!$ is the photon statistics of the coherent state, $\mu_j = |\alpha_j|^2$ being its average number of photons. by sampling the probabilities $p_{nj}$ and inverting the statistical model composed by the set of eqs. ([eq:stat_model]), the matrix elements $\pi_{nm}$ are reconstructed. a sensible truncation of the hilbert space should be chosen, e.g. with the constraint that, with the chosen set of coherent states, there are no significant data beyond the truncation, so that the performance of the detector cannot be investigated in that regime. to solve the statistical model in ([eq:stat_model]), maximum likelihood (ml) methods may be used. in our case, we have estimated the $\pi_{nm}$ by regularized minimization of the square difference between measured and predicted probabilities; the physical constraint of ``smoothness'' is imposed by a convex, quadratic and device-independent regularization function. normalization is enforced, with the last povm element defined as the identity minus the sum of all the others. the tes sensor characterized in this experiment is composed of a ti/au film. the characterization has been performed in a dilution refrigerator with a base temperature of 30 mk. a dc-squid current sensor is used for the read-out, together with room-temperature squid electronics addressed to an oscilloscope for the data acquisition. the tes is illuminated with a fiber-coupled, power-stabilized pulsed laser (pulse duration of 37 ns, repetition rate of 9 khz). the laser pulse is also used to trigger the data acquisition over a temporal window of 100 ns. by using a calibrated power meter, the laser pulse energy is measured (at the pj level), and the light is then attenuated to a range going from 130 to 6.5 photons per pulse on average, obtaining 20 different coherent states whose mean photon numbers account for the channel transmissivity. since our source emits almost monochromatic photons, in ideal conditions a discrete energy distribution with outcomes separated by a minimum energy gap is expected. experimentally, a distribution with several peaks is observed, whose fwhm is determined by the energy resolution of the tes. in a first calibration run, we fit the data with a sum of independent gaussian functions (fig. [f1_tes]); these fits allowed us to fix the amplitude thresholds (located close to the local minima of the fit), each corresponding to a different number of detected photons. the histogram of counts is obtained simply by binning on the intervals identified by these thresholds. the distributions $p_{nj}$ are finally evaluated by normalizing the histogram bars to the total number of events collected for each $j$.
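the inversion of the statistical model described above can be sketched numerically as follows. this is a minimal illustration, assuming poissonian probe statistics and a simple quadratic smoothness regularizer as described in the text; the probe amplitudes, the regularization weight, and the use of scipy's bounded least-squares solver are illustrative choices, not the actual analysis of the experiment.

```python
# minimal sketch of POVM reconstruction from coherent-state probes via
# regularized least squares (smoothness + normalization penalties);
# probe amplitudes, eta and lambda are illustrative assumptions.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import poisson, binom

M, N = 12, 5                       # Fock-space truncation, number of outcomes
mus = np.linspace(0.3, 4.0, 20)    # mean photon numbers of the 20 probes
Q = np.array([poisson.pmf(np.arange(M + 1), mu) for mu in mus]).T  # q_{mj}

eta_true = 0.6                     # linear detector used to simulate data
PI_true = np.array([[binom.pmf(n, m, eta_true) for m in range(M + 1)]
                    for n in range(N)])
PI_true[N - 1] = 1.0 - PI_true[:N - 1].sum(axis=0)  # last element: ">= N-1 counts"
P = PI_true @ Q                    # ideal probabilities p_{nj}

lam = 1e-3                         # smoothness regularization weight

def cost(x):
    pi = x.reshape(N, M + 1)
    fit = np.sum((pi @ Q - P) ** 2)                     # data term
    smooth = lam * np.sum(np.diff(pi, axis=1) ** 2)     # quadratic smoothness
    norm = 10.0 * np.sum((pi.sum(axis=0) - 1.0) ** 2)   # soft normalization
    return fit + smooth + norm

x0 = np.full(N * (M + 1), 1.0 / N)
res = minimize(cost, x0, bounds=[(0.0, 1.0)] * x0.size, method="L-BFGS-B")
PI_rec = res.x.reshape(N, M + 1)
print("max deviation from true POVM:", np.abs(PI_rec - PI_true).max())
```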
the povm reconstruction of the tes detection system has been performed up to a maximum number of incoming photons, considering the povm elements up to 10 detected photons; the probability operator for more than 10 photons is given by the complement to the identity of the sum of the other elements. the matrix elements of the first 9 povm operators are shown in fig. [f2_povm]. as mentioned before, the povm of a linear photon counter can be expressed as a spectral measure with binomial matrix elements, $\pi_{nm} = \binom{m}{n}\eta^n(1-\eta)^{m-n}$, where $\eta$ is the quantum efficiency of the detector. in order to compare the povm elements of the linear detector with the reconstructed povm elements, it is first necessary to estimate the value of the quantum efficiency. this can be done by averaging the values of $\eta$ that maximize the log-likelihood functions $L_j(\eta) = \sum_n n_{nj}\log p_{nj}(\eta)$, where $n_{nj}$ is the number of $n$-count events obtained with the $j$-th input state. using this procedure, the estimated value of the quantum efficiency carries an uncertainty that accounts for the statistical fluctuations. [fig. [f2_povm] caption: reconstructed matrix elements as a function of the number of incoming photons (respectively green, black and red graphs in plots (a), (b), (c)); continuous lines show the povm of a linear photon counter with the estimated quantum efficiency.] an excellent agreement between the reconstructed povm and the linear one with the estimated quantum efficiency is observed in fig. [f2_povm]. in particular, the elements of the povm are reliably reconstructed at low photon numbers, whereas for higher values the quality of the reconstruction degrades. in the low-photon-number regime the fidelity is larger than 0.99, while it degrades to 0.95 at the highest photon numbers considered. the effects of experimental uncertainties are investigated by performing a sensitivity analysis taking into account the uncertainties on the energy of the input state and on the attenuators, obtaining fidelities that remain high for all the entries. in order to further confirm the linearity hypothesis, we have compared the measured distributions with the ones computed for a linear detector and with those obtained using the reconstructed povm elements. the excellent agreement between these three distributions (fidelities always above 99.5%) confirms the linear behaviour of the detector, proving that the reconstructed povm provides a reliable description of its detection process. finally, to take into account the possible presence of dark counts, the detection model has been modified. assuming a poissonian background, the matrix elements of the povm are modified accordingly, and a ml procedure has been developed to estimate both the quantum efficiency and the mean number of dark counts per pulse. with this procedure, we found that the value for the efficiency is statistically indistinguishable from the one obtained with the linear-detector model, whereas the estimated number of dark counts per pulse is zero within the statistical uncertainties, in excellent agreement with the direct measurement performed on our tes detector. alternatively to the classical technique described in the previous section, the povm of pnrds can be reconstructed by exploiting strong quantum correlations of twin beams generated by pdc. in this case, one beam is sent to the photon-number-resolving dut and the other to a spad (with variable quantum efficiency) used as a quantum tomographer.
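before the twin-beam scheme is described in detail, the maximum-likelihood step used above to extract the quantum efficiency for the linear-detector comparison can be sketched as follows. this is a minimal illustration assuming the binomial (linear-detector) model and poissonian coherent probes; the counts are simulated, and the simple grid search stands in for whatever optimizer was actually used.

```python
# minimal sketch: ML estimate of the quantum efficiency under the
# linear-detector (binomial) model, with simulated multinomial counts.
import numpy as np
from scipy.stats import poisson, binom

rng = np.random.default_rng(0)
M, N = 20, 6                      # Fock truncation, number of count outcomes
mus = np.linspace(0.5, 5.0, 10)   # illustrative probe mean photon numbers
eta_true = 0.60

def p_n_given_mu(eta, mu):
    """p(n | coherent state mu); n = 0..N-2 exact, n = N-1 means '>= N-1'."""
    m = np.arange(M + 1)
    pm = poisson.pmf(m, mu)
    p = np.array([np.sum(pm * binom.pmf(n, m, eta)) for n in range(N - 1)])
    return np.append(p, max(1.0 - p.sum(), 0.0))

# simulate count histograms n_{nj} for each probe state
counts = np.array([rng.multinomial(20000, p_n_given_mu(eta_true, mu))
                   for mu in mus])

def log_likelihood(eta):
    return sum(np.sum(c * np.log(p_n_given_mu(eta, mu) + 1e-12))
               for c, mu in zip(counts, mus))

etas = np.linspace(0.4, 0.8, 401)
eta_ml = etas[np.argmax([log_likelihood(e) for e in etas])]
print(f"ML estimate of eta: {eta_ml:.3f} (true value {eta_true})")
```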
with this technique, significant advantages can be obtained, improving both precision and stability with respect to the classical counterparts. first, let us presume that a bipartite system can be prepared in a certain state, described by a density operator, and that a known observable with a discrete set of outcomes is measured at the tomographer. as before, our dut is phase-insensitive, and the $\pi_{nm}$ are the matrix elements (expressed in the fock basis) of the povm to be reconstructed. the bipartite state in our experiment consists of the optical twin beams $\sum_m r_m |m\rangle|m\rangle$, $|m\rangle$ being the fock state with $m$ photons and $r_m$ the probability amplitude associated with that state. in this experiment, an ``event'' is constituted by a detection of $n$ photons at the dut correlated with the corresponding measurement outcome of the tomographer (``click'' or ``no-click''), which occur with the probabilities $p(n,{\rm click}) = \sum_m \pi_{nm}\,|r_m|^2\,[1-(1-\eta)^m]$ and $p(n,{\rm no\mbox{-}click}) = \sum_m \pi_{nm}\,|r_m|^2\,(1-\eta)^m$. we collect these probabilities for a set of different quantum efficiencies $\eta_\nu$ ($\nu = 1,\dots,n$), in order to exploit an on/off reconstruction method similar to the one of ref.s . a reliable reconstruction of the elements $|r_m|^2$ can be obtained using the unconditional tomographer no-click events, which occur with probability $\sum_m |r_m|^2 (1-\eta)^m$. this procedure is simpler than full quantum tomography, because it only reconstructs the diagonal elements of the optical state density matrix, thus not needing phase control. in the following step, the reconstructed elements $|r_m|^2$ are substituted in eq.s [probs]; the povm elements can be extracted by inverting the equation system obtained by varying the tomographer quantum efficiency. [fig. [f:f1] caption: experimental setup; a crystal produces type-i pdc, one of the generated twin beams is sent to the tomographer (t), while the other is addressed to the dut. by rotating the linear polarizer on the t-path, the tomographer efficiency is varied. interference filters (if) with 20 nm bandwidth are used. an fpga is used for real-time processing and data acquisition. the dut (inset) is a detector-tree type pnrd made of two spads connected through a 50:50 fiber beam splitter.] the experimental setup is shown in fig. [f:f1]. the twin beam is generated by means of a pulsed ti-sapphire laser (76 mhz repetition rate) at 800 nm. the laser is doubled in frequency and injected into a 10 mm long liio3 crystal, producing type-i pdc. the two beams are addressed respectively to the dut and the tomographer. the dut is a detector tree composed of a 50:50 fiber beam splitter with the outputs connected to two si-spads, thus it can give 3 different outcomes: 0, 1 and 2-or-more detected photons per pulse. event 0 occurs if neither spad clicks, event 1 is registered if either spad clicks (but not both) and event 2 corresponds to both spads clicking at once. the two si-spad outputs of the dut, together with the one of the tomographer (another si-spad) and a laser trigger pulse, are sent to a field-programmable gate array (fpga) based data collection and processing system. the fpga is programmed to take data only if the three detectors are available, discarding the events affected by the dead time of the three si-spads. the relative frequencies $f_0$, $f_1$ and $f_2$, corresponding to the number of 0-, 1- and 2-click events normalized to their sum, need to be determined to allow the reconstruction of the dut's povm. in addition, for each efficiency $\eta_\nu$ the relative frequencies of conditional events are determined, paired with the tomographer's clicks and no-clicks. as mentioned before, the reconstruction of the photon-number distribution of the bipartite state is the first step in obtaining the povm elements: the elements $|r_m|^2$ are extracted exploiting the no-click frequencies of the tomographer. [fig. [stato] caption: (a) no-click frequencies (black dots), fitted by the behaviour expected for a poisson distribution with a given average number of photons per pulse; (b) comparison between the reconstructed distribution (red bars) and a poisson distribution (light blue bars) with the mean photon number determined by the fit in (a). the uncertainties represent the variations in the reconstructions performed on 30 different data sets. since in this experiment the probability of observing 5 or more photons is negligible, data are shown only up to 4 photons.] fig. [stato] shows that, as expected, the experimentally reconstructed photon distribution is in excellent agreement with the poisson distribution, with a high fidelity. by substituting the $|r_m|^2$ together with the set of calibrated efficiencies $\eta_\nu$ into eq. ([probs]), the quantities $\pi_{nm}$ are reconstructed using a regularized least-squares method to minimize the deviation between the measured and theoretical values of the probabilities.
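the first step of this procedure, recovering the photon-number distribution $|r_m|^2$ from the unconditional no-click frequencies recorded at the different tomographer efficiencies, can be sketched as follows. this is a minimal illustration assuming a poissonian marginal for the twin beam and using a non-negative least-squares inversion as a stand-in for the actual on/off reconstruction; the efficiency grid, mean photon number and truncation are illustrative values, not the experimental ones.

```python
# minimal sketch of the on/off step: recover the photon-number distribution
# rho_m = |r_m|^2 from no-click frequencies f0(eta_nu) = sum_m rho_m (1-eta_nu)^m.
# Poisson state, efficiency grid and truncation are illustrative assumptions.
import numpy as np
from scipy.optimize import nnls
from scipy.stats import poisson

M = 8                                   # Fock-space truncation
etas = np.linspace(0.05, 0.60, 12)      # calibrated tomographer efficiencies
rho_true = poisson.pmf(np.arange(M + 1), mu=1.2)

# linear model: A[nu, m] = (1 - eta_nu)**m, so that f0 = A @ rho
A = (1.0 - etas[:, None]) ** np.arange(M + 1)[None, :]
f0 = A @ rho_true                       # ideal no-click frequencies

rho_rec, _ = nnls(A, f0)                # non-negative least-squares inversion
rho_rec /= rho_rec.sum()                # renormalize the distribution

print("reconstructed rho_m:", np.round(rho_rec, 4))
print("true rho_m:        ", np.round(rho_true / rho_true.sum(), 4))
```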
in particular , for each and for each output of the dut , the deviation between the observed and theoretical probabilities is minimized , as well as the deviation between and .+ the reconstructed , , are presented in fig .excellent agreement between theoretical and experimental results are supported by the high fidelities ( above 99.9 % ) for values , while for the quality of the povm reconstruction rapidly decreases . in fig .[ experiment ] are reported the fidelities of the reconstructed povm elements shown in fig .[ povm ] : the high values obtained confirm that the extracted povm provides a reliable quantum description of the detection process . .the reliability of the reconstruction is confirmed by the fact that the fidelities are all above . ]in this review , we have presented some recent progresses achieved in the inrim quantum optics labs about calibration of single or few photon detectors .we hope that the illustrated results can give the reader an idea about the problems , the state of the art and the perspectives of this kind of studies , fostering further efforts in these directions .we also stress that , in our opinion , such a topic is of crucial relevance for the present and future developments in photonic quantum technologies and related research fields ( e.g. quantum information and cryptography ) .the research leading to these results has received funding from the european union on the basis of decision no .912/2009/ec ( project ind06-miqc ) , from miur ( firb grants no .rbfr10uauv , no .rbfr10vzug and no .rbfr10yq3h , and progetto premiale p5 `` oltre i limiti classici della misura '' ) , and from compagnia di san paolo. m. genovese , _ phys ._ 413 , 319 * ( 2005 ) * j. r. croca , _ quantum matter _ 2 , 1 * ( 2013 ) * p. traina , m.gramegna , a. avella , a. cavanna , d. carpentras , i. p. degiovanni , g. brida and m. genovese , _ quantum matter _ 2 , 153 * ( 2013 ) * v. scarani , h. bechmann - pasquinucci , n. j. cerf , m. dusek , n. lutkenhaus and m. peev , _ rev_ 81 , 1301 * ( 2009 ) * n. gisin , g. ribordy , w. tittel and h. zbinden , _ rev . mod . phys . _ 74 , 145 * ( 2002 ) * s. l. braunstein and p. van loock , _ rev_ 77 , 513 * ( 2005 ) * t. d. ladd , f. jelezko , r. laflamme , y. nakamura , c. monroe and j. l. obrien , _ nature _ 464 , 45 * ( 2010 ) * p. kok , w. j. munro , k. nemoto , t. c. ralph , j. p. dowling and g. j. milburn , _ rev ._ 79 , 135 * ( 2007 )* j. c. zwinkels , e. ikonen , n. p. fox , g. ulm , m. l. rastello , _ metrologia _ 47 , r15 * ( 2010 ) * v. giovannetti , s. lloyd and l. maccone , _ nat . phot ._ 5 , 222 * ( 2011 ) * j. perina , o. haderka , v. michalek and m. hamar , _ optics letters _ 37 , 2475 * ( 2012 ) * d. a. kalashnikov , si - hui tan and l. a. krivitsky , _ optics express _ 20 , 5044 * ( 2012 ) * g. brida , m. genovese , i. ruo - berchera , m. chekhova and a. penin _ josa b _ 23 , 2185 * ( 2006 ) * g. brida , a. meda , m. genovese , e. predazzi and i. ruo - berchera , _ journal of modern optics _ 56 , 201 * ( 2009 ) * g. brida , m. chekhova , m. genovese and i. ruo - berchera , _ optics express _ 16 , 12550 * ( 2008 ) * m. lindenthal and j. kofler _ applied optics _ 45 , 6059 * ( 2006 ) * i. n. agafonov , m.v.chekhova , t. s. iskhakov , a. n. penin , g. o. rytikov and o. a. shumilkina , _ ijqi _ 9 , 251 * ( 2011 ) * g. zambra , m. bondani , a. s. spinelli and a. andreoni , _ rev ._ 75 , 2762 * ( 2004 ) * a. allevi , m. bondani , and a. andreoni _ opt ._ 35 , 1707 * ( 2010 ) * g. q. zhang , x. j. zhai , c. j. zhu , h. c. 
liu and y. t. zhang , _ ijqi _ 10 , 1230002 * ( 2012 ) * m. ramilli , a. allevi , a. chmill , m. bondani , m. caccia and a. andreoni , _ j. opt. soc . am .b _ 27 , 852 * ( 2010 ) * o. haderka , j. peina jr . , m. hamar , and j. peina , _ _ 71 , 033815 * ( 2005 ) * d. a. kalashnikov , s. h. tan , m. v. chekhova and l. a. krivitsky , _ opt . exp ._ 19 , 9352 * ( 2011 ) * l. lolli , g. brida , i. p. degiovanni , m. gramegna , e. monticone , f. piacentini , c. portesi , m. rajteri , i. ruo berchera , e. taralli and p. traina , _ int .j. quantum inform ._ 9 , 405 * ( 2011 ) * d. fukuda , g. fujii , t. numata , k. amemiyaand , a. yoshizawa , h. tsuchida , h. fujino , h. ishii , t. itatani , s. inoue and t. zama , _ opt . express _19 , 870 * ( 2011 ) * d. prele , m. r. piat , e. l. breelleand , f. voisin , m. pairat , y. atik , b. belier , l. dumoulin , c. evesque , g. klisnick , s. marnieros , f. pajot , m. redonand and g. sou , _ ieee t. appl. supercon ._ 19 , 501 * ( 2009 ) * b. cabrera _ j. low temp . phys ._ 151 , 82 * ( 2008 ) * a. e. lita , a. j. miller and s. w. nam , _ opt . express _ 16 , 3032 * ( 2008 ) * d. rosenberg , a. e. lita , a. j. millerand and s. w. nam , _ _ 71 , 061803 * ( 2005 ) * s. r. bandler , e. figueroa - feliciano , n. iyomoto , r. l. kelley , c. a. kilbourne , k. d. murphy , f. s. porter , t. saab , and j. sadleir , _ nucl .instrum . meth .a _ 559 , 817 * ( 2006 ) * d. achilles , ch .silberhorn , c. sliwa , k. banaszek and i. a. walmsley _ opt ._ 28 , 2387 * ( 2003 ) * m. j. fitch , b. c. jacobs , t. b. pittman , and j. d. franson , _ _ 68 , 043814 * ( 2003 ) * j. rehacek , z. hradil , o. haderka , j. perina , jr ., and m. hamar , _ _ 67 , 061801 * ( 2003 ) * o. haderka , m. hamar , j. perina jr _ epjd _ 28 , 149 * ( 2004 ) * l. a. jiang , e. a. dauler and j. t. chang , _ phys . rev .a _ 75 , 062325 * ( 2007 ) * a. divochiy , f. marsili , d. bitauld , a. gaggero , r. leoni , f. mattioli , a. korneev , v. seleznev , n. kaurova , o. minaeva , g. goltsman , k g. lagoudakis , m. benkhaoul , f. levy and a. fiore , _ nat .photonics _ 2 , 302 * ( 2008 ) * d. c. burnham and d. l. weinberg , _ phys_ 25 , 84 * ( 1970 ) * d. n. klyshko _ sov . j. quantum electron ._ 7 , 591 * ( 1977 ) * p. g. kwiat , a. m. steinberg , r. y. chiao , p. h. eberhard and m. d. petroff , _ appl ._ 33 , 1844 * ( 1994 ) * a.l .migdall , r.u .datla , a. sergienko and y.h .shih , _ metrologia _ 32 , 479 * ( 1996 ) * e. dauler , a. migdall , n. boeuf , r. datla , a. muller and a. sergienko , _ metrologia _ 35 , 259 * ( 1998 ) * g. brida , s. castelletto , i. p. degiovanni , c. novero and m. l. rastello , _ metrologia _ 37 , 625 * ( 2000 ) * s. castelletto , i. p. degiovanni and m. l. rastello , _ j. optb _ 19 , 1247 * ( 2002 ) * a. ghazi - bellouati , a. razet , j. bastie , m. e. himbert , i. p. degiovanni , s. castelletto and m. l. rastello , _ metrologia _ 42 , 271 * ( 2005 ) * a. migdall , s. castelletto , i. p. degiovanni and m. l. rastello , _ appl . opt ._ 41 , 2914 * ( 2002 ) * s. v. polyakov and a.l .optics express _ 15 , 1390 * ( 2007 ) * j. y. cheung , c. j. chunnilall , g. porrovecchio , m. smid and e. theocharous , _ opt .express _ 19 , 20347 * ( 2011 ) * a. n. penin and a. v. sergienko , _ appl ._ 30 , 3582 * ( 1991 ) * g. brida , m. genovese and c. novero , _ journal of modern optics _ 47 , 2099 * ( 2000 ) *g. brida , m. genovese and m. gramegna , _ laser phys ._ 3 , 115 * ( 2006 ) * a. a. malygin , a. n. penin and a. v. sergienko , _ sov . j. quantum electron ._ 11 , 939 * ( 1981 ) * d. n. 
klyshko , _ sov .j. quantum electron ._ 10 , 1112 * ( 1980 ) * s. r. bowman , y. h. shih and c. o. alley , _ proc .spie _ 633 , 24 * ( 1986 ) * v. m. ginzburg , n. g. keratishvili , ye .l. korzhenevich , g. v. lunev and a. n. penin , _ metrologia _ 30 , 367 * ( 1993 ) * j. y. cheung , m. p. vaughan , j. r. mountford and c. j. chunnilall , _ proc . spie _ 5161 , 365 * ( 2004 )* w. h. louisell and a. yariv , _ phys ._ 124 , 1646 * ( 1961 )* j. g. rarity , k. d. ridley and p. r. tapster , _ appl . opt ._ 26 , 4616 * ( 1987 ) * g. brida , m. genovese , m. gramegna , m. l. rastello , m. chekhova , and l. krivitsky _ josa b _ 22 , 488 * ( 2005 ) * g. brida , i. p. degiovanni , m. genovese , v. schettini , s. v. polyakov , and a. migdall , _ opt ._ 16 , 11750 * ( 2008 ) * d.n .klyshko , _ photons and nonlinear optics _ , gordon and breach science publishers * ( 1988 ) * g. brida , s. castelletto , c. novero and m. l. rastello , _ metrologia _ 35 , 397 * ( 1998 ) * a. migdall , _ physics today _ 52 , 41 * ( 1999 ) * joint committee for guides in metrology _ evaluation of measurement data guide to the expression of uncertainty in measurement _ , bipm , * ( 2008 ) * m. g. mingolla , g.brida , paper in preparation s. castelletto , i. p. degiovanni and m. l. rastello , _ metrologia _ 37 , 613 * ( 2000 ) * a. avella , g. brida , i. p. degiovanni , m. genovese , m. gramegna , l. lolli , e. monticone , c. portesi , m. rajteri , m. l. rastello , e. taralli , p. traina and m. white , _ opt . express _ 19 , 23249 * ( 2011 ) * c. portesi , e. taralli , r. rocci , m. rajteri and e. monticone , _ j. low temp .phys _ 151 , 261 * ( 2008 ) * c. portesi , l. lolli , e. monticone , m. rajteri , i. novikov and j. beyer , _ supercond ._ 23 , 105012 * ( 2010 ) * k. d. irwin _ appl ._ 66 , 1998 * ( 1995 ) * l. lolli , e. taralli , c. portesi , d. alberto , m. rajteri and e. monticone , _ ieee trans . appl. supercond ._ 21 , 215 * ( 2011 ) * d. drung , c. assmann , j. beyer , a. kirste , m. peters , f. ruede and t. schurig , _ ieee trans ._ 17 , 699 * ( 2007 ) * s. v. polyakov and a. l. migdall , _ j. mod .opt _ 56 , 1045 * ( 2009 ) * g. brida , l. ciavarella , i. p. degiovanni , m. genovese , l. lolli , m. g. mingolla , f. piacentini , m. rajteri , e. taralli and m. g. a. paris , _ new j. phys ._ 14 , 085001 * ( 2012 ) * r. h. hadfield , _ nature photon ._ 3 , 636 * ( 2009 ) * c. silberhorn , _ contemp . phys ._ 48 , 143 * ( 2007 ) * a. luis and l. l. sanchez - soto , _ phys . rev . lett ._ 83 , 3573 * ( 1999 ) * j. fiurasek , _ phys . rev .a _ 64 , 024102 * ( 2001 ) * f. demartini , a. mazzei , g. m. ricci , and g. m. dariano , _ _ 67 , 062307 * ( 2003 ) * m. w. mitchell , c. w. ellenor , s. schneider and a. m. steinberg , _ phys ._ 91 , 120402 * ( 2003 ) * a. r. rossi , s. olivares and m. g. a. paris , _ phys .a _ 70 , 055801 * ( 2004 ) * g. zambra , m. bondani , a. andreoni , m. gramegna , m. genovese , g. brida , a. rossi and m. g. a. paris , _ phys ._ 95 , 063602 * ( 2005 ) * m. lobino , d. korystov , c. kupchak , e. figueroa , b. c. sanders and a. i. lvovsky , _ science _ 322 , 563 * ( 2008 ) * s. rahimi - keshari , a. scherer , a. mann , a. t. rezakhani , a. i. lvovsky , b. c. sanders , _ new j. phys ._ 13 , 013006 * ( 2011 ) * g. m. dariano , l. maccone and p. lo presti _ phys ._ 93 , 250407 * ( 2004 ) * z. hradil , d. mogilevtsev and j. rehacek , _ phys ._ 96 , 230401 * ( 2006 ) * j. s. lundeen , a. feito , h. coldenstrodt - ronge , k. l. pregnell , ch .silberhorn , t. c. ralph , j. eisert , m. b. 
plenio and i. a.walmsley , _ _ nat .phys.__5 , 27 * ( 2009 ) * j. rehacek , d. mogilevtsev and z. hradil , _ phys . rev_ 105 , 010402 * ( 2010 ) * c. portesi , e. taralli , r. rocci , m. rajteri and e. monticone , _ j. low temp ._ 151 , 261 * ( 2008 ) * e. taralli , m. rajteri , e. monticone and c. portesi , _ int . j. quantum . inf ._ 5 , 293 * ( 2007 ) * d. drung , c. assmann , j. beyer , a. kirste , m. peters , f. ruede and th .schurig , _ ieee t. appl. supercon ._ 17 , 699 * ( 2007 ) * g. brida , l. ciavarella , i. p. degiovanni , m. genovese , l. lolli , m. g. mingolla , f. piacentini , m. rajteri , e. taralli and m. g. a. paris , _ new j. phys ._ 14 , 085001 * ( 2012 ) * a. luis and l.l .sanchez - soto , _ phys ._ 83 , 18 * ( 1999 )* m. genovese , g. brida , g. zambra , a. andreoni , m. bondani , m. gramegna , a. rossi and m. g. a. paris , _ laser physics _ 16 , 385 * ( 2006 ) * g. brida , m. genovese , m. gramegna , m. g. a. paris , e. predazzi and e. cagliero , _ open syst . & inf . dyn ._ 13 , 333 * ( 2006 ) * g. brida , m. genovese , m.g.a .paris and f. piacentini , _ opt_ 31 , issue 23 , 3508 * ( 2006 ) * g. brida , m. genovese , m. g. a. paris , f. piacentini , e. predazzi and e. vallauri , _ opt . & spect ._ 103 , 95 * ( 2007 ) * t. moroder , m. curty and n. ltkenhaus , _ new j. physics _ 11 , 045008 * ( 2009 ) * g. brida , m. genovese , a. meda , s. olivares , m. g. a. paris and f. piacentini , _ journ ._ 56 , 196 * ( 2009 ) * a. allevi , a. andreoni , m. bondani , g. brida , m. genovese , m. gramegna , p. traina , s. olivares , m. g. a. paris and g. zambra , _ _ 80 , 022114 * ( 2009 ) * d. mogilevtsev , z. hradil and j. perina , _ quantum ._ 10 , 345 * ( 1998 ) * k. vogel and h. risken , _ phys rev .a _ 40 , 2847 * ( 1989 ) * g. dariano , c. macchiavello and m. g. a. paris , _ phys rev .a _ 50 , 4298 * ( 1994 ) * u. leonhardt , m. munroe , t. kiss , t. richter and m. g. raymer _ opt_ 127 , 144 * ( 1996 ) * a. i. lvovsky , m. g. raymer , _ rev_ 81 , 299 * ( 2009 ) * yu. i. bogdanov , g. brida , i. d. bukeev , m. genovese , k. s. kravtsov , s. p. kulik , e. v. moreva , a. a. soloviev and a. p. shurupov , _ _ 84 , 042108 * ( 2011 ) * m. asorey , p. facchi , g. florio , v.i .manko , g. marmo , s. pascazio and e.c.g .sudarshan , _ phys .a _ 375 , 861 * ( 2011 ) * g. m. dariano , m. g. a. paris , m. f. sacchi , _ _ adv . in im . and el . phys.__128 , 205 * ( 2003 ) * g. m. dariano , m. de laurentis , m. g. a. paris , a. porzio and s. solimeno,_j . opt .b _ 4 , 127 * ( 2002 ) * g. brida , l. ciavarella , i. p. degiovanni , m. genovese , a. migdall , m. g. mingolla , m. g. a. paris , f. piacentini , and s. v. polyakov , _ _ 108 , 253601 * ( 2012 ) *
the purpose of this paper is to review the results recently obtained in the quantum optics labs of the national institute of metrological research (inrim) in the field of single- and few-photon detector calibration, from both the classical and the quantum viewpoint. the first part of the paper presents the calibration of a single-photon detector with absolute methods, while the second part focuses on photon-number-resolving detectors, discussing both the classical and the quantum characterization of such devices.
parameter estimation is a process of assessing unknown parameters with respect to a given limited amount of information .it is a procedure that is widely used in modeling and control of dynamical systems , where the applications range from biology to chemistry , physics and many others fields of science and engineering . in real world applications ,many systems do exhibit partially or completely unknown parameters .knowledge about the time evolution of these parameters becomes the prerequisite to analyze , control , and predict the underlying dynamical behaviors .thus , this topic has drawn great attention in various areas due to its theoretical and practical significance .for example , in the biological networks , it is important to estimate unknown protein - dna interactions in the regulation of various cellular processes or detection of failures / anomalies . in aircraft / spacecraft dynamics ,estimation of the unknown states determines the fine line between stability and instability .another significant area of interest for parameter estimation problems is chaotic systems . in many real - life problems , ranging from information sciences to life sciences , from systems biology to quantum physics , nonlinear systems exhibit the phenomenon of chaos. an important application of chaos control and synchronization is parameter estimation through adaptive control methodologies . in this case, the aim is to estimate the uncertainties as well as minimize the synchronization error .however , such adaptive control methodology is associated with the stability and the synchronization regime of systems .it is also a relatively conservative methodology that is constrained by several conditions such as persistent excitation or linearly independence , to guarantee the convergence . on the other hand ,receding horizon control is a branch of model predictive control methodology that aims to obtain an optimal feedback control law by minimizing the given performance index .the performance index of a receding - horizon control problem has a moving initial time and a moving terminal time , where the time interval of the performance index is finite . since the time interval of the performance index is finite , the optimal feedback law can be determined even for a system that is not stabilizable .the receding horizon optimal control problem can deal with a broader class of control objectives than asymptotic stabilization .the receding horizon control was originally applied to linear systems and then was extended to nonlinear systems . through its functionality ,nonlinear receding horizon control ( nrhc) has made an important impact on industrial control applications and is being increasingly applied in process controls .various advantages are known for nrhc , including the ability to handle time - varying and nonlinear systems , input / output constraints , associated plant uncertainties , and so on . in recent years , many methods have been proposed for parameter estimation in nonlinear systems . 
some of them focused on using adaptive feedback control algorithms to estimate unknown parameters of nonlinear systems .huang studied the adaptive synchronization with application to parameter estimation .yu et al proposed the linear independence conditions to ensure the parameter convergence based on the lasalle invariance principle .however , most of these literatures on adaptive estimation impose an assumption that the parameters to be estimated are constant or slowly time - varying .moreover , the parameter estimation problem could be formed as an optimization problem , which leads to many intelligent optimization schemes : li et al .( ) proposed the chaotic ant swarm algorithm and conducted parameter estimation tests by using the lorenz system as an example . employed the particle swarm optimization method in parameter estimation .lin et al . proposed an oppositional seeker optimization algorithm with application to parameter estimation of chaotic systems .different from the above methods , in this paper , a method of parameter estimation for nonlinear systems is proposed based on real - time nonlinear receding horizon control ( nrhc ) methodology . with this approach, we provide a configuration which is especially applicable to chaotic and time varying systems . here , the estimation procedure is reduced to a family of finite horizon optimization control problems . to avoid high computational complexity ,the stabilized continuation method is employed , which is a non - iterative optimization procedure with moderate data storage capacity . based on this method , the nrhc problemis then solved by the backward sweep algorithm , in real time .the algorithm itself is executable regardless of controllability or stabilizability of the system , which is one of the powerful aspects of the approach .experimental results show that the real - time nrhc is applicable to the chaotic systems with unknown constant parameters as well as time - varying parameters .furthermore , we explore the noise reduction of the proposed method by simulations . in the light of these facts ,the paper is organized as following : in section-[sec : prob_form ] , the problem formulation , based on nrhc , is defined as an estimation routine . in section-[sec : sec : backwards_sweep ] , brief background on previous work of ohtsuka s is provided , and then stability analysis of this approach is discussed in section-[sec : sec : conv_stab_analysis ] .we demonstrate the power of nrhc as an estimation routine through specific applications on a chaotic system ( in this case lorenz oscillator ) with constant ( section-[sec : sec : app2chaotic_sys_const_params ] ) and time varying parameters ( section-[sec : sec : app2chaotic_sys_timevar_params ] ). we also test and demonstrate robustness properties of nrhc algorithm in presence of noise , in section-[sec : sec : app2chaotic_sys_noise ] . at the end , with the discussions and conclusions ( section-[sec : conclusion ] ) , we finalize the paper .to demonstrate the parameter estimation routine of nonlinear systems , suppose that we are given the dynamical system representation as follows : where is the state vector , and are the linear coefficient matrix and nonlinear part of system presented in eq ., respectively . is a known function vector and denotes the unknown parameters , where they can be constant or time - varying . 
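as a concrete instance of this parametrized form, the lorenz system used later in the paper can be written with its unknown parameters entering linearly through the known regressor. the following sketch simply integrates such a drive system; the choice of which parameters are unknown, the split between the linear part, the nonlinear part and the regressor, and the numerical values are illustrative assumptions consistent with the usual lorenz equations, not a statement of the authors' exact matrices.

```python
# minimal sketch: the Lorenz drive system written as f(x) + D(x) * theta,
# with theta = (sigma, rho) treated as the unknown parameters; beta is assumed
# known. The split and the numerical values are illustrative only.
import numpy as np
from scipy.integrate import solve_ivp

beta = 8.0 / 3.0                       # assumed-known parameter
theta_true = np.array([10.0, 28.0])    # (sigma, rho), unknown to the estimator

def f_known(x):
    """Parameter-independent part of the vector field."""
    return np.array([0.0, -x[1] - x[0] * x[2], x[0] * x[1] - beta * x[2]])

def D(x):
    """Known regressor multiplying the unknown parameter vector."""
    return np.array([[x[1] - x[0], 0.0],
                     [0.0,          x[0]],
                     [0.0,          0.0]])

def drive(t, x, theta):
    return f_known(x) + D(x) @ theta

sol = solve_ivp(drive, (0.0, 10.0), [1.0, 1.0, 1.0], args=(theta_true,),
                max_step=0.01)
print("final state of the drive system:", sol.y[:, -1])
```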
to formulate the parameter estimation problem , the system in eq .( 1 ) is considered as a drive / reference system .if we construct a driver - response configuration , the corresponding response system becomes where is the state vector and represents the estimated parameter . here , functions and satisfy the global lipschitz condition , therefore there exist positive constants and such that is satisfied . in this specific formulation , the synchronization error is defined to represent the difference between the drive system and the response system which is modeled as also , the estimation error is denoted by . in order to utilize the real - time nonlinear receding horizon control method as an estimation routine , the following finite horizon cost function ( performance index )is associated with the synchronization and estimation error : {\rm d}\tau,\\ = & \int_t^{t+t}(e^tqe+\bar{\theta}^tr\bar{\theta}){\rm d}\tau . \end{split}\ ] ] here , and are weighting matrices , affiliated with the state and estimation error , respectively . in this specific set - up ,the performance index evaluates the performance from the present time-( ) to the finite future-( ) , where is the terminal time or the horizon .the performance index is minimized for each time starting from .thus , the present receding horizon control problem can be converted to a family of finite horizon optimal control problems on the axis that is parameterized by time .the trajectory starting from is denoted as . since the performance index of receding horizon control is evaluated over a finite horizon , the value of the performance index is finite even if the system is not stabilizable .it is well known from literature that first order necessary conditions of optimality are obtained from the two - point boundary value problem ( tpbvp ) by computing the variations as the following in eqs ., is the hamiltonian defined as .\end{split}\ ] ] in this notation , denotes the partial derivative of with respect to , and so on . according to this unique approach, the estimation error is calculated as = 0\}.\ ] ] in this context , the tpbvp is regarded as a nonlinear equation with respect to the costate at as since the nonlinear equation has to be satisfied at any time , holds along the trajectory of the closed - loop system .the ordinary differential equation of can be solved numerically without applying any iterative optimization methods . however , numerical error in the solution may accumulate through the integration process in practice , and numerical stabilization techniques are required to correct the error . therefore, the stabilized continuation method is introduced in this paper as follows : where denotes any stable matrix and will enforce the exponential convergence of the solution .the horizon is defined as a smooth function of time such that and as , where is the desired terminal time . in order to compute the estimation error ,first , the differential equation of is integrated in real time .the partial differentiation of eqs .( with respect to time and ) converts the problem in hand into the following linear differential equation : where , , . since the reference trajectory , they are canceled in eq . and data storage is reduced .the derivative of the nonlinear function with respect to time is rewritten by to reduce the computational cost , the backward - sweep algorithm is employed at this point and the relationship between the costate and other variables is expressed as : where in eqs . 
, due to the terminal constraint on -axis , the following conditions hold thus , the differential equation of is obtained in real time as follows : at each time , the euler - lagrange equations eqs . are integrated forward along the axis .are integrated backward with terminal conditions expressed in eqs .. then the differential equation of is integrated for one step along the axis so as to minimize the estimation error from eq .. the estimated parameters are derived from the difference between the true values and the estimation errors .if the matrix is nonsingular , the algorithm is executable regardless of controllability or stabilizability or the system . in this section, the stability of the closed - loop system by using the nrhc strategy is briefly analyzed .the candidate lyapunov function is constructed in the form of here , clearly , and for all , thus , is a lyapunov function .the time derivative of along the trajectory is obtained by \\ & \le e^t[ae+b\beta_1e+(d(y)\hat{\theta}-d(x)\hat{\theta})-d(y)d^t(y)\lambda r^{-1}]\\ & \le e^t[ae+b\beta_1e+\beta_2e\hat{\theta}-d(y)d^t(y)w(y , e)r^{-1}]\\ & = e^tpe . \end{split}\ ] ] from the tpbvp and eq ., we know that is the costate and can be described as a function of and , here denoted by . by adjusting the stable matrix and the function of horizon , we aim to design a reasonable function to make smaller than zero note that if and only if . from the barbalat s lemma , we can attain it is clear that as .thus , the synchronization error is asymptotically stable .although the back - ward sweep algorithm is executable whenever the system is stable or not , with some choice of suitable stable matrix and horizon , we can also ensure the stability of the closed - loop nonlinear system by nonlinear receding horizon control .in this section , we use a classical example from chaotic systems , namely the lorenz oscillator , to verify the effectiveness of the proposed method not only on time invariant parameters , but also on systems with time - varying parameters .we first consider the parameter estimation problem of lorenz chaotic system with constant parameters . for this example , the lorenz system is given by where, the performance index is chosen as follows : ^tq[y(\tau)-x(\tau)]+[\theta(\tau)-\hat{\theta}(\tau)]^tr[\theta(\tau)-\hat{\theta}(\tau)]\}{\rm d}\tau.\ ] ] in this example , the weighting matrix .the horizon in the performance index is given by where and .it is clear that satisfies and converges to the desired terminal time as time increases .the stable matrix is chosen as .the initial states of the system are given by the simulation is implemented in matlab .the time step on the axis is and the time step on the axis is .fig.[fig1 ] shows the trajectories of drive - response systems in eqs . with initial conditions in eqs.. the estimated parameters and are presented in fig.[fig2 ] which clearly show the estimated parameters converge to their true values by using the nrhc method . in the following , we extend the nrhc estimation methodology to the case of time varying parameters .we consider the same lorenz system , which was given in section-[sec : sec : app2chaotic_sys_const_params ] , with time varying parameters and .the initial states are given by fig.[fig3 ] shows the trajectories of systems given in eqs . 
with time - varying parameters .the estimated parameters and are shown in fig.[fig4 ] .it is clear from fig.[fig3 ] and fig.[fig4 ] that in case of time varying characteristics nrhc still is able to perform as desired , and converges to the true values .noise , or generally speaking external disturbances , usually have significant effects on the performance and the outcomes of parameter estimation routine .such external effects not only causes a drift in estimated parameters around the nominal value , but also results in potentially unstable systems . in the following ,we investigate the effect of the noise in aforementioned dynamics .we first consider the case where the noise propagates in the drive system with constant parameters which could be expressed as where represents the band - limited white noise .the simulation results are depicted in figs.[fig5],[fig6],[fig7 ] . in this case, the estimated constant parameters precisely match their original values , which demonstrates the robustness characteristic of proposed , nrhc based , parameter estimation routine . in the time invariant systems.,width=415 ] in the time - varying systems.,width=453 ] next , we propagate the noise content in the drive system with time - varying parameters which is expressed as where again is the band - limited white noise .the simulation results are illustrated in figs.[fig8],[fig9],[fig10 ] . in this case, the estimated time - varying parameters also demonstrate a good match with their original values , in presence of noise .in this paper , a novel method based on real time nonlinear receding horizon control is proposed for estimating unknown parameters of general nonlinear and chaotic systems . in this specificset - up , the estimation problem is reduced to a form of solving the nonlinear receding horizon optimization problem as a parameter optimization method .based on the stabilized continuation method , the back - ward sweep algorithm is introduced to integrate the costate in real time and to minimize the estimation error .the algorithm does not require any stability assumption of the system and also can guarantee the stability with some suitable choice of stable matrix and horizon length .the method is applicable for both time invariant and time varying dynamics with noise , which demonstrates the power of the methodology .pecora , t.l .carroll , `` synchronization in chaotic system , '' phys .64 , 821 - 824 ( 1990 ) .elson , h.i .selverston , r. huerta , et al ., `` synchronous behavior of two coupled biological neurons , '' phys .lett 81 , 5692 ( 1988 ) .d. huang , `` adaptive - feedback control algorithm , '' phys .e 73 , 066204 - 066211 ( 2006 ) .z. sun , g. si , f. min , et al ., `` adaptive modified function projective synchronization and parameter identification of uncertain hyperchaotic ( chaotic ) systems with identical or non - identical structures , '' nonlinear dyn .68 , 471 - 486 ( 2012 ) .
in this paper, based on the real-time nonlinear receding horizon control methodology, a novel approach is developed for parameter estimation of time-invariant and time-varying nonlinear dynamical systems in chaotic environments. here, the parameter estimation problem is converted into a family of finite-horizon optimal control problems. the corresponding receding horizon control problem is then solved numerically, in real time, without recourse to any iterative approximation methods, by introducing the stabilized continuation method and the backward sweep algorithm. the significance of this work lies in its real-time nature and its strong results on nonlinear chaotic systems with time-varying parameters. the effectiveness of the proposed method is demonstrated on two chaotic systems, with time-invariant and time-varying parameters. finally, the robustness of the proposed algorithm against bounded noise is investigated.
recently , interest in the statistical and dynamical features of human social behavior has been growing , enabled by the development of new devices that allow tracking of social data in real time , with increasing precision and duration .a remarkable recent finding from the analysis of spatiotemporal data on cell - phone locations is that human mobility patterns are highly predictable , a finding that is in contrast to the traditional view .for instance , in epidemic models that take the mobility of subjects into account , subjects are usually assumed to perform a conventional random walk from one location to another .however , actual traveling patterns of humans often deviate from such random walk models , and the displacement distribution follows a power law .furthermore , the statistics of the next location of the individual is affected not only by the current location , but also by the history of the traveling pattern , resulting in approximately 90% predictability of the mobility patterns . in this study, we address a similar predictability question for a different component of human social behavior : conversation events .conversation events mediate the spreading and routing of diverse contents such as new ideas , opinions , and infectious diseases in social networks . in models describing these phenomena , it is a norm that each individual possesses a dynamically changing state ( _ e.g. _ , opinion a or opinion b in opinion dynamics , and susceptible or infected state in epidemic dynamics ) . the law of transition from one state to another is usually assumed to be markovian , _i.e. _ , independent of the history of the process .the markovian property , which is a type of unpredictability , is an assumption for simulating such dynamics based on a static social network .however , the plausibility of this assumption is unclear .imagine the office that you share with other colleagues in your company .when you have a question about a project , you may talk to your boss .after this conversation event , you may tend to talk to a particular individual to communicate the instruction of the boss . during lunchtime , you may chat with your close colleagues in a particular order that you do not perceive .how predictable is your choice of your next conversation partner given the current partner ?we examine the predictability of conversation events using two sets of longitudinal data collected from company offices in japan .we use the information about the timing and duration of conversations between each pair of individuals , but do not use _ a priori _ knowledge about status or other social attributes of individuals .our data are unique in that they are collected from a relatively high number of individuals ( _ i.e. _ , approximately 200 individuals ) over a long recording period ( _ i.e. _ , approximately three months ) .we examine the sequence of conversation events for each individual .we find that a conversation event has notable deterministic components .in other words , the uncertainty about the next partner that you talk with decreases by on average , given the identity of the partner you are currently talking with ( see sec . [sec : predictability of partner sequences ] ) . 
it should be noted that our approach is related to , but different from , the studies of power - law interval distributions in conversation events .the interval between successive conversation events for an individual or a given pair of individuals often follows a power law .modeling studies have revealed implications of these empirical results in contagions and opinion formation .in contrast to conventional models in which the poisson interval distribution is assumed , these results indicate that the next conversation time given the previous one is relatively predictable in that a conversation event in the recent past is a precursor to a burst of events in the near future .we argue that the bursty nature of the point process largely contributes to the predictability of conversation events .we also show that the degree of predictability depends on individuals .individuals located inside a network community , _i.e. _ , a dense subnetwork loosely connected to other parts of the entire network , quantified in this study via strong links and the clustering coefficient , behave relatively randomly .on the other hand , individuals that connect different communities by weak links tend to have a high predictability .we analyze two sets of face - to - face interaction logs obtained from different company offices using the business microscope system developed by hitachi , ltd ., japan .the data were collected by world signal center , hitachi , ltd .data set consists of recordings from individuals for 73 days .data set consists of recordings from individuals for 120 days .each subject wears a name tag strapped around the neck and placed at the chest , and each name tag contains an infrared module .the infrared modules can communicate with each other if they are less than 3 meters apart .an infrared module only senses the modules situated within a circular sector in front of the name tag , and the system detects conversation events only when two individuals are facing each other .communication between modules includes exchanging the owners ids every 10 sec .we regard two individuals to be involved in a conversation event if their infrared modules communicate with each other at least once in 1 min . in other words ,the time resolution of the system is equal to one minute .the list of conversation partners and time stamps is stored in the name tag of each individual and sent to the central database on a daily basis .the data transfer occurs when the individual leaves work and puts the name tag on a gateway device connected to the individual s computer .each data set contains a list of conversation events , as shown in fig .[ fig : sequence - generation ] .a conversation event is specified by the ids of the two individuals talking with each other , the date and time at which the dialogue starts , and the duration of the dialogue .we are not concerned with the content of the dialogue .data sets and contain and events , respectively .we investigate the predictability of each individual s conversation patterns .our preliminary data analysis revealed that the timing of conversation events lacks sufficient temporal correlation and is unpredictable .therefore , we neglect the timing of conversation events in the data unless otherwise stated and focus on the partner sequence defined as follows . 
to generate the partner sequence of individual 1 , we first sift out all the conversation events that involve individual 1 from the entire data set ( fig .[ fig : sequence - generation](b ) ) .next , we ignore the time stamp and duration of the conversation events .the remaining data define the partner sequence , _i.e. _ , the chronologically ordered sequence of the ids of the conversation partners for individual 1 ( fig . [fig : sequence - generation](c ) ) . when multiple conversation events involving individual 1 are initiated in the same minute, we determine their order at random . to evaluate the predictability of the partner sequence , we calculate three entropy measures , inspired by those used for the analysis of human mobility patterns .first , we define the random entropy for individual as where represents the number of s partners for the entire recording .if chooses the partner with equal probability from all the s acquaintances in each conversation event , quantifies the degree of randomness .second , we define the uncorrelated entropy as where is the set of s partners containing elements . represents the probability that individual talks with individual in a conversation event for ; the normalization is given by .compared to , accounts for the heterogeneity among .third , we define the conditional entropy as where represents the conditional probability that individual talks with individual immediately after talking with individual . measures the second - order correlation in the partner sequence of . for each individual , is satisfied .we quantify the predictability of the partner sequence by the mutual information as follows : where represents the joint probability that individual talks with individual immediately after talking with individual . for each individual , is satisfied . quantifies the predictability of the partner sequence ; it is equal to the amount of the information about the next partner that is earned by knowing the current partner . when the partner sequence lacks a second - order correlation such that , takes the minimum value . in this case , knowing the current partner does not help predict the next partner at all . when the partner sequence is completely deterministic , _i.e. _ , the next partner is completely predicted from the current partner such that , takes the maximum value .although our primary interest in this study is the temporal properties of partner sequences , we also analyze the conversation networks ( cns ) and constructed by aggregating all the conversation events in and , respectively , over the entire recording . in a cn, the node represents an individual , and the weight of the link , denoted as , represents the number of conversation events between individuals and during the entire recording period . by the definition of the conversation event, holds true ; the cn is an undirected network .the degree of individual is equal to the number of s for which .we found that both cns , and , are composed of a single connected component .the cn is visualized in fig .[ fig : graph_00 ] ; we will analyze the relation between the cns and the predictability in sec .[ sec : variation ] .the clustering coefficient of the unweighted versions of and is equal to and , respectively .the pearson assortativity coefficient of the degree of and is equal to and , respectively .therefore , the cns have typical properties of social networks , _ i.e. 
_ , high clustering and positive assortativity .for the two cns , we measure the distributions of degree , node strength , and link weight .the node strength is the sum of link weights connecting to node , _ i.e. _ , the total number of conversation events for individual , defined as the mean and standard deviation of of and are equal to and ( mean standard deviation ) , respectively . because two individuals are adjacent if there is at least one conversation event for a few months , the mean of both networks is relatively large . of and is equal to and , respectively . of and is equal to and , respectively .the cumulative distribution of the three quantities are shown in fig .[ fig : statdist_cn ] .we examine the predictability of partner sequences using the entropy measures .because the estimation of entropy is notoriously biased when the data size is small , we discard individuals with less than 100 conversation events ( _ i.e. _ , ) .there remain 146 and 210 individuals in data sets and , respectively after the thresholding . because the results for the two datasets are similar , we report the results for in the following .the results for are given in appendix a. the histograms of the three types of entropies for partner sequences are shown in fig .[ fig : hist_h](a ) .for all the individuals , is at least smaller than .this implies that individuals exhibit a preference when selecting partners from their neighbors in the cn .the values of and for each individual are shown in fig .[ fig : hist_h](b ) .the mutual information is positive for all the individuals regardless of the value of .in general , the finite size effect decreases and by different amounts such that the estimated is generally inherited with a positive bias . for our data , the positive values of not an artifact caused by the small data size . through a bootstrap test ( see appendix b for details ) , we confirmed that the empirical values of are significantly ( at level ) larger than the values obtained from the bootstrap samples . in short ,the bootstrap samples are randomized partner sequences that destroy temporal correlation in the data but preserve the original and account for the portion of derived from the finite size effect .it should also be noted that we determined the order of partners at random when conversation events with different partners initiate in the same minute .this randomization does not make larger because it conserves and makes larger than the true value .in fact , the pearson correlation coefficient between and the fraction of such overlapping conversation events for individual ( , ) is slightly negative ( _ i.e. _ , ) . in summary, the information about the current conversation partner gives the information about the next partner ; is , on average , smaller than .the predictability present in the data is mainly explained by the bursty activity patterns , _i.e. _ , long - tailed distributions of the interevent intervals , that have been observed for various data .our data also possess this feature ( see appendix c for details ) .because the interevent interval for a given pair of individuals obeys a long - tailed distribution , individual tends to talk with individual again within a short period from their previous conversation . 
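for reference , the three entropies and the mutual information used in this section can be computed directly from a partner sequence . the following python sketch illustrates one way to do so ; the function names , the choice of base-2 logarithms , and the shuffle - based significance check are our own choices rather than code from the original analysis , and the sequence is assumed to contain at least two events .

    import math
    import random
    from collections import Counter, defaultdict

    def entropies(seq):
        # seq: chronologically ordered list of partner ids for one individual.
        # base-2 logarithms are assumed; any fixed base works if used consistently.
        n = len(seq)
        counts = Counter(seq)
        s_rand = math.log2(len(counts))                      # random entropy
        s_unc = -sum((c / n) * math.log2(c / n)              # uncorrelated entropy
                     for c in counts.values())
        transitions = defaultdict(Counter)                   # current partner -> next partner
        for cur, nxt in zip(seq[:-1], seq[1:]):
            transitions[cur][nxt] += 1
        n_pairs = n - 1
        s_cond = 0.0                                         # conditional entropy
        for cur, nxts in transitions.items():
            n_cur = sum(nxts.values())
            h_cur = -sum((c / n_cur) * math.log2(c / n_cur) for c in nxts.values())
            s_cond += (n_cur / n_pairs) * h_cur
        return s_rand, s_unc, s_cond

    def mutual_information(seq):
        # mutual information between the current and the next partner, estimated
        # from the empirical joint distribution of consecutive pairs.
        pairs = list(zip(seq[:-1], seq[1:]))
        n = len(pairs)
        joint = Counter(pairs)
        cur_marg = Counter(a for a, _ in pairs)
        nxt_marg = Counter(b for _, b in pairs)
        mi = 0.0
        for (a, b), c in joint.items():
            mi += (c / n) * math.log2(c * n / (cur_marg[a] * nxt_marg[b]))
        return mi

    def shuffle_test(seq, n_rep=5000, q=0.99):
        # significance check in the spirit of the bootstrap test of appendix b:
        # compare the empirical mutual information with the q-quantile of values
        # obtained from randomly shuffled copies of the same sequence.
        mi_obs = mutual_information(seq)
        null = sorted(mutual_information(random.sample(seq, len(seq)))
                      for _ in range(n_rep))
        threshold = null[int(q * n_rep) - 1]
        return mi_obs, threshold, mi_obs > threshold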
in the remainder of this section , we show that the predictability is mainly caused by the bursty activity patterns ( fig .[ fig : iei - setest](a ) ) and that predictability also exists in the data even if we omit the bursts from the data ( fig .[ fig : iei - setest](b ) ) .we examine the contribution of the bursty activity pattern to the predictability by calculating the mutual information of the randomized partner sequence .the randomization of the interevent intervals between each pair of individuals is realized by swapping interevent intervals of the original data within each day in a completely random order ( see appendix d for the precise methods ) .because of the computational cost of the randomization procedure , we obtain the mean and standard deviation of from 100 randomized partner sequences , instead of estimating the confidential interval of .the mean accounts for of the original on average ( fig .[ fig : iei - setest](a ) ) . because the randomization procedure preserves the interevent interval distribution , fig .[ fig : iei - setest](a ) suggests that a large is mainly attributed to the bursty activity patterns .it should be noted that is large partly because the randomizing procedure conserves the timings of the first and last conversation events of each pair on any day .therefore , we may be overestimating the contribution of burstiness to .the predictability is not solely determined by the bursty activity patterns . to clarify this point, we calculate the mutual information of the modified partner sequence generated by merging the consecutive conversation events with the same partner in the original partner sequence into one event .this merging procedure allows us to eliminate the contribution of the bursty activity pattern to the predictability .for example , if individual talks with individual 3 times without being interrupted by other partners , we merge the three conversation events into one .the values of are shown in fig .[ fig : iei - setest](b ) . to confirm that the positive values of are not an artifact caused by the small data size , we carry out a bootstrap test for similar to that for . by definition ,no partner i d appears successively in the merged partner sequence .therefore , we generate the bootstrap sample of the merged partner sequence by sampling from the merged sequence with replacement under the condition that the same partner is not consecutively chosen ( see appendix d for details ). is significantly larger than the values obtained from the bootstrap samples .therefore , the original partner sequence possesses some predictability even after removing bursts originating from the bursty nature .the predictability , quantified by , depends on individuals . in this section, we investigate the relationship between the predictability of individuals and the properties of nodes in the cn .the results shown in this section are summarized as follows .first , is negatively correlated with node strength and with mean node weight defined as ( fig .[ fig : corr_mi ] ) .second , the cn possesses the `` strength of weak ties '' structure ( fig .[ fig : overlap_mi.cc](a ) ) .third , the individuals bridging different communities with weak links tend to have large , and those concealed in a single community and surrounded by strong links tend to have small ( fig . 
[ fig : overlap_mi.cc](b ) ) .one may speculate that is strongly affected by the node degree because and and comprise many terms if is large .however , and are uncorrelated , as shown in fig .[ fig : corr_mi](a ) .we found that is negatively correlated with ( fig .[ fig : corr_mi](b ) ) and with ( fig .[ fig : corr_mi](c ) ) . using the bootstrap test, we verified that the negative correlation shown in fig .[ fig : corr_mi](b ) and [ fig : corr_mi](c ) is not because of the finite sampling size ( see appendix b for details ) . the correlation shown in fig .[ fig : corr_mi ] and the following results do not qualitatively change if we use the normalized mutual information ( see appendix e ) .we also verified that alternatively defining the link weight by the total duration of the conversation events for each pair , instead of the total number of the conversation events , does not qualitatively change the results described in this section ( see appendix f for details ) . for a fixed , both and decrease with the number of weak links ( _ i.e. _ , the links with small weight ) connected to individual .this fact leads us to hypothesize that individuals surrounded by weak links select partners in a relatively deterministic order . according to granovetter s theory of the strength of weak ties ,weak links tend to interconnect different communities in a social network and bring valuable external information to both end nodes , while strong links tend to be intracommunity links .therefore , the individuals bridging different communities with weak links may have large values of .we first verify the strength of weak ties hypothesis in the cn .the network visualized in fig .[ fig : graph_00 ] appears to be consistent with the hypothesis ; weak links tend to connect communities composed of strong links .to quantify the extent to which a link is engaged in intracommunity connection , we measure the relative neighborhood overlap of a link , defined as where denotes the number of elements in the set .when , individuals and do not have a common neighbor and the link is considered to connect different communities .when , individuals and share all of the neighbors and the link is confined in a community .the strength of weak ties hypothesis suggests that is positively correlated with . in fig .[ fig : overlap_mi.cc](a ) , averaged over the links with weights smaller than , denoted as , is plotted against the fraction of links with weights smaller than , denoted as . because monotonically increases with , the cn possesses the strength of weak ties property , as in the case of mobile communication networks . because weak links are associated with a large ( fig .[ fig : corr_mi](c ) ) and intercommunity links ( fig .[ fig : overlap_mi.cc](a ) ) , individuals with a large are expected to bridge different communities and those with a small are expected to be shielded inside a community .this concept is consistent with the visual inspection of fig .[ fig : graph_00 ] . to verify this point , we show that is negatively correlated with a calibrated clustering coefficient in the following ( fig . [fig : overlap_mi.cc](b ) ) .note that , when the clustering coefficient is large , the individual tends to be inside a community quantified by the abundance of triangles .when it is small , the individual tends to connect different communities .the clustering coefficient for each node is defined by ( number of triangles including individual i)/ $ ] . in fig . 
[fig : overlap_mi.cc](b ) , the pearson correlation coefficient between and is plotted against , where is the local clustering coefficient for the subgraph of the cn generated by eliminating the links with weights smaller than .we opted to use instead of the weighted clustering coefficient defined for weighted networks because the latter quantity is , by definition , strongly correlated with and ; we already discussed the negative correlation between and and between and in fig .[ fig : corr_mi](b ) and [ fig : corr_mi](c ) , respectively . for , and almost uncorrelated .this is because almost all the individuals have a large regardless of in the original cn ( refer to fig . [fig : graph_00 ] for a visual confirmation of this statement ) . for , and negatively correlated ( squares in fig .[ fig : overlap_mi.cc](b ) ). therefore , an individual with a large tends to bridge different communities as quantified by the clustering coefficient .an individual with a small tends to be confined within communities .the circles in fig .[ fig : overlap_mi.cc](b ) represent the partial correlation coefficient between and , with and fixed . here , and are , respectively , the degree and strength of individual , calculated after eliminating the links with weights smaller than . because the pearson and partial correlation coefficients behave similarly , the negative correlation between and is not ascribed to the negative correlation between and ( fig . [ fig : corr_mi](b ) ) or between and ( fig .[ fig : corr_mi](c ) ) . in closing this section, we stress the robustness of our results against observation failures .the wearable tag used in our measurement fails to detect a conversation event if the tag is sealed behind obstacles such as a desk or partition .for example , suppose that two individuals chat for five minutes and either of their tags is just under a desk and is undetected in the third minute .then , the single conversation event is split into two spurious conversation events , each lasting for two minutes . to examine the robustness of our results against such observation failures , we repeat the same set of analyses after filling short intervals between successive conversations between the same pair of individuals .if individual has two successive conversation events with individual and the interval between the two events is smaller than or equal to minutes , we merge the two events into one .the original partner sequence corresponds to .the number of conversation events decreases with .the interpolation reduces , , and and conserves , , and .we confirmed that our findings are reproduced when we interpolate the original data with and ( see appendix g for details ) .we have shown that sequences of conversation events have deterministic components .the entropy in the distribution of the conversation partners of an individual decreases by , on average , for data set and for data set , if we know the current partner .much of the predictability of conversation events results from the bursty activity patterns . in general , daily and weekly rhythms of human activity can cause bursty activity patterns . during the night and weekend , the individuals are out of the office .therefore , interevent intervals are usually longer than those within working hours .nevertheless , we consider that the effects of such long interevent intervals on the predictability of conversation partners are small .this is because the fraction of long interevent intervals , _i.e. 
_ , those over five hours , for example , is relatively small , occupying 4.31% in and 2.95% in . in addition , there is no particular reason to believe that the last conversation partner in a day and the first partner in the next day are specifically correlated . in this study , we did not correct for the effect of the night and weekend . the degree of predictability depends on individuals . in particular , we have shown that individuals connecting different communities in conversation networks behave relatively deterministically . we quantified the degree to which an individual is confined in communities by the clustering coefficient . in the context of an overlapping community structure , individuals connect different communities when they belong to multiple overlapping communities . such individuals tend to be surrounded by many triangles if we define the community by 3-cliques ( _ i.e. _ , triangles ) . this apparently contradicts our results . this contradiction comes from the difference in what we mean by connecting different communities . we regard individuals as bridging different communities when they are not strongly bound to any community and they have links to different communities . in this sense , nodes with small clustering coefficient values connect different communities in networks with hierarchical structure . in general , links bridging different communities have large betweenness centrality values . the clustering coefficient of a node tends to decrease with the betweenness centrality . this lends more support to our view that individuals with small clustering coefficient values tend to connect different network communities . it should be noted that the strength of weak ties property of the cn and the relationship between and the individual 's position in the cn are preserved if we define the link weight by the total duration of the conversation events for each pair ( see appendix f ) . we do not have access to the contents of the dialogs for ethical reasons . therefore , our understanding of the reason for the correlation between the individual 's position and predictability is limited . nevertheless , individuals that have many weak links and connect distinct groups may mediate the information flows necessary to coordinate tasks involving these groups ( _ e.g. _ , project groups in a company ) . such individuals may control the information flow between the groups in a rigid manner to yield a large . in contrast , individuals with few weak links may enjoy casual ( and perhaps creative ) conversations within their own groups and choose their partners in a random manner . such individuals may tend to have a small . it should be noted that our data were obtained in company offices . roles or formal positions of individuals in the company may affect and the local abundance of weak links surrounding the individuals . song _ et al . _ discovered a remarkable predictability in the mobility patterns of humans . in terms of the analysis tools , our methods are similar to theirs . we have applied the entropy measures and the concept of predictability to different types of data sets . in our data , the physical location of individuals is irrelevant ; individuals work in offices in the companies . it should be noted that although we have not implemented a prediction algorithm , the predictability of the data is implied by the large mutual information that we observed . this logic parallels the argument made for human mobility patterns . t.t .
acknowledges the support provided through grant - in - aid for scientific research ( no .10j06281 ) from jsps , japan .m.n . acknowledges the support provided through grant - in - aid for scientific research ( no .10j08999 ) from jsps , japan . n.m . acknowledges the support provided through grants - in - aid for scientific research ( no .20760258 and no .23681033 ) from mext , japan .we obtained qualitatively the same results for as those for .the results for are shown in figs .[ fig : hist_h_d2 ] , [ fig : iei - setest_d2 ] , [ fig : corr_mi_d2 ] , and [ fig : overlap_mi.cc_d2 ] , which correspond to figs .[ fig : hist_h ] , [ fig : iei - setest ] , [ fig : corr_mi ] , and [ fig : overlap_mi.cc ] in the main text , respectively . to confirm that the large value of the empirically obtained is not because of the small data size, we carry out a bootstrap test as follows .first , we make a bootstrap sample of a partner sequence with length by resampling partners ids from the empirical partner sequence of individual without replacement ( _ i.e. _ , shuffling ) .then , we use eq . to calculate the mutual information for the bootstrap sample . by resampling 5,000 bootstrap partner sequences, we construct the distribution of , which we denote as . on the basis of , we carry out a hypothesis test for .the null hypothesis of the test is that is positive just because of the small data size .the alternative hypothesis is that is larger than the value expected for unstructured data of a small size .we set the significance level of the test to .consequently , the critical region of the null hypothesis is the half - open interval above the 99 percentile point of . in fig .[ fig : boottestmi ] , the results of the bootstrap test are summarized .apparently , is above the 99 percentile point ( _ i.e. _ , the upper end of the each error bar ) .in fact , for all the individuals in and , except individual 14 in and 149 in , the null hypothesis is rejected with a significance level .human activity patterns are characterized by long - tailed distributions of the interevent intervals , a feature that is shared by our data .we define the interevent interval as the interval between the initiation time of two successive conversation events involving a given individual .the unit of is a minute , corresponding to the time resolution of the recording . as shown in fig .[ fig : taus](a ) , the distribution of , denoted by , for a typical individual in is long - tailed .the tail of the empirical data ( solid line ) is much fatter than that of the exponential distribution whose mean is equal to that of the empirical data ( dashed line ) .the histogram of the coefficient of variation ( cv ) of on the basis of all the individuals in and the same histogram for are shown in fig .[ fig : taus](b ) .the value of cv is equal to the ratio of the standard deviation to the mean and is equal to unity for exponential distribution .figure [ fig : taus](b ) indicates that the cv of is much larger than unity for all the individuals .a possible mechanism governing the predictability of the conversation events is the bursty activity patterns . to examine the effect of the long - tailed behavior of on the predictability ,we carry out a statistical test based on the shuffling of as follows .consider the sequence of conversation events of focal individual with individual . 
if and talk four times in a given day and the interevent intervals are equal to , , and in the chronological order , we randomize their order .for example , the interevent intervals in the shuffled data are ordered as , , and .we carry out the same randomization for each day and each partner .then , we combine the randomized sequences ( _ i.e. _ , point processes ) for different s into the one point process from which we read out the randomized partner sequence for . we define as the mutual information for this randomized partner sequence . in fig . [fig : iei - setest](a ) , the mean and standard deviation of obtained from 100 randomized partner sequences are shown for different individuals in .the empirical values of ( circles ) are significantly larger than for most individuals .however , consistently occupies a large fraction of and increases with .therefore , the burstiness is a major cause of the predictability regardless of the value of .the burstiness is not the only contributor to the predictability . to show this, we examine the reduced partner sequence generated by merging all the consecutive events with the same partner into one event .for example , the original partner sequence yields the merged partner sequence .we calculate the mutual information in the merged partner sequence , denoted by . measures the predictability of conversation events that does not result from the burstiness .we do not directly compare with the original because the merging procedure shortens the length of the partner sequence and the amount of mutual information generally depends on the length of a sequence .instead , we carry out a bootstrap test for . by definition ,the partner changes every time in the merged partner sequence .we obtain bootstrap samples respecting this property as follows .the frequency with which partner appears in the merged partner sequence of individual is denoted by .we select the first partner of , denoted by , randomly according to .the second partner is selected according to , where .we repeat the same procedure until the generated sequence becomes as long as the merged partner sequence .figure [ fig : iei - setest](b ) summarizes the results of the bootstrap test for . is consistently larger than the values expected for the bootstrap samples for all the individuals .therefore , the partner sequence is predictable to some extent even without the effect of the bursty activity patterns . in the field of clusterpartitioning , the normalized mutual information is used to quantify the accuracy of partitioning methods , because the relationship is convenient for comparing different methods .our main results are qualitatively the same if we replace by ( fig .[ fig : normmi ] ) .[ [ f .- alternative - definition - of - the - link - weight - based - on - the - duration - of - conversation ] ] f. alternative definition of the link weight based on the duration of conversation ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ in the main text , we defined the link weight by the total number of conversation events for each pair .an alternative definition is given by the total duration of the conversation events for each pair .this alternative definition changes , , and and conserves , , and . for the cn where the link weight is defined by the total duration , we repeat the same set of analyses as that conducted in sec .[ sec : variation ] . 
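before turning to the duration - based link weights , the two randomization procedures of appendix d described above can be summarized in code . the sketch below is illustrative only : the event representation , function names , and tie handling are our own assumptions , and each merged sequence is assumed to contain at least two distinct partners .

    import random
    from collections import defaultdict

    def shuffle_intervals_one_day(times):
        # times: sorted event times (in minutes) of one pair of individuals on one day.
        # the interevent intervals are permuted at random; the first (and hence also
        # the last) event time of the day is preserved, as described above.
        if len(times) < 3:
            return list(times)
        gaps = [b - a for a, b in zip(times[:-1], times[1:])]
        random.shuffle(gaps)
        out = [times[0]]
        for g in gaps:
            out.append(out[-1] + g)
        return out

    def randomized_partner_sequence(events):
        # events: list of (time, day, partner) tuples for one focal individual
        # (our own representation of the data, for illustration only).
        per_pair_day = defaultdict(list)
        for t, day, partner in events:
            per_pair_day[(day, partner)].append(t)
        randomized = []
        for (day, partner), times in per_pair_day.items():
            for t in shuffle_intervals_one_day(sorted(times)):
                randomized.append((day, t, partner))
        randomized.sort()                    # recombine the per-partner point processes
        return [partner for _, _, partner in randomized]

    def merge_consecutive(seq):
        # merge runs of identical partners into one event (the "merged" sequence).
        merged = [seq[0]]
        for p in seq[1:]:
            if p != merged[-1]:
                merged.append(p)
        return merged

    def bootstrap_merged(merged_seq):
        # bootstrap sample of a merged sequence: draw partners according to their
        # empirical frequencies, rejecting a draw that repeats the previous partner;
        # the rejection step realizes the conditional probability described above.
        out = [random.choice(merged_seq)]
        while len(out) < len(merged_seq):
            cand = random.choice(merged_seq)
            if cand != out[-1]:
                out.append(cand)
        return out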
as shown in fig . [ fig : duration_weight ] , the change in the definition of the link weight does not affect our main results . we observed a negative correlation between and ( fig . [ fig : duration_weight](a ) ) , that between and ( fig . [ fig : duration_weight](b ) ) , the `` strength of weak ties '' property ( fig . [ fig : duration_weight](c ) ) , and a negative correlation between and ( fig . [ fig : duration_weight](d ) ) . to examine the robustness of our results against observation failures , we analyze the data sets after interpolating short intervals between successive conversations between the same pairs of individuals . suppose that individuals and talk with each other twice and that does not talk with anybody else between the two conversation events with . we merge the two conversation events into one if the difference between the ending time of the first event and the starting time of the second event is less than or equal to minutes . in fig . [ fig : compare_m1 ] , , , , and , which are the quantities calculated for the data obtained with , are compared with , , , and , respectively . as expected , is smaller than , and and are generally larger than and , respectively . as shown in fig . [ fig : confirm_m1 ] , the important properties of the data sets are not changed by the interpolation with . in other words , a negative correlation between and ( fig . [ fig : confirm_m1](a ) ) and that between and ( fig . [ fig : confirm_m1](b ) ) , the strength of weak ties property ( fig . [ fig : confirm_m1](c ) ) , and a negative correlation between and ( fig . [ fig : confirm_m1](d ) ) are observed . the results are qualitatively the same for , as shown in figs . [ fig : compare_m5 ] and [ fig : confirm_m5 ] .
y. wakisaka , k. ara , m. hayakawa , y. horry , n. moriwaki , n. ohkubo , n. sato , s. tsuji , and k. yano , ( carnegie mellon university , pittsburgh , 2009 ) , p. 14 .
k. yano , k. ara , n. moriwaki , and h. kuriyama , 139 - 144 ( 2009 ) .
[ figure captions , condensed from duplicated caption text : fig . [ fig : graph_00 ] visualizes the cn ; for clarity only the nodes with strengths larger than 100 and the links among them are drawn , the darkness of the node color represents the mutual information of the individual , the thickness of a link is proportional to its weight , and links with weights larger than or equal to ( smaller than ) the median value ( _ i.e. _ , 5 ) are drawn in red ( blue ) . fig . [ fig : hist_h ] shows ( a ) the histograms of the three entropies and ( b ) the relationship between the conditional and uncorrelated entropies , with the solid line marking equality . fig . [ fig : iei - setest ] shows ( a ) the shuffling test and ( b ) the merging test ; circles give the empirical mutual information , error bars the statistics of the shuffled or bootstrap samples ( 1 standard deviation around the mean , or the and percentile points ) , and values are plotted in ascending order . fig . [ fig : corr_mi ] plots the mutual information against ( a ) degree , ( b ) node strength , and ( c ) average node weight , with the pearson correlation coefficients indicated . fig . [ fig : overlap_mi.cc ] shows ( a ) the averaged neighborhood overlap as a function of the fraction of links with weights smaller than , and ( b ) the pearson correlation coefficient between the mutual information and the clustering coefficient ( squares ) together with the partial correlation coefficient with degree and strength fixed ( circles ) ; the horizontal line marks zero correlation . fig . [ fig : boottestmi ] summarizes the bootstrap test for both data sets , with circles for the empirical mutual information and error bars for the confidence intervals of the bootstrap samples . fig . [ fig : taus ] shows ( a ) the interevent - interval distribution for a typical individual , with a power - law fit obtained from the maximum likelihood test and an exponential distribution of the same mean for comparison , and ( b ) the distributions of the cv of the interevent intervals in both data sets . figs . [ fig : hist_h_d2 ] , [ fig : iei - setest_d2 ] , [ fig : corr_mi_d2 ] , and [ fig : overlap_mi.cc_d2 ] repeat these plots for the second data set ; fig . [ fig : normmi ] repeats them with the normalized mutual information ; fig . [ fig : duration_weight ] repeats them with link weights defined by the total conversation duration ; figs . [ fig : compare_m1 ] , [ fig : confirm_m1 ] , [ fig : compare_m5 ] , and [ fig : confirm_m5 ] compare node strength , uncorrelated entropy , conditional entropy , and mutual information for the interpolated data with those for the original data and confirm that the main correlations are preserved . ]
recent developments in sensing technologies have enabled us to examine the nature of human social behavior in greater detail . by applying an information theoretic method to the spatiotemporal data of cell - phone locations , [ c. song _ et al . _ science * 327 * , 1018 ( 2010 ) ] found that human mobility patterns are remarkably predictable . inspired by their work , we address a similar predictability question in a different kind of human social activity : conversation events . the predictability in the sequence of one 's conversation partners is defined as the degree to which one 's next conversation partner can be predicted given the current partner . we quantify this predictability by using the mutual information . we examine the predictability of conversation events for each individual using the longitudinal data of face - to - face interactions collected from two company offices in japan . each subject wears a name tag equipped with an infrared sensor node , and conversation events are marked when signals are exchanged between sensor nodes in close proximity . we find that the conversation events are predictable to a certain extent ; knowing the current partner decreases the uncertainty about the next partner by on average . much of the predictability is explained by the long - tailed distributions of interevent intervals . however , some predictability remains in the data even apart from the contribution of these long - tailed intervals . in addition , an individual 's predictability is correlated with the position of the individual in the static social network derived from the data . individuals confined in a community , in the sense of an abundance of surrounding triangles , tend to have low predictability , and those bridging different communities tend to have high predictability .
protein - dna recognition plays an essential role in the regulation of gene expression .although a significant number of structures of dna binding proteins have been solved in complex with their dna binding sites increasing our understanding of recognition principles , most of the questions remain unanswered .several studies showed that protein - dna recognition could not be explained by a simple one - to - one correspondence between amino acids and bases , even if hypothesized hydrogen bonding patterns and definite preferences have been actually found in experimentally solved structures .moreover regulatory proteins are known to recognize specific dna sequences directly through atomic contacts between protein and dna and/or indirectly through water - mediated contacts and conformational changes .the degree of redundancy and flexibility seems to suggest that the recognition mechanism is ambiguous , therefore the prediction of dna target sequences is not straightforward .dna protein interactions can be studied using several different computational methods , which could offer several advantages compared to the current experimental methods , more laborious and slow . in the followingwe will indicate , for simplicity , dna - binding protein target sequences with the more specific term `` transcription factor binding sequences '' , although the first term is more general . + computational tools for the identification of transcription factors ( tf ) binding sequences can be organized in two main approaches : * `` sequence based methods '' in which a central role is played by the statistical properties of the base distribution in the dna regions which are expected to be involved in transcriptional regulation ( see for a general review on the subject ) .* `` structure based tools '' which use the structural information on protein - dna complexes derived from x - ray crystallography and nuclear magnetic resonance .the main focus of this paper is on the second approach , although the best results will likely be obtained by tools able to combine in a clever way these two approaches .0.3 cm * sequence based methods * + this type of algorithms can in turn be divided into two broad groups : \i ) enumerative methods , which explore all possible motifs up to a certain length ( see e.g. ) .\ii ) local search algorithms , including expectation maximization and various flavours of gibbs sampling ( see e.g. ) .it is important to stress that this type of studies can not be based exclusively on the statistical features of the dna regions presumably involved in transcriptional regulation , but must be complemented with independent information about gene regulation . in this respectthree important sources of information may be used : the functional annotations collected in public databases , gene expression data on a global scale , and the so called phylogenetic footprinting. in particular this last approach , thanks to the increasing number of sequenced genomes , has proved to be very effective in these last few years ( see e.g. 
) .the major problem of all these tools is the large number of false positives , above all in the case of higher eukaryotes ( for a thorough analysis of this problem see the interesting assessment of tf binding sites discovery tools reported in ) .it is exactly to cope with this type of problem that it could be important to resort to structure based approaches .0.3 cm * structure based methods + * these methods can be broadly divided into two classes according to a nomenclature adopted in the context of protein structure prediction : + i ) those based on knowledge based potentials ( mostly statistical effective energy functions , seefs ) ; + ii ) those based on physical potentials ( or physical effective energy functions , peefs ) . +seefs are energy functions derived from a dataset of known protein - dna structures .a set of features is selected ( e.g. nucleotide - amino acid contacts , roll angles for dna bases , interatomic distances , etc . ) ; the process often involves parameter choices , like threshold on distances or interval binning .the statistical properties of these features are compared with a - priori expectations and log - odd scores are derived . at the most basic level , structures may be used to define contacts among dna bases and protein amino acids and , for each pair of positions , the occurrences of nucleotides and amino acids contacts are used to derive effective potentials . moreover a statistical potential , taking into account contact geometry and spatial arrangement of contacting residues can be derived .recently interesting developments of this approach have been proposed ( ) .the approach suffers from theoretical and practical problems . from the theoretical point of view potentials of mean forceare not in general additive and the exact modelization of a - priori expectations ( or so - called reference state ) may be difficult for complex systems ( see e.g. ) .the main practical problem is the requirement of a large number of sequences or binding experimental data since the available data may be biased towards specific classes of protein - dna complexes .moreover datasets generally do not contain unfavourable interactions between amino acids and bases since they entail protein - dna complexes that occur naturally .thus the statistical potential may predict correctly the wild type targets as opposed to incorrect ones , but it may not be as good at distinguishing among mutants .+ notwithstanding all caveats usage of seefs are widespread in the field of structural predictions .provided that sufficient data are available these methods are reasonably fast and accurate , as demonstrated for instance in the field of protein structure prediction ( see e.g ) .+ a more radical approach is to estimate the free energy of binding starting directly from the available ( or homology built ) protein - dna complexes using physical effective energy functions ( peefs ) .this approach has been successfully used in many contexts , ranging from estimation of dna- or protein - ligand binding free energy to estimation of protein - dna binding free energy ( see e.g. ) .there are , however , many problems connected with the approach which are mainly due to : + i ) difficulties in estimating entropic effects ; + ii ) difficulties in properly estimating solvation effects ; + no consensus has emerged on the choice of parameters ( e.g. 
inner dielectric constant , surface tension coefficient , forcefield parameters ) and on the protocols that should be applied ; + iii ) difficulties in estimating gas - phase energy with available forcefields which are derived from the analysis of small compounds at equilibrium and do not take into account electrostatic polarization . + in order to get rid as far as possible of all these problems , binding free energies are expressed relative to a reference system and in most computational studies optimal parameters have been chosen for matching experimental data . + as far as protein - dna complexes are concerned attempts to compute binding free energies using physics based approaches have started in the 1990s .the electrostatic component of the binding free energy has been studied according to continuum methods and its dependence on temperature and salt concentration has been computed .integration of electrostatics with other components including dna conformational free energy has been extended from dna - ligand complexes and protein - peptide complexes to protein - dna complexes .recently wojciechowski et al . studied the complex of telomerase end binding protein with single stranded dna optimizing the weights of different contributions in order to reproduce binding data .the availability of the successful analytical generalized born model treatment of electrostatics solvation effects enabled computation of binding energies with hybrid molecular mechanics / generalized born surface accessibility methods by jayaram et al . .the group of kollman developed the molecular mechanics / poisson boltzmann surface accessibility ( mm / pbsa ) methodology and applied it extensively to biomolecular systems ( see for a review of these applications and for important extensions of these ideas ) .+ however , when mm / gbsa or mm / pbsa energy versus time plots are presented for explicit solvent molecular dynamics simulation snapshots , fluctuations in the range of tens to hundreds of kcal / mol are found , thus posing an issue on the reliability of averages . in this respectseefs appear much more robust energy estimation methods .+ in a few very recent reports interesting results have been reported concerning the capability of hybrid methods to predict protein - dna binding sites . 
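to make the hybrid mm / gbsa - type estimate discussed above concrete , the following python fragment sketches a single - structure calculation of the binding energy as the difference between estimates for the complex , the protein , and the dna . the term names and the surface - tension value are illustrative assumptions and do not reproduce the protocol of any specific study .

    def mm_gbsa_like(complex_terms, protein_terms, dna_terms, gamma=0.005):
        # each *_terms argument is a dict holding the gas-phase molecular mechanics
        # terms, the generalized born polar solvation energy, and the solvent
        # accessible surface area (sasa, in angstrom^2); gamma is a surface-tension
        # coefficient in kcal/(mol*angstrom^2), an assumed order-of-magnitude value.
        def g(t):
            return (t["e_internal"] + t["e_vdw"] + t["e_elec"]
                    + t["g_gb"] + gamma * t["sasa"])
        # solute entropy changes are omitted in this sketch.
        return g(complex_terms) - g(protein_terms) - g(dna_terms)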
in this paperwe focus on the application of peefs to a single dna binding protein in complex with many different dna sequences .+ the availability of high resolution x - ray crystal structure and suitable experimental data makes the repressor - operator complex an interesting system for computational analysis of protein - dna interaction .+ the bacteriophage repressor protein is a small , 92 amino acid , protein that binds the dna as a dimer .each monomer binds to an operator half site .the amino - terminal domain of repressor is responsible for dna binding and the carboxy - terminal domain is primarily responsible for dimerization .each monomer contains a typical helix - turn - helix motif found in a variety of dna binding proteins .the free energy of binding of repressor for wild - type operator dna and of all possible single base - pair substitutions within the operator have been experimentally measured using the filter binding assay technique and changes in the free energy of binding caused by the mutations have been determined .+ besides being a perfect playground to test our methods , the so called `` -switch '' in which the repressor is involved is very interesting in itself ( for a review see ) .this `` genetic '' switch is tightly regulated by the repressor and the proteins . in these last yearsthis system , due to its relative simplicity and to the availability of rather precise experimental data attracted a lot of interest and various models ( see for instance and references therein ) have been proposed to describe its behaviour . despite these efforts in all these modelsthere are still a few open problems which need to be understood .in particular it has been recently realized that in order to ensure the remarkable stability of the switch one should require a very high non - specific affinity both for the repressor and for .such a prediction is very difficult to test experimentally but could rather directly be evaluated with the tools which we shall discuss in this paper .in fact one of the main goal of the test which we shall perform on the repressor will be the evaluation of its non - specific binding energy and the comparison with the prediction of the model discussed in . in the present workwe apply different techniques to evaluate the binding affinities by means of computational methods .it is assumed that the relative free energy of binding of a protein to different dna sequences may be expressed as the sum of a molecular mechanical term , that includes the non - bonded electrostatic and van der waals contributions , and a hydration term that can be further split in a polar and a hydrophobic contribution . due to the peculiar nature of hydrogen bondssimilar alternative models are tested where an energy term proportional to the number of hydrogen bonds is added .+ the systems studied here differ only in one or two base - pairs and therefore the inaccuracies implicit in the assumption of rigid docking , of the solvation model , of the treatment of entropy and in lack of a complete conformational search for side chains at protein - dna interface should mostly cancel out in comparison . +the aims of this paper are : + 1 ) to provide an assessment of the accuracy of different methods and protocols by comparison with experimental data ; + 2 ) to provide a reliable estimate of non - specific binding energies ; + 3 ) to propose a protocol for the prediction of dna - binding target sequences which makes no use of sequence information . 
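the additive model assumed above can be written compactly . with our own symbols ( the actual term definitions and fitted coefficients are those reported in tables 1 and 4 ) , a sketch of the assumed form is

\Delta G_{\rm bind} \simeq \alpha\,E_{\rm elec} + \beta\,E_{\rm vdw} + \gamma\,\Delta G^{\rm pol}_{\rm solv} + \delta\,\Delta G^{\rm apol}_{\rm solv} + \epsilon\,n_{\rm hb} + c ,

where the term proportional to the number of hydrogen bonds n_{\rm hb} is present only in the `` ( + hb ) '' variants of the models and the constant c absorbs the common entropic contributions and the binding free energy of the reference complex .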
+ to pursue these objectives we make use of extensive computations and address several specific issues . in particular : + i ) we estimate optimal weights for different contributions to dna - protein binding free energies using different solvation models ; + ii ) for 52 single base - pair mutants we perform 1 ns molecular dynamics ( md ) runs and we assess the effect of md on the computed binding energies ; + iii ) we compute mm / gbsa binding energies for one thousand complexes where the bases of the double stranded dna are substituted according to randomly generated dna sequences in order to estimate non - specific binding free energy ; + iv ) we scan the entire bacteriophage genome with the scoring profiles obtained from free energy computations .one of the profiles is obtained making use only of the structural data available for a single molecular complex , with no sequence information .+ the statistical analysis of the results show that computational methods may offer a predictive tool truly complementary to sequence - based identification of dna - binding protein target sequences .this is particularly important in view of the emergence of consensus protocols where the independence of the different methods is a prerequisite .binding free energy changes between the repressor dimer and the dna operator mutants have been calculated using different methodologies , as described in the _ methods _ section .we calculated the binding free energy between the repressor dimer and the dna operators , after having energy - minimized every complex using a distance dependent dielectric constant ( 1r , 2r , 4r , 8r , respectively , in order to match subsequent energy evaluation ) .the interaction energy between the protein and the dna , , has been evaluated using four values for ( 1r , 2r , 4r , 8r ) then the solvation term has been determined according to the model of oobatake et al. using eq .the best scaling factors have been determined ( together with the standard deviation computed according to eq .[ sigma2 ] ) fitting the set of experimentally measured protein - dna binding affinities and are reported in table 1 .the addition of a specific hydrogen bond term reduces the coefficients of the electrostatic term .the rmsd and the correlation coefficient have been computed and a leave - one - out scheme has been adopted , in order to verify the performance of the model ( table 2 ) . the same analysis has been performed for 5000 replicas of the dataset with one third of the set left out and used for cross - validation .the average rmsd and correlation are essentially the same reported for the leave - one - out scheme reported in table 2 . from the same analysis variances of the coefficients have been estimated with essentially the same results as those reported in table 1 .+ the best correlation coefficient ( = 0.703 for mm / dddc - oons(+hb ) model ) has been obtained for , although values of and gave very similar results for both mm / dddc - oons and mm / dddc - oons(+hb ) models . 
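the fitting and validation scheme just described ( scaling factors obtained from a least - squares fit to the experimental binding free energy changes , assessed with a leave - one - out scheme ) can be sketched as follows ; the variable names are our own , and the matrix x stands for whichever set of energy terms a given model uses .

    import numpy as np

    def fit_scaling_factors(X, y):
        # least-squares fit of y ~ X @ w + c; rows of X are complexes, columns are
        # energy terms (e.g. electrostatic, vdw, solvation, number of h-bonds),
        # y holds the experimental binding free energy changes.
        A = np.hstack([X, np.ones((X.shape[0], 1))])     # append the constant term
        w, *_ = np.linalg.lstsq(A, y, rcond=None)
        return w

    def leave_one_out(X, y):
        # leave-one-out cross-validation: refit without each complex, predict it,
        # and report the rmsd and linear correlation of the predictions.
        n = len(y)
        preds = np.empty(n)
        for i in range(n):
            mask = np.arange(n) != i
            w = fit_scaling_factors(X[mask], y[mask])
            preds[i] = np.append(X[i], 1.0) @ w
        rmsd = np.sqrt(np.mean((preds - y) ** 2))
        corr = np.corrcoef(preds, y)[0, 1]
        return preds, rmsd, corr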
except for the mm / dddc - oons(+hb ) model with , the f - statistic shows that the model is significant ( p 0.001 ) . the dielectric constant , which gives the worst results , tends in many cases to overestimate binding free energy changes lower than 1.0 kcal / mol whereas binding free energy changes greater than 2 kcal / mol are underestimated . a similar behaviour has been observed for , even if these models are able to better reproduce binding free energy changes ; in particular , improvements have been obtained for values lower than 1.0 kcal / mol ( figure 1 ) . the correlation coefficients between calculated and experimental values are 0.543 , 0.667 , 0.703 and 0.701 for , respectively . the analysis of the best scaling coefficients is not straightforward because there is a strong correlation between the energy terms . for instance , for all models the electrostatic term is strongly anticorrelated with the oons solvation term . + moreover the estimated variance of the coefficients is often very large . notwithstanding these difficulties it is worth noting that some terms appear to be particularly important . for instance each protein - dna hydrogen bond ( when explicitly included in the model ) appears to contribute -0.15 to -0.27 kcal / mol , depending on the electrostatic model assumed .+ as expected the electrostatic term is reduced when hydrogen bonds are taken into account separately . for the best scaling coefficient changes from 0.154 to 0.182 upon removal of the term proportional to the number of hydrogen bonds .+ the correlation between the different contributions is reflected in the changes , with changing dielectric model , of the oons term scaling factor , which is always strongly reduced , by the scaling factor ranging from 0.075 to -0.066 . finally the constant term which takes into account common entropic terms ( which can be estimated to be in the range 20 to 40 kcal / mol ) and the free energy of binding of the reference complex ( which implies the addition of 11.3 kcal / mol ) , expected to be in the range of 30 to 50 kcal / mol , is slightly larger than expected .+ the oons solvation term accounts for both apolar and electrostatic solvation terms which should be already taken into account , at least partly , in the distance dependent dielectric constant . the same calculations described above have been performed using a similar approach in which the solvation term of the binding free energy is taken to be proportional to the polar / apolar accessible surface area of the molecule ( see eq . [ eq6 ] ) . the best scaling factors have been determined by fitting the set of experimentally measured protein - dna binding affinities ( table 1 ) . the quality of the computed binding free energies has been assessed by evaluating the linear correlation coefficient and the root mean square deviation ( rmsd ) between calculated and experimental values . in order to verify the performance of the model , a leave - one - out scheme has been adopted ( table 2 ) . the f - statistic shows that the model is significant ( p 0.001 ) . + all the values of the distance dependent dielectric constant which have been tested gave quite high and similar linear correlation coefficients . the highest correlation value ( = 0.745 ) was obtained for the mm / gbsa(+hb ) model and the lowest ones for , similar to the mm / dddc - oons model .
generally , binding free energy changes lower than 1.0 kcal / mol are overestimated whereas binding free energy changes greater than 2 kcal / mol are underestimated in all the cases . more accurate predictions have been obtained for , in particular for values lower than 1.0 kcal / mol ( figure 2 ) . the correlation coefficients between calculated and experimental values are 0.684 , 0.745 , 0.728 and 0.739 for , respectively . the optimal scaling coefficients are in the expected range ( table 1 ) , in particular for the constant term is in the range 10 - 20 kcal / mol ; moreover the coefficients and have the right order of magnitude of typically used surface tension coefficients for the water biomolecular interface , even if the sign is incorrect . it should be noted however that there is a strong correlation ( ranging in this case from 0.2 to 0.6 ) between the coefficients of most terms and the coefficient of the constant term .+ also for the present model the addition of an explicit hydrogen bond term reduces the coefficient of the electrostatic term as could be expected .+ these results support the conclusion that , in general , there is no advantage in using the detailed solvation models compared to the simpler polar / apolar model , as far as the binding free energy is concerned . based on the range of the scaling coefficients the two models appear to be of similar quality . scaled free energy components for the mm / dddc - hp(+hb ) model are reported in table 3 . in this approach all structures have been energy - minimized using the generalized born solvent model , then the binding free energy for every molecule has been calculated according to the mm / gbsa model using eq . [ eq7 ] . as in the previous cases , we determined the best scaling factors ( and standard deviations according to eq . [ sigma2 ] ) by fitting the set of experimentally measured protein - dna binding affinities ( table 4 ) , then we assessed the quality of predictions by evaluating the linear correlation coefficient and the root mean square deviation ( rmsd ) between calculated and experimental values . finally we verified the performance of the model , using the leave - one - out scheme ( table 2 ) . the same analysis has been performed for 5000 replicas of the dataset with one third of the set left out and used for cross - validation . the average rmsd and correlation are essentially the same as those reported for the leave - one - out scheme in table 2 . the standard deviations of the coefficients are essentially the same as reported in table 4 . our calculation shows that the mm / gbsa(+hb ) model gives the best performance ( = 0.746 ) , although the linear correlation coefficient between calculated and experimental values differs slightly from the best values obtained from the other models . the f - statistic shows that the model is significant ( p 0.001 ) .+ computed values versus experimental data are reported in figure 3 . as far as the scaling coefficients are concerned ( see table 4 ) , it is worth noting that addition of an explicit hydrogen bond term has a dramatic effect on the coefficients of the van der waals and electrostatic terms , as could be expected , because the latter terms already take into account hydrogen bond energetics .
for the mm / gbsa model ( with no explicit term for hydrogen bonds ) the coefficients of the electrostatic and gb solvation terms are 0.16 and 0.14 which correspond to a dielectric constant of . surface tension coefficients and ( -0.010 and -0.029 respectively ) have the same order of magnitude as the commonly used surface tension coefficient ( ca . 0.02 kcal / mol ) , but opposite sign . however the terms proportional to the solvent accessible surface area are strongly correlated to each other and to the constant term .+ the constant term is -11.7 kcal / mol , lower than expected , probably as a consequence of the correlation of this term with the polar and hydrophobic surface area terms ( the linear correlations of the coefficients are 0.51 and 0.75 , respectively ) . the standard deviation of this term is however very large ( 28.0 kcal / mol ) .+ scaled free energy components for the mm / gbsa(+hb ) model are reported in table 5 . the analysis of table 5 shows that the most important feature for computing the binding free energy is the number of intermolecular hydrogen bonds . the correlation of the associated energy term with the experimental free energy of binding is 0.58 . other terms are strongly correlated among each other and therefore it is difficult to single out specific contributions . the correlations between different energetic terms range from -0.99 , for gb solvation energy and coulombic energy , to 0.44 , for gb solvation energy and the polar area burial energy term .+ 18 single base - pair mutants exhibit large ( greater than 2.0 kcal / mol ) unfavourable changes in the free energy of binding . loss of hydrogen bonds contributes 1 to 1.5 kcal / mol for mutants 14cg , 8at , 8gc , 9at , 18at , 12ta , 12gc , while for other mutants the most important unfavourable contributions come mostly from coulombic and van der waals terms . it should be noted , however , that solvation terms are correlated with coulombic and van der waals terms . + this analysis is in general in line with the detailed analysis reported by oobatake et al . , although the exact values of energy contributions differ . the procedure used for computing binding energies may suffer from incomplete relaxation and incomplete conformational sampling . an approach that has been used in the past for sampling more conformations and reducing the effect of fluctuations is to analyse snapshots from molecular dynamics runs . in many studies no scaling factor was applied at all , with good results . + we performed 1 ns of md simulations for every structure in order to test the effectiveness of a first principles computation of binding free energies and to check the effect of molecular dynamics relaxation on the computed energies . we calculated the average value of every component of the binding free energy using snapshots taken every 50 ps , then we used the same set of fitting equations using average values to determine the best scaling factors . we chose to use the mm / gbsa(+hb ) model for computing binding free energies because it gave good results on the starting structures and the coefficients can be used to monitor the quality of the fitting . figure 3 reports the mm / gbsa(+hb ) binding free energy values obtained from md simulations versus experimental values .
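+ the averaging over md snapshots described above can be sketched as follows ; components_of is a hypothetical function returning the energy terms of a single snapshot , and the averaged terms of all complexes are then refitted exactly as for the starting structures , e.g. with the fit_coefficients sketch shown earlier .

import numpy as np

def average_components(snapshots, components_of):
    # average each free energy contribution over the stored md snapshots
    # (here assumed to be taken every 50 ps, as described in the text);
    # components_of(snapshot) returns a 1-d array of terms such as
    # vdW, coulomb, gb solvation, polar/apolar sasa and number of h-bonds
    terms = np.array([components_of(s) for s in snapshots])
    return terms.mean(axis=0)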
the quality of the computed binding free energies has been assessed by evaluating the linear correlation coefficient and the root mean square deviation ( rmsd ) between calculated and experimental values ( see table 6 ) . results show that md simulations do not improve the prediction capabilities of the model . actually the linear correlation coefficient calculated averaging over 1.0 ns is 0.356 , much lower than the correlation at t = 0.0 ns . results obtained averaging over the time interval 0.0 - 0.5 ns gave a linear correlation coefficient comparable to that obtained with other models on the starting structures ( r = 0.534 ) but lower than the linear correlation coefficient obtained at t = 0.0 ns . the linear correlation coefficient between experimental values and the results obtained averaging over the time interval 0.5 - 1.0 ns is = 0.284 , indicating that md causes the loss of any correlation .+ moreover optimal scaling factors obtained averaging over the time interval 0.5 - 1.0 ns have the tendency to lose any physical meaning ( table 7 ) . when optimal scaling factors obtained on the starting structure are used to compute binding free energies using average values , no correlation is detectable with experimental data . the value of the binding free energy change of every complex across 1 ns of simulation has been observed to strongly fluctuate , making it difficult to obtain an accurate estimate of it .+ in order to verify whether this problem could have been circumvented using a larger conformational sampling , the simulations of 10 mutants have been extended to 4 ns , obtaining a total of 400 snapshots for every simulation . in particular we extended the simulations of the wild type complex and the best and the worst mutants ( g17-c25 and t14-a28 respectively ) with negative results . although the system is most probably not fully equilibrated , it is reasonable to suspect that even longer ( in the range of a few tens of ns ) molecular dynamics simulations will not improve the results obtainable on the starting structures .+ the main reasons for the failure of this approach are probably the large conformational fluctuations developing in md simulations and the combination of relatively short molecular dynamics simulations with snapshot energy evaluation using the mm / gbsa(+hb ) continuum model . large conformational fluctuations observed in md simulations are reflected in energetic fluctuations in the range of tens of kcal / mol , thus posing an issue for the reliability of the free energy average values . moreover , since we observed that the results could not be improved by extending the simulation time , it is reasonable to ascribe the failure of the method , at least partially , to inaccuracies in the force field parametrization . actually , all force fields are based on numerous approximations ; in particular , nucleic acid force fields could suffer from two main problems which could give rise to inaccuracies . the first is that the target experimental data used in the optimization process are typically crystal structures of dna and rna . however , the presence of the lattice environment in crystals is known to influence the structure of dna , limiting the transferability of crystal data to solution . the second is the treatment of electrostatics which is crucial in these simulations , given the polyanionic nature of dna .
in particular , the electrostatic polarization , which is an effect that can significantly reduce electrostatic interactions of partial atomic charges , is very important for accurate treatment of interactions in different environments , since significant structural changes of dna may occur in response to the environment . table 8 shows the number of correct predictions , according to the criteria described in the _ methods _ section .+ in the last column the number of cases in which the difference is lower than 0.3 kcal / mol , that is the number of the more accurate predictions , has been reported . it should be noted that the fitting of coefficients aims at minimizing the rmsd between calculated and experimental values and not at maximizing the number of `` correct '' predictions . when a simple simulated annealing procedure is applied to the coefficients the number of correct predictions can be increased by several units . it is instructive for instance to consider the mm / gbsa(+hb ) model , where 41 `` correct '' predictions can be achieved with minor ( mostly less than 10% ) variations relative to the starting values of the coefficients .+ from this qualitative point of view , the prediction capabilities of the different models can be compared . the best performing model appears to be the mm / dddc - hp model with . on average the dddc - hp model performs better than the similar dddc - oons model . for results are worse than for higher values .+ molecular dynamics trajectories were analysed similarly , using average values for the different contributions to the free energy of binding . in particular the lowest number of correct predictions has been obtained averaging over the time interval 0.5 - 1.0 ns ; actually there are no cases in which both and are .0 kcal / mol and the number of cases in which and are separated by less than 0.3 kcal / mol has been strongly reduced .+ generally , we observed that the number of cases in which and are both .0 kcal / mol decreases while the number of cases in which and are both 1.0 kcal / mol remains nearly constant ; however at the same time the number of cases in which and are separated by less than 0.3 kcal / mol strongly decreases , indicating that there is a reduction of the accuracy in reproducing experimental binding energies . overall this analysis is consistent with the analyses reported in the previous sections . the optimal scaling coefficients are likely to depend on the complex and mutants studied . in order to verify that such coefficients do not produce wild results when applied to different complexes with similar binding features , we considered the or1 complex , which was obtained from the crystallographic structure ( pdb id code : 6cro ) after mutation of 14 bases . the protein belongs to the same family as the repressor but to a different domain , according to the scop classification , and it has very limited similarity with the repressor , although they bind dna in a similar fashion . this system is therefore suited for testing the overall quality of the scaling procedure .
for this system as well , a set of measurements for each mutant of the or1 sequence is available . when all contributions to the binding free energy , computed according to the mm / gbsa(+hb ) model , are scaled by the coefficients determined on the repressor complexes , the computed energies show a remarkable correlation coefficient of 0.62 with the experimental values , although the binding energies are overestimated by approximately 10 kcal / mol . this fact could reflect entropic contributions to binding ( arising from restrictions in side chain and backbone mobility ) that are likely to be different for the two systems . indeed , the crystallized protein is roughly only two thirds of the repressor sequence . notwithstanding the differences in overall binding energy , the binding differences for the mutants are on average reproduced by the energetic model .+ as a further test , we performed the reverse analysis where the scaling coefficients are obtained on the -or1 complex and validated on the repressor - operator complex . also in this case the computed energies show a remarkable correlation coefficient of 0.69 with the experimental values , although they are all underestimated by approximately 16 kcal / mol .+ in order to verify how sensitive the scaling coefficients are to the experimental data used in the fit , we calculated the binding free energies of and each mutant of the or1 sequence according to the mm / gbsa model , using eq . finally we combined the two experimental datasets and refitted the model . + as in the previous cases , we calculated the best scaling factors by fitting the set of experimentally measured protein - dna binding affinities ( table 4 ) , then we assessed the quality of predictions by evaluating the linear correlation coefficient and the root mean square deviation between calculated and experimental values . finally we verified the performance of the model , using the leave - one - out scheme . the best performance has been obtained for the mm / gbsa(+hb ) model , which gives a correlation coefficient of 0.69 and an rmsd of 0.74 for and a correlation coefficient of 0.67 and an rmsd of 0.83 for the two combined systems . the same analysis has been performed for 5000 replicas of the dataset with one third of the set left out and used for cross - validation . the average rmsd and correlation are essentially the same as those reported for the leave - one - out scheme . from the same analysis variances of the coefficients have been estimated with essentially the same results as those reported in table 4 . as far as the scaling coefficients are concerned ( see table 4 ) , by comparing the results obtained for , and the two combined systems , we can observe that the sets of values obtained for and are all in the same range except for the constant term , probably as a consequence of the fact that the entropic contributions to binding are likely different for the two systems . however it is worth noting that the standard deviation of this term is very large in both cases .
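+ a minimal sketch of the transferability test discussed above : the coefficients fitted on one protein - dna system are applied , unchanged , to the contributions computed for the other system , and the correlation with experiment and the systematic offset are inspected .

import numpy as np

def transfer_test(G_train, dg_train, G_test, dg_test):
    # fit scaling coefficients on one system and apply them to another
    a, *_ = np.linalg.lstsq(G_train, dg_train, rcond=None)
    pred = G_test @ a
    corr = np.corrcoef(pred, dg_test)[0, 1]
    offset = np.mean(pred - dg_test)  # systematic over/underestimation
    return corr, offset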
as far as the scaling coefficients obtained for are concerned , they are rather different from the others , except for and , which scale the van der waals and h - bond contributions respectively . however we observed that the electrostatic and gb solvation terms are strongly correlated to each other ( the linear correlation coefficient is 0.998 ) , as well as the constant term and the polar and hydrophobic surface area terms ( the linear correlations of the coefficients are 0.645 and 0.784 ) . the standard deviation of the constant term is also very large ( see table 4 ) . overall these results validate the approach for predicting binding free energies for similar protein - dna complexes .+ in order to study non - specific protein - dna binding , one thousand random dna sequences have been generated and each sequence has been threaded onto the dna phosphate backbone of the crystal structure in order to obtain a set of structural models with new dna sequences . minimization was performed according to the protocol described in the _ methods _ section . we refer to this set of complexes as the `` non - specific '' set .+ binding free energies for each member of the generated non - specific set have been computed according to the mm / gbsa(+hb ) model , using the optimal scaling factors determined by fitting the 52 experimental data ( see table 4 ) .+ we calculated the z - scores of both the random structures and the single base - pair mutants , i.e. the distribution of the difference between the binding free energy of a complex and the average energy of the non - specific set , normalized by the standard deviation of the computed energies . z - scores represent the specificity of a complex , with larger negative values corresponding to higher specificity . figure 4 shows the distribution of computed energies . the distribution of the z - scores of the single base - pair mutant complexes is found at the negative tail of the non - specific distribution , indicating that these complexes are more stable than the complexes formed with a random dna sequence , as one expects . the computed energies have an average difference of 4.8 kcal / mol and a standard deviation of 2.2 kcal / mol , thus giving an average z - score for the single base - pair mutants of 2.14 and 2.87 for the lowest computed energy in the set .+ the average non - specific binding energy seems surprisingly low ( meaning that a rather large fraction of the repressors present in the cell is actually non - specifically bound to dna ) but , remarkably enough , it agrees within the errors with the value proposed in as a way to explain the impressive stability of the -switch .+ it is interesting to compare the computed free energies of binding for the non - specific dna complexes with those expected based on single mutant binding energies under the assumption of additivity . the expected free energies are higher than those computed by optimal scaling of contributions . the average differences , with respect to the specifically bound sequence , are 18.4 kcal / mol and 6.1 kcal / mol , respectively . this has been interpreted as a consequence of the fact that adjacent multiple substitutions may introduce additional energy minima compared to single mutations in a tight complex . this result is in line with the saturation effect in observed vs.
predicted binding energy that has been described by stormo and co - workers and recently experimentally demonstrated . it is also interesting to note that the non - specific binding energy is comparable to the energy computed by northrup and co - workers for a loosely docked complex of to non - cognate dna , which implies that the mode of binding may substantially change for non - specifically bound dna sequences . this would be consistent with the capability of the protein of sliding along dna , which would not be feasible for a tight complex . the aim of this section is to understand whether the methods described here can be used for searching genomes for candidate transcription factor binding sites .+ in particular we aim at verifying : + i ) whether the mm / gbsa(+hb ) model is able to identify transcription factor binding sites in the absence of thermodynamic data about single base - pair mutants , but just knowing the recognized sequence ; + ii ) whether some predictions can still be afforded in the absence of thermodynamic data and of any information on recognized sequences . the latter situation could be encountered when a model of the complex is built by homology and differences in protein dna - contacting residues imply a different specificity .+ the analysis in the previous section used knowledge about single base - pair mutants which is rarely available . here we ask what predictions can be made when no thermodynamic data on mutants or wild - type sequence binding is available , but the cognate sequence is available . one thousand random dna sequences were generated and the corresponding structural models were built by performing mutations on the double stranded dna in the complex crystal structure using the program whatif . structures were energy minimized using the same protocol used for the mm / gbsa(+hb ) methodology . assuming that random sequences will have a larger free energy of binding compared to the bound sequence , optimal scaling parameters were sought in order to make the free energy difference in binding with respect to the naturally occurring complex equal to 10.0 kcal / mol . this value is arbitrary , albeit not unrealistic . eq . [ eq7 ] is solved ( in a least squares sense ) by subtracting the row corresponding to the wild - type complex from all other rows , and fixing all the energy differences equal to 10.0 . the differences in coulombic , van der waals , gb solvation energy , polar and apolar surface area and number of hydrogen bonds , with respect to the wild - type complex , have been tabulated and the optimal scaling parameters have been determined .+ the free energies computed on the random sequences have been used to compute single base - pair mutant free energies as described in the _ methods _ section . the single base - pair mutant energies for the wild type sequence have been reset to 0.0 ( this assumes that the specific bound sequence is known ) and the lowest computed single base - pair mutant binding energy has been subtracted from all other values . + the plot of computed single base - pair mutant energies vs.
experimental energies ( computed under the hypothesis of additivity ) shows a good correlation ( 0.58 ) but seems insufficient for predictive purposes . however , when the bacteriophage genome is scanned using the corresponding free energy matrix ( see methods ) , high - affinity binding sites are correctly recognized , and in general the energies computed using the matrix and those predicted based on addition of single base - pair mutation effects are well correlated ( corr . ) .+ we asked what the advantage of such a computation is compared to the simpler model that assigns a constant energy penalty to each mutation over the specific bound sequence . in such a case the correlation between the computed and reference binding energies is slightly lower , but still significant ( 0.72 ) . the advantage of using computational results over a much simpler single parameter approach seems therefore very limited , although the 1% best sites predicted by the mm / gbsa(+hb ) energy and the simple mutation models display only 15% common sites , proving that the two methods are largely uncorrelated .+ as a last test we simulate a realistic situation in which no thermodynamic data or information on the recognized sequences are available . we considered the set of one thousand random dna sequences and the corresponding structural models built by performing mutations on the double stranded dna in the complex crystal structure as the only information available . obviously the crystallographic complex does contain information on the specific sequence because protein and dna conformations fit each other in the complex . if non - specific complexes were to be built by homology without knowing the exact dna sequence bound , it is likely that side chains would be placed differently with different results . finally , structures were energy minimized using the same protocol used for the mm / gbsa(+hb ) methodology . as in the tests above we found optimal scaling factors in order to make all ( non - specific ) binding free energies equal to 10.0 kcal / mol .+ in order to avoid a trivial solution to the fitting problem with all coefficients equal to 0.0 except the constant term equal to 10.0 , we follow a two - step procedure . in the first step we assume a reasonable value ( 30 kcal / mol ) for the constant term which must be brought to the left - hand side of eq . coulombic , van der waals , gb solvation energy , polar and apolar surface area and number of hydrogen bonds have been evaluated ; eq . [ eq7 ] is then solved ( in a least squares sense ) and the optimal scaling parameters have been determined . the lowest binding energy sequence according to the scaling parameters is determined . the row corresponding to this complex is subtracted from all other rows , thus removing the constant term .+ in the second step the newly obtained matrix , which does not include the constant term anymore , is used to find the best coefficients to make all the energy differences equal to 10.0 kcal / mol . therefore all energies are expressed relative to the lowest computed energy at the first step . + the free energies computed on the random sequences have been used to compute single base - pair mutant free energies as described in the _ methods _ section .
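+ the genome scan mentioned above can be sketched as follows ; the 17 x 4 free energy matrix fmat is assumed to have been derived as described in the _ methods _ section , the genome is assumed to be a lowercase acgt string , and scoring the better of the two strands at each position is only one possible convention .

BASES = {'a': 0, 'c': 1, 'g': 2, 't': 3}
COMPLEMENT = str.maketrans('acgt', 'tgca')

def score_site(site, fmat):
    # sum of the per-position free energy contributions for one 17-mer
    return sum(fmat[i, BASES[b]] for i, b in enumerate(site))

def scan_genome(genome, fmat):
    # score every 17-mer on both strands; lower scores = stronger binding
    length = fmat.shape[0]  # 17
    scores = []
    for i in range(len(genome) - length + 1):
        site = genome[i:i + length]
        rc = site.translate(COMPLEMENT)[::-1]  # reverse complement
        scores.append((i, min(score_site(site, fmat), score_site(rc, fmat))))
    return scores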
at variance with the test performed above we do not set to 0.0 the energies of the specific bound sequence ( which is assumed here to be unknown ) . the correlation coefficient between computed and experimental energies ( computed under the assumption of additivity ) for the bacteriophage genome is 0.50 ( figure 5 ) .+ as a further test of the performance of the approach we generated the logo of the 10 best binding sequences according to the thermodynamic data on single base - pair mutants and those found with the present approach ( figure 6 ) . an overall agreement between the two logos is apparent . in the present work physical effective energy functions are used to estimate the free energy of binding of the repressor to the dna operator and to single base - pair mutants , for which thermodynamic data are available .+ thermodynamic data allow one to study the best results achievable , with the modeling approach and energy functions presented here , with models that assume that the binding energy is a linear combination of different contributions .+ simple models that use a distance dependent dielectric constant and simple terms for surface area proportional energy contributions and for hydrogen bonding perform surprisingly well for values of ranging from to . + a two - parameter model for surface area proportional energy contributions performs better than the more complex model of oobatake et al . , which was however not derived for usage in the more complex energy functions employed here .+ the performance of the mm / gbsa(+hb ) and , to a lesser extent , of the mm / gbsa model is comparable to or superior to that of the other models . a conclusion for the mm / gbsa model is that electrostatic energies should be reduced by a proper scaling factor corresponding to dielectric constants in the range of 6 . this conclusion is reached also by a similar analysis of protein -operator mutants .+ the effect of molecular dynamics on the computed binding free energies is in general negative and the reproducibility of the experimental values decreases as the simulation time considered increases . this may be a consequence of the large fluctuations developing in md simulations which probably would require a much longer simulation time . moreover it is reasonable to take into account that the poor performance of the method can be partially caused by the errors in the force field used in md simulations . another plausible source of inaccuracy is the mismatch between the energy model and system representation used in md simulation and those used for minimization and energy evaluation . it appears therefore that it is worth investing more time in optimizing the starting structure , rather than in sampling the conformational space by molecular dynamics simulations , or , alternatively , in adopting different strategies for sampling protein and dna flexibility .+ the analysis of non - specific complexes using the best performing energetic model with properly scaled coefficients allows one to evaluate a non - specific binding energy difference , with respect to the specific bound sequence , of 6.06 2.17 kcal / mol , definitely lower than expected based on an additive model ( 18.1 kcal / mol for the energies computed on single base - pair mutants ) . this result is in line with the saturation effect described by stormo and co - workers and with the theoretical analysis of bakk and melzer .+ although the results presented on single base - pair mutants are not exciting , using computational methods may be very useful for identifying transcription factor
binding sites .+ when no thermodynamic data are available but the specific bound sequence is known , the computed mm / gbsa(+hb ) free energies are slightly more predictive than a simple substitution profile which assigns a penalty for any point mutation . + the most interesting test performed here considers a realistic scenario where no information on the bound sequence is available . even in this case mm / gbsa(+hb ) energies are predictive . + this result has important consequences for the prediction of transcription factor binding sites , which often uses consensus methods . a prerequisite for the usefulness of consensus methods is that these are as independent of each other as possible . since most methods use common prior knowledge and often related statistical methods , independence is not guaranteed . methods which are based on completely independent principles , like those based on physical effective energy functions and free energy computations , offer a completely complementary methodology for deriving profile matrices for scanning entire genomes . the results reported here , taken with much caution because the structural model for the specific bound sequence is known and not modeled by homology or other methods , support the usage of these methods for the identification of dna - binding protein target sequences . in view of the very recent impressive results reported by the group of baker it is apparent that significant improvements to the approach described in this paper may be obtained by extensive refinement and screening of protein side chain conformations at the protein - dna interface . atomic coordinates of the repressor dimer bound to the dna operator were taken from the 1.8 resolution x - ray crystal structure deposited in the protein data bank ( pdb code 1lmb ) . the operator is 17 base - pairs in length and is composed of two approximately symmetric parts , the `` consensus half '' ( maintaining the notation of the pdb file , base - pairs a19-t23 to g11-c31 ) and the `` non - consensus half '' ( base - pairs t3-a39 to g10-c32 ) ( see figure 7 ) . since the coordinates of the nh-terminal arm of the repressor bound to the non - consensus half operator were not available , the missing amino acids were added using the protein bound to the consensus half operator . using the program profit v2.2 , the c carbons of the proteins have been superimposed and afterward the amino acids of the rotated structure have been added to the other one .
since the detailed x - ray crystal structure is made up of the repressor dimer and operator dna while the experimental data concern the site , the whatif program was used to substitute the base - pair at position 5 to obtain the wild - type operator . all possible single base - pair substitutions within the dna sequence were generated using the program whatif . hydrogen atoms have been added using the program pdb2gmx of the gromacs package . every structure has been optimized by performing 200 steps of energy minimization using the namd program , fixing all c carbon and phosphate group coordinates . a dielectric constant of 10 has been employed with a cut - off of 12 for non - bonded interactions .+ the net charge of the system ( ) has been neutralized by placing a corresponding number of sodium counterions in energetically favourable positions . the electrostatic potential was calculated via numerical solutions of the poisson - boltzmann equation using the university of houston brownian dynamics ( uhbd , version 6.x ) program . a counterion was placed at the lowest potential position at 7.0 from any heavy atom of the solute . the cycle was repeated until the net charge of the system was 0 .+ the complex and counterions were solvated in a box of tip3p water molecules using the solvate module in the program vmd . the resulting system contained about 4200 solute atoms and 50400 solvent atoms . the coordinates of the solute were fixed and the solvent was energy minimized using 100 steps of conjugate gradient . a solvent equilibration was carried out by performing molecular dynamics for 50 ps using a 1 fs time step to let the water molecules move to adjust to the conformation of the solute . the system was then energy minimized using 100 steps of conjugate gradient and , after 100 ps equilibration , a 1-ns md simulation was performed using a 2-fs timestep . a snapshot of the trajectory was stored every 10 ps for later analysis . the shakeh algorithm was used in order to fix the bond length between each hydrogen and its parent atom to its nominal value and to extend the simulation time - step . all molecular dynamics simulations of the complex were run under constant npt conditions using the namd program . the pressure of the system was coupled , through a berendsen barostat , to a pressure bath with target pressure 1.01325 bar and time constant 100 fs . the temperature has been kept at 300 k by simple velocity rescaling every picosecond . long - range electrostatic interactions were treated by the particle mesh ewald ( pme ) method employing a grid of 128x128x128 points . the cut - off was 12 and the tolerance was which resulted in an ewald coefficient of 0.257952 . the order for pme interpolation was 4 .+ the simulations were performed on a cluster composed of ten dual - processor nodes based on 2.8 ghz intel xeon processors , with hyper - threading technology . the free energy of binding for each structure has been computed according to the framework reviewed by gilson et al . , who derived the expression of the free energy of binding in terms of the microscopic properties of the two molecules involved , using standard statistical thermodynamics .
here , similar to other works employing continuum methods several simplifications are adopted .the free energy of binding for each complex minus the entropic contribution is expressed as the sum of the interaction energy between the protein and the dna and a solvation free energy term : it has been assumed that the entropy restriction in internal degrees of freedom and overall rotation and translation degrees of freedom is the same for all complexes . + the effect that association has on intramolecular energy has been neglected .moreover no extended conformational search has been performed for protein side chains and dna , partly because this task is not easily accomplished and partly because large conformational changes often result in large molecular mechanics energy changes , so we aimed at keeping the systems to be compared as close as possible .+ the free energy of binding has been calculated using different methodologies detailed below .for all models alternative versions in which an energy term proportional to the number of hydrogen bonds has been added have been considered .+ except where noted , all contributions to the free energy of binding have been optimally scaled in order to best reproduce available experimental data ( see later ) .+ in this method electrostatic interactions have been estimated using a distance dependent dielectric constant ( dddc ) while the solvation energy is proportional to the solvent accessible surface area through the atomic solvation parameters of oobatake , ooi , nemethy and scheraga ( oons ) .+ all structures have been energy minimized with 200 conjugate gradient steps , using a distance - dependent dielectric constant ( four values have been tested : , , , , with the distance r expressed in ) and a cut - off of 12 .the molecular mechanics interaction energy was evaluated using charmm ( version 27b2 ) , a classic and well - tested molecular mechanics force - field .this term includes the nonbonded electrostatic and van der waals contributions .the solvation free energy term has been calculated according to the model developed by oobatake et al .this model consists in assigning every atom to one of 9 classes of chemical groups and assuming that the hydration free energy of every group in a solute is proportional to its solvent accessible surface area ( sasa ) , because the group can directly interact only with water molecules at the surface . proportionality constants have been determined from thermodynamic data on the transfer of small molecules from the gas phase into aqueous environment assuming the additivity of contributions from individual groups .+ in a very similar approach the oons 9-parameter solvation model has been replaced by a simpler 2-parameter hydrophobic , polar ( hp ) solvation model . energy minimization protocol and tested values are the same as for the mm / dddc - oons for proper comparison .+ in this method the solvation free energy term is split in a polar ( electrostatic ) and a non - polar ( hydrophobic ) term . 
the polar term is computed using the generalized born approach . all complexes have been energy minimized by 200 conjugate gradient minimization steps using the generalized born model as implemented in the charmm program ; then the solute and solvation energy terms have been computed for both the complex and the isolated molecules . the binding energy was then computed by subtraction . doubling the number of minimization steps does not significantly affect the results .+ the non - polar term , which takes into account the tendency of the non - polar parts of the molecule to collapse , is taken to be proportional to the solvent - accessible surface area , i.e. , where the surface tension coefficient has been empirically determined to be equal to 20 cal mol for this kind of application . + a variant of this methodology including splitting the solvent accessible surface area into a polar and a hydrophobic contribution ( i.e. using two different surface tension coefficients ) , and including a term proportional to the number of hydrogen bonds has been considered here . + the choice of methods and parameters in molecular mechanics / implicit solvent methods is subject to large uncertainties . in order to explore the best performance achievable with these methodologies , optimal scaling factors for the different contributions were sought that could best reproduce the experimental data . this approach is not new and it has been used successfully by other groups ( see e. g. ) . in practice it is expected that proper scaling is able to compensate for the many inaccuracies of the model .+ in general terms , the free energy of binding has been computed as a linear combination of contributions , with corresponding coefficients , i.e. : where represents the difference between the complex and the isolated protein and dna molecules .+ coefficients have been found in order to best reproduce the 52 experimentally available free energies of binding . contributions have been arranged in a matrix where each row corresponds to a structural model and each column corresponds to a different contribution to the free energy of binding . the experimental binding free energies have been arranged in a -component vector . the linear system where is the -component vector of coefficients , has been solved ( in a least squares sense ) using singular value decomposition and the best coefficients have been used to calculate binding energies .+ a constant term takes into account the entropy loss upon complexation and other possible contributions identical for all complexes . + a linear model , compared to more sophisticated methods , has the advantage that the number of adjustable parameters is limited and the parameters are easily interpretable in physical terms .+ in the following we detail the contributions considered for each energetic model .+ the free energy of binding has been computed for the mm / dddc - oons model according to the following equation : where is the van der waals contribution , is the coulombic energy , computed with a distance dependent dielectric constant , is the solvation energy according to the oobatake et al . model and is the number of intermolecular hydrogen bonds . + as mentioned above , the coefficients bear physical meaning . for instance the term should account for rotational and translational entropy loss upon binding and it can be expected to be in the range 20 - 40 kcal / mol .
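+ a minimal sketch of the singular value decomposition solution of the linear system described above ; the column layout shown in the comment is only an illustration of the contributions named in the text .

import numpy as np

def solve_by_svd(G, dg_exp):
    # least squares solution of the overdetermined system G @ a = dg_exp
    # via singular value decomposition (small singular values could be
    # truncated for numerical stability)
    U, s, Vt = np.linalg.svd(G, full_matrices=False)
    return Vt.T @ ((U.T @ dg_exp) / s)

# example layout of one row of G for the mm/dddc-oons(+hb) model:
# [E_vdw, E_coul(dddc), G_solv(oons), n_hbonds, 1.0 (constant term)]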
+ the term proportional to the number of hydrogen bonds was alternatively added in order to take into account possible inaccuracies in the treatment of these interactions by molecular mechanics and solvation terms . in practice every time this term is added , the coefficients of the molecular mechanics and solvation terms are greatly reduced , thus avoiding double counting of hydrogen bond interactions . + a similar expression for the free energy of binding has been used for the mm / dddc - hp model : here the coefficients and represent the surface tension coefficients multiplying the hydrophobic and polar solvent accessible surface areas and , respectively . we expect these coefficients to be in the range of tens of cal mol .+ the solvent accessible area has also been split into polar and hydrophobic areas for finding optimal scaling parameters for the mm / gbsa methodology : where is the generalized born solvation energy . the coefficients and are exactly and roughly , respectively , inversely proportional to the effective dielectric constant and are thus expected to be in the range 0.05 to 1.0 . + scaling energy terms for free energy evaluation of models which have been minimized without scaling such terms is clearly inconsistent . a correct procedure would be to iteratively find the optimal scaling factors , minimize the energy using such scaling factors and repeat these two steps until convergence . this procedure faces some difficulties because an important term like the hydrogen bond term is discrete and does not have a counterpart in standard force fields , where such interactions are typically described through electrostatic and van der waals terms . similarly the minimization of terms proportional to the solvent accessible surface area requires algorithms which are rarely available in molecular mechanics packages . a further difficulty is that any imbalance among force field terms might introduce distortions in molecular structure , notably of hydrogen bond lengths . although the issue of iteratively fitting optimal scaling factors is worth further investigation , here the scaling factor approach has been applied in a rougher way . we have matched as far as possible the energetic model used for minimization with that used for fitting scaling factors , as mentioned above , but we have not minimized the models again using the scaling factors . a similar mismatch between conformational sampling and energy evaluation is implicit in the analysis of molecular dynamics snapshots . other sources of error in this case are the large conformational ( and energetic ) fluctuations molecules undergo during simulation and in general the inaccuracy of implicit solvent methods ( used in energy evaluation ) where small energy differences arise from subtraction of rather large values . it should be noted that for molecular dynamics snapshots inaccuracies do not cancel out because there are no restrained parts in the molecules .+ an important aspect of protein - dna interaction , addressed quantitatively by olson and co - workers , is the capability of dna sequences to adopt specific local conformations . the statistics of parameters and pairwise parameter correlations show definite preferences . in the approach described above , changes in intramolecular energy terms are disregarded altogether by the assumption of rigid docking . the strains introduced in complex molecular structures , however , are typically relaxed over the structure and should have consequences on the intermolecular energy terms too .
in order to assess the effect of dna sequence dependent deformability , we followed the approach of olson and co - workers , who made available average values for the six parameters describing the local geometry of a base - pair step in b - dna , the force constant parameters for all pairwise deviations from equilibrium values and a program to analyse dna structures .+ the analysis was performed for the native structure parameters , simply replacing the identity of the mutated base - pair , and on the mutated structures , minimized using the generalized born model . for both cases poor correlation with experimental binding data was found . remarkably , however , the native sequence was the third lowest energy sequence among all 52 sequences . energy minimization in general increases the energy associated with the deformability of dna . computation of the fitness of a sequence to local geometry parameters gives important information , although it is likely that the computed energy is not accurate for conformations far from equilibrium . inclusion of the dna sequence dependent deformability energy in the analyses detailed below did not improve results significantly , notwithstanding the additional scaling parameter introduced for this purpose . for this reason this term was not considered further . after fitting scaling factors to experimental data , the root mean square difference between calculated and experimental data was computed . this quantity can however provide a poor evaluation of the predictive power of the calculations when the test systems are very similar . therefore the correlation coefficient between calculated and experimental data was also computed . optimal scaling factors were computed taking all the data available .+ fitting 52 experimental data with up to 7 parameters will always result in a positive correlation coefficient . in order to make sure that the results obtained are significant we performed different kinds of analyses : + i ) a leave - one - out scheme has been adopted . all but one of the data were taken and the root mean square difference and correlation coefficient were computed using the set of data not used in the fitting procedure .+ the same scheme has been applied to 5000 replicates with one third of the data left out of the fitting procedure and used for rmsd and correlation coefficient computation .+ ii ) the variance of each linear coefficient has been estimated from the multiple regression analysis using the variance / covariance matrix and the square error of computed data , according to standard linear regression procedures . in practice the standard deviation of the experimental data has been estimated as , then the variance of each coefficient has been estimated from the variance / covariance matrix of coefficients : the different models considered employ a different number of fitting parameters and therefore different performances are expected .
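+ a sketch , under standard linear regression assumptions , of the coefficient variance estimate described in point ii ) ; the exact form of eq . [ sigma2 ] used here may differ in details such as the number of degrees of freedom .

import numpy as np

def coefficient_variances(G, dg_exp, a):
    # estimate sigma^2 from the residual sum of squares over (n - p) degrees
    # of freedom and take the diagonal of the variance/covariance matrix
    # sigma^2 * (G^T G)^-1 of the fitted coefficients
    n, p = G.shape
    residuals = dg_exp - G @ a
    sigma2 = residuals @ residuals / (n - p)
    cov = sigma2 * np.linalg.inv(G.T @ G)
    return np.diag(cov)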
although these parameters are often correlated , the analysis of the variance gives an immediate clue as to which variables are more important .+ iii ) analysis of variance ( anova ) calculations have been performed and a significance test based on the f - statistic and the corresponding confidence level has been computed .+ iv ) one thousand replicates of the original data have been generated with the column elements containing the experimental data randomly swapped . the average of the correlation coefficient between swapped experimental data and fitted data has been computed together with the standard deviation . the results of this computation ( not reported ) fully support the results of the statistical analyses described above ; + finally , a useful alternative to assess the quality of predictions and to compare the different models from a qualitative point of view consists in determining the number of `` correct predictions '' , defined as the number of cases in which both and are .0 kcal / mol , or .0 kcal / mol , or else separated by less than 0.3 kcal / mol . the threshold value of 1.0 kcal / mol requires some explanation . the experimental values of the free energy change relative to the wild - type operator have been calculated using the equation = - 0.546 ln ( of substituted sequence)/( of ) after having determined the dissociation constant of every mutant . it is simple to verify that the threshold value of 1.0 kcal / mol corresponds to a remarkable reduction in the dissociation constant of the mutant ( ca . 5-fold ) , with respect to the dissociation constant of the wild - type operator ( of = ) , whereas values of higher than 1.0 kcal / mol correspond to a reduction in the dissociation constant from 5 ( = 1.0 kcal / mol ) to 25-fold ( =3.4 , which is the maximum value of ) . therefore it is reasonable to define a prediction as correct if both and are in one of the defined intervals or even if the difference is lower than 0.3 kcal / mol , which corresponds to a ratio between the dissociation constant of a mutant and the dissociation constant of the wild - type complex lower than 2.0 .+ one thousand random dna sequences were generated and the corresponding structural models were generated by performing mutations on the double stranded dna in the complex crystal structure using the program whatif . the resulting dataset of complexes was assumed to be representative of non - specific protein - dna complexes . we are interested in understanding how reliable the method is for predicting putative binding sites . the so - called z - score of the specific bound sequence compared to random sequences has been considered . the z - score is defined here as the distance of the free energy computed for the specific bound 17-mer ( ) from the average non - specific binding energy ( ) , normalized by the standard deviation of the computed non - specific binding energies ( ) . averages are performed over the one thousand random sequences . a large z - score implies that the specific bound sequence can be distinguished from other non - specifically bound sequences . the structures were energy minimized using the same protocol used for mm / gbsa free energy estimation . for all minimized complexes the coulombic energy , van der waals energy , gb solvation energy , polar and apolar accessible surface area and the number of intermolecular hydrogen bonds were tabulated .
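+ the z - score and the counting of `` correct predictions '' described above can be sketched as follows ; the exact sides of the 1.0 kcal / mol threshold are partially garbled in the text , so the comparison used here ( both values above or both below the threshold , or closer than 0.3 kcal / mol ) is an assumption .

import numpy as np

def z_score(dg_specific, dg_nonspecific):
    # distance of the specific binding energy from the mean of the
    # non-specific set, in units of the non-specific standard deviation
    dg_ns = np.asarray(dg_nonspecific)
    return (dg_specific - dg_ns.mean()) / dg_ns.std()

def n_correct(dg_calc, dg_exp, thr=1.0, tol=0.3):
    # count cases where calculated and experimental values fall on the same
    # side of the threshold, or differ by less than tol kcal/mol (assumed)
    dg_calc, dg_exp = np.asarray(dg_calc), np.asarray(dg_exp)
    same_bin = (dg_calc >= thr) == (dg_exp >= thr)
    close = np.abs(dg_calc - dg_exp) < tol
    return int(np.sum(same_bin | close))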
for each model of the 1000 random dna sequence complexes , the binding energy has been computed using different amounts of the experimental information available . different analyses , detailed in the results section , were performed .+ the possibility of using the data computed on the set of non - specific complexes for defining a profile of the recognized dna sequences has been explored as follows . the calculated binding energy values for the set of non - specific complexes were summarised in a set of 68 values corresponding to the average contribution to the binding free energy of each of the 4 possible bases at each of the 17 bound sequence positions . these 68 values have been derived as follows . possible substitutions are indexed from 1 to 4 for a , c , g and t , respectively . a matrix was set up where each element is 1.0 or 0.0 if the base at position ( rounded to the nearest upper integer ) has index in sequence i. the set of substitution free energies was found by solving ( in a root mean square error sense ) the overdetermined equation . the resulting 68-element vector was arranged in a matrix . variants on this procedure are described in the results section according to the level of available information included in the analysis .+ the free energy matrix derived from the analysis of non - specific protein - dna complexes was used to score all 17-mer subsequences in the bacteriophage genome ( accession number : nc_001416.1 , 48502 base - pairs ) on both strands . in principle the score represents the free energy of binding of the 17-mer considered .+ reference `` experimental '' binding free energy values , for comparison with computed data , were obtained under the hypothesis of additivity using experimental data on single base - pair mutants . em designed and performed most tests and analyses , and wrote part of the code used for the analyses . mc and ff conceived the project , supervised the work and designed some of the analyses . ff wrote part of the code used for the analyses . all authors have read and approved the final version of the manuscript . we wish to thank drs . g. tecchiolli and p. zuccato of exadron , the hpc division of the eurotech group , for providing hardware and expert technical assistance . m. isola of the university of udine is gratefully acknowledged for helpful discussions on statistical aspects of multiple regressions .+ em wishes to thank profs . c. destri , g. marchesini and f. rapuano for helpful discussions .+ part of the research was funded by firb grant rbne03b8kk from the italian ministry for education , university and research .+ pabo c and sauer r t : .1994 , * 61*:1053 - 1095 .matthews b : .1998 , * 335*:294 - 295 .pabo c o and sauer r t : . 1984 , * 53*:293 - 321 . harrison s c and aggarwal a k : . 1990 , * 59*:933 - 969 .gromiha m m , siebers j g , selvaraj s , kono h and sarai a : .2004 , * 337*:285 - 294 .kono h and sarai a : .1999 , * 35*:114 - 131 .wassermann w w and sandelin a : .2004 , * 5*:276 - 287 .pennacchio l a and rubin e m : . 2001 , * 2*:100 - 109 .sinha s and tompa m : .
2002 , * 30*:5549 - 5560 .sinha s and tompa m : .2003 , * 31*:3586 - 3588 .birnbaum k , benfey p n and shasha d e : .2001 , * 11*:1567 - 1573 .wolfsberg t g , gabrielian a e , campbell m j , cho r j , spouge j l , and landsman d : .1999 , * 9*:775 - 792 .caselle m , di cunto f , and provero p : .2002 , * 3*:7 .cora d , di cunto f , provero p , silengo l , and caselle m : .2004 , * 5*:57 .van helden j , andre b and collado - vides j : .1998 , * 281*:827 - 842 .jensen l j and knudsen s : .2000 , * 16*:326 - 333 .lawrence c e and reilly a a : .1990 , * 7*:41 - 51 .thijs g , marchal k , lescot m , rombauts s , moor b d , rouze p and moreau y : .2002 , * 9*:447 - 464 .thompson w , rouchka e c , and lawrence c e : .2003 , * 3*:3580 - 3585 .hughes j d , estep p w , tavazoie s , and church g m : .2000 , * 296*:1205 - 1214 .hardison r : .2000 , * 16*:369 - 372 .duret l , dorkeld f and gautier c : .1993 , * 21*:2315 - 2322 .loots g g , locksley r m , blankespoor c m , wang z e , miller w , rubin e m , and frazer k a : .2000 , * 288*:136 - 140 .goettgens b , barton l , gilbert j , bench a , sanchez m , bahn s , mistry s , grafham d , mcmurray a , vaudin m , amaya e , bentley d , green a , and sinclair a : .2000 , * 18*:181 - 186 .flint j , tufarelli c , peden j , clark k , daniels r , hardison r , miller w , philipsen s , tan - un k , mcmorrow t , frampton j , alter b , frischauf a , and higgs d : .2001 , * 10*:371 - 382 .lenhard b , sandelin a , mendoza l , engstrm p , jareborg n and wasserman w w : . 2003 , * 2*:13 .zhang z and gerstein m : .2003 , * 2*:11 .cora d , herrmann c , dieterich c , di cunto f , provero p and caselle m : . 2005 , * 6*:110 .sandelin a , wasserman w w , and lenhard b : . 2004 , * 32*:w249-w252 .prakash a , blanchette m , sinha s and tompa m : .2004 , * 9*:348 - 359 .tompa m , li n , bailey t l , church g m , moor b d , eskin e , favorov a v , frith m c , fu y , kent w j , makeev v j , mironov a a , noble w s , pavesi g , pesole g , regnier m , simonis n , sinha s , thijs g , van helden j , vandenbogaert m , weng z , workman c , ye c and zhu z : .2005 , * 23*:137 - 144 .lazaridis t and karplus m : .2000 , * 10*:139 - 145 .sippl m : .1990 , * 213*:859 - 883 .lustig b and jernigan r1995 , * 23*:4707 - 4711 .kaplan t , friedman n and margalit h : .2005 , 1(1 ) .mandel - gutfreund y and margalit h : .1998 , * 26*:2306 - 2312 .liu z , mao f , guo j , yan b , wang p , qu y and xu y : .2005 , * 33*:546 - 558 .zhang c , liu s , zhu q and zhou y : . 2005 , * 48*:2325 - 2335 .olson w k , gorin a a , lu x , hock l m and zhurkin v b : . 1998 , * 95*:11163 - 11168 .zhou h and zhou y : .2002 , * 11*:2714 - 2726 .schueler - furman o , wang c , bradley p , misura k and baker d : .2005 , * 310*:638 - 642 .wang w , donini o , reyes c and kollman p : .2001 , * 30*:211 - 243 .kollman p , massova i , reyes c , kuhn b , huo s , chong l , lee m , duan y , wang w , donini o , cieplak p , srnivasan j , case d and cheatham t : . 2000 , * 33*:889 - 897 .zacharias m , luty b , davis m and mccammon j : .1992 , * 63*:1280 - 1285 .misra v , hecht j , sharp k , friedman r and honig b : .1994 , * 238*:263 - 280 .fogolari f , elcock a , esposito g , viglino p , briggs j , and mccammon j : .1997 , * 267*:368 - 381 .baginski m , fogolari f and briggs j : . 1997 , * 274*:253 - 267 .froloff n , windemuth a and honig b : . 1997 , * 6*:1293 - 1301 . 
misra v , hecht j , yang a and honig b : .1998 , * 75*:2262 - 2273 .wojciechowski m , fogolari f and baginski m : .2005 , * 152*:169 - 184 .jayaram b , mcconnell k , dixit sb and beveridge d l : .1999 , * 151*:333 - 357 .gorfe a a and jelesarov i : .2003 , * 42*:11568 - 11576 .endres r. g , schulthess t c and wingreen n s : .2004 , * 57*:262 - 268 .oobatake m , kono h , wang y and sarai a : .2003 , * 53*:33 - 43 .oobatake m and ooi t : .1993 , * 59*:237 - 284 .havranek j j , duarte c m and baker d : . 2004 , * 344*:59 - 70 .morozov a , havranek j , baker d and siggia e : . 2005 , * 33*:5781 - 5798 .beamer l and pabo c : .1992 , * 227*:177 - 196 .johnson a d , poteete a r , lauer g , sauer r t , ackers g k and ptashne m : .1981 , * 294*:217 - 223 .brennan r and matthews b : .1989 , * 264*:1903 - 1906 .sarai a and takeda y : .1989 , * 86*:6513 - 6517 .ptashne m : .2nd edn . ,cambridge , ma . : cell press and blackwell scientific publications ; 1992 .ackers g , johnson a and shea m : .1982 , * 79*:1129 - 1133 .shea m a and ackers g k : .1985 , * 181*:211 - 230 .aurell e , brown s , johanson j and sneppen k : .2002 , * 65*:051914.1 - 051914.9 .bakk a and melzer r .2004 , * 563*:66 - 68 .[ http://scop.mrc-lmb.cam.ac.uk/scop/ ] takeda y , sarai a and rivera v m : .1989 , * 86*:439 - 443 .benos p v , bulyk m l , and stormo g d : 2000 , * 30*:4442 - 4451 .benos p v , lapedes a s , and stormo g d : 2002 , * 24*:466 - 475 .maerkl s j quake s r : , 2007 , * 315*:233 - 237 .thomasson ka , ouporov i v , baumgartner t , czaplinski j , kaldor t and northrup s h : .1997 , * 101*:9127 - 9136 .vriend g : .1990 , * 8*:52 - 54 .crooks g e , hon g , chandonia j m and brenner s e : .2004 , * 14*:1188 - 1190 .van dijk m , van dijk a d j v hsu v , boelens r and bonvin a m j j : .2006 , * 34*:3317 - 3325 .ashworth j , havranek j j , duarte c m , sussman d , monnat r j , stoddard b l and baker d : . 2006 , * 441*:656 - 659 .berman h , westbrook j , feng z , gilliand g , bhat t , weissig h , shindyalov i and bourne p : .2000 , * 28*:235 - 242 . $ ] .berendsen h , der spoel d and van drunen r : .1995 , * 91*:43 - 56 .lindahl e , hess b , van der spoel d : .2001 , * 7 * : 306 - 317 .madura j , davis m , gilson m , wade r , luty b and mccammon j : .1994 , * 5*:229 - 267 .madura j , briggs j , wade r , davis m , luty b , ilin a , antosiewicz j , gilson m , bagheri b , scott s l r and mccammon j : .1995 , * 91*:57 - 95 .jorgensen wl , chandrasekhar j , madura j d , impey r w and klein m l : .1983 , * 79*:926 - 935 .humphrey w , dalke a and schulten k : .1996 , * 14*:33 - 38 .andersen h c : .1983 , * 52*:24 - 34 .kale l , skeel r , bhandarkar m , brunner r , gursoy a , krawetz n , phillips j , shinozaki a , varadarajan k and schulten k : .1999 , * 151*:283 - 312 .berendsen h j c , postma j p m , van gunsteren w f , di nola a and haak j r : .1984 , * 81*:3684 - 3690 .darden t , york d and pedersen l : . 1993 , * 98*:10089 - 10092 .gilson m , given ja , bush bl and mccammon j : .1997 , * 72*:1047 - 1069 .brooks b , bruccoleri r , olafson b , states d , swaminathan s and karplus m : .1983 , * 4*:187 - 217 .mackerell a , bashford d , bellott m , dunbrack r , evanseck j , field m , fischer s , gao j , guo h , ha s , joseph - mccarthy d , kuchnir l , kuczera k , lau f , mattos c , michnick s , ngo t , nguyen d , prodhom b , reiher w , roux b , schlenkrich m , smith j , stote r , straub j , watanabe m , wiorkiewicz - kuczera j , yin d and karplus m : . 
1998 , * 102*:3586 - 3616 .qiu d , shenkin p , hollinger f and still w : .1997 , * 101*:3005 - 3014 .fogolari f , brigo a and molinari h : . 2003 , * 85*:159 - 166 .press w h , teukolsky s a , vetterling w t , and flannery b p : .2nd edn , cambridge university press ; 1995 .lu x j and olson w k : .2003 , * 31*:5108 - 5121 .berenson m l , levine d m and goldstein m : .nj . , prentice - hall , inc ., englewood cliffs ; 1983 ..optimal scaling factors for the mm / dddc - oons model and the mm / dddc - hp model .standard deviations ( see methods section ) are given in parentheses . [ cols="^,^,^,^,^,^ " , ]
specific binding of proteins to dna is one of the most common ways in which gene expression is controlled. although general rules for dna-protein recognition can be derived, the ambiguous and complex nature of this mechanism precludes a simple recognition code, so the prediction of dna target sequences is not straightforward. dna-protein interactions can be studied with computational methods, which complement current experimental methods and offer some advantages. in the present work we use physical effective potentials to evaluate dna-protein binding affinities for the repressor-dna complex, for which structural and thermodynamic experimental data are available. the binding free energy of two molecules is expressed as the sum of an intermolecular energy (evaluated using a molecular mechanics forcefield), a solvation free energy term and an entropic term. different solvation models are used, including distance-dependent dielectric constants, solvent-accessible surface tension models and the generalized born model. the effect of conformational sampling by molecular dynamics simulations on the computed binding energy is assessed; the effect is in general negative, and the agreement with the experimental values decreases as longer stretches of simulation time are included. + the free energy of binding for non-specific complexes, estimated with the best energetic model, agrees with earlier theoretical suggestions. as a result of these analyses, we propose a protocol for the prediction of dna-binding target sequences. the possibility of searching for regulatory elements within the bacteriophage genome using this protocol is explored. the analysis shows good prediction capability, even in the absence of any thermodynamic data or information on the naturally recognized sequence. + this study supports the conclusion that physics-based methods can offer a methodology fully complementary to sequence-based methods for the identification of dna-binding protein target sequences.
in recent years there has been an explosion of interest in exploitation of quantum mechanical systems as a basis for new quantum technologies , giving birth to the field of quantum information science . to develop quantum technologies, it has been recognized from early on that quantum control systems will play a crucial role for tasks such as manipulating a quantum mechanical system to perform a desired function or to protect it from external disturbances .moreover , recent advances in quantum and nanotechnology have provided a great impetus for research in the area of quantum feedback control systems ; e.g. , see .= -1perhaps just about the simplest and most tractable controller to design would be the linear quantum controllers , and this makes them an especially attractive class of controllers . in this class, one can have classical linear quantum controllers that process only classical signals which are obtained from a quantum plant by measurement of some plant output signals ( e.g. , ) , but more recently there has also been interest in fully quantum and mixed quantum - classical linear controllers that are able to manipulate quantum signals . in fact , an experimental realization of a fully quantum controller in quantum optics has been successfully demonstrated in . as noted in that paper , the class of fully quantum controllers or _ coherent - feedback controllers _ , as they are often known in the physics literature , presents genuinely new control - theoretic challenges for quantum controller design .an important open problem raised in the works is how one would systematically build or implement a general , arbitrarily complex , linear quantum controller , at least approximately , from basic quantum devices , such as quantum optical devices .this problem can be viewed as a quantum analogue of the synthesis problem of classical electrical networks ( in this paper the qualifier `` classical '' refers broadly to systems that are not quantum mechanical ) that asks the question of how to build arbitrarily complex linear electrical circuits from elementary passive and active electrical components such as resistors , capacitors , inductors , transistors , op - amps , etc .therefore , the quantum synthesis problem is not only of interest for the construction of linear quantum stochastic controllers , but also more broadly as a fundamental aspect of linear quantum circuit theory that arises , for example , in quantum optics and when working with phenomenological models of quantum rlc circuits such as described in , as well as in relatively new fields such as nanomechanical circuit quantum electrodynamics .a key result of this paper is a new synthesis theorem ( theorem [ th : synthesis ] ) that prescribes how an arbitrarily complex linear quantum stochastic system can be decomposed into an interconnection of basic building blocks of one degree of freedom open quantum harmonic oscillators and thus be systematically constructed from these building blocks . in the context of quantum optics , we then propose physical schemes for `` wiring up ''one degree of freedom open quantum harmonic oscillators and the interconnections between them that are required to build a desired linear quantum stochastic system , using basic quantum optical components such as optical cavities , beam splitters , squeezers , etc .an explicit yet simple example that illustrates the application of the theorem to the synthesis of a two degrees of freedom open quantum harmonic oscillator is provided . 
to motivate synthesis theory in the context of linear dynamical quantum systems, we start with a brief overview of aspects of linear electrical network synthesis that are relevant for the current work . as is well known , a classical ( continuous time , causal , linear time invariant ) electrical network described by a set of ( coupled ) ordinary differential equations can be analyzed using various representations , for example , with a frequency domain or transfer function representation , with a modern state space representation and , more recently , with a behavioral representation .it is well known that the transfer function and state space representation are equivalent in the sense that one can switch between one representation to the other for any given network . however , although one can associate a unique transfer function representation to a state space representation , the converse is not true : for a given transfer function there are infinitely many state space representations .the state space representation can be made to be unique ( up to a similarity transformation of the state space matrices ) by requiring that the representation be of minimal order ( i.e. , the representation is both controllable and observable ) .the synthesis question in linear electrical networks theory deals with the inverse scenario , where one is presented with a transfer function or state space description of a linear system and would like to synthesize or build such a system from various linear electrical components such as resistors , capacitors , inductors , op - amps , etc .a particularly advantageous feature of the state space representation , since it is given by a set of first order ordinary differential equations , is that it can be inferred directly from the representation how the system can be _ systematically _ synthesized .for example , consider the system below , given in a state space representation : (t)+ \left[\begin{array}{c } 1\\ 0.1 \end{array}\right]u(t),\\ y(t)&=&\left[\begin{array}{cc } 0 & 1 \end{array}\right]x(t ) + u(t ) , \nonumber\end{aligned}\ ] ] where is the state , is the input signal , and is the output signal . in an electrical circuit , could be the voltage at certain input ports of the circuit and ) .[fig : circ - diag ] ] .[fig : circ - hw ] ] could be the voltage at another set of ports of the circuit , different from the input ports .this system can be implemented according to the schematic shown in figure [ fig : circ - diag ] .this schematic can then be used to to implement the system at the hardware level as shown in figure [ fig : circ - hw ] ( * ? ? ?* chapter 13 ) . however , linear electrical network synthesis is a mature subject that deals with much more than just how one can obtain _ some _ realization of a particular system .for instance , it also addresses fundamental issues such as how a passive network , a network that does not require an external source of energy , can also be synthesized using only passive electrical components , and how to synthesize a given circuit with a minimal number of circuit elements or with a minimal number of certain types of elements ( such as active elements ) . 
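to make the synthesis viewpoint concrete, the state space example above can be simulated directly from its (a, b, c, d) description. the sketch below is only illustrative: the a matrix of the printed example is not recoverable from the text, so a stable 2 x 2 matrix is assumed in its place, while b, c and d are taken as printed.

```python
import numpy as np

# assumed A matrix (placeholder; the printed value did not survive formatting);
# B, C, D as printed in the example
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
B = np.array([[1.0],
              [0.1]])
C = np.array([[0.0, 1.0]])
D = np.array([[1.0]])

def step_response(t_final=10.0, dt=1e-3):
    """forward-Euler integration of dx/dt = A x + B u, y = C x + D u for u = 1."""
    n = int(t_final / dt)
    x = np.zeros((2, 1))
    y = np.empty(n)
    for k in range(n):
        y[k] = (C @ x + D * 1.0).item()
        x = x + dt * (A @ x + B * 1.0)
    return y

y = step_response()   # the circuit in the figure would realize this response in hardware
```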
in this paperour primary objective is to develop an analogously systematic method for synthesizing arbitrarily complex linear dynamical _ quantum _ stochastic systems that are given in an abstract description that is similar in form to ( [ eq : ss - eg-1 ] ) .these linear dynamical quantum stochastic systems are ubiquitous in linear quantum optics , where they arise as idealized models for linear open quantum systems . however , since there is currently no comprehensive synthesis theory available for linear dynamical quantum systems ( as opposed to _static _ linear quantum systems in linear quantum optics that have been studied in , e.g. , ) and related notions such as passivity have not been extensively studied and developed , here we focus our attention solely on the development of a _ general _ synthesis method that applies to _ arbitrary _ linear dynamical quantum systems which does not exploit specific physical properties or characteristics that a particular system may possess ( say , for instance , passivity ) .although the latter will be an important issue to be dealt with in further development of the general theory , it is beyond the scope of the present paper ( which simply demonstrates the existence of _ some _ physical realization ) .= -1a quantum system is never completely isolated from its environment and can thus interact with it .such quantum systems are said to be _ open quantum systems _ and are important in modeling various important physical phenomena such as the decay of the energy of an atom . the environment is modeled as a separate quantum system in itself and can be viewed as a _heat bath _ to which an open quantum system can dissipate energy or from which it can gain energy ( see ( * ? ? ?* chapters 3 and 7 ) ) .an idealization often employed in modeling the interaction between an open quantum system and an external heat bath is the introduction of a _assumption : the dynamics of the coupled system and bath is essentially `` memoryless '' in the sense that future evolution of the dynamics of the coupled system depends only on its present state and not at all on its past states .open quantum systems with such a property are said to be _markov_. the markov assumption is approximately valid under some physical assumptions made on the system and bath , such as that the heat bath is so much `` larger '' than the system ( in the sense that it has many more degrees of freedom than the system ) and is weakly coupled to the system that its interaction with the latter has little effect on its own dynamics and can thus be neglected ; for details on the physical basis for this markovian assumption , see ( * ? ? ? * chapters 3 and 5 ) .markov open quantum systems are important , as they are often employed as very good approximations to various practically relevant open quantum systems , particularly those that are encountered in the field of quantum optics , yet at the same time are relatively more tractable to analyze as their dynamics can be written in terms of first order operator differential equations. 
in markov open quantum systems , heat baths can be idealistically modeled as a collection of a continuum of harmonic oscillators oscillating at frequencies in a continuum of values .an important consequence of the markov approximation in this model is that the heat bath can be effectively treated in a quantum statistical sense as quantum noise , and thus markov open quantum systems have inherently stochastic quantum dynamics that are most appropriately described by quantum stochastic differential equations ( qsde ) . to be concrete , a single heat bath in the markov approximation is formally modeled as an operator - valued _ quantum white noise _process , where denotes time , that satisfies the singular commutation relation =\delta(t - t') ] acts on operators and as =ab - ba ] .it is said to be open if it is interacting with elements of its environment .for instance , consider the scenario in of an atom trapped in an optical cavity .the light in the cavity is strongly coupled to the atomic dipole , and as the atom absorbs and emits light , there are random mechanical forces on the atom . in an appropriate parameter regime , the details of the optical and atomic dipole dynamics are unimportant , and the optical field can be modeled as an environment for the atomic motion . under the assumptions of the `` motional observables '' of the trapped atom ( its position and momentum operators ) can then be treated like those of an open quantum harmonic oscillator .linear markov open quantum models are extensively employed in various branches of physics in which the markov type of arguments and approximations such as discussed in the preceding subsection can be justified .they are particularly prominent in quantum optics , but have also been used , among others , in phenomenological modeling of quantum rlc circuits , in which the dissipative heat baths are realized by infinitely long transmission lines attached to a circuit . for this reason , the general synthesis results developed herein ( cf .theorem [ th : synthesis ] ) are anticipated to be be relevant in various branches of quantum physics that employ linear markov models .for example , it has the potential of playing an important role in the systematic and practical design of complex linear photonic circuits as the technology becomes feasible .a general linear dynamical quantum stochastic system is simply a many degrees of freedom open quantum harmonic oscillator with several pairs of canonical position and momentum operators , with ranging from 1 to , where is the number of degrees of freedom of the system , satisfying the ( many degrees of freedom ) ccr =2i\delta_{jk} ] , where is the kronecker delta which takes on the value unless , in which case it takes on the value 1 , that is linearly coupled to a number of external bosonic fields . in the interaction picture with respect to the field and oscillator dynamics , the operators evolve unitarily in time as while preserving the ccr =2i\delta_{jk} ] , and the dynamics of the oscillator is given by ( here and ) ,\nonumber\\ dy(t)&=&c x(t ) dt + d da(t ) , \label{eq : q - ss}\end{aligned}\ ] ] where , , , and . herethe variable acts as the output of the system due to interaction of the bosonic fields with the oscillator ; a component of is the transformed version of the field that results _ after _ it interacts with the oscillator .hence , can be viewed as an _ incoming _ or _ input _ field , while is the corresponding _ outgoing _ or _ output _ field . 
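one immediate consequence of the linear qsde form above is worth a small illustration: for vacuum input fields the noise increments average to zero, so the expected values of the canonical operators obey the classical linear system d<x>/dt = a <x> and can be propagated with a matrix exponential. the drift matrix below is a placeholder; in practice it is fixed by the oscillator hamiltonian and coupling operator.

```python
import numpy as np
from scipy.linalg import expm

def mean_trajectory(A, x0, times):
    """<x(t)> = exp(A t) <x(0)> for vacuum inputs (zero-mean noise increments)."""
    return np.array([expm(A * t) @ x0 for t in times])

# placeholder drift matrix and initial mean quadratures for a single oscillator
A = np.array([[-0.5, 1.0],
              [-1.0, -0.5]])
traj = mean_trajectory(A, np.array([1.0, 0.0]), np.linspace(0.0, 10.0, 200))
```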
to make the discussion more concrete ,let us consider a well - known example of a linear quantum stochastic system in quantum optics : an optical cavity ( see section [ sec : opt - cav ] for further details of this device ) , shown in figure [ fig : fabry - perot ] .the cavity depicted in the picture is known as a standing ] wave or fabry perot cavity and consists of one fully reflecting mirror at the cavity resonance frequency and one partially transmitting mirror .light that is trapped inside the cavity forms a standing wave with an oscillation frequency of , while parts of it leak through the partially transmitting mirror .the loss of light through this mirror is modeled as an interaction between the cavity with an incoming bosonic field in the vacuum state ( i.e. , a field with zero photons or a zero - point field ) incident on the mirror .the dynamics for a cavity is linear and given by where is the coupling coefficient of the mirror , are the interaction picture position and momentum operators of the standing wave inside the cavity , and is the outgoing bosonic field that leaks out of the cavity . a crucial point to notice about ( [ eq : q - ss ] ) is that it is in a similar form to the classical deterministic state space representation such as given in ( [ eq : ss - eg-1 ] ) , with the critical exception that ( [ eq : q - ss ] ) is a ( quantum ) stochastic system ( due to the quantum statistical interpretation of ) and involves quantities which are operator - valued rather than real / complex - valued .furthermore , the system matrices in ( [ eq : q - ss ] ) can not take on arbitrary values for ( [ eq : q - ss ] ) to represent the dynamics of a physically meaningful system ( see and ( * ? ? ?* chapter 7 ) for further details ) .for instance , for arbitrary choices of the many degrees of freedom ccr may not be satisfied for all as required by quantum mechanics ; hence these matrices can not represent a physically feasible system . in , the notion of _physically realizable _ linear quantum stochastic systems has been introduced that corresponds to open quantum harmonic oscillators ( hence are physically meaningful ) , which do not include scattering processes among the bosonic fields . in particular , necessary and sufficient conditionshave been derived on the matrices for a system of the form ( [ eq : q - ss ] ) to be physically realizable . 
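the printed cavity equations did not survive the formatting above, so the following sketch uses the standard one-sided cavity input-output model from quantum optics (an assumed form; signs and normalization may differ from the text's conventions). it illustrates the defining feature of a lossless single-port cavity: an all-pass frequency response, i.e. the output field differs from the input only by a frequency-dependent phase.

```python
import numpy as np

def cavity_frequency_response(omega, kappa=1.0, delta=0.0):
    """reflection response of a one-sided cavity, assuming
    da = -(kappa/2 + 1j*delta) a dt - sqrt(kappa) dA_in and
    dA_out = sqrt(kappa) a dt + dA_in  (assumed standard form)."""
    s = 1j * (delta - omega)
    return (s - kappa / 2.0) / (s + kappa / 2.0)

w = np.linspace(-5.0, 5.0, 1001)
h = cavity_frequency_response(w, kappa=1.0, delta=0.5)
assert np.allclose(np.abs(h), 1.0)   # energy is only phase shifted, not absorbed
```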
more generally , however , are linear quantum stochastic systems that are completely described and parameterized by three ( operator - valued ) parameters : its hamiltonian ( , ) , its linear coupling operator to the external bosonic fields ( ) , and its _ unitary _ scattering matrix .in particular , when there is no scattering involved ( , then it has been shown in that can be recovered from ( since , here necessarily ) and vice - versa .although does not consider the scattering processes , the methods and results therein can be adapted accordingly to account for these processes ( this is developed in section [ sec : param - corspnds ] of this paper ) .the works were motivated by the problem of the design of robust fully quantum controllers and left open the question of how to systematically build arbitrary linear quantum stochastic controllers as a suitable network of basic quantum devices .this paper addresses this open problem by developing synthesis results for general linear quantum stochastic systems for applications that are anticipated to extend beyond fully quantum controller synthesis , and it also proposes how to implement the synthesis in quantum optics . the organization of the rest of this paper is as follows .section [ sec : models ] details the mathematical modeling of linear dynamical quantum stochastic systems and defines the notion of an open oscillator and a generalized open oscillator , section [ sec : concat - ser - red - net ] gives an overview of the notions of the concatenation and series product for generalized open oscillators as well as the concept of a reducible quantum network with respect to the series product , and section [ sec : param - corspnds ] discusses the bijective correspondence between two descriptions of a linear dynamical quantum stochastic system .this is then followed by section [ sec : syn - theory ] that develops the main synthesis theorem which shows how to decompose an arbitrarily complex linear dynamical quantum stochastic system as an interconnection of simpler one degree of freedom generalized open oscillators , section [ sec : system - synth ] that proposes the physical implementation of arbitrary one degree of freedom generalized open oscillators and direct interaction hamiltonians between these oscillators , and section [ sec : example ] that provides an explicit example of the application of the main synthesis theorem to the construction of a two degrees of freedom open oscillator .finally , section [ sec : conclude ] provides a summary of the contributions of the paper and conclusions .in the previous works linear dynamical quantum stochastic systems were essentially considered as open quantum harmonic oscillators . herewe shall consider a more general class of linear dynamical quantum stochastic systems consisting of the cascade of a static passive linear quantum network with an open quantum harmonic oscillator .however , in this paper we restrict our attention to synthesis of linear systems with purely quantum dynamics , whereas the earlier work considers a more general scenario where a mixture of both quantum and classical dynamics is allowed ( via the concept of an augmentation of a quantum linear stochastic system ) .the class of mixed classical and quantum controllers will be considered in a separate work . to this end , let us first recall the definition of an open quantum harmonic oscillator ( for further details , see ) . 
in this paper we shall use the following notations : , will denote the adjoint of a linear operator as well as the conjugate of a complex number , if ] , and is defined as , where denotes matrix transposition .we also define and and denote the identity matrix by whenever its size can be inferred from context and use to denote an identity matrix .let be the canonical position and momentum operators , satisfying the canonical commutation relations =2i\delta_{jk},\,[q_j , q_k]=0,\,[p_j , p_k]=0 ] and =0 ] , it can be shown by straightforward calculations using the quantum ito stochastic calculus that the cascade is equivalent ( in the sense that it produces the same dynamics for and the output of the system ) to a linear quantum system whose dynamics is governed by a unitary process satisfying the qsde ( for a general treatment , see ) where ( ) are fundamental processes , called the gauge processes , satisfying the quantum ito rules with all other remaining second order products between and vanishing .this yields the following dynamics for and the system output : , \label{eq : goo - dyn-1}\\ dy(t ) & = & cx(t)dt + d da(t ) , \label{eq : goo - dyn-2}\end{aligned}\ ] ] with , \nonumber \\ c&= & sk , \nonumber \\ d&= & s. \nonumber\end{aligned}\ ] ] for convenience , in the remainder of the paper we shall refer to the cascade of a static passive linear quantum network with an open oscillator as a _generalized open oscillator_. let be a generalized open oscillator that evolves according to the qsde ( [ eq : qsde-2 ] ) with given parameters , , and . for compactness , we shall use a shorthand notation of and denote such a generalized open oscillator by . in the next section we briefly recall the concatenation and series product developed in that allows one to systematically obtain the parameters of a generalized open oscillator built up from an interconnection of generalized open oscillators of one degree of freedom .in this section we will recall the formalisms of concatenation product , series product , and reducible networks ( with respect to the series product ) developed in for the manipulation of networks of generalized open oscillators as well as more general markov open quantum systems .let and be two generalized open oscillators , where .the concatenation product of and is defined as where .\end{aligned}\ ] ] it is important to note here that the possibility that or that some components of coincide with those of are allowed .if and are independent oscillators ( i.e. , the components of act on a distinct hilbert space to that of the components of ) , then the concatenation can be interpreted simply as the `` stacking '' or grouping of the variables of two noninteracting generalized open oscillators to form a larger generalized open oscillator .it is also possible to feed the output of a system to the input of system , with the proviso that and have the same number of input and output channels .this operation of cascading or loading of onto is represented by the series product defined by note that is again a generalized open oscillator with a scattering matrix , coupling operator , and hamiltonian as given by the above formula . 
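for linear systems written in the matrix parameterization used in this paper (unitary scattering matrix s, coupling operator l = k x and hamiltonian h = (1/2) x^t r x), the concatenation of two independent generalized open oscillators amounts to block stacking. a minimal sketch, with matrix shapes assumed and the operator-level definition above remaining the authoritative one; the series (cascade) product is treated in a separate sketch after the interconnection equations below.

```python
import numpy as np
from scipy.linalg import block_diag

def concatenation(S1, K1, R1, S2, K2, R2):
    """concatenation product of two independent linear generalized open
    oscillators: inputs and outputs are stacked, the oscillators do not
    interact, and all three matrix parameters become block diagonal."""
    return block_diag(S1, S2), block_diag(K1, K2), block_diag(R1, R2)
```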
with concatenation andseries products having been defined , we now come to the important notion of a _ reducible network with respect to the series product _ ( which we shall henceforth refer to more simply as just a _ reducible network _ ) of generalized open oscillators .this network consists of generalized open oscillators , with and , , along with the specification of a direct interaction hamiltonian ( ) and a list of series connections among generalized open oscillators and , , with the condition that each input and each output has at most one connection , i.e. , lists of connections such as are disallowed .such a reducible network again forms a generalized open oscillator and is denoted by .note that if is a reducible network defined as , then , which is equipped with the direct interaction hamiltonian , is simply given by .the notion of a reducible network was introduced in to study networks that are free of `` algebraic loops '' such as when connections like are present .the theory in is not sufficiently general to treat networks with algebraic loops ; they can be treated in the more general framework of quantum feedback networks developed in . since this workis based on , we also restrict our attention to reducible networks , but as we shall show in section [ sec : syn - theory ] this is actually sufficient to develop a network synthesis theory of linear quantum stochastic systems .a network synthesis theory can indeed also be developed using the theory of quantum feedback networks of , and this has been pursued in a separate work .two important decompositions of a generalized open oscillator based on the series product that will be exploited in this paper are where represents a static passive linear network implementing the unitary matrix .in it has been shown that for , then , and there is a bijective correspondence between the system matrices of a physically realizable linear quantum stochastic system and the parameters of an open oscillator ; see theorem 3.4 therein ( however , note that the , , and matrices are defined slightly differently from here because expresses all equations in terms of quadratures of the bosonic fields rather than their modes ) . here we shall show that allowing for an arbitrary complex unitary scattering matrix , a bijective correspondence between the system parameters of an extended notion of a physically realizable linear quantum stochastic system and the parameters of a generalized open oscillator ( in particular , ) can be established .we begin by noting that we may write the dynamics ( [ eq : goo - dyn-2 ] ) in the following way : with defined as then by defining and substituting in ( [ eq : goo - coeffs ] ) , we see that in ( [ eq : goo - dyn-1 ] ) , and constitutes the dynamics for the open oscillator with system matrices given by . since and ( cf .( [ eq : decomp-2 ] ) ) , from ( * ? ? ? * theorem 3.4 ) we see that there is a bijective correspondence between and the parameters and that one set of parameters may be uniquely recovered from the other .therefore , we may define a system of the form ( [ eq : q - ss ] ) to be physically realizable ( extending the notion in ) if it represents the dynamics of a generalized open oscillator ( this idea already appears in ( * ? ? ? * chapter 7 ) ; see remark 7.3.8 therein ) .this implies that a system ( [ eq : q - ss ] ) with matrices is physically realizable if and only if is a complex unitary matrix and are the system matrices of a physically realizable system in the sense of ( i.e. 
, are the system matrices of an open oscillator ) .therefore , we may state the following theorem .[ thm4.1 ] there is a bijective correspondence between the system matrices and the parameters of a generalized open oscillator . for given ,the corresponding system matrices are uniquely given by ( [ eq : goo - coeffs ] ) .conversely , for given , which are the system matrices of a generalized open oscillator with parameters , then is unitary , and and are the system matrices of some open oscillator . the parameters of the open oscillator is uniquely determined from by ( * ? ?* _ theorem _ 3.4 ) ( by suitably adapting the matrices and ) , from which the parameter of is then uniquely determined as . due to this interchangeability of the description by and by for a generalized open oscillator , it does not matter with which set of parameters one works with .however , for convenience of analysis in the remainder of the paper we shall work exclusively with the parameters .suppose that there are two independent generalized open oscillators coupled to independent bosonic fields , with output channels : an degrees of freedom oscillator with canonical operators , hamiltonian operator , coupling operator , and scattering matrix , and , similarly , an degrees of freedom oscillator with canonical operators , hamiltonian operator , coupling operator , and unitary scattering matrix . consider now a reducible quantum network constructed from and as , as shown in figure [ fig : g1g2network ] , where is a direct interaction hamiltonian term between and given by where we recall that denotes the elementwise adjoint of a matrix of operators and the second equality holds , since elements of commute with those .also note that the matrix is real . and with indirect interaction .[fig : g1g2network ] ] some straightforward calculations ( see for details ) then show that we may write where .now let us look closely at the hamiltonian term of .note that after plugging in the definition of , , , and , we may write \left [ \begin{array}{cc } r_1 & r_{12 } \\ r_{12}\trp & r_2 \end{array}\right ] \left[\begin{array}{c } x_1 \\ x_2 \end{array}\right].\end{aligned}\ ] ] letting , , and defining ,\label{eq5.1}\\ k&=&[\begin{array}{cc } s_2 k_1 & k_2 \end{array}],\label{eq5.2}\end{aligned}\ ] ] we see that therefore , , with .in other words , a reducible network formed by a bilinear direct interaction and cascade connection of two generalized open oscillators having the same number of input and output fields results in another generalized open oscillator with a degrees of freedom which is the sum of the degrees of freedom of the two constituent oscillators and having the same number of inputs and outputs . by repeated application of the above construction, we can prove the following synthesis theorem .[ th : synthesis ] let be an degrees of freedom generalized open oscillator with hamiltonian matrix , coupling matrix , and unitary scattering matrix .let be written in terms of blocks of matrices as {j , k=1,\ldots , n} ] . 
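a matrix-level sketch of the cascade construction just described, for two independent linear oscillators driven by the same number of field channels. it specializes the standard series-product rule (s2 s1, l2 + s2 l1, h1 + h2 + im{l2^dag s2 l1}) to l = k x and h = (1/2) x^t r x; the placement and transposition of the interaction block are my own bookkeeping and should be checked against (5.1)-(5.2) above.

```python
import numpy as np

def cascade(S1, K1, R1, S2, K2, R2):
    """series (cascade) connection of G1 into G2 for independent linear
    generalized open oscillators, with x = [x1; x2] the stacked quadratures.
    sketch only: the interaction term Im{L2^dag S2 L1} = x2^T Im(K2^dag S2 K1) x1
    is folded into the off-diagonal blocks of R."""
    S = S2 @ S1
    K = np.hstack([S2 @ K1, K2])          # L = K x, cf. (5.2)
    M = np.imag(K2.conj().T @ S2 @ K1)    # real matrix of the coupling x2^T M x1
    R = np.block([[R1, M.T],
                  [M,  R2]])              # H = (1/2) x^T R x, cf. (5.1)
    return S, K, R
```

reversing this construction block by block is exactly what the synthesis theorem does: the diagonal blocks of r and the column blocks of k specify the one degree of freedom building blocks, while the off-diagonal blocks of r specify the direct interaction hamiltonians that remain to be engineered.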
here is the position operator of the cavity mode ( also called the _ amplitude quadrature _ of the mode ) and is the momentum operator of the cavity mode ( also called the _ phase quadrature _ of the mode ) .if there is a transmission mirror , say , m , then losses through this mirror are modeled as having a vacuum bosonic noise field incident at this mirror and interacting with the cavity mode via the idealized hamiltonian given in ( [ eq : ideal - int ] ) with , where is a positive constant called the mirror _ coupling coefficient_. when there are several leaky mirrors , then the losses are modeled by a sum of such interaction hamiltonians , one for each mirror and with each mirror having its own distinct vacuum bosonic field .the total hamiltonian of the cavity is then just the sum of and the interaction hamiltonians .more generally , the field incident at a transmitting mirror need not be a vacuum field , but can be other types of fields , such as a coherent laser field .nonetheless , the interaction of the cavity mode with such fields via the mirror will still be governed by ( [ eq : ideal - int ] ) with a coupling operator of the form . in order to amplify a quadrature of the cavity mode , for example , to counter losses in that quadrature caused by light escaping through a transmitting mirror, one can employ a nonlinear optical crystal and a classical pump beam in the configuration of a degenerate parametric amplifier ( dpa ) , following the treatment in .the pump beam acts as a source of additional quanta for amplification and , in the nonlinear crystal , an interaction takes place in which photons of the pump beam are annihilated to create photons of the cavity mode . in an optical cavity , such as a ring cavity shown in figure [ fig : dpa - cav ] , we place the crystal in one arm of the cavity ( for example , in the arm between mirrors m1 and m2 ) and shine the crystal with a strong coherent pump beam of ( angular ) frequency given by , where is some reference frequency . here the mirrors at the end the arms should be chosen such that they do not reflect light beams of frequency .a schematic representation of a dpa ( a nonlinear crystal with a classical pump ) is shown in figure [ fig : dpa ] . ] ] [ remark6.1]in the remaining figures , black rectangles will be used to denote mirrors which are fully reflecting at the cavity frequency and fully transmitting at the pump frequency ( whenever a pump beam is employed ) , while white rectangles denote partially transmitting mirrors at the cavity frequency .let be the cavity mode , and let the cavity frequency be detuned from and given by , where is the frequency detuning .the crystal facilitates an energy exchange interaction between the cavity mode and pump beam . by the assumption that the pump beam is intense and not depleted in this interaction, it may be assumed to be classical , in which case the crystal - pump - cavity interaction can be modeled using the ( time - varying ) hamiltonian ( * ? ? ?* equation 10.2.1 ) , where is a complex number representing the _ effective _ pump intensity . by transforming to a rotating frame with respect to ( i.e. 
, by application of the transformation ; see for a derivation of the equations of motion of the dpa in the rotating frame ) , can be reexpressed as and be written compactly as ( recall ) , where { \displaystyle}\frac{1}{2}(\epsilon+\epsilon^ * ) & { \displaystyle}\delta-\frac{i}{2}(\epsilon-\epsilon^ * ) \end{array}\right ] \label{eq : r - dpa}\end{aligned}\ ] ] and is a real number .since merely contributes a phase factor that has no effect on the overall dynamics of the system operators , it plays no essential role and can simply be ignored ( cf .section [ sec : models ] ) . note that transformation to a rotating frame effects the following : if is the evolution of under the original time - varying hamiltonian and we define ( i.e. , is in a frame rotating at frequency ) , then coincides with the time evolution of under the time - independent hamiltonian . in other words , in this rotating frame , the dpa can be viewed as a harmonic oscillator with quadratic hamiltonian . if two cavities are positioned in such a way that the beams circulating in them intersect one another , then these beams will merely pass through each other without interacting .one way of making the beams interact is to have their paths intersect inside a nonlinear optical crystal .typically , to facilitate such an interaction , one or two auxiliary pump beams are also employed as a source of quanta / energy . for instance , in a optical crystal in which the modes of two cavities interact with an undepleted classical pump beam as depicted in figure [ fig : two - mode ] , the interaction can be modeled by the hamiltonian where is a complex number representing the effective intensity of the pump beam and is the pump frequency . transforming to a rotating frame at half the pump frequency by applying the rotating frame transformation and , can be expressed in this new frame in the time - invariant form .this type of hamiltonian is called a _ two - mode squeezing hamiltonian _ , as it simultaneously affects squeezing in one quadrature of ( possibly rotated versions of ) and and will play an important role later on in the paper .a two - mode squeezer is schematically represented by the symbol shown in figure [ fig : schem - tms ] . ] ] [ rem : ref - frame]it will be implicitly assumed in this paper that the equations for the dynamics of generalized open operators are given with respect to a common rotating frame of frequency , including the transformation of all bosonic noises according to , and that classical pumps employed are all of frequency .this is a natural setting in quantum optics where a rotating frame is essential for obtaining linear time invariant qsde models for active devices that require an external source of quanta . in a control setting , this means both the quantum plant and the controller equations have been expressed in the same rotating frame .static linear optical devices implement static linear transformations ( meaning that the transformation can be represented by a complex square matrix ) of a set of independent incoming single mode fields , such as the field in a cavity , to an equal number of independent outgoing fields .the incoming fields satisfy the commutation relations =0 ] .the incoming fields may also be vacuum bosonic fields with outgoing bosonic fields ( that need no longer be in the vacuum state ) . 
in the latter ,the commutation relations are =0 ] .however , to avoid cumbersome and unnecessary repetitions , in the following we shall only discuss the operation of a static linear optical device in the context of single mode fields .the operation is completely analogous for bosonic incoming and outgoing fields and requires only making substitutions such as , , =0\rightarrow [ da_j(t),da_k(t)]=0 ] , etc .the operation of a static linear optical device can mathematically be expressed as &=&q \left[\begin{array}{l } a \\a^{\ # } \end{array}\right];\ ; q=\left[\begin{array}{cc } q_1 & q_2 \\q_2^{\ # } & q_1^{\ # } \end{array}\right],\end{aligned}\ ] ] where and is a quasi - unitary matrix satisfying ^{\dag}= \left[\begin{array}{cc } i & 0 \\ 0 & -i \end{array}\right].\ ] ]a consequence of the quasi - unitarity of is that it preserves the commutation relations among the fields , that is , to say that the output fields satisfy the same commutation relations as .another important property of a quasi - unitary matrix is that it has an inverse given by , where ] , and this inverse is again quasi - unitary , i.e. , the set of quasi - unitary matrices of the same dimension form a group . in the case where the submatrix of is ,the device does not mix creation and annihilation operators of the fields , and it necessarily follows that is a complex _ unitary _ matrix .such devices are said to be _static passive _ linear optical devices because they do not require any external source of quanta for their operation .it is well known that any passive network can be constructed using only beam splitters and mirrors ( e.g. , see references 24 in ) . in all other cases ,the devices are _static active_. specific passive and static devices that will be utilized in this paper will be discussed in the following .a phase shifter is a device that produces an outgoing field that is a phase shifted version of the incoming field .that is , if there is one input field , then the output field is for some real number , called the _ phase shift _ ; a phase shifter is schematically represented by the symbol shown in figure [ fig : phase - shifter ] . by definition ,a phase shifter is a static passive device .the transformation matrix of a phase shifter with a single input field is given by .\ ] ] radians.[fig : phase - shifter ] ] a beam splitter is a static and passive device that forms a linear combination of two input fields and to produce two output fields and such that energy is conserved : .the transformation affected by a beam splitter can be written as ,\ ] ] where is a unitary matrix given by \left[\begin{array}{cc } \cos(\theta)/2 & \sin(\theta)/2 \\ -\sin(\theta)/2 & \cos(\theta)/2\end{array}\right]\left[\begin{array}{cc } e^{i\phi/2 } & 0 \\ 0 & e^{-i\phi/2 } \end{array}\right].\ ] ] here are real numbers . is called the _ mixing angle _ of the beam splitter , and it is the most important parameter . 
and introduce a phase difference in the two incoming and outgoing modes , respectively , while introduces an overall phase shift in both modes .a particularly useful result on the operation of a beam splitter with is that it can be modeled by an effective hamiltonian given by ( see for details ) .this means that in this case we have the representation =\exp(ih_{bs}^0 ) \left[\begin{array}{l } a \\a^{\ # } \end{array}\right]\exp(-ih_{bs}^0),\ ] ] where .more generally , it follows from this , by considering phase shifted inputs and ( being an arbitrary real number ) , that a beam splitter with and will have the effective hamiltonian , with .this is the most general type of beam splitter that will be employed in the realization theory of this paper .a beam splitter with a hamiltonian of the form is represented schematically using the symbol in figure [ fig : bs ] .] let there be a single input mode .write as , where is the _ real _ or _ amplitude _ quadrature of and is the _ imaginary _ or _ phase _ quadrature of . _ squeezing _ of a field is an operation in which the variance of one quadrature , either or , is squeezed or attenuated ( it becomes less noisy ) at the expense of increasing the variance of the other quadrature ( it becomes noisier ) . a device that performs squeezing of a field is called aan ideal squeezer affects the transformation given by ,\ ] ] where and are real parameters . we shall refer to as the squeezing parameter and as the phase angle .for , the squeezer squeezes the amplitude quadrature of ( a phase shifted version of ) while if , it squeezes the phase quadrature and then shifts the phase of the squeezed field by . a squeezer with parameters schematically represented by the symbol shown in figure [ fig : squeezer ] . ]a squeezer can be implemented , for instance , by using a combination of a parametric amplifier and a beam splitter for single mode fields or as a dpa with a transmitting mirror for bosonic fields .it is easy to see that is given by =\left[\begin{array}{cc } \cosh(s ) & -e^{i \theta } \sinh(s ) \\-e^{-i \theta } \sinh(s ) & \cosh(s ) \end{array}\right].\ ] ] it is known that an arbitrary static linear optical network can be decomposed as a cascade of simpler networks .in particular , any quasi - unitary matrix can be constructively decomposed as : \exp\left [ \begin{array}{cc } 0 & d \\ d & 0 \end{array } \right ] \exp\left [ \begin{array}{cc } a_3 & 0 \\ 0 & a_3^{\ # } \end{array}\right]\\ & = & \left[\begin{array}{cc } \exp a_1 & 0 \\ 0 & \exp a_1^{\ # } \end{array } \right ] \left[\begin{array}{cc } \cosh d & \sinh d \\ \sinh d & \cosh d \end{array } \right ] \left [ \begin{array}{cc } \exp a_3 & 0 \\ 0 & \exp a_3^{\ # } \end{array}\right],\end{aligned}\ ] ] where and are skew symmetric complex matrices and is a real diagonal matrix .the first and third matrix exponential represent passive static networks that can be implemented by beam splitters and mirrors , while the second exponential represents an independent collection of squeezers ( with trivial phase angles ) each acting on a distinct field . in summary , in any static linear optical networkthe incident fields can be thought of as going through a sequence of three operations : they are initially mixed by a passive network , then they undergo squeezing , and finally they are subjected to another passive transformation . in the special case where the entire network is passive , the squeezing parameters ( i.e. , elements of the matrix ) are zero . 
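the squeezer matrix written out above can be checked numerically against the defining quasi-unitarity relation q j q^dag = j of the previous subsection (with j = diag(1, -1) for a single mode), which also yields the inverse q^{-1} = j q^dag j directly. a small sketch:

```python
import numpy as np

def squeezer_matrix(s, theta):
    """single-mode squeezer acting on (a, a^#), as written in the text."""
    c, sh = np.cosh(s), np.sinh(s)
    return np.array([[c, -np.exp(1j * theta) * sh],
                     [-np.exp(-1j * theta) * sh, c]])

J = np.diag([1.0, -1.0])   # diag(I, -I) for a single mode

Q = squeezer_matrix(0.7, 0.3)
assert np.allclose(Q @ J @ Q.conj().T, J)                  # quasi-unitarity
assert np.allclose(J @ Q.conj().T @ J, np.linalg.inv(Q))   # implied inverse formula
```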
for example , a squeezer with arbitrary phase angle can be constructed by sandwiching a squeezer with phase angle between a phase shifter at its input and a phase shifter at its output , respectively .this is shown in figure [ fig : squeezer - imp ] . ]one degree of freedom open oscillators are completely described by a real symmetric hamiltonian matrix and complex coupling matrix .thus one needs to be able to implement both and .here we shall propose the realization of one degree of freedom open quantum harmonic oscillators based around a ring cavity structure , such as shown in figure [ fig : ring - cav ] , using fully reflecting and partially reflecting mirrors and nonlinear optical elements appropriately placed between the mirrors .the matrix determines the quadratic hamiltonian and in a one - dimensional setup such a quadratic hamiltonian can be realized with a dpa as discussed in section [ sec : dpa ] . from ( [ eq : r - dpa ] ) , it is easily inspected that any real symmetric matrix can be realized by suitably choosing the complex effective pump intensity parameter and the cavity detuning parameter of the dpa . in fact , for any particular , the choice of parameters is _unique_. for example , to realize ,\ ] ] one solves the set of equations for to yield the unique solution and .now , we turn to consider realization of the coupling operator .let us write ^t ] and the ito rules for a squeezed field that can be generated from the vacuum ( the theoretical basis for these manipulations are discussed in appendix [ sec : app - b ] ) are \left[\begin{array}{cc } dz(t ) & dz(t)^ * \end{array}\right]=q\left[\begin{array}{cc } 0 & 1 \\ 0 & 0 \end{array}\right]q^tdt.\ ] ] can be implemented in one arm of a ring cavity with a fully reflecting mirror m and a partially transmitting mirror m with coupling coefficient , with incident on m. after the interaction , an output field is reflected by m given by however , the actual output that is of interest is the output when the oscillator interacts directly with the field . to recover from , notice that since is a quasi - unitary transformation , it has an inverse which is again quasiunitary .hence can be recovered from by exploiting the following relation that follows directly from the fact that : =q^{-1}\left[\begin{array}{c } z_{out}(t ) \\z_{out}(t)^*\end{array}\right].\ ] ] that is , is the output of a squeezer that implements the quasi - unitary transformation with as its input field .the complete implementation of this linear coupling is shown in figure [ fig : diss - coup-2 ] .with and . here , and the mirror m has coupling coefficient .[fig : diss - coup-2 ] ] the second necessary ingredient to synthesizing a general generalized open oscillator according to theorem [ th : synthesis ] is to be able to implement a direct interaction hamiltonian given by ( [ eq : multi - direct ] ) between one - dimensional harmonic oscillators .the only exception to this , where field - mediated interactions suffice , is in the fortuitous instance where and and , , are such that .the hamiltonian is essentially the sum of direct interaction hamiltonians between pairs of one - dimensional harmonic oscillators of the form ( ) with a real matrix . 
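returning to the first step above, the choice of dpa parameters that realizes a prescribed 2 x 2 hamiltonian matrix r is a one-line inversion once a parameterization of r in terms of the detuning and the effective pump intensity is fixed. the form used below, r = [[delta - im(eps), re(eps)], [re(eps), delta + im(eps)]], is reconstructed from the partially garbled expression earlier in the paper together with symmetry, so it should be read as an assumption rather than as the text's exact convention.

```python
import numpy as np

def dpa_parameters(R):
    """detuning delta and complex effective pump intensity eps of the dpa that
    realizes a real symmetric 2x2 hamiltonian matrix R, assuming
    R = [[delta - Im(eps), Re(eps)], [Re(eps), delta + Im(eps)]].
    three real parameters match the three independent entries of R, so the
    solution is unique, consistent with the uniqueness claim in the text."""
    assert np.allclose(R, R.T), "R must be symmetric"
    delta = 0.5 * (R[0, 0] + R[1, 1])
    eps = R[0, 1] + 0.5j * (R[1, 1] - R[0, 0])
    return delta, eps

delta, eps = dpa_parameters(np.array([[1.0, 0.5],
                                      [0.5, 2.0]]))   # arbitrary example target
```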
under the assumption that the time it takes for the light in a ring cavity to make a round trip is much faster than the time scales of all processes taking place in the ring cavity ( i.e., the cavity length should not be too long ) , it will be sufficient for us to only consider how to implement for any two pairs of one - dimensional harmonic oscillators and then implementing all of them simultaneously in a network .to this end , let and for , and rewrite as for some complex numbers and .the first part can be simply implemented by a beam splitter with a mixing angle , , , and ( see section [ sec : bs ] ) . on the other hand ,the second part can be implemented by having the two modes and interact in a suitable nonlinear crystal using a classical pump beam of frequency and effective pump intensity in a two - mode squeezing process as described in section [ sec : two - mode ] .the overall hamiltonian can be achieved by positioning the arms of the two ring cavities ( with canonical operators and ) to allow their circulating light beams to `` overlap '' at two points where a beam splitter and a nonlinear crystal are placed to implement and , respectively .an example of this is scheme is depicted in figure [ fig : cav - coup ] . between the modes and of two ring cavities.[fig : cav - coup ] ]consider a two degrees of freedom open oscillator coupled to a single external bosonic noise field given by , with , } ] , and } } x_1. ] be a vacuum bosonic field , where } ] , and define the squeezed bosonic field })$ ] with as defined above .then and its adjoint are related to and by now , consider an open oscillator whose dynamics are given by the h - p qsde : where is the quadratic hamiltonian of the oscillator and is the linear coupling operator to . by using ( [ eq : app-1 ] ) and substituting this into the above qsde , we may rewrite it in terms of the and as follows : & & \hspace*{10pt}-\;\frac{1}{2}(nmm^*+(n+1)m^*m - c^*m^2-cm^*m)dt\bigg)u(t),\nonumber\end{aligned}\ ] ] where is a new linear coupling operator given by as shown in , ( [ eq : app-2 ] ) can be interpreted on its own as the unitary evolution of a harmonic oscillator and a squeezed bosonic field linearly coupled via the coupling operator , and this defines a quantum markov process on the oscillator algebra ( by projecting to the oscillator algebra ; see ) . in this interpretation of ( 2 ) , the squeezed bosonic fields and satisfy the squeezed ito multiplication rules given by that forms a basis for a quantum stochastic calculus for squeezed bosonic fields .a formal interpretation of this is that ( [ eq : app-2 ] ) defines the evolution of a system coupled to via the formal interaction hamiltonian ( see ) : where is a squeezed quantum white noise that can be formally written as and are bounded operators on the oscillator hilbert space . herewe do not concern ourselves too much with such detail and assume the optimistic view that these results can be extended to unbounded coupling operators , which are linear combinations of the canonical operators of the harmonic oscillator , in view of the fact that the left form ( cf .appendix a ) of ( [ eq : app-4 ] ) , from which the left form of ( [ eq : app-2 ] ) can be recovered , still makes sense for a quadratic and the unbounded operator associated with ( i.e. 
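returning to the direct-interaction construction described earlier in this section: the split of a bilinear coupling x_j^t m x_k into a photon-exchange (beam splitter) part and a pair-creation (two-mode squeezer) part can be read off numerically by rewriting the quadratures in terms of mode operators. the sketch assumes the convention a = (q + i p)/2 with [q, p] = 2i, i.e. q = a + a^dag and p = -i(a - a^dag); a different normalization would only rescale the coefficients.

```python
import numpy as np

# (q, p)^T = LAMBDA (a, a^dag)^T under the assumed convention a = (q + i p)/2
LAMBDA = np.array([[1.0, 1.0],
                   [-1.0j, 1.0j]])

def interaction_coefficients(M):
    """for a direct interaction H = x_j^T M x_k with M a real 2x2 matrix,
    return C such that
      H = C[0,0] a_j a_k + C[0,1] a_j a_k^dag + C[1,0] a_j^dag a_k + C[1,1] a_j^dag a_k^dag.
    the C[0,1]/C[1,0] part is implemented by a beam splitter and the
    C[0,0]/C[1,1] part by a pumped nonlinear crystal (two-mode squeezer)."""
    C = LAMBDA.T @ M @ LAMBDA
    # hermiticity of H forces the two pairs of coefficients to be conjugates
    assert np.isclose(C[0, 1], np.conj(C[1, 0]))
    assert np.isclose(C[0, 0], np.conj(C[1, 1]))
    return C

C = interaction_coefficients(np.array([[0.2, 0.1],
                                       [0.4, -0.3]]))   # arbitrary example coupling
```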
, ) .moreover , singular interaction hamiltonians of the form ( [ eq : app-3 ] ) between the unbounded canonical operators of a harmonic oscillator and a vacuum or squeezed quantum white noise are physically well motivated and widely used in the physics community .see , e.g. , ( * ? ? ? * chapters 5 and 10 ) and related references from . ]the connection with the discussion in section [ sec : one - degree - syn ] is made by identifying the field introduced therein with , and with .
the purpose of this paper is to develop a synthesis theory for linear dynamical quantum stochastic systems that are encountered in linear quantum optics and in phenomenological models of linear quantum circuits . in particular , such a theory will enable the systematic realization of coherent / fully quantum linear stochastic controllers for quantum control , amongst other potential applications . we show how general linear dynamical quantum stochastic systems can be constructed by assembling an appropriate interconnection of one degree of freedom open quantum harmonic oscillators and , in the quantum optics setting , discuss how such a network of oscillators can be approximately synthesized or implemented in a systematic way from some linear and nonlinear quantum optical elements . an example is also provided to illustrate the theory . quantum networks , quantum network synthesis , quantum control , linear quantum stochastic systems , linear quantum circuit theory 93b50 , 93b10 , 93e03 , 94c99 , 81v80 10.1137/080728652
using an automatic feynman graph calculation package , we can generate the information of all feynman graphs for given processes .sometimes it is necessary to select graphs from the set of all graphs by some conditions .however , it is not so easy to select them correctly by hand when a huge number of graphs is involved , such as higher order corrections or susy processes .a program grcsel selects out a subset of feynman graphs from the set of graphs , generated by grace , according to given selection conditions .the output information of selected graphs is written in the same format as that of the original set .this enables us to generate feynman amplitudes within grace in the same procedure as for all graphs .so we can perform cross section calculation , gauge invariance check , event generation and so on for the selected ones .grcsel helps us : 1 . to find decay - graphs and evaluate signal / background ratio , 2 . to check the accuracy of approximated calculation , 3 . to confirm precision of the calculation , 4 . to reduce the calculation time , 5 . to develop kinematics routinegrcsel consists of three parts : a steering - part defines basic functions of graph selections and reads input files , an interpreter - part parses and evaluates commands , and a utility - part handles subsets of particles , vertices and graphs .once a physics process and the order of calculation are fixed , feynman graphs are generated by grc program with specified feynman rules described in physics model file .the information on graphs generated are stored in a file named out.grf ( we call the format of this file _ ` .grf ` format _ ) .grcsel reads the physics model file and out.grf and selects graphs according to a kind of propagator , characteristics of graph topology , a type of vertex or a graph number .grcsel outputs those selected graphs in the same format as out.grf . successivelythis output file can be used as the input to source code generation for monte carlo integration or event generation .we can also use grcsel again reading output of previous execution of grcsel .the schematic view of how grcsel works in grace system is shown in fig .[ fig : grcsel ] .graph selection starts by the program grcsel : grcsel this program requires out.grf file by default .the graph selection commands are read through standard input , which may be given interactively or by a script file . with a script file where grcsel commands are prepared, we can redirect that file : grcsel < command.in to use another input .grf format file , e.g. out1.grf , instead of out.grf , we can add the filename after grcsel command as : grcsel out1.grf< command.in in a script file there are a series of grcsel commands such as declaration of variables and basic functions to specify the selection conditions or operators .grcsel has 14 basic functions in total .three of them return a subset of graphs in accordance with specified selection condition and two functions output set of graphs .they are summarized in table [ tab : func ] ..basic functions to select and output graphs . [ cols="<,<",options="header " , ]in the following example , graphs with propagator connected to the initial electron and final are selected among graphs of process .selected graphs are output into a file named out1.grf . 
....
% e+ e- --> w+ w- photon
%
% out1.grf : with neutrino propagator
% at the vertex of initial electron and
% final w-.
%
gset gs0, gs1;
gs0 = ~[];                              % all graphs
gs1 = cutprop(gs0, ["nu-e"], [0,3]);
outgset("out1.grf", gs1);
quit;
....

in fig . [ fig : wwa - select ] , the selected graphs are shown . grcsel has been developed in the framework of grace 2.1.7.4 . it can handle tree and 1-loop graphs and it supports the standard and mssm physics models . grcsel is included in the distribution kit of grace 2.1.7.4 . we wish to thank the members of the minami - tateya collaboration for continuous discussions and many kinds of support . we express our sincere gratitude to prof . y.shimizu for his valuable suggestions and continuous encouragement . the authors also appreciate the encouragement of prof . y.watase . this work was supported in part by grants - in - aid ( no . 12680363 , 10640285 , 10680366 and 11440083 ) of monbu - sho , japan .
we present a feynman graph selection tool grcsel , which is an interpreter written in c language . in the framework of grace , it enables us to get a subset of feynman graphs according to given conditions .
software reliability engineering is an established area of software engineering research and practice that is concerned with the improvement and measurement of reliability . for the analysis typically stochasticsoftware reliability models are used .they model the failure process of the software and use other software metrics or failure data as a basis for parameter estimation .the models are able ( 1 ) to estimate the current reliability and ( 2 ) to predict future failure behaviour .there are already several established models .the most important ones has been classified by miller as exponential order statistic ( eos ) models in .he divided the models on the highest level into deterministic and doubly stochastic eos models arguing that the failure rates either have a deterministic relationship or are again randomly distributed .for the deterministic models , miller presented several interesting special cases .the well - known jelinski - moranda model , for example , has _constant rates_. he also stated that _ geometric rates _ are possible as documented by nagel . this geometric sequence ( or progression ) between failure rates of faultswas also observed in projects of the communication networks department of the siemens ag . in several older projectswhich were analysed , this relationship fitted well to the data .therefore , a software reliability model based on a geometric sequence of failure rates is proposed . [[ problem . ] ] problem . + + + + + + + + the problem which software reliability engineering still faces is the need for accurate models for different environments and projects. detailed models with a geometric sequence of failure rates have to our knowledge not been proposed so far .[ [ contribution . ] ] contribution .+ + + + + + + + + + + + + we describe a detailed and practical software reliability model that was motivated out of practical experience and contains a geometric sequence of failure rates which was also suggested by theoretical results .a detailed comparison shows that this model has a constantly good performance over several projects , although other models perform better in specific projects .hence , we validated the general assumption that a geometric sequence of failure rates is a reasonable model for software .[ [ outline . ] ] outline .+ + + + + + + + we first describe important aspects of the model in sec .[ sec : description ] . 
in sec .[ sec : evaluation ] the model is evaluated using several defined criteria , most importantly its predictive validity in comparison with established models .we offer final conclusions in sec .[ sec : conclusions ] .related work is cited where appropriate .the core of the proposed model is a geometric sequence for the failure rates of the faults .this section describes this and other assumptions in more detail , introduces the main equations and the time component of the model and gives an example of how the parameters of the model can be estimated .the main theory behind this model is the ordering of the faults that are present in the software based on their failure rates .the term failure rate describes in this context the probability that an existing fault will result in an erroneous behaviour of the system during a defined time slot or while executing an average operation .in essence , we assign each fault a time - dependent probability of failure and combine those probabilities to the total failure intensity .the ordering implies that the fault with the highest probability of triggering a failure comes first , then the fault with the second highest probability and so on .the probabilities are then arranged on a logarithmic scale to attain an uniform distribution of the points on the -axis . the underlying assumption being that there are numerous faults with low failure rates and only a small number of faults with high failure rates . in principle, we assume an infinite number of faults because of imperfect debugging and updates .as mentioned above , the logarithmic scale distributes the data points in approximately the same distance from each other .therefore , this distance is approximated by a constant factor between the probabilities .then we can use the following geometric sequence ( or progression ) for the calculation of the failure rates : where is the failure rate of the -th fault , the failure rate of the first fault , and is a project - specific parameter .it is assumed that is an indicator for the complexity of a system that may be related to the number of different branches in a program . in past projects of siemens calculated to be between and .the parameter is multiplied and not added because the distance is only constant on a logarithmic scale . the failure occurrence of a fault is assumed to be geometrically distributed .therefore , the probability that a specific fault occurred by time is the following : we denote with the random variable of the failure time of the fault . in summary, the model can be described as the sum of an infinite number of geometrically distributed random variables with different parameters which in turn are described by a geometric sequence .the two equations that are typically used to describe a software reliability model are the mean number of failures and the failure intensity .the mean value function needs to consider the expected value over the indicator functions of the faults : }(x_a ) } \right ) \\ & = \sum_{a=1}^\infty{e(i_{[0,t]}(x_a))}\\ & = \sum_{a=1}^\infty{p(x_a \leq t)}\\ & = \sum_{a=1}^\infty{1 - ( 1 - p_a)^t}. 
\end{array } \label{eq : fischer_mean_new}\ ] ] this gives us a typical distribution as depicted in fig .[ fig : typical_curve ] .note that the distribution is actually discrete which is not explicitly shown because of the high values used on the -axis .we can not differentiate the mean value equation directly to get the failure intensity .however , we can use the probability density function ( pdf ) of the geometric distribution to derive this equation .the pdf of a single fault is therefore , to get the number of failures that occur at a certain point in time , we have to sum up the pdf s of all the faults : an interesting quantity is typically the time that is needed to reach a certain reliability level .based on the failure intensity objective that is anticipated for the release , this can be derived using the equation for the failure intensity .rearranging eq .[ eq : failure_intensity ] gives : what we need , however , is the further required time to determine the necessary length of the test or field trial .we denote the failure intensity objective and use the following equation to determine : finally , the result needs to be converted into calendar time to be able to give a date for the end of the test or field trial . in the proposed model timeis measured in incidents , each representing a usage task of the system . to convert these incidents into calendar time it is necessary to introduce an explicit time component .this contains explicit means to convert from one time format into another .there are several possibilities to handle time in reliability models .the preferable is to use execution time directly .this , however , is often not possible .subsequently , a suitable substitute must be found . with respect to testingthis could be the number of test cases , for the field use the number of clients and so forth .[ fig : times ] shows the relationships between different possible time types .the first possibility is to use in - service time as a substitute .this requires knowledge of the number of users and the average usage time per user .then the question arises how this relates to the test cases in system testing .a first approximation is the average duration of a test case .the number of incidents is , opposed to the in - service time , a more task - oriented way to measure time .the main advantage of using incidents , apart from the fact that they are already in use at siemens , is that in this way , we can obtain very intuitive metrics , e.g. , the average number of failures per incident .there are usually some estimations of the number of incidents per client and data about the number of sold client licenses .however , the question of the relation to test cases is also open . a first cut would be to assume a test case is equal to an incident . a test case , however , has more `` time value '' than one incident because it is generally directed testing , i.e. 
, cases with a high probability of failure are preferred .in addition , a test case is usually unique in function or parameter set while the normal use of a product often consists of similar actions .when we do not follow the operational profile this should be accounted for .a possible extension of the model is proposed in but needs further investigation .there are two techniques for parameter determination currently in use .the first is prediction based on data from similar projects .this is useful for planing purposes before failure data is available .however , estimations should also be made during test , field trial , and operation based on the sample data available so far .this is the approach most reliability models use and it is also statistically most advisable since the sample data comes from the population we actually want to analyse . techniques such as maximum likelihood estimation or least squares estimation are used to fit the model to the actual data .[ [ maximum - likelihood . ] ] maximum likelihood .+ + + + + + + + + + + + + + + + + + + the maximum likelihood method essentially uses a likelihood function that describes the probability of a certain number of failures occurring up to a certain time .this function is filled with sample data and then optimised to find the parameters with the maximum likelihood .the problem with this is that the likelihood function of this model gets extremely complicated .essentially , we have an infinite number of random variables that are geometrically distributed , but all with different parameter . even if we constrain ourselves to a high number of variables under consideration it still results in a sum of different products .this requires to sum up every possible permutation in which failures have occurred up to time .the number of possibilities is .each summand is a product of a permutation in which different faults resulted in failures . where .an efficient method to maximise this function has not been found .[ [ least - squares . ] ] least squares .+ + + + + + + + + + + + + + for the least squares method an estimate of the failure intensity is used and the relative error to the estimated failure intensity from the model is minimised .we use the estimate of the mean number of failures for this because it is the original part of the model .therefore , the square function to be minimised in our case can be written as follows : ^ 2},\ ] ] where is the number of measurement points , is the measured value for the cumulated failures , and is the time at measurement . this function is minimised using the simplex variant of nelder and mead .we found this method to be usable for our purpose .we describe several criteria that are used to assess the proposed model .the criteria that we use for the evaluation of the fischer - wagner model are derived from musa et al .we assess according to five criteria , four of which can mainly be applied theoretically , whereas one criterion is based on practical applications of the models on real data .the first criterion is the _ capability _ of the model .it describes whether the model is able to yield important quantities .the criterion _ quality of assumptions _is used to assess the plausibility of the assumptions behind the model .the cases in which the model can be used are evaluated with the criterion _applicability_. 
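before turning to the remaining criteria , the model equations and the least - squares estimation described above can be summarised in a short sketch . it assumes the failure rates follow p_a = p1 * theta^(a-1) and truncates the infinite sum at a finite number of faults , which is the practical upper bound discussed later in the paper ; the parameter names p1 and theta are our own labels , and the synthetic data at the end is only an illustration , not project data .

import numpy as np
from scipy.optimize import minimize

def mean_failures(t, p1, theta, n_faults=500):
    # mu(t) = sum_a [ 1 - (1 - p_a)^t ] with p_a = p1 * theta**(a-1),
    # truncated at n_faults terms
    p = p1 * theta ** np.arange(n_faults)
    return np.sum(1.0 - (1.0 - p[None, :]) ** np.asarray(t, float)[:, None], axis=1)

def fit(times, cum_failures):
    # least-squares fit of (p1, theta): minimise the summed squared relative
    # error with the nelder-mead simplex, as described above
    def unpack(x):                       # keep both parameters inside (0, 1)
        return 1.0 / (1.0 + np.exp(-np.asarray(x)))
    def objective(x):
        p1, theta = unpack(x)
        mu = mean_failures(times, p1, theta)
        return np.sum(((mu - cum_failures) / cum_failures) ** 2)
    res = minimize(objective, x0=[-2.0, 2.0], method="Nelder-Mead")
    return tuple(unpack(res.x))

# illustrative use on synthetic data generated from the model itself
times = np.arange(1, 200)
data = mean_failures(times, p1=0.05, theta=0.97)
print(fit(times, data))                  # should come out close to (0.05, 0.97)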
furthermore , _simplicity _ is an important aspect for the understandability of the model .finally , the _ predictive validity _ is assessed by applying the model to real failure data and comparing the deviation .the main purpose of a reliability model is to aid managers and engineers in planning and managing software projects by estimating useful quantities about the software reliability and the reliability growth .following such quantities , in approximate order of importance , are 1 .current reliability , 2 . expected date of reaching a specified reliability , 3 .human and computer resource and cost requirements related to the achievement of the objective .furthermore , it is a valuable part of a reliability model if it can predict quantities early in the development based on software metrics and/or historical project data .the model yields the current reliability as current failure intensity and mean number of failures .it is also able to give predictions based on parameters from historical data .furthermore , the expected date of reaching a specified reliability can be calculated .human and computer resources are not explicitly incorporated .there is an explicit concept of time but , it is not as sophisticated as , for example , in the musa - okumoto model . as far as possible, each assumption should be tested by real data .at least it should be possible to argue for the plausibility of the assumption based on theoretical knowledge and experience .also the clarity and explicitness of the assumptions are important .the main assumption in the proposed model is that the failure rates of the faults follow a geometric sequence .the intuition is that there are many faults with low failure rates and only a small number of faults with high failure rates .this is in accordance with software engineering experience and supported by .moreover , the geometric sequence as relationship between different faults has been documented in a nasa study .furthermore , an assumption is that the occurrence of a failure is geometrically distributed .the geometric distribution fits because it can describe independent events .we do not consider continuous time but discrete incidents .finally , the infinite number of faults makes sense when considering imperfect debugging , i.e. , fault removal can introduce new faults or the old faults are not completely removed .it is important for a general reliability model to be applicable to software products in different domains and of different size .also varying project environments or life cycle phases should be feasible. there are four special situations identified in that should be possible to handle . 1 .software evolution 2 .classification of severity of failures into different categories 3 .ability to handle incomplete failure data with measurement uncertainties 4 .operation of the same program on computers of different performance all real applications of the proposed model have been in the telecommunications area .however , it was used for software of various sizes and complexities . moreover , during the evaluation of the predictive validity we applied it also to other domains ( see sec .[ sec : validity ] ) . 
in principle , the model can be used before and during the field trial .software evolution is hence not explicitly incorporated .a classification of failures is possible but has not been used so far .moreover , the performance of computers is not a strong issue in this domain .a model should be simple enough to be usable in real project environments : it has to be simple to collect the necessary data , easy to understand the concepts and assumptions , and the model should be implementable in a tool . while the concepts themselves are not difficult to understand , the model in total is rather complicated because it not only involves failures but also faults .furthermore , for all these faults the failure is geometrically distributed but each with a different probability .a main criticism is also that the assumed infinite number of faults make the model difficult to handle . in practical applications of the model and when building a tool , an upper bound of the number of faults must be introduced to be able to calculate model values .this actually introduces a third model parameter in some sense .the two parameters , however , can be interpreted as direct measures of the software .the parameter is the failure probability of the most probable fault and can be seen as a measure of system complexity .the most important and `` hardest '' criterion for the evaluation of a reliability model is its predictive validity .a model has to be a faithful abstraction of the real failure process of the software and give valid estimations and predictions of the reliability .for this we follow again and use the _ number of failures approach_. we assume that there have been failures observed at the end of test time ( or field trial time ) .we use the failure data up to to estimate the parameters of the mean number of failures .the substitution of the estimates of the parameters yields the estimate of the number of failures .the estimate is compared with the actual number at .this procedure is repeated with several . for a comparison we can plot the relative error against the normalised test time .the error will approach as approaches .if the points are positive , the model tends to overestimate and accordingly underestimate if the points are negative .numbers closer to imply a more accurate prediction and , hence , a better model . as comparison models we apply fourwell - known models : musa basic , musa - okumoto , littlewood - verall , and nhpp .all these models are implemented in the tool smerfs that was used to calculate the necessary predictions .we describe each model in more detail in the following .[ [ musa - basic . ] ] musa basic .+ + + + + + + + + + + the musa basic execution time model assumes that all faults are equally likely to occur , are independent of each other and are actually observed .the execution times between failures are modelled as piecewise exponentially distributed .the intensity function is proportional to the number of faults remaining in the program and the fault correction rate is proportional to the failure occurrence rate . [[ musa - okumoto . 
] ] musa - okumoto .+ + + + + + + + + + + + + the musa - okumoto model , also called logarithmic poisson execution time model , was first described in .it also assumes that all faults are equally likely to occur and are independent of each other .the expected number of faults is a logarithmic function of time in this model , and the failure intensity decreases exponentially with the expected failures experienced .finally , the software will experience an infinite number of failures in infinite time .[ [ littlewood - verall - bayesian . ] ] littlewood - verall bayesian .+ + + + + + + + + + + + + + + + + + + + + + + + + + + this model was proposed for the first time in .the assumptions of the littlewood - verall bayesian model are that successive times between failures are independent random variables each having an exponential distribution .the distribution for the -th failure has a mean of .the form a sequence of independent variables , each having a gamma distribution with the parameters and . has either the form : ( linear ) or ( quadratic ) .we used the quadratic version of the model .[ [ nhpp . ] ] nhpp .+ + + + + various models based on a non - homogeneous poisson process are described in .the particular model used also assumes that all faults are equally likely to occur and are independent of each other .the cumulative number of faults detected at any time follows a poisson distribution with mean .that mean is such that the expected number of faults in any small time interval about is proportional to the number of undetected faults at time .the mean is assumed to be a bounded non - decreasing function with approaching the expected total number of faults to be detected as the length of testing goes to infinity .it is possible to use nhpp on time - between - failure data as well as failure counts .we used the time - between - failure version in our evaluation .we apply the reliability models to several different sets of data to compare the predictive validity .the detailed results for all of these projects can be found in .we describe only the combined results in the following .the used data sets come ( 1 ) from the _ the data & analysis center for software ( dacs ) _ of the us - american department of defence and ( 2 ) from the telecommunication department of siemens .the dacs data has already been used in several evaluations of software reliability models .hence , this ensures the comparability of our results . in particular , we used the projects 1 , 6 , and 40 and their failure data from system tests measured in execution time .the siemens data gives additional insights and analysis of the applicability of the model to these kind of projects .we mainly analyse two data sets containing the failure data from the field trial of telecommunication software and a web application .the siemens data contains no execution time but calendar time can be used as approximation because of constant usage during field trial .all these projects come from different domains with various sizes and requirements to ensure a representative evaluation .the usage of the number of failures approach for each project resulted in different curves for the predictive validity over time . for a better general comparison we combined the data into one plot which can be found in fig .[ fig : total ] .this combination is straight - forward as we only considered relative time and relative errors . 
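for reference , the number - of - failures procedure used to produce these curves can be written down generically ; the sketch below re - uses the fit and mean - value functions from the earlier sketch ( any other model's estimation and prediction functions could be plugged in instead ) and returns the relative errors that are plotted against the normalised test time .

import numpy as np

def relative_prediction_errors(times, cum_failures, fit_fn, predict_fn, fractions):
    # fit the model on the data observed up to t_e = fraction * t_q and compare
    # the predicted with the actual number of failures at the end time t_q
    t_q, q = times[-1], cum_failures[-1]
    errors = []
    for f in fractions:
        mask = times <= f * t_q
        params = fit_fn(times[mask], cum_failures[mask])
        predicted = predict_fn(np.array([t_q]), *params)[0]
        errors.append((predicted - q) / q)   # > 0 overestimates, < 0 underestimates
    return np.array(errors)

# e.g. relative_prediction_errors(times, data, fit, mean_failures, np.linspace(0.3, 0.9, 7))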
to avoidthat strongly positive and strongly negative values combined give very small errors we use medians instead of average values .the plot shows that with regard to the analysed projects the littlewood - verall model gives very accurate predictions , also the nhpp and the proposed model are strong from early on .however , for an accurate interpretation we have to note that the data of the littlewood - verall model for one of the siemens projects was not incorporated into this comparison because its predictions were far off with a relative error of about 6 .therefore , the model has an extremely good predictive validity if it gives reasonable results but unacceptable predictions for some projects .a similar argument can be made for the nhpp model which made the weakest predictions for one of the dacs projects .the proposed model can not reach the validity of these models for particular projects , but has a more constant performance over all projects .this is important because it is difficult to determine which of the models gives accurate predictions in the early stages of application since there is only a small amount of data .using the littlewood - verall or nhpp model could lead to extremely bad predictions in some cases .we conclude with a summary of our investigations and give some directions for future work .we propose a software reliability model that is based on a geometric series of the failure rates of faults .this basis is suggested from the theory by miller in as well as from practice in nagel et al . in and siemens projects .the model has a state - of - the - art parameter determination approach and a corresponding prototype implementation of it .several data sets from dacs and siemens are used to evaluate the predictive validity of the model in comparison to well - established models .we find that the proposed model often has a similar predictive validity as the comparison models and outperforms most of them. however , there is always one of the models that performs better than ours .nevertheless , we are able to validate the assumption that a geometric sequence of failure rates of faults is a reasonable model for software reliability .the early estimation of the model parameters is always a problem in reliability modelling .therefore , we plan to evaluate the correlation with other system parameters .for example the parameter of the model is supposed to represent the complexity of the system .therefore , one or more complexity metrics of the software code could be used for early prediction .this needs extensive empirical analysis but could improve the estimation in the early phases significantly .furthermore , a time component that also takes uncertainty into account would be most accurate .the musa basic and musa - okumoto models were given such components ( see ) .they model the usage as a random process and give estimates about the corresponding calendar time to an execution time .further applications with other data sets and comparison with other types of prediction techniques , such as neural networks , are necessary to evaluate the general applicability and predictive validity of the proposed model .finally , we plan to use the model in an economics models for software quality and work further on a possibility to estimate the test efficiency using the proposed model .some early ideas are presented in .
software reliability models are an important tool in quality management and release planning . there is a large number of different models that often exhibit strengths in different areas . this paper proposes a model that is based on a geometric sequence ( or progression ) of the failure rates of faults . this property of the failure process was observed in practice at siemens among others and led to the development of the proposed model . it is described in detail and evaluated using standard criteria . most importantly , the model performs constantly well over several projects in terms of its predictive validity .
self - organized criticality is a paradigm of complex system . in their seminal work ,bak , tang and wiesenfeld ( 1987 ) introduced the idea of self - organized criticality ( soc ) using a computer cellular automaton as a sandpile experiment .their system assembled itself in a critical state . when the system relaxed ( recovering the stationary state ) it showed spatial and temporal self - similarities .systems exhibiting soc dynamics are open dissipative systems , involving two time scales : a slow energy income and a quick relaxation .empirical examples that have been linked to soc dynamics are earthquakes , solar flares , neuronal activity , or sand piles among others . in order to determine the physical properties of these dynamics , different modelshave been proposed .the archetypal model of soc is the _ sandpile _ model which mimics the process of adding sand grains one by one over a sand pile .the mechanical instability is simulated by a threshold height ( or height difference relative to its neighbors ) .this process allows to develop _ avalanches _ with event size distribution similar to those of the sand pile experiments . in order to model earthquakes soc dynamics olami _( 1992 ) introduced a non - conservative soc model ( ofc model ) based on 2d spring - block system connected to a rigid driving plate .their cellular - automaton displayed similar statistics and gave a good prediction of the gutenberg - richter law .( 2007 ) analyzed the ofc model in regular lattice and small world network .they reported a well - defined power - law distribution of the avalanche size and characterized the presence of criticality by the pdf of the differences between avalanche sizes at different times ( and ) .the study of complex systems employing network science framework has attracted much interest in many interacting - elements systems .several models for studying soc on complex networks have been proposed . in these modelscriticality is produced by a `` fitness '' parameter defined on the nodes or by a rewiring process .we present a simple network model that mimics the instability condition of the sandpile models imposing a local stability condition associated to an average property of its neighborhood . this neighborhood assortativity produces soc dynamics driven by the network s topology , namely , a node will become unstable when its degree is greater than a threshold condition ( like in sandpile models ) .this generalization of sandpile models can describe many interacting - elements system in which the maximum value of the node s property depends on its neighborhood s values .as far as we know this is the only soc model that is driven exclusively by the network topology ( neither rewiring requirements nor non - topologycal properties associated to the nodes have been used ) .the model definition and its main features are explained in section [ sec : model ] .numerical simulations are reported in section [ sec : results ] . in the first part of section [ sec : results ], we characterize the neighborhood topology conducted by the stability condition ( introducing the _ neighborhood assortativity _ ) . in the second part , we show a complete characterizations of soc dynamics and we compare our results with the classical ofc model . 
in section [ sec : statistics ] we develop the algorithms for the probability distribution by means of the markov chains for a special case where the network is restricted to a linear chain , and compare the results with numerical simulations .conclusions are summarized in section [ sec : conclusion ] .starting from a single node the network grows by adding a new node ( with a single link ) at each time step , nonetheless a topological stability condition constrains the growth . after a new addition the system can result in an unstable configuration .this unstable state leads to a relaxation process which , eventually , can end with the removal of nodes .the interplay between dynamics and topology drives the system . in this modelthe stability of a node depends on the `` support '' of its neighbors as follows : the node s degree must be less than or equal to the average degree of its neighbors plus a global constant ( hereafter _ buffering capacity _ , ) .therefore the stability condition can be written as follows : where is the -node s degree , and is the set of nearest neighbors of node .this inequation may be rewritten as an equation introducing a new local parameter : where is a grade of stability , i.e. , the higher the more stable the node is .we will identify the term as the node s _activity_. note that is negative for unstable nodes .when a node becomes unstable , one of its links is randomly removed and the smallest subnet is deleted . since the degree has changed , the stability conditions of the node and its neighbors have to be checked again in an iterative process until every node in the network is stable .the set of removals nodes performed until every node in the network is stable represents an _avalanche_. the _ size _ of the avalanche can be defined as the total number of nodes removed from the network . starting from a single node or a small networkthese dynamics make the system evolve towards a finite network whose average size , in the stationary regime , depends on the _ buffering capacity _ constant .note that this model generates _ tree - like _ networks since one link is added at each time step and there is no rewiring between nodes , thus there are no cycles in the network . moreover , starting from a full - connected network as initial seed after a long time period every cycle will break up ( for any finite ) .therefore , the average number of links is always less than .these tree - type networks can be found in the tree topology of physical connections on a lan ( hybrid bus and star topologies ) , where a switch can only distribute through a limited number of connections , and also in branching processes such as fractal trees .the node s stability condition is related to the average degree of its neighbors .thus , it will produce networks of positive _ neighborhood assortativity _ , i.e. , the node s degree tends to be similar to the average degree of its neighborhood , as it is shown in the next section .this condition implies a maximum degree in the network .this maximum degree depends on the constant .starting from a single seed , the maximum degree of the network can be increased when the new node is added to a node with the highest degree but connected to neighbors with the same maximum degree . 
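a minimal simulation sketch of the growth and pruning rules just described is given below . the stability test and the avalanche bookkeeping follow the text directly ; what the text leaves open , and what is therefore assumed here , is that the new node attaches to a uniformly chosen existing node , that an unstable node loses one of its links with equal probability , and that a tie between the two fragments is broken arbitrarily .

import random
from collections import deque

def component(adj, start):
    # nodes reachable from start (the network is a tree, so a bfs is cheap)
    seen, todo = {start}, deque([start])
    while todo:
        u = todo.popleft()
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                todo.append(v)
    return seen

def unstable(adj, i, C):
    # stability condition: k_i <= average degree of the neighbours of i, plus C
    if not adj[i]:
        return False
    mean_nn = sum(len(adj[j]) for j in adj[i]) / len(adj[i])
    return len(adj[i]) > mean_nn + C

def step(adj, C, new_id):
    # one growth step: attach a new node, then relax; returns the avalanche size
    target = random.choice(list(adj))
    adj[new_id] = {target}
    adj[target].add(new_id)
    removed = 0
    while True:
        bad = [i for i in adj if unstable(adj, i, C)]
        if not bad:
            return removed
        i = random.choice(bad)
        j = random.choice(list(adj[i]))          # cut one of its links at random
        adj[i].discard(j)
        adj[j].discard(i)
        side_i, side_j = component(adj, i), component(adj, j)
        loser = side_i if len(side_i) <= len(side_j) else side_j   # smaller subnet goes
        for u in loser:
            for v in adj[u]:
                adj[v].discard(u)
            del adj[u]
        removed += len(loser)

adj, C = {0: set()}, 2.5
avalanches, sizes = [], []
for t in range(1, 5001):
    avalanches.append(step(adj, C, t))
    sizes.append(len(adj))

the lists avalanches and sizes collect the event sizes and the time evolution of the network size , from which the distributions discussed below can be histogrammed .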
in this casethe new stability condition can be written as : the minimum value of satisfying will be : for example , for , ; for , ; and for , .dynamically , this model shows four behaviors depending on the global parameter : i ) , whose solution is the trivial _ duple _ since adding a new node is always unstable ; ii ) , generates only linear chains , i.e. , the possible stable configurations require ( this fact will allow us to study statistically this case in section [ sec : statistics ] ) ; iii ) , produces networks with the value of restricted by eq .[ eq : kmax ] ; however , the average size of these networks is limited to a value dependent on ( as it can be seen in section [ sec : results ] , figure [ fig2:time_evol_n ] ) ; iv ) produces networks without any limit in the maximum degree ( but also with the stationary average size limited to a value dependent on ) .the limit case allows adding a new node to any node in the network ; all configurations are stable , thus there wo nt be any pruning events and the network will grow without any limit .we performed numerical simulations starting from a single node and checked that by starting from a different number of nodes , only the statistics at initial time steps change while at the stationary state they stay the same .a snapshot of a network with at time is depicted in fig .[ fig1:network_c2p5 ] . , at time .node s degree is size - coded , from to .,width=453 ] the stability condition in the model implies a new type of assortative mixing in which the nodes s tendency to link does not depend on its nearest neighbors property but on the neighborhood s average property . in order to characterize the _ neighborhood s assortativity _ we have assigned a new property to the node : the neighborhood s average degree , defined as , , where , i.e , the average degree including the . for this magnitude we can obtain an assortativity coefficient as the standard pearson correlation coefficient .the average value of the neighborhood assortativity coefficient , averaged over networks at the stationary state , is around , for in the range ] . in the critical regimen they obtained a value of . in our case ,similar results were obtained .inset of fig .[ fig7:prob_distr_energy ] shows a zoom for small `` returns '' and a suitable fitting for different values ( from to ) ; solid line corresponds to the fitting by a _q - gaussian _ curve , with an exponent , compatible with the ofc model .( blue circles ) , ( red asterisks ) , ( black pluses ) .inset : zoom of positive values and q - gaussian fit for , with , and ,scaledwidth=80.0% ] finally , we have tried to quantify the long - range spatial correlations following the fluctuation analysis introduced by rybski _( 2010 ) for networks .this approach is based on the fluctuations of degree sequence along shortest paths of length , and it can be adapted to any topological property of the network , like a node activity . in our case ,the magnitude playing the role of node activity is defined in eq .[ eq : stabil_cond_alpha ] ( the more negative the more unstable the -node is ) . 
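the neighbourhood assortativity coefficient used above can be computed in a few lines ; the sketch below takes the adjacency structure from the growth sketch of the previous section , assigns every node the average degree of its neighbourhood including the node itself , and follows newman's edge - based pearson construction for a scalar property . whether the averaging is done exactly this way in the paper is not fully spelled out , so this is one plausible reading .

import numpy as np

def neighbourhood_assortativity(adj):
    # pearson correlation, over the two ends of every edge, of the neighbourhood
    # average degree x_i = (k_i + sum of neighbour degrees) / (k_i + 1)
    k = {i: len(adj[i]) for i in adj}
    x = {i: (k[i] + sum(k[j] for j in adj[i])) / (k[i] + 1) for i in adj}
    a, b = [], []
    for i in adj:
        for j in adj[i]:
            a.append(x[i])
            b.append(x[j])               # each edge contributes both directions
    return np.corrcoef(a, b)[0, 1]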
following the procedure described in , we have considered all the shortest paths of length in the network and calculated the standard deviation of the averages of our activity , .figure [ fig8:f_d ] shows the fluctuation function for different values of , averaged over snapshots .a power - law tendency can be observed ( superimposed with the exponential finite - size effect ) .since the usual hurst - like exponent is related with the fitted value by , positive long - range correlations are characterized by exponents , while the negative ones are characterized by exponents . for .inset : functions for ( pink circles ) , ( red triangles ) , and ( black squares ) ; the dashed line is an eye - guide with slope .,scaledwidth=85.0% ] the main challenge of studying the networks of this model is the variation of their size . in order to overcome this difficulty we have rescaled the distance with the diameter of the network , .inset of figure [ fig8:f_d ] depicts the fluctuation function for three values of by rescaling the distance , . for different values of , we have always obtained an exponent , indicating anticorrelations in the activity .this result is in agreement with the meaning of node s activity ( ) since a node with high activity ( more negative values of ) has a higher degree than its neighbors ( in average ) .this anticorrelation can be also found employing the assortative mixing by activity , . as mentioned by newmann ( 2003 )one can compute the standard pearson correlation coefficient for any scalar variable associated to the nodes .the value of this correlation coefficient for the activity is ( where the error is calculated by the jackknife method ) in agreement with the result shown by the fluctuation function .it is worth remembering that assortative mixing by vertex degree is null like in the case of other typical random network models .in this model , there exists a special case that , due to its simplicity , can be treated statistically .when the _ buffering capacity _ constant is the node degree is limited to .therefore , the only possible result is a linear chain whose size evolves stochastically .the probability of finding a linear chain of size at time can be solved using _markov chains_. we define the _ transition matrix _ , , that contains the probabilities of transition from state to state and an initial probability vector with the probabilities of all the states at initial time ( ) . in our particular case refers to a network with nodes .after time steps the probability that the system is in state can be obtained as the power of the _ transition matrix _ : where for a single - node seed .note that a linear chain of length , at time , from a system of length and from , for any , at time can be obtained . in our workwe have studied their first time units . at stationary state , the average system size ( number of nodes ) is .this statistical value is confirmed by numerical simulations ( averaged over realizations ) , _ i.e. _ .the probability distribution at any time can be defined for any value of the system size , even for very large number of nodes .for example , the probability of finding a linear chain of size at is about .[ fig9a : evol_prop_size_c1 ] shows the time evolution of the percentage for networks with size , from to ( only up to for clarity purpose ) .dashed lines correspond to values obtained from the statistics study and dots correspond to average values from numerical simulations . 
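the markov - chain calculation for this c = 1 case can be sketched as follows . the transition probabilities below are derived under the same assumptions as in the growth sketch ( uniform attachment , the unstable node loses one of its three links with equal probability ) : attaching to an end grows the chain by one , while attaching to an interior position i of a chain of length n leads , with probability 1/3 each , back to length n , to length max(i-1 , n-i+2) or to length max(n-i , i+1) after the smaller fragment is removed . the state space is truncated at n_max , which is harmless as long as the occupation of the largest state stays negligible .

import numpy as np

def transition_matrix_c1(n_max):
    # transition matrix between chain lengths for the C = 1 case (see the
    # assumptions listed above); state n_max absorbs the growth move
    M = np.zeros((n_max, n_max))
    for n in range(1, n_max + 1):
        grow = min(n + 1, n_max)
        if n <= 2:
            M[n - 1, grow - 1] += 1.0            # every node is an end point
            continue
        M[n - 1, grow - 1] += 2.0 / n            # attach to one of the two ends
        for i in range(2, n):                    # attach to an interior node
            for result in (n, max(i - 1, n - i + 2), max(n - i, i + 1)):
                M[n - 1, result - 1] += 1.0 / (3.0 * n)
    return M

n_max, t = 60, 1000
M = transition_matrix_c1(n_max)
p0 = np.zeros(n_max)
p0[0] = 1.0                                      # single-node seed
pt = p0 @ np.linalg.matrix_power(M, t)           # probability of each chain length at time t
print(pt @ np.arange(1, n_max + 1))              # mean chain length at time t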
from the probability transitions in the markov chainthe average event size distribution can also be estimated . in fig .[ fig9b : evol_event_distr ] the dashed lines represent the theoretical values computed from the statistics study and the dots represent the results from numerical simulations averaged over realizations . .[ fig8:evol_chain_event_distrib ] this statistical approach to the special case of linear chains can be used to gain a better insight into the event size distribution . even in this simple casethe event size can vary from to more than a half of the system size at any time , and with this approach we can obtain the probability of an event of any size , and also the probability of having a linear chain of size at any time . as can be seen in figure [ fig8:evol_chain_event_distrib ] the statistical approach andthe numerical simulations are in complete agreement .the characteristic behavior of the soc dynamics and some of its statistics properties can be analyzed with our simple network model . in this modelthe system ( the network ) is maintained out of equilibrium by a constant flux of matter ( nodes ) .the criticality appears due to a stability condition which relates one node s topological property ( its degree ) with its neighborhood ( the average degree ) .this local condition is associated to a neighborhood s assortativity .this new approach represents one step beyond the newman s assortative mixing : a node s tendency of linking does not depend on its neighbors property but on the neighborhood s average property .this assortative mixing by neighborhood s average property should be more suitable for studying social communities networks .an exhaustive study of _ neighborhood assortativity _ with real and synthetic networks is in progress .we have found that some real networks exhibit positive neigborhood assortativity and null degree assortativity . in this toy model ,the interplay between topology and dynamics drives the system to a self - organized stationary state .the only parameter in the model that controls the system size at the stationary state ( without any dynamical variable ) is the _ buffering capacity _ constant . in order to characterize the soc dynamics we have performed simulations for different values of the _ buffering capacity_. the statistics of events and time intervals between events show distributions with similar exponent ( ) to the ones observed in ofc model .moreover , all the distribution plots for different _ buffering capacity _ constants ( and for different system sizes ) can be collapsed into an universal curve , indicating that the own dynamics is tuning the phenomena in the same organized way , without external conditions .the pdf of the `` returns '' ( differences between avalanche size at time and ) can be fitted by a _q - gaussian _ curve .the fit exponent can also be compared with the exponent found in the ofc model . in general, the model exhibits a soc behavior with exponents similar to the ofc model .we have also studied the statistical model for the special case of linear chains ( ) by means of the _ markov chains_. 
with this procedure we have obtained the probability of finding the system in a state ( a network of nodes ) at time and moreover , it can reproduce the system size distribution obtained from simulations .we gratefully acknowledge miguel ngel ibez , ramn alonso , miguel ngel muoz and juan carlos losada for fruitful discussions .this work was supported by the project mtm2012 - 39101 and mtm2015 - 63914-p from the ministry of economy and competitiveness of spain .large simulations are supported by cesvima ( supercomputation center of the technical university of madrid ) .g. a. held , d. h. solina , d. t. keane , w. j. haag , p. m. horn , and g. grinstein , `` experimental study of critical - mass fluctuations in an evolving sandpile '' , phys .* 65 * ( 9 ) , 11201123 ( 1990 ) .f. caruso , v. latora , a. pluchino , a. rapisarda , and b. tadi , `` analysis of self - organized criticality in the olami - feder - christensen model and in real earthquakes '' , phys .e * 75 * ( 5 ) , 055101 - 14 ( 2007 ) .g. caldarelli and d. garlaschelli , `` self - organization and complex networks '' , in _ adaptive networks : theory , models and applications _ , edited by t. gross and h. sayama ( springer , necsi cambridge / massachusetts , 2009 ) , p. 115 .j. mcateer , m. aschwanden , m. dimitropoulou , m. georgoulis , g. pruessner , l. morales , j. ireland , v. abramenko , `` 25 years of self - organized criticality : numerical detection methods '' , in _space science reviews _ * 198 * , 1 , 217266 ( 2015 ) .
complex networks are a recent type of frameworks used to study complex systems with many interacting elements , such as self - organized criticality ( soc ) . the network nodes s tendency to link to other nodes of similar type is characterized by assortative mixing . real networks exhibit assortative mixing by vertex degree , however typical random network models , such as erds - rnyi or barabsi - albert , show no assortative arrangements . in this paper we introduce the _ neighborhood assortativity _ notion , as the tendency of a node to belong to a community ( its neighborhood ) showing an average property similar to its own . imposing neighborhood assortative mixing by degree in a network toy model , soc dynamics can be found . these dynamics are driven only by the network topology . the long - range correlations resulting from the criticality have been characterized by means of fluctuation analysis and show an anticorrelation in the node s _ activity_. the model contains only one parameter and its statistics plots for different values of the parameter can be collapsed into a single curve . the simplicity of the model allows performing numerical simulations and also to study analytically the statistics for a specific value of the parameter , making use of the markov chains .
social structure in various forms exists in the human society and in animals . in the middle ages ,many villages existed each of which was ruled by a feudal lord and his clan . at present , several nations dominate the world with many followers and some challengers .a key question is how to understand the universal nature in the emergence of these hierarchies which consist of a small number of winners and many losers .it is also an important question to find the mechanism for the simultaneous emergence of the villages and the hierarchy .basically , social difference occurs when two moving individuals meet and fight each other where the winner deprives the loser of wealth or power .the winning probability of a fight depends on the difference between wealth of two individuals engaging in the fight .furthermore , the wealth of an individual decays to and the negative wealth ( debt ) increases to zero when the individual does not fight .many aspects of the society can be modeled by setting rules to diffusion , fighting and relaxation processes . in this paper, we consider a challenging , or bellicose society where individuals try to challenge thier neibours if possible .we show by monte carlo ( mc ) simulation that the critical population density for emergence of the hierarchy is much lower than those in the no - preference society and in a timid society .furthermore , we show that the hierarchy and villages emerge simultaneously in this society ; in the no - preference society or in a timid society , the hierarchy emerges spontaneously but no villages are observed .namely , we show that among controlling processes , the trend of individuals challenging to stronger neighbors plays the critical role in the self - organization of the structure .we organize this paper as follows ; in sec .2 , a challenging society is modelled by setting hostile move of individuals .the results of the mc simulation is presented in sec .3 where the density dependence of the order parameter and the profile of winning probability .we also show the formation of villages in the challenging society .section 4 is devoted to discussion .bonabeau _ et al._ have shown that a hierarchical society can emerge spontaneously from an equal society by a simple algorithm of fighting between individuals who diffuse on a square lattice by a one step simple random walk .suppose individual tries to move onto the site occupied by individual and these two individuals engage in a fighting .the fighting rule is characterised by the winning probability of individual against individual which is assumed to be where is the wealth of individual and is a controlling parameter of the model . therefore, when the difference of the wealths is large , the stronger one wins all the fights , and when , the winning probability deviates from linearly in the difference .the winner occupies the lattice site and increases its wealth by 1 , and the loser moves to the site previously occupied by and reduces its wealth by 1 .when individual is not involved in any fight in one mc time step ( mc tries during which all idividuals are accessed once ) , its wealth is assumed to decay as , \ ] ] where the unit of time is one mc step .when the wealth is large , it decays by a constant amount per one mc step , , i.e. a rich person does not waste his / her wealth .when the wealth is small , it decreases at a constant rate , that is . 
here, is another controlling parameter of the model .the social hierarchy can be characterized by the fact that some people have won and some other people have lost more fights .suppose individual won times in fights for a given time interval .then the order parameter can be defined by the mean square deviation of from , bonabeau _ et al _ showed by mc simulation that the social hierarchy self - organizes at a critical density as the population density is increased .note that the relaxation process plays a critical role to have such a transition . in order to study the emergence of social hierarchy and villages in the society of challengers, we introduce a bellicose diffusion strategy : when an individual makes one step random walk on the square lattice , it always moves to a site occupied by some one , and when more than two sites are occupied , it always challenges the strongest among them .an individual is prohibited to fight suscessively with the same opponent . employing the same rule for the fighting and relaxation processes as bonabeau _et al_ , we examined the emergence of hierarchy and spacial structure in this society by mc simulation .mc simulation was performed for individuals on the square lattice with periodic boundary conditions from to .figure 1 shows the dependence of the order parameter on the population density .we see the transition occurs at when and , which is much lower than the critical value for no - preference society ( for the same and ) studied by bonabeau _et al_ . as a function of for and . ,height=151 ] the detailed structure in population is monitored by the profile of the winning frequency .figure 2 shows the profile of the winning frequency for four different population density ; , , and . in the egalitarian society at low densities below the critical density ,the profile shows a sharp peak at .when the density exceeds the critical value , the distribution of the winning probability becomes widespread , and at the same time individuals with winning probability above 95% and with winning probability less than 5% emerge , , , and .( and .),width=188,height=151 ] and .winners , losers and middle class . , width=188,height=151 ] we conventionally classify individuals into three groups by the number of fights which an individual won ; winners are individuals who won more than 2/3 of fights and losers are individuals who won less than 1/3 of fights .individuals between these two groups are called middle class .figure 3 shows the population of each class as a function of the population density .it is interesting to note that the emergence of the hierarchy is signified by appearance of small number of winners .this is a clear contrast to a timid society where individuals always avoid fighting . in the timid society ,the hierarchical society emerges in two steps ; the first and the second transition are signified by appearance of losers and winners , respectively . and .( a ) no villages appear at . ,( b ) one big village is formed at .( c ) many villages appear at .( d ) villages form a percolating cluster at . winners , losers and middle class are represented by red , blue and green dots , respectively.,title="fig:",width=188,height=113 ] and .( a ) no villages appear at . ,( b ) one big village is formed at .( c ) many villages appear at .( d ) villages form a percolating cluster at . winners , losers and middle class are represented by red , blue and green dots , respectively.,title="fig:",width=188,height=113 ] \(a ) ( b ) and .( a ) no villages appear at . 
,( b ) one big village is formed at .( c ) many villages appear at .( d ) villages form a percolating cluster at . winners , losers and middle class are represented by red , blue and green dots , respectively.,title="fig:",width=188,height=113 ] and .( a ) no villages appear at . ,( b ) one big village is formed at .( c ) many villages appear at .( d ) villages form a percolating cluster at . winners , losers and middle class are represented by red , blue and green dots , respectively.,title="fig:",width=188,height=113 ] \(c ) ( d ) we now proceed to examine the spatial structure of each state in the steady state , which is shown in fig .4 . in the egalitarian society , no spatial structure is observed . when the population density exceeds the critical value , villages emerge , each of which consists of small number of winners and large number of middle class and losers .the size of the largest village depends strongly on the density ; at the density just above the critical value , all individuals belong to one compact cluster as shown in fig .as the density is increased , the number of clusters increases and thus the size of the largest cluster is rather small ( fig .4 ( c ) ) .when the density is larger than a critical percolation density , one large cluster appears which percolates the system ( fig .the critical percolation density is about 0.65 , which is larger than the critical percolation density 0.593 of the square lattice .this is due to the fact that in the model under consideration individuals have effectively strong attractive interaction .we see that winners ( red dots ) are near the center of the village , surrounded by people in the middle class ( green dots ) , and losers ( blue dots ) are at its perimeter . for ,we compare the population profile of winning frequency of each village , which is shown in fig . 5 . and . , width=188,height=151 ] it is interesting to observe that the profile is more or less common for all villages .this may be compared with the structure of medieval villages , where a few people dominate the village with many subordinates .the number of villages observed in the observation time depends on the population density . at higher densites ,villages form a percolating cluster , corresponding to the borderless situation .we have shown that in a bellicose society the hierarchy self - organizes at much lower population density compared with the no - preference or a pacifist societies . among the basic processes of diffusion ,fighting and relaxation , a small change in the diffusion process affects significantly the self - organiztion of the social structure . in particluar ,preference in the diffusion process plays an important role in the formation of spatial structure .the reason for the villages to be formed in the bellicose society is in the effective attraction between individuals due to the diffusion algorithm , namely an individual always stay in the visinity of other individulas .therefore the formation of villages is somewhat similar to the condensation of droplets in a gas . in this paper , we have discussed the emergence of villages in the time period of our mc simulation .it is an open question to find out the distribution of villages in the long time limit .in fact , there are no mechnism to keep the center of mass of each village at the same position and thus each village can diffuse and may collide and merge with other village . 
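the simulation procedure described above can be summarized in a short sketch. the following python fragment implements one mc sweep of the challenging society under stated assumptions: the winning probability is taken as the usual sigmoidal function of the wealth difference, the relaxation is taken as f → f − μ tanh(f) (a constant decrement for large wealth and a constant rate for small wealth, as described in sec. 2), and the lattice size, parameter values and the omission of the "no immediate rematch" rule are illustrative simplifications rather than the settings used for the figures.

```python
import numpy as np

rng = np.random.default_rng(0)

L, rho = 50, 0.30            # lattice size and population density (illustrative)
eta, mu = 0.5, 0.1           # fight-steepness and relaxation parameters (illustrative)

N = int(rho * L * L)
occ = -np.ones((L, L), dtype=int)                 # occupant index of each site, -1 = empty
xy = np.column_stack(np.unravel_index(
    rng.choice(L * L, size=N, replace=False), (L, L)))
occ[xy[:, 0], xy[:, 1]] = np.arange(N)
F = np.zeros(N)                                   # wealth (power) of each individual

def mc_sweep():
    """One Monte Carlo step: every individual is accessed once, in random order."""
    fought = np.zeros(N, dtype=bool)
    for i in rng.permutation(N):
        x, y = xy[i]
        nbrs = [((x + 1) % L, y), ((x - 1) % L, y),
                (x, (y + 1) % L), (x, (y - 1) % L)]
        occupied = [s for s in nbrs if occ[s] >= 0]
        if occupied:
            # bellicose rule: always challenge the strongest occupied neighbour
            s = max(occupied, key=lambda t: F[occ[t]])
            j = occ[s]
            win = rng.random() < 1.0 / (1.0 + np.exp(eta * (F[j] - F[i])))
            F[i] += 1.0 if win else -1.0          # winner gains one unit of wealth,
            F[j] += -1.0 if win else 1.0          # the loser loses one
            if win:                               # winner occupies the contested site,
                occ[x, y], occ[s] = j, i          # loser moves to the attacker's old site
                xy[i], xy[j] = s, (x, y)
            fought[i] = fought[j] = True
        else:                                     # no occupied neighbour: ordinary random-walk step
            s = nbrs[rng.integers(4)]
            occ[x, y], occ[s] = -1, i
            xy[i] = s
    F[~fought] -= mu * np.tanh(F[~fought])        # relaxation of individuals that did not fight
```

recording the number of wins of each individual over many sweeps then gives the winning-probability profile and the order parameter discussed in sec. 3.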
another open and important question is the effect of the range of the random walk. the distance of one step of the random walk represents the mode of transportation. therefore, as the mode of transportation advances, the effective population density is considered to increase, and thus globalization may occur at a lower population density. these questions will be studied in the future. one can expect that various structures of society can be analyzed within the same framework, which will eventually help in proposing the right policy.
e. bonabeau, g. theraulaz, j.-l. deneubourg, physica a 217 (1995) 373. [bonabeau]
t. odagaki, m. tsujiguchi, physica a 367 (2006) 435. [oda-tsuji]
a. o. sousa, d. stauffer, int. j. mod. phys. c 11 (2000) 1063. [sousa]
d. stauffer, int. j. mod. phys. c 14 (2003) 237. [stauffer]
j. l. duckers, r. g. ross, phys. 49a (1974) 361. [duckers]
we show by monte carlo (mc) simulation that the hierarchy and villages emerge simultaneously in a challenging society when the population density exceeds a critical value. our results indicate that among the controlling processes of diffusion and fighting of individuals and relaxation of wealth, the trend of individuals challenging stronger neighbors plays the pivotal role in the self-organization of the hierarchy and villages.
_pacs:_ 05.65.+b, 05.70.fh, 64.60.cn, 68.18.jk
_keywords:_ self-organization; hierarchy; phase transition; social structure
the quintessential -radioactivity has been studied by many physicists so far and has opened doors for laying a rigid foundation and development of nuclear physics .gamow was the first and foremost to apply quantum mechanics to a nuclear physics problem by providing the first model to explain -decay and propounded that the process involves tunneling of an -particle through a large barrier .a profound knowledge of this quantum mechanical effect enables one to obtain the geiger - nuttall law which relates the decay constant of a radioactive isotope with the energy of the particles emitted .wkb approximations have been applied to study -decay rate of many elements .lifetimes of several heavy elements with have been estimated by theoretically calculating the quantum mechanical tunneling probability in a wkb framework and also using the effective nuclear interaction whose results have shown good agreement over a wide range of experimental datas .our acquaintance with the well known geiger - nuttall ( gn ) law is age old .in fact , the formulation of the geiger - nuttall ( gn ) law in 1911 was a landmark in itself .the gn law states that the -decay half life is related to the energy of -decay process ( -decay value ) as , where a(z ) and b(z ) are the coefficients which are determined by fitting experimental data .the gn law holds good for a restricted experimental data sets available but is invalid in general .in retrospect , we can see that although gn law gives a single straight line but if we consider the experimental data of -particle emitters including heavy and super heavy nuclei with proton numbers as large as 118 , instead of getting a single linear path we observe several linear segments with different slopes and intercepts .this problem is overcome when the experimental results in logarithm form are plotted as a function of viola - seaborg ( vs ) parameter , , where , and are constants .the striking feature of empirical viola - seaborg ( vs ) rule i.e with , , and , is its superiority over gn law as it satisfactorily results in a straight line for a wide range of experimental data . as stated before the vs rule is precise but is an empirical one .thus we strive for a formula of logarithm of half - lives in terms of well - defined parameters or coefficients so that the empirical nature of vs rule is apparently obscured .qi et al . have mentioned the validity and generalization of the gn law along with its microscopic basis .also they have incorporated the tunneling process in the r - matrix theory and put forth an extended form of gn law . to fully understand the r - matrix theory for the decay of a cluster or a particle , we look from the perspective of s - matrix theory of resonance scattering or the transition scattering from an isolated quasi - bound state to a scattering state as detailed in . for completeness , we can highlight the method as follows : in s - matrix method , resonance is considered as a pole in the complex energy plane . adding to that ,the real part of the pole signifies the resonance energy or the q - value of decay and the imaginary part represents the width which in turn gives the decay half - life of the system constituting of an -cluster and the residual nucleus . 
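since the inline expressions above did not survive extraction, the standard forms of the two relations under discussion are restated below for reference; here t_{1/2} is the partial alpha half-life, q_alpha the decay energy and z the charge number entering the coulomb parameter. the coefficients are fitted quantities whose numerical values are not reproduced here, and the exact convention (e.g. whether the parent or daughter charge is used) varies between parametrizations.

```latex
% geiger-nuttall law: one straight line for each Z
\log_{10} T_{1/2} \;=\; \frac{a(Z)}{\sqrt{Q_\alpha}} \;+\; b(Z)

% viola-seaborg rule: a single straight line versus the vs parameter
\log_{10} T_{1/2} \;=\; \frac{aZ + b}{\sqrt{Q_\alpha}} \;+\; cZ + d
```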
in the context of transition from quasi - bound state to a scattering state ,we can write the width in terms of wave function at resonance and coulomb functions in two ways : i)by matching the normalized regular solution u(r ) of the modified schrdinger equation and the distorted outgoing coulomb function at a distance is where and are the regular and irregular coulomb functions .thus , the decay width is expressed as ii)we express the general formula of the -decay width as follows : where is a bound initial state for the decaying nucleus and is a final scattering state for the system .the hamiltonians and h are associated with and , respectively .both the expressions are same in one way or another . precisely in the first expression on a radial distance r which is represented crudely .this results in uncertainty in measuring certain data .it is interesting at this point to consider the second formula which is more effective in deriving an analytical expression for half - life in terms of the resonant wave function of an exactly solvable potential and the regular coulomb function .ultimately , by applying approximations on the functions of the expression of half - life , we derive a condensed formula for the logarithm of half - life in terms of the decay energy and mass and charge numbers of the -emitter .it is noteworthy that the found out condensed formula bear close resemblance with the vs rule .thus we can say that finding half - lives from a derived formula is the paramount of this paper . in section ii we describe in detail the formulation and derivation of half - lives . in section iii the formulais being implemented .section iv includes the conclusion of the sought problem .we remark that the -decay process where the -cluster in the decaying nucleus is controlled by an attractive nuclear potential , and the -particle outside the nucleus by the point - charge coulomb potential i.e . in a simple picturewe represent as the difference between the potentials in the two cases viz . the nuclear potential and the point - charge coulomb potential i.e where is the coulomb potential given by in our approach, we calculate the decay width by taking into account the -decay process where there is transition of an -cluster from an isolated quasi - bound state to a scattering state .the initial system is related with the instability with the quasi - bound state of the decaying nucleus . 
along withthat the final state is the scattering state of the -daughter system .now , we solve the schrdinger equation using the effective potential which is the amalgamation of the nuclear potential and the electrostatic potential to get the radial part of the initial and final state of the wave function .the radial part of the initial state wave function is additionally , the final state wave function can be written considering the motion of the -particle relative to the daughter nucleus as a scattering state wave function corresponding to the -particle in point charge coulomb potential : where , stands for the center - of - mass energy , is the reduced mass of the system with giving the mass of a nucleon , represent the mass number of particle , represent the mass number of the daughter nucleus and is the regular coulomb wave function for a given partial wave .the factor is a normalization factor of the scattering wave function .the mean - field approximation is applied for the nucleon - nucleon interaction .moreover , the process of double folding of the potential along with the electrostatic term , we get a parabola at a close radial distance r. again simulation of the effective potential as a function of distance is done and is solved exactly in the schrdinger equation .consequently the solution of which is the wave function for in the interior region .based on the gell - mann - goldberger transformation , the expression for the decay width becomes for the normalization of the interior wave function the factor is used .the resonant wave function decreases rapidly with distance outside the coulomb barrier radius . for this reason, we apply the box normalization condition for the wave function for . also , we are quite familiar with the relation between decay half - life and the width : by using ( 9 ) , we get a new expression of now, the regular coulomb wave function can be expressed as where , sommerfeld parameter , in particular , for , is given by we make a lay out ( figs .1(a)-1(d ) ) to properly describe the radial dependence of the three terms in the integrand mentioned in ( 11 ) , the modulous of the resonance state wave function , , the regular coulomb wave function , for and the combined nuclear and coulomb potential , by taking the +daughter system ( ) with q - value of decay or energy mev representing the upper curve and the difference of potentials , representing the lower curve of fig .1(c ) . in fig .1(d ) the total integrand multiplied by a scaling factor of is shown .it is visible from the plot that the integrand shows a peak in the region close to the coulomb barrier radius , fm .the plots indicate that the wave function of the resonance state decreases exponentially in the barrier region .also the potential difference ie . becomes zero in the region .but the coulomb function is very small at small values of r. we also find that , although the integrand j is dependent on distance r = r but it is independent of the distance r in the region .thus the value of decay time which is dependent on j is rather independent of r in the region .we use a potential which is a function of radial variable r and can be represented in the form as follows : & { if\;\;\;\;\ ; r \le r_0,}\\ v_0s_2\rho_ 2 & { if\;\;\;\;\ ; r \ge r_0 , } \end{array } \right .\ ] ] where is the strength of the potential with value mev . accounts for the flatness of the barrier , deciding the steepness of the interior side of the barrier whereas the exterior side is judged by . 
is the radial position having value ; , fm .since we are considering the +nucleus system , represent the proton number of particle , represent the proton number of the daughter nucleus . moreover and are the depth and height of the potential , respectively , having values ; where is the coulomb radius parameter ; , fm , fm , mev fm . and are the distance parameters .-decay rate in s - wave of system : ( a ) the modulous of the radial wave function at resonance , ( b ) the regular coulomb wave function multiplied by ,( c ) the daughter potential , representing the upper plot and representing the lower plot in dotted line , ( d ) the integral multiplied by for the s - wave . the barrier radius fm is shown in arrows.,title="fig : " ] [ fig.1 ] as previously mentioned , we are considering the problem of system with a specific energy value value and radius , the values of sommerfeld parameter and parameter are such that and . in this context, we now use the power series expansion and write the coulomb wave function as for the case , , , and by putting the value of we find that where .therefore , instead of computing function using ( 12 ) , we go by the simple power series expansion of using ( 20 ) multiplied by a factor . fig .1 clearly show that the magnitude of this function is zero near the origin but increases predominantly at whereas the resonant wave function is very small beyond .hence , the integral j can be written in terms of at a point alongwith some multiplying factor which take care of the other contributions within the region .the integral j now changes to for a typical system , the value of is found to be for . using the above found j value, the decay half - life becomes where we take the logarithm of both sides , the expression ( 32 ) is some what similar to the vs relation mentioned previously but the difference is that in the present case the parameters and coefficients namely , , , are well defined . the value of the coefficient and constant .the parameter c ( 35 ) depends on , , and angular momentum partial wave .further , the parameter is related to ( 37 ) specifying the decay time for an particle emitting with different angular momentum . from experiments ( solid dots ) and from calculation using ( 32 ) ( solid line ) as a function of in state for emitters with .,title="fig : " ] [ fig.1 ] from experiments ( solid dots ) and from calculation using ( 32 ) ( solid line ) as a function of in state for emitters with .,title="fig : " ] [ fig.1 ]we take the solvable potential described in eqn.(16 ) and denote it as effective coulomb - nuclear potential for the system and change only the steepness of the interior side of the barrier i.e. . with the value with us and the wave function at resonance , we calculate the half - life by using ( 10 ) and denote it by . for easy handling of the problem we condense the integral j given by ( 11 ) and write in terms of , and as mentioned in eqn.(28 ) . our analysis show that for different nuclei the values of comes out to be in the range 0.4 to 1.5 for .we assign the value of ie . for and for .furthermore , using this , we estimate the values of by using the closed form expression ( 32 ) for the decimal logarithm of half - life and represent it as .at last we determine the values of from the already found out .we then compare the calculated results using ( 10 ) , experimental results and predicted values i.e using ( 32 ) and present systematically in table 1 for case . 
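as a usage illustration of a closed vs-type expression of the kind obtained above, the short sketch below evaluates log10 t1/2 = (a z + b)/sqrt(q) + c z + d for a few emitters; both the coefficient values and the (z, q) pairs are placeholders chosen only to show the interface, and are not the coefficients or cases of the present work.

```python
import math

def log10_half_life(Z, Q, a, b, c, d):
    """VS-type estimate of log10(T_1/2 / s) for an alpha emitter.
    Z: charge number entering the Coulomb parameter, Q: decay energy in MeV,
    a, b, c, d: coefficients (fitted, or derived as in the closed formula above)."""
    return (a * Z + b) / math.sqrt(Q) + c * Z + d

# purely illustrative coefficients and cases
coeffs = dict(a=1.6, b=-8.5, c=-0.2, d=-34.0)
for Z, Q in [(82, 8.9), (90, 4.3), (116, 11.0)]:
    print(Z, Q, round(log10_half_life(Z, Q, **coeffs), 2))
```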
for a large assemblage of nucleistarting from z=52 to 118 , we write the values viz . , , .our findings reveal that we get a wide band of ranging from decimal logarithmic values of -7.39 s to 14.4 s. in the same way , we write the , , for by taking and present in table 2 .our findings show that by changing the effective potential the value is changing keeping in mind the results of ref. . [ cols="^,^,^,^,^,^",options="header " , ] now to clarify the ambiguity on the non linearity of g n law for various -emitters , we plot and as a function of the coulomb parameter for and present the plot in fig .2 . likewise, we also make a plot of and as a function of used in ( 32 ) and present in fig .3 . to our utter surprise, we find that the plot of and vs gives multiple straight lines whereas and vs gives a single straight line .this clearly indicate that the our measured results have great accuracy .by using the regular coulomb function , resonant wave function and the difference in potentials a general formula is being put forth for the calculation of -decay width . the +nucleus potential generated by using relativistic mean field theoryis closely reproduced by special expressions of the potential and finally we zero in to a closed formula for the logarithm of half - life as a function of q - values of various decaying nuclei of different masses and charges .this derived formula is impeccable in finding the logarithm of half - lives .the closed formula for the logarithm of half - life favorably explains the half - lives ranging from to .also this closed form expression curtains the dilemma over nonlinearity as it fairly reproduces the rectilinear alignment of the logarithm of the experimental decay half - lives as a function of the viola - seaborg parameter . with the updated vs rule with us, we only need the value , mass number and charge to predict the -decay half - life and there by future work concerning half - lives of all types of nuclei can be foresighted .we gratefully acknowledge the computing and library facilities extended by the institute of physics , bhubaneswar .99 m. a. preston,_phys .rev . _ * 71 * , 865(1947 ) .i. perlman , a. ghiorso , and g. t. seaborg,_phys .rev . _ * 77 * , 26(1950 ) .m. balasubramaniam and n. arunachalam , _ phys .rev . _ * c71 * , 014603 ( 2005 ) .y. qian , z. ren , _ phys .lett . _ * b738 * , 87 ( 2014 ) . g. gamow , _ z. phys . _* 51 * , 204 ( 1928 ) .d. n. poenaru , i. h. polonski , r. a. gherghescu , and w. greiner , _ j. phys .phys . _ * 32 * , 1223(2006 ) .d. n. basu , _ j. phys .phys . _ * 30 * , b35 ( 2004 ) . c. samanta , p. r. chowdhury , and d. n. basu , _ nucl .* a789 * , 142(2007 ) .p. r. chowdhury , c. samanta , and d. n. basu , _ phys .rev . _ * c73 * , 014612(2006 ) .m. horoi , b. a. brown , and a. sandulescu , _ j. phys .g : nucl . part .phys . _ * 30 * , 945(2004 ) .h. hassanabadi , e. javadimanesh , and s. zarrinkamar , _ int . j. mod. phys . _ * e22 * , 1350080(2013 ) . c. qi et al . , _ phys .* b734 * , 203 ( 2014 ) . c. qi et al.,_j . of phys.:conferenceseries _ * 381 * , 012131(2012 ) .y. ren and z. ren , _ phys ._ * c85 * , 044608 ( 2012 ) .d. s. delion and a. dumitrescu , _ at .data nucl .data tables _ * 101 * , 1 ( 2015 ) . c. qi , f. r. xu , r. j. liotta , and r. wyss , _ phys .lett . _ * 103 * , 072501 ( 2009 ) . c. qi , f. r. xu , r. j. liotta , and r. wyss , m. y. zhang , c. asawatangtrakuldee , and d. hu , _ phys .rev . _ * c80 * , 044326 ( 2009 ) . c. qi , a. n. andreyev , m. huyse , r. j. liotta , p. van duppen , and r. 
wyss , _ phys .lett . _ * b734 * , 203 ( 2014 ) .b. sahu and s. bhoi , _ phys .rev . _ * c93 * , 044301 ( 2016 ) .b. sahu , y. k. gambhir , and c. s. shastry,_mod .* a25 * , 535 ( 2010 ) . c. n. davids and h. esbensen , _ phys .* c61 * , 054302(2000 ) .r. g. lovas , r. j. liotta , a. insolia , k. varga , and d. s. delion , _ phys ._ * 294 * , 281 ( 1998 ). v. i. furman , s. holan , s. g. kadmensky , and g. stratan , _ nucl .* a226 * , 131 ( 1974 ) . s. mahadevan , p. prema , c. s. shastry , and y. k. gambhir , _ phys .rev . _ * c74 * , 057601(2006 ) . c. e. frberg , _ rev .* 27 * , 399(1955 ) .h. fiedeldey , w. e. frahn , _annls.of phys ._ * 16 * , 387 ( 1961 ) ._ , _ chin .phys . _ * c36 * , 1603 ( 2012 ) ._ , _ chin .* 16 * , 1157 ( 2012 ) .
although the geiger-nuttall (gn) law gives a single straight line, if we consider the experimental data of alpha-particle emitters including heavy and superheavy nuclei with proton numbers as large as 118, instead of a single linear path we observe several linear segments with different slopes and intercepts. this problem is overcome when the experimental results in logarithmic form are plotted as a function of the viola-seaborg (vs) parameter, with the values of the parameters set by hand. by using the fundamental principles of the decay process, we derive a formula for the logarithm of half-lives in terms of well-defined parameters or coefficients, and this replaces the empirical vs rule.
metamaterials ( mms ) are artificial media that exhibit fascinating electromagnetic properties not present in nature .they are usually manufactured from arrays of sub - wavelength resonators .therefore , effective media theory is used to study their behavior . by analogy with natural materials ,the sub - wavelength resonators form the basic unit cells ( atoms ) of the mm .each unit cell has electric and magnetic multipoles .if the cells inter - spacing is small , they can strongly interact and hence substantially change the media properties . inspired by stereochemistry , the interaction can be controlled by changing the spatial arrangement of the resonators .this concept , widely known as _ stereometamaterials _ , was applied to different mm configurations .similarly , the media properties can change by tuning the unit cell s resonant frequency .this can be done for example , by incorporating a photo - conductive semiconductor .the analogy between the mm unit cells and atoms is utilized to visualize the interaction between mm unit cells as the _ hybridization _ of unit cells modes .this concept was first applied to the study of nano - shells and nano - spheres .later , it was adopted to qualitatively and experimentally analyze the interaction between either split ring resonators ( srrs ) , which are the building blocks of mm .the interaction between the dipole moments determines the nature and strength of coupling . due to the bi(iso / aniso)tropic property of split rings ,both electric and magnetic dipoles play roles in the process of coupling .because of the sub - wavelength nature of mm unit cells , the coupling between them was studied based on quasi - static approaches . for example , the current and charge densities are used to determine the lagrangian ( ) .the lagrangian equation of motion yields a system of coupled differential equations which determines the interaction terms . because of their importance, the analysis is usually carried out for meta - dimers .it is shown that the net effect of both the in - plane electrical and the out - of - plane magnetic dipoles determines the interaction strength .circuit models are also developed to model and to quantify split rings and their coupling .coupled mode theory ( cmt ) proves to be a very successful tool when applied to weakly coupled systems .it was used in mm , microwave filters , wireless power transfer and magnetic resonance . unlike finite element, finite difference and method of moments , cmt reduces the computational domain to the number of modes .other than reducing the computational complexity , cmt provides an intuitive picture of how a complex system behaves in terms of the interaction of its relatively simpler subsystems .this makes cmt a very useful and powerful tool for studying mm . 
with the aim of qualitatively and quantitatively analyzing the hybridization of coupled systems including mms , an _ `` ab - initio '' _ coupled mode formalism is developed in the current article .a similar analysis was developed to study an electron paramagnetic probe consisting of a dielectric resonator and a cavity .however , it was limited to the studied case and bounded to te modes .nevertheless , it was shown that a coupled mode formalism is still capable of describing the system behavior even though the coupling coefficient can be substantially large ( ) .besides the ability to calculate the eigen - frequencies , field dependent parameters such as the quality factor and resonator efficiency were accurately determined .the current article provides a systematic derivation of the general coupled mode equation in the form of an eigenvalue problem .once solved , the eigenvalues determine the resonant frequencies ( eigen - frequencies ) , while the eigenvectors are used to find the fields .the eigenvalue equation is proven to obey the energy conservation principle and hence is named energy coupled mode theory ( ecmt ) .it reduces to well known formulae when applied to special cases , such as the hybridization of meta - dimers .moreover , ecmt provides a complimentary approach to the dipoles coupling widely used in examining the interaction of srrs .it also gives a numerical procedure for calculating frequencies and fields .two double srrs ( dsrrs ) configurations are numerically studied .the results are explained by assessing the effect of system parameters on the coupling coefficient .the paper is partitioned as follows : section ii presents the theoretical background with emphasis on field expansions and notations used .section iii is devoted to the theoretical derivation .section iv presents the results and discusses the hybridization of split - ring resonators .finally , the conclusion follows in section v.the fields of a system of coupled resonators are expanded in terms of the fields of the uncoupled subsystems , which are regarded as a basis set and they are not necessarily orthogonal . in general thisset is infinite . for practical purposes, it can be truncated to a finite one of a suitable size .therefore and here and are the expansion coefficients of the electric and magnetic components respectively .the modes can be equal to or greater than the number of resonators .so a valid coupled mode can be the linear combination of the first mode of resonator one and the first mode of resonator two , or it can be the first mode of resonator one , the second mode of resonator one and the first mode of resonator two , etc .this procedure is very helpful , for instance , if the frequencies of two modes of one resonator are very close ( or degenerate ) , so they both couple with a third nearby mode of a different resonator . expansions ( [ efieldexp ] ) and ( [ hfieldexp ] ) are equivalent to the linear combination of atomic orbitals ( lcao ) in molecular orbital theory . at the conductors surfaces , the surface current density is equal to where is the unit normal .therefore , the current density is expanded in terms of the uncoupled fields as each uncoupled mode satisfies the sinusoidal time - varying maxwell s equations , therefore the curl of the fields can be written as : here and are the angular frequency ( ) , permeability , permittivity and current density of the uncoupled resonator respectively . 
in general , and change with position .similarly , for the coupled system where and are the corresponding symbols for the coupled system . in the current article, the dirac bra - ket notation is used to represent the inner product .for example , the inner product of two vector fields and is denoted by and by definition it is equal to where is the total volume .in this subsection , the resonance condition of the coupled system is derived in terms of the uncoupled parameters . for practical resonators ,the losses are small and ignored . later on after determining the fields , the losses can be obtained .therefore , the time - averaged poynting vector and are imaginary and the complex power equation is written as here and are the time - average stored magnetic and electrical energy respectively . at resonance the power equation ( [ poynting ] ) is simplified to be by using expansions ( [ efieldexp ] ) , ( [ hfieldexp ] ) , ( [ jexpand ] ) and taking the complex conjugate , ( [ poysimp ] ) is written as here and . the integral factors and in ( [ matresonance ] ) can be expressed in terms of the stored energy quantities by expanding , using the identity and integrating over the total volume relation ( [ useful ] ) can be re - written as where and . because and depend on the uncoupled parameters , they are called the uncoupled magnetic and electric energy respectively .the full matrix expression of ( [ useful2 ] ) is where equations ( [ useful2 ] ) and ( [ usefulmat ] ) relate the complex conjugate of the reactive power components and to the bulk stored energy components and . to find the eigenvalue equation ,the total fields and are projected on the field components . projecting the total magnetic field onto the electric field component , one can write expanding and in terms of the fields and of the uncoupled systems , using ( [ efieldexp ] ) and ( [ hfieldexp ] ) , and integrating over the whole volume , where is the coupled electrical energy . equation ( [ firstcoupled ] ) relates the and coefficients to one another .one needs to find another relation between the two coefficients and substitute in term of to obtain the eigenvalue equation .this can be achieved by projecting the total electric field onto the magnetic field component , and noting that the electric field is either zero at infinity for open structures or it is normal to the bounding surface if the system is enclosed in a shield or a cavity , .thus one can arrive at , where is the coupled magnetic energy .the two relations ( [ firstcoupled ] ) and ( [ secondcoupled ] ) can be written in matrix form as these two equations represent the projection of the coupled total fields onto the uncoupled ones . 
from here , the eigenvalue equation can be derived by noting that from ( [ secondmatrix ] ) substituting ( [ secmatrix2 ] ) back in ( [ firstmatrix ] ) one arrives at , equation ( [ eigen1 ] ) is the required eigenvalue equation , where the eigenvalues are the square of the angular frequency and the eigenvectors are the coefficients of the fields given by ( [ efieldexp ] ) .it represents a _ numerical recipe _ which mixes the _ ingredients _ ( uncoupled modes ) in specific _ quantities _ ( determined by the strength of overlap between the fields ) to obtain the coupled frequencies and fields .it is interesting to verify that , at resonance , the eigenvalue problem ( [ eigen1 ] ) satisfies the energy conservation principle given by ( [ resonance ] ) or ( [ matresonance ] ) .this is done by expanding the energy expressions ( [ wm ] ) and ( [ we ] ) in and according to ( [ efieldexp ] ) and ( [ hfieldexp ] ) . after some algebraic manipulation one can find that , using ( [ secmatrix2 ] ) the eigenvalue problem ( [ eigen1])can be rewritten as therefore , relation ( [ energyproof ] ) is identical to the resonance condition in ( [ resonance ] ) .this verifies that the eigenvalues and eigenvectors found using ( [ eigen1 ] ) guarantee that the system obeys the law of conservation of energy . in this subsection, the eigenvalue problem ( [ eigen1 ] ) is solved to find the modes of a dsrr based on the hybridization of the two rings fundamental modes .it is worth mentioning that the analysis is applicable to a broad class of interacting resonators such as loop - gap resonators , degenerate meta - dimers , asymmetric meta - dimers and dsrrs .because the rings are thin , all the permittivity functions can be approximated by .thus , and ( [ eigen1 ] ) reduces to if the integrations volume is large and contains the near fields , then .moreover , the coupling is weak such that .therefore , the eigenvalue problem ( [ eigendsrr ] ) is simplified to furthermore , the operator and can be symmetrized by setting the complex amplitudes of the eigen - fields and ( and ) to be real ( imaginary ) .using ( [ useful ] ) , ( [ simp1 ] ) simplifies to solving ( [ dsrr_simp ] ) , the coupled frequencies are found to be and where and are the angular frequencies of the symmetric ( bonding ) and anti - symmetric ( anti - bonding ) modes respectively .the _ strength _ of coupling can be quantified by defining the coupling coefficient as due to the interaction ( off - diagonal terms ) between the uncoupled modes , the coupled frequencies and are different from and .the interaction is due to the term .this power interaction can be also explained in terms of the electric and magnetic field overlaps as given by ( [ useful ] ) and ( [ simp1 ] ) . an important special case is when .this , for example , represents the meta - dimer studied in using magnetic and electric dipole interactions . for meta - dimers , ( [ simp1 ] ) simplifies to eq .( [ degen ] ) says that can be regarded as the difference between a magnetic ( ) and an electric ( ) components .this result was previously derived using the lagrangian equation of motion and the perturbation method .it is also consistent with lumped circuit models , where coupling is modelled by a mutual inductance and a mutual capacitance . 
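to illustrate how the reduced two-mode problem is handled numerically, the sketch below solves a generic symmetric 2×2 coupled-mode eigenvalue problem. the parametrization of the off-diagonal term (κ ω1 ω2) and the value of κ are illustrative assumptions rather than the full energy matrices of the eigenvalue equation derived above, but they reproduce the familiar limit ω±² = ω0²(1 ± κ) for identical resonators; in practice the matrix entries are the electric and magnetic energy integrals computed from the exported uncoupled fields, as described in the next section.

```python
import numpy as np

def hybridized_frequencies(f1, f2, kappa):
    """Bonding / anti-bonding frequencies of two coupled resonators.

    Schematic two-mode eigenvalue problem
        [[w1**2, k*w1*w2], [k*w1*w2, w2**2]] a = w**2 a,
    which reduces to w_pm**2 = w0**2 * (1 +/- kappa) for identical resonators.
    This is a generic parametrization, not the full ECMT matrices.
    """
    w1, w2 = 2 * np.pi * f1, 2 * np.pi * f2
    M = np.array([[w1**2, kappa * w1 * w2],
                  [kappa * w1 * w2, w2**2]])
    w_sq, vecs = np.linalg.eigh(M)        # eigenvalues = squared angular frequencies
    return np.sqrt(w_sq) / (2 * np.pi), vecs

# configuration-A-like example: 10.3 GHz and 15 GHz rings, illustrative kappa
freqs, modes = hybridized_frequencies(10.3e9, 15.0e9, 0.15)
print(freqs)   # lower (symmetric) and upper (anti-symmetric) coupled frequencies
```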
when , the modes decouplethis reinforces the findings of that there is no avoided crossing whenever as was before attributed to the higher order electric multipolar interactions .the decoupling of modes even though and are not negligibly small may seem counter intuitive . visualizingthe coupling as a hybridization of two atomic structures at which the electric - electric dipole and the magnetic - magnetic dipole interactions counteract , alleviate the confusion .the eigenvalue problem ( [ dsrr_simp ] ) gives an alternative physical explanation by taking advantage of ( [ kappa ] ) .the condition is equivalent to or equivalently .this means that there is no interaction between the uncoupled modes , whenever the relative position and orientation of the meta - dimer atoms were meticulously tuned such that is orthogonal to . in another words, there is no energy transfer , or a pathway , between the two uncoupled modes and hence no split in frequency . in the following ,two configurations , _ a _ and _ b _ , are treated separately .a _ consists of two coaxial circular srr which have resonant frequencies of 10.3 ghz and 15 ghz ( fig.[circhybrid ] ) .the net coupling strength ( reflected in the magnitude of the frequency split ) is calculated as a function of the angle between the two gaps .that of the large ring ( resonator 2 ) : ghz .the right hand side shows the coupled frequencies ; symmetric ( bonding ) : and anti - symmetric ( anti - bonding ) : .,width=302 ] fig .( [ recthybrid ] ) shows configuration _b_. it consists of two co - axial square split rings .the outer ring has a fixed capacitive gap and hence a fixed resonant frequency of .the inner ring s gap is allowed to change from 2 to 16 which is translated to a frequency range of . .the capacitive gap of the inner ring .the right hand side shows the coupled frequencies ; symmetric ( bonding ) : and anti - symmetric ( anti - bonding ) : .,width=302 ] the two configurations _ a _ and _ b _ are quantitatively studied by solving the eigenvalue problem ( [ eigendsrr ] ) .as a first step and due to the lack of analytical expressions , the fields and frequencies of the single uncoupled srrs are computed using hfss^^ eigenmode solver ( ansys corporation , pittsburgh , pa , usa ) .the fields are exported to a matlab^^ code where the matrices and are calculated and hence ( [ eigendsrr ] ) is solved to determine the coupled frequencies . finally , the frequency values are compared to the ones obtained by another hfss eigenmode simulation of the complete dsrr systems . for the hfss calculations ,the conductivity of the srr was assumed to be infinite .the solution domain was enclosed in an airbox which is 7 times larger than the srr width .the structure is considered to be embedded in open space .therefore , the airbox was subjected to a perfectly matched layer ( pml ) boundary condition .( [ freqcircdsrr ] ) shows the calculated frequency of configuration _ a _ using both ( [ eigendsrr ] ) and hfss eigenmode solver .it is clear from the figure that as the angle increases , the coupled frequencies deviate more from the uncoupled ones ( and ) .this can be explained by referring to coupling coefficient expression ( [ kappa ] ) .both uncoupled angular frequencies and are constant . the interactions and are the only terms that change .thus , is always proportional to the reactive powers where is the surface of the ring .the electric field of the uncoupled modes is concentrated in the gap of the srr . 
at the same timethe conduction current attains its maximum at the farthest side .therefore when increases , increases .( [ csrr_ov ] ) clarifies this by superimposing the calculated electric field distribution of the inner srr ( ) on the same plot of the calculated magnitude of the current density ( ) of the outer srr when .clearly as decreases , decreases and so . ) and anti - symmetric ( ) modes for different values.,width=302 ] ) superimposed on the plot of the magnitude of the surface current density ( ).,width=302 ] configuration _ b _ is more interesting. not only are the resonant frequencies in the far infrared , but also the uncoupled frequency does change .accordingly and from ( [ kappa ] ) , is now a function of both the interaction terms and the frequency .the calculated frequencies are presented in fig .( [ freqrectdsrr ] ) where again the values computed using ( [ eigendsrr ] ) are compared to those obtained by hfss eigenmode solver .the results confirm the applicability of ( [ eigendsrr ] ) .it is also observed that as increases , the shift in frequency of the anti - symmetric mode decreases . from ( [ freq_dsrr2 ] ) ,the frequency shift is a function of the product , which , as estimated in the _ appendix _ , decreases whenever increases . to clarify why decreaseswhen increases , one refers to fig .( [ srr_ov1 ] ) which illustrates how the hybridization can be visualized in terms of the interaction between and of the uncoupled modes . from the figure the current density values are maximum near the inner ring s gap .therefore when increases ( increases ) , is distributed over a larger width and thus reduces .it is worth noticing that the frequency shift is significantly large ( ) . ) and anti - symmetric ( ) modes of the rectangular double split - ring resonator.,width=302 ] ) superimposed on the plot of the magnitude of the surface current density ( ) .when the capacitive gap of the inner ring is extended , spreads across a larger area ., width=302 ] unlike , does not significantly change with .this can not be explained by simply referring to ( [ freq_dsrr1 ] ) which was derived based on the assumption that higher order terms are negligibly small .in fact , ( [ freq_dsrr1 ] ) and ( [ freq_dsrr2 ] ) predict that which does not comply with the curves depicted in fig .( [ freqrectdsrr ] ) . to better understand why behaves as shown in fig .( [ freqrectdsrr ] ) , the higher order terms in the on - diagonal elements are retained .the on - diagonal terms are modified by subtracting from .for the dsrr shown in fig .( [ recthybrid ] ) , and hence .this is because the angle between and is . , the coupled induced frequency shift coefficient , was theoretically described for coupled optical cavities .the expressions ( [ freq_dsrr1 ] ) and ( [ freq_dsrr2 ] ) for the coupled frequencies are then modified by replacing each with . accordingly , ( [ omega - omega ] ) becomes ( i.e.,the shift between and is smaller than that between and . ) thus , the effect of is to _ pull _ up toward and counteracts the influence of the off - diagonal cross coupling term . with a similar argument to the one presented in the _ appendix _, it can be shown that decreases as increases , which keeps curve approximately flat as fig .( [ freqrectdsrr ] ) shows . to determine the fields using the coupled mode formalism , the eigenvectors for the coupled modesare computed and the expansion ( [ efieldexp ] ) is used .( [ srr_fields ] ) shows the electric field of configuration _ b _ when . 
because the total electric field does not satisfy the boundary conditions at the rings surface , the field calculated is not exactnevertheless , the eigenvalue problem ( [ eigendsrr ] ) still gives very reasonable results as it shows the contribution of each of the uncoupled modes to the total dsrr fields .( ) . ( a ) the electric field of the symmetric mode .( b ) the electric field of the anti - symmetric mode.,width=302 ]a general coupled mode equation in the form of an eigenvalue problem is derived .the eigen - frequencies are determined after finding the eigenvalues .the eigenvectors are used to find the electromagnetic fields .if resonators are compared to atoms , the eigenvalue problem can be considered as the electromagnetic analog of molecular orbital theory .this conceptual view agrees with the way meta - materials unit cells are treated .it is shown that the eigenvalue equation obeys the energy conservation principle . as an immediate application ,the behavior of meta - dimers and dsrr was explained using the interaction between and .thus , the eigenvalue problem provides an intuitive view to how resonators interact .the interaction picture is equivalent to other well known methods of analysis such as the dipole interactions and lumped circuit models .two configurations were formulated and numerically solved and the results were compared to finite element simulations . to illustrate the versatility of the coupled mode formalism , the numerical findings were explained using the interaction picture .it was shown that the coupled induced frequency shifts terms is very essential to correctly explain and quantify the dsrr behavior .consider a simple lc circuit , with a capacitive gap , resonating at angular frequency . in terms of the voltage on the capacitor ,the average power is where the relation was used .the capacitance , .thus , for a fixed power , {w_g}}.\ ] ] is the integral of and .therefore {w_g}}$ ] j. b. pendry , `` negative refraction makes a perfect lens , '' _ physical review letters , _ vol .3966 - 3969 , 2000 . g. v. viktor,``the electrodynamics of substances with simultaneously negative values of and , '' _soviet physics uspekhi , _ vol .509 , 1968 .m. kafesaki , n. h. shen , s. tzortzakis , and c. m. soukoulis , `` optically switchable and tunable terahertz metamaterials through photoconductivity , '' _ journal of optics , _ vol .11 , pp . 114008 , 2012 .e. ekmekci , a. c. strikwerda , k. fan , g. keiser , x. zhang , g. turhan - sayan , and r. d. averitt , `` frequency tunable terahertz metamaterials using broadside coupled split - ring resonators , '' _ physical review b , _ vol .19 , pp . 193103 , 2011 .g. r. keiser , a. c. strikwerda , k. fan , v. young , x. zhang , and r. d. averitt , `` decoupling crossover in asymmetric broadside coupled split - ring resonators at terahertz frequencies , '' _ physical review b , _ vol .2 , pp . 024101 , 2013 .h. guo , n. liu , l. fu , t. p. meyrath , t. zentgraf , h. schweizer , and h. giessen , `` resonance hybridization in double split - ring resonator metamaterials , '' _ optics express , _ vol .12095 - 12101 , september 17 , 2007 .b. lahiri , s. g. mcmeekin , r. m. de la rue , and n. p. johnson , `` resonance hybridization in nanoantenna arrays based on asymmetric split - ring resonators , '' _ applied physics letters , _ vol .15 , pp . 
1 - 3 , 2011 .yang , z .- s .zhang , z .- h .hao , and q .- q .wang , `` strong bonding magnetic plasmon hybridizations in double split - ring resonators , '' _ optics letters , _ vol .3675 - 3677 , september 1 , 2012 .i. sersic , m. frimmer , e. verhagen , and a. f. koenderink , `` electric and magnetic dipole coupling in near - infrared split - ring metamaterial arrays , '' _ physical review letters , _ vol .21 , pp . 213902 , 2009 .j. d. baena , j. bonache , f. martin , r. m. sillero , f. falcone , t. lopetegi , m. a. g. laso , j. garcia - garcia , i. gil , m. f. portillo , and m. sorolla , `` equivalent - circuit models for split - ring resonators and complementary split - ring resonators coupled to planar transmission lines , '' _ ieee trans .theory tech , _ vol .1451 - 1461 , 2005 .m. shamonin , e. shamonina , v. kalinin , and l. solymar , `` resonant frequencies of a split - ring resonator : analytical solutions and numerical simulations , '' _ microwave and optical technology letters , _ vol .133 - 136 , 2005 .h. van nguyen , and c. caloz , `` generalized coupled - mode approach of metamaterial coupled - line couplers : coupling theory , phenomenological explanation , and experimental demonstration , '' _ ieee trans .theory tech , _ vol .5 , pp . 1029 - 1039 , 2007 .a. a. sukhorukov , a. s. solntsev , s. s. kruk , d. n. neshev , and y. s. kivshar , `` nonlinear coupled - mode theory for periodic plasmonic waveguides and metamaterials with loss and gain , '' _ optics letters , _ vol .462 - 465 , february 1 , 2014 .i. awai , and z. yangjun , `` separation of coupling coefficient between resonators into magnetic and electric components toward its application to bpf development , '' _ microwave conference , 2008 china - japan joint , _ pp .61 - 65 .a. kurs , a. karalis , r. moffatt , j. d. joannopoulos , p. fisher , and m. soljai , `` wireless power transfer via strongly coupled magnetic resonances , '' _ science , _ vol .5834 , pp .83 - 86 , july 6 , 2007 .a. p. sample , d. a. meyer , and j. r. smith , `` analysis , experimental results , and range adaptation of magnetically coupled resonators for wireless power transfer , '' _ industrial electronics , ieee transactions on , _ vol .544 - 554 , 2011 .s. m. mattar , and s. y. elnaggar , `` analysis of two stacked cylindrical dielectric resonators in a te102 microwave cavity for magnetic resonance spectroscopy , '' _ journal of magnetic resonance , _ vol .174 - 82 , 2011 .s. y. elnaggar , r. tervo , and s. m. mattar , `` coupled modes , frequencies and fields of a dielectric resonator and a cavity using coupled mode theory , '' _ journal of magnetic resonance , _ vol .238 , no . 0 ,pp . 1 - 7 , 2014 .s. y. elnaggar , r. tervo , and s. m. mattar , `` general expressions for the coupling coefficient , quality and filling factors for a cavity with an insert using energy coupled mode theory , '' _ journal of magnetic resonance , _ vol .242 , no . 0 ,57 - 66 , 2014 .s. y. elnaggar , r. tervo , and s. m. mattar , `` optimal dielectric and cavity configurations for improving the efficiency of electron paramagnetic resonance probes , '' _ journal of magnetic resonance , _ vol .245 , no . 0 ,50 - 57 , 2014 .i. levie , and r. kastner , `` reduced integral equations for coupled resonators related directly to the lumped equivalent circuit , '' _ ieee trans .theory tech , _ vol .4021 - 4028 , 2013 .i. awai , s. iwamura , h. kubo , and a. 
sanada , `` separation of coupling coefficient between resonators into electric and magnetic contributions , '' _ electronics and communications in japan , _ vol .1033 - 1039 , 2006 .
there is recent interest in the inter/intra-element interactions of metamaterial unit cells. to calculate the effects of these interactions, which can be substantial, an _"ab-initio"_ general coupled mode equation, in the form of an eigenvalue problem, is derived. the solution of the master equation gives the coupled frequencies and fields in terms of the uncoupled modes. by doing so, the problem size is limited to the number of modes rather than the usually large discretized spatial and temporal domains required by full-wave solvers. therefore, the method can be considered a _numerical recipe_ which determines the behavior of a complex system once its simpler _ingredients_ are known. besides quantitative analysis, the coupled mode equation offers a pictorial view of split-ring hybridization. it can be regarded as the electromagnetic analog of molecular orbital theory. the solution of the eigenvalue problem for different configurations gives valuable information and insight about the coupling of metamaterial unit cells. for instance, it is shown that the behavior of split rings as a function of relative position and orientation can be systematically explained. this is done by singling out the effect of each relevant parameter, such as the coupling coefficient and the coupled-induced frequency shift coefficients.
the purpose of the work is to study the information properties of an analogue communication channel , constructed by a two - layer neural network , receiving data from a gaussian source .this data is corrupted with gaussian noise with a known variance and the output signals are affected by some random uncorrelated output noise .contrarily to what happens in the case of a linear gaussian channel , which can be easily solved even in presence of noise , , the exact calculation in the case of analogue channel requires some assumptions on the relation between the non - linear term and the level of noise .in particular , we suppose a small non - linearity , compared to the output noise .this corresponds to the case where the sigmoidal transfer function is relatively flat and the channel is noisy . under this assumption, the mutual information between the output and the input of the channel can be evaluated analytically .the perturbative approach by means of feynman diagrams , , developed in this paper , allows to represent in a direct and elegant way the perturbative corrections in first order of perturbation theory for every kind of non - linearity .comparing with the extreme case of the binary transfer function , where special mathematical techniques , , are introduced for the calculation of the mutual information , the present analysis deals mainly with the effect of the non - linearity on the mutual information and the rational way of investigating it .the problem of its maximization with respect to the coupling matrix will be considered elsewhere .the paper is organized as follows : in section 2 we introduce the model and in section 3 the mutual information is derived in the case of a general non - linear function . in section 4we present the results for the typical case of cubic non - linearities . in section 5we develop the rules to express the perturbative series in terms of feynman diagrams in the case of the same cubic non - linearity . in section 6we discuss the case of a general non - linearity . in section 7we present shortly the calculation of the mutual information in the case of a generic non - local cubic nonlinearity and explain how the diagram technique is modified .we conclude with some final remarks and with future developments of this work .we consider a two layer network with continuous inputs = which are gaussian distributed and correlated trough the matrix : {ij } , \,\,\,\, \forall i , j\in 1,2, .. n . \label{input}\end{aligned}\ ] ] the signals are corrupted by uncorrelated gaussian input noise =\{ , with the output vector is a function of the noisy input transformed via the couplings : we also assume that the output signals are affected by some random uncorrelated output noise =\{ , with the following gaussian distribution : the transfer function is a smooth continuous function , which typically has a sigmoidal shape in the case of analogue neuronal devices , , .one possible choice is : where the parameter modulates the steepness of the curve .a linear input - output relationships has already been considered in the context of the mutual information in previous works . herewe examine the contribution to information transmission given by a small non - linear term in the channel transfer function . 
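before turning to the non-linear corrections, it is useful to recall the zeroth-order (purely linear) benchmark. the following sketch, written with generic variable names rather than the paper's symbols, computes the mutual information of the linear gaussian channel v = j(x + ν) + ξ as the difference between the log-determinants of the output covariance with and without the signal; the example matrices at the bottom are arbitrary and purely illustrative.

```python
import numpy as np

def linear_gaussian_mi(J, C, var_in, var_out):
    """I_0 = 0.5*ln det(Sigma_v) - 0.5*ln det(Sigma_v|x)  (in nats)
    for v = J(x + nu) + xi with x ~ N(0, C), nu ~ N(0, var_in*I), xi ~ N(0, var_out*I)."""
    n_in, n_out = J.shape[1], J.shape[0]
    sigma_v       = J @ (C + var_in * np.eye(n_in)) @ J.T + var_out * np.eye(n_out)
    sigma_v_giv_x = var_in * (J @ J.T) + var_out * np.eye(n_out)
    return 0.5 * (np.linalg.slogdet(sigma_v)[1] - np.linalg.slogdet(sigma_v_giv_x)[1])

# illustrative example: 4 correlated inputs, 3 outputs
rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))
C = A @ A.T / 4                       # a valid input covariance matrix
J = rng.standard_normal((3, 4))
print(linear_gaussian_mi(J, C, var_in=0.1, var_out=0.5))
```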
assuming that the argument of the transfer function is small , a taylor expansion of eq.([gain ] ) gives : where the higher order terms are all odd powers of .thus the output of the channel can be written as : where is a generic non - linear term .for example it could be the cubic term or a higher order term in the expansion of the ) in terms of = .we are interested on the mutual information between the input and the output signals : it is easy to show that can be written as the difference between the output entropy and the `` equivocation '' between the output and the input : where and in the next section we present the calculation of the mutual information separately for the output entropy and for the equivocation , in the considered case of a non - linear channel .let us consider the probability for the output signals in eq .( [ entropy ] ) . if the non - linear term , present in eq .( [ output ] ) , were equal to zero , the evaluation of would be trivial , as would be a linear combination of gaussian variables . in order to extract explicitely the dependence of on the non - linear term , we introduce the conditioned probability : expanding to the first order in , assuming a small non - linearity , compared to the variance of the output noise , we obtain : , \label{pi_0}\ ] ] where and we have assumed that higher order terms in the ratio are negligible . substituting eq .( [ pi_0 ] ) in the expression for the output entropy we obtain at the first order in : here is the output entropy in the case of a linear channel , .we remind that is the probability for the output when = . in this case is a linear combination of zero mean gaussian variables and its distribution is a gaussian centered in with a covariance matrix given by : {ij}=[a+bi]_{ij } , \label{amatrix}\ ] ] where we have set . in the definition of whenever we change basis from to .] can be explicitely written also in the following way : ^{-1 } ( { \mbox{\boldmath }}- { \mbox{\boldmath }})+ { \mbox{\boldmath }}^t [ a+bi]^{-1 } ( { \mbox{\boldmath }}-{\mbox{\boldmath }})\nonumber\\ & & + ( { \mbox{\boldmath }}- { \mbox{\boldmath }})^t[a+bi]^{-1}{\mbox{\boldmath }}^t- { \mbox{\boldmath }}^t [ a+bi]^{-1 } { \mbox{\boldmath }},\end{aligned}\ ] ] which now can be easily integrated over the gaussian distributions and .since is an odd power function of like any term in the expansion of the transfer function ( [ gain ] ) , only the second and the third term in the sum in eq.([i1trick ] ) give non zero contributions .thus , the expression of the integral in the expression for the output entropy eq.([outfin ] ) becomes : ^{-1 } ( { \mbox{\boldmath }}-{\mbox{\boldmath }})+ ( { \mbox{\boldmath }}- { \mbox{\boldmath }})^t[a+bi]^{-1}{\mbox{\boldmath }}^t\right ] .\label{diagentropy}\ ] ] the integration over leads to the final expression for the output entropy in terms of a general non - linearity : ^{-1}{\mbox{\boldmath }}. \label{outentropy}\ ] ] the evaluation of the integral in requires a specific choice for the non - linearity . 
before introducing it ,we show how to obtain a similar expression for the equivocation term .we remind the expression of the equivocation term : the evaluation of this term can be carried out in a very similar way to the output entropy .we use the equivalence : then , expanding in powers of up to the first order as in eq.([pi_0 ] ) we obtain : , \label{pi_0_x}\ ] ] where substituting eq.([pi_0_x ] ) in the expression of the equivocation term , we obtain : here the conditional probability is : }}\cdot e^{-({\bf v}- j{\bf x})^t[b+bi]^{-1}({\bf v}- j{\bf x})/2},\ ] ] where is the correlation matrix between the outputs in absence of signals at . from eq .( [ output]),([inputnoise]),([outputnoise ] ) one can derive : )(v_j- [ j{\mbox{\boldmath }}]_j)\rangle= [ b+bi]_{ij}\nonumber\\ & & b = b_0jj^t \label{bmatrix}\end{aligned}\ ] ] the expression for the equivocation term becomes : where ^{-1}({\mbox{\boldmath }}\!-\ !j{\mbox{\boldmath } } ) .\ ] ] the integration over is carried easily as is gaussian and by using the replacement : the final expression to be integrated over , and becomes : ^{-1 } ( { \mbox{\boldmath }}- j{\mbox{\boldmath }})+({\mbox{\boldmath }}-j { \mbox{\boldmath }})^t[b+bi]^{-1}({\mbox{\boldmath }}- { \mbox{\boldmath }})\right ] .\label{diagequiv } \end{aligned}\ ] ] the integration over gives for the equivocation term : ^{-1}{\mbox{\boldmath } } , \label{equivoc}\ ] ] where we have changed variable from .this expression is our final result for the equivocation term in the case of a general non - linear function . combining eqs.([outentropy ] ) and( [ equivoc ] ) , the mutual information reads : ^{-1}{\mbox{\boldmath } } -\int\!\ ! d{\mbox{\boldmath }}\!\!\int\!\ ! d{\mbox{\boldmath } } p({\mbox{\boldmath } } ) p({\mbox{\boldmath } } ) { \mbox{\boldmath }}({\mbox{\boldmath }}+j{\mbox{\boldmath }})^t[b+bi]^{-1}{\mbox{\boldmath } } , \label{finalinfo}\ ] ] where is the mutual information in absence of non - linearities .is different from : both distributions are gaussian , but with different variances , as ; , while .matrices and are given respectively in eq.([amatrix ] ) and ( [ bmatrix ] ) . ]the final expression for the mutual information has been obtained in the case of a generic non - linearity . to carry further on the calculation , we have to specify its shape .let us consider the first non - linear term in the expansion of the sigmoidal transfer function ( [ gain ] ) : where we have set = . by using the wick theorem , and , the integration over in eq.([finalinfo ] )can be carried out quite easily and the final expression for the output entropy for this special choice of is : ^{-1}_{i j } a_{i j } .\ ] ] the evaluation of the integrals over and in eq.([finalinfo ] ) for the equivocation term can be carried out with the same procedure . 
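the wick contractions invoked above can also be checked numerically. in the sketch below the covariance matrix is arbitrary and serves only to illustrate the identity ⟨x_i x_j x_k x_l⟩ = c_ij c_kl + c_ik c_jl + c_il c_jk used in reducing the quartic averages of the cubic non-linearity.

```python
import numpy as np

rng = np.random.default_rng(2)

# an arbitrary 3x3 covariance matrix (illustrative only)
A = rng.standard_normal((3, 3))
C = A @ A.T

x = rng.multivariate_normal(np.zeros(3), C, size=1_000_000)

i, j, k, l = 0, 1, 2, 1
empirical = np.mean(x[:, i] * x[:, j] * x[:, k] * x[:, l])
wick = C[i, j] * C[k, l] + C[i, k] * C[j, l] + C[i, l] * C[j, k]
print(empirical, wick)    # the two values agree up to sampling error
```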
as only even powers of both gaussian variables give a non-zero contribution, only the terms containing the products h_i [j x]_i survive in the integrand. the integration can be organized through a set of diagrammatic rules [diagrams omitted]: each term [a+bi]^{-1} is represented by a dashed square; the integration over one of the gaussian variables corresponds to the contraction of two solid lines coming out of two vertices, which produces the corresponding matrix element; the integration over the other corresponds to the contraction of two wiggly lines coming out of two vertices, which produces the corresponding correlation term.

let us consider the case of the cubic non-linearity introduced above. following the rules just listed we can identify each factor in the integrand as a diagram; for instance, the dashed square attached to a wiggly and a solid line corresponds to a factor of the form h_i [a+bi]^{-1}_{ij} (v_j - h_j). the result of the integrations is expressed as a series of diagrams obtained by connecting the lines of the first diagram with the lines of the second and of the third diagram, so as to construct all the topologically distinct and connected diagrams. each of the three solid lines coming out of the first diagram can be connected with the solid line coming out of the second diagram, and similarly with the third one, while the remaining two solid lines are contracted in a loop; thus we are left with 6 times the same diagram. it is easy to check that applying the rules for the contractions of wiggly and solid lines one recovers the expression for the output entropy, which coincides with eq.([i1cub]).

now we introduce analogous graphic rules for the evaluation of the equivocation ([diagequiv]); some rules are the same as the ones listed above, but we need a new element in the graph to represent the vector . the full prescription is given below [diagrams omitted]:

1 . each term is represented by a wiggly line.
2 . each term is represented by a solid line.
3 . each term [b+bi]^{-1} is represented by an empty square.
4 . the vector mentioned above is represented by a dashed line.
5 . the integration over the corresponding gaussian variable contracts two solid lines coming out of two vertices, which gives the corresponding matrix element.
6 . the integration over the other gaussian variable contracts two wiggly lines coming out of two vertices, which gives the corresponding term.
7 . the integration over the input contracts two dashed lines coming out of two vertices, which gives the matrix element [jcj^t]_{ij}.

since odd powers of the gaussian variables give zero contribution to the integral, the cubic factor reduces to (h_i + [j x]_i)^3 -> h_i^3 + 3 h_i ([j x]_i)^2. as in the case of the output entropy, we can identify the different factors multiplying each other in the integrand of eq.([diagequiv]) with different diagrams; for instance, the empty square attached to a wiggly and a solid line corresponds to a factor of the form (v_i - h_i)[b+bi]^{-1}_{ij} h_j. thus the expression for the equivocation can be written as a sum of such diagrams. we then have to connect both the first and the second diagram to the third and to the fourth diagram in all possible ways to obtain fully connected diagrams. it is easy to see that the contraction of the first diagram with the third and the fourth ones reproduces, up to a combinatorial factor, the diagram already obtained in the case of the output entropy (eq.([i1cub])), while the contraction of the second diagram with the third and the fourth diagrams gives a new contribution. writing the two contributions together we obtain the expression for the equivocation, which is equal to eq.([i2cub]), as expected.

we now show how to obtain the diagrammatic expansion and the final expression for the mutual information in the case of higher-order non-linearities. this eventually allows one to evaluate the contribution given by each term in the expansion of the transfer function ([gain]). let us consider a generic term of order 2n+1 (the constants depending on the parameter are absorbed, so as not to introduce too many parameters). the evaluation of the integrals ([diagentropy]) and ([diagequiv]) can be carried out in a very similar way. we make the substitution $(h_i+[j\mathbf{x}]_i)^{2n+1} \rightarrow \sum_{l=0}^{n}\binom{2n+1}{2l}\, h_i^{\,2n+1-2l}\,([j\mathbf{x}]_i)^{2l}$, where the binomial expansion is restricted to even powers of [j x]_i because odd powers give zero contribution when integrated over. these changes correspond to analogous replacements in the basic diagrams: the double solid line in the upper diagram on the rhs is a short notation for a set of solid lines, while in the lower diagram on the rhs the double dotted line stands for a set of single dotted lines and the double solid line represents a set of single solid lines [diagrams omitted]. the diagrammatic equations for the output entropy and for the equivocation in the case of a generic order of the non-linearity can then be written in the same form as before. constructing all the topologically distinct diagrams, according to the rules given above, one derives the final expression ([igenfull]) for the mutual information. eq.([igenfull]) is the final expression in the case of a generic non-linear function of this type, for which the diagrammatic techniques provide an easy and direct way to calculate the mutual information. since in the case of the sigmoidal function ([gain]) the expansion includes only odd powers, the derivation of the diagrammatic series for the whole taylor expansion is straightforward, at least up to the first order in the perturbative ratio. this shows how the diagrammatic technique provides a compact and easily readable expression for the mutual information in the case of a non-linear noisy analogue channel.

let us now investigate the case of a non-local non-linearity which depends on the local fields of all outputs. this could correspond, for example, to the case where the global output of the network is constrained in such a way that the local outputs of the single units depend on the total structure of the connectivities. the general case of arbitrary-order non-linearities is quite complex, but the analysis can be carried out quite easily in the case of a cubic non-local non-linearity; the most general third-order term is written as in eq.([gencub]). substituting eq.([gencub]) into eqs.([diagentropy]) and ([diagequiv]), it is easy to check that the output entropy and the equivocation can again be written as diagrammatic equations. the definitions for lines and vertices given in the previous section remain valid in this more complex case as well; it is enough to replace the basic diagrams derived for the cubic local non-linearity, and the diagrammatic equations for the output entropy and for the equivocation take the same structure as before [diagrams omitted]. following the rules for the contraction of the wiggly and solid lines it is then easy to derive the final expression for the mutual information. we also list some specific cases arising from this generic non-linearity together with the corresponding expressions for the mutual information: * case 1 * reduces to the local cubic case already analyzed, for which the mutual information is given by eq.([icub]), while * case 2 * to * case 6 * correspond to particular choices of the third-order couplings, each leading to its own closed-form expression [explicit expressions omitted].

in the present paper we have developed a perturbative approach, based on feynman diagrams, for the calculation of the mutual information in the case of a generic non-linear channel. as far as we know, this is the first attempt to use these techniques in the context of the mutual information. we show systematically how the consecutive steps of the calculation can be easily performed by introducing proper diagrammatic rules, in analogy with other standard perturbative approaches. we investigate in more detail the case of _ local _ non-linear transfer functions, where the output of each unit depends only on its local field; previous works have shown that this regime provides an optimal information transfer. we then apply the same techniques to the more general case of _ non-local _ non-linearities, restricted to cubic powers, where the output of each unit depends on the total structure of the connectivities. this regime corresponds to the case where the total output of the network is constrained in such a way that the state of each output unit can be modified by any pair interaction. further developments of this analysis include the maximization of the mutual information with respect to the coupling matrix in order to find the optimal structure of the connectivities. this should hopefully provide more interesting results than the linear case, and it will be the object of future investigations.

* acknowledgments * e.k. warmly thanks the abdus salam international center for theoretical physics, trieste, italy, where this work was completed, for hospitality and support. v.d.p. thanks a. treves, stefano panzeri, giuseppe mussardo and ines samengo for useful discussions. the work is also supported by the spanish dges grant pb97-0076 and partly by contract f608 with the bulgarian scientific foundation.
j.h. van herten, j. comp. physiology *a171* (1992) 157.
a. campa, p. del giudice, n. parga and j.-p. nadal, network *6* (1995) 449.
a. abrikosov, l. gorkov and i. dzyaloshinskij, _quantum field theoretical methods in statistical physics_, oxford, pergamon press, 1965.
j.-p. nadal and n. parga, network *4*(3) (1993) 295.
e. korutcheva, j.-p. nadal and n. parga, network *8* (1997) 405.
a. turiel, e. korutcheva and n. parga, j. phys. a: math. gen. *32* (1999) 1875.
r. linsker, advances in neural information processing systems *5* (1993) 953.
e. korutcheva and v. del prete, in preparation.
d. amit, _modelling brain function_, cambridge univ. press, 1989.
j. hertz, a. krogh and r. palmer, _introduction to the theory of neural computation_, santa fe institute, lecture notes vol. 1, 1991.
c. marcus and r. westervelt, phys. rev. *a40* (1989) 501.
j.-p. nadal and n. parga, network *4* (1994) 295.
r. blahut, _principles and practice of information theory_, addison-wesley, cambridge ma, 1988.
we evaluate the mutual information between the input and the output of a two-layer network in the case of a noisy and non-linear analogue channel. in the case where the non-linearity is small with respect to the variability in the noise, we derive an exact expression for the contribution to the mutual information given by the non-linear term at first order in perturbation theory. finally, we show how the calculation can be simplified by means of a diagrammatic expansion. our results suggest that perturbation theories applied to neural systems might give insight into the contribution of non-linearities to information transmission and, more generally, to neuronal dynamics. pacs: 05.20; 87.30. keywords: information theory, mutual information, infomax, feynman diagrams. miramare, september 2000
since its introduction in 1937 and its popularization in 1975, photoplethysmography (ppg) has become an essential optical technique in healthcare, and its mechanism has been extensively studied. it is noninvasive, economical, comfortable, and easy to install. in recent years, due to advances in sensor technology, different types of ppg signals have become available via non-contact sensors. furthermore, the ppg has become a standard sensor in diverse mobile devices for healthcare, and an important component of the internet of things. in addition to its healthcare applications, it is widely applied to diverse problems, like monitoring hemodynamics under hyper- or microgravity environments, music therapy, etc. the ppg contains a lot of dynamic physiological information, ranging from the peripheral oxygen saturation, information about the autonomic system, and the cardiac and respiratory dynamics, to the hypovolemic status. in the past decades, several indices have been proposed for clinical needs and extensively applied. examples include heart rate and respiratory rate monitoring, the pleth variability index for fluid status assessment, the surgical pleth index for stress evaluation, and sleep apnea detection, to name but a few. see for a review and more information. in recent years, the ppg has been embraced by different scientific communities. a common research interest is learning physiological information from the ppg, particularly fine physiological dynamics like the heart rate variability (hrv) and the respiratory rate variability (rrv) that were traditionally studied directly from the electrocardiogram or the breathing flow signal. this kind of information, when combined with the widely installed ppg sensors, has great potential in different healthcare markets and opens a channel to the next generation of medical care equipment. however, analyzing the hrv and rrv from the ppg is not as easy as deriving the above-mentioned indices, particularly when only a single-channel ppg sensor is available. the fundamental step toward this analysis is obtaining the instantaneous heart rate (ihr) and instantaneous respiratory rate (irr) from the ppg, which requires tools beyond standard signal processing. the main difficulty comes from the time-varying heart rate and respiratory rate, and the non-sinusoidal ppg oscillation. the time-varying heart and respiratory rates broaden the spectrum, and the non-sinusoidal oscillation inevitably mixes up the spectral information of the cardiac activity and the respiratory activity. the broadened and mixed-up spectrum prohibits us from applying standard signal processing techniques. the problem is even more challenging since the signal is often contaminated by nonstationary uncertainties, like noise and motion artifacts, particularly in the daily environment. in the past few years, several methods have been proposed to address this challenge, but to the best of our knowledge, methods with solid mathematical support that are able to extract the ihr and irr simultaneously from a single-lead ppg signal are limited, apart from some ad hoc approaches. thus, a new mathematical pipeline for this challenging signal processing problem is needed. also, comparisons of methods on multiple publicly available databases are rarely reported.
in this work, we provide a systematic solution to this difficult task in a unified way. we propose a new signal processing technique based on a _ nonlinear masking technique _ to accurately learn the ihr and irr simultaneously from the ppg, compare our results with existing methods, and report state-of-the-art results on two publicly available databases, the capnobase benchmark database (http://www.capnobase.org) and the icassp 2015 signal processing cup (http://www.zhilinzhang.com/spcup2015/). the proposed algorithm is composed of two steps. first, a novel nonlinear mask is designed from the short time fourier transform (stft) of the recorded ppg, which is applied to enhance the ihr and irr information. the ihr and irr information is further sharpened by taking the phase information of the stft of the recorded ppg into account. we call the resulting information the _ de-shaped spectrogram _. second, we extract the ihr and irr from the de-shaped spectrogram. we call the proposed algorithm . the algorithm is illustrated in figure [flowchart]. the nonlinear mask is novel in the sense that it is determined directly from the recorded ppg, so the information in the ppg can be preserved as accurately as possible. the detailed description of the algorithm is given in section [section:methods]. the mathematical foundation of the nonlinear mask used in has been reported in , and we have summarized the theoretical material in section [section:reviewcepstrum] in the online supplementary information (si) for the interested reader.

figure [flowchart] shows the flow chart of the proposed algorithm, , to extract the instantaneous heart rate (ihr) and instantaneous respiratory rate (irr) from the recorded ppg signal: the input ppg feeds the spectrogram, the nonlinear mask, and the phase function; these three are combined into the de-shaped spectrogram, from which the curve extraction step produces the output ihr and irr. a typical recorded ppg signal lasting for 20 seconds is shown on the left. the short time fourier transform (stft), and hence the spectrogram, of the input ppg signal are then evaluated. the intensity of the spectrogram at a point in the time-frequency plane indicates how strongly the signal oscillates at that time and frequency. the dark curve around 1.6 hz represents the ihr, while the gray curve around 3.2 hz (and 4.8 hz, 6.4 hz, etc.; the frequency axis above 4 hz is not shown) comes from the non-sinusoidal oscillation of the cardiac activity. similarly, the dark curve around 0.3 hz represents the irr, while the gray curve around 0.6 hz comes from the non-sinusoidal oscillation of the respiratory activity. with the stft and the spectrogram of the ppg signal, the nonlinear mask is then designed from the spectrogram and the phase function is determined from the stft. the intensity of the phase function at a point in the time-frequency plane indicates the angle of the complex value of the stft at that time and frequency, which ranges from $-\pi$ to $\pi$. by applying the nonlinear mask and the phase function of the stft to the spectrogram, the spectrogram is improved and we obtain the de-shaped spectrogram. the darker curve around 1.6 hz represents the ihr and the lighter curve around 0.3 hz represents the irr. the curves corresponding to the ihr and irr are extracted from the de-shaped spectrogram and are shown as the red and blue curves, respectively, superimposed on the de-shaped spectrogram.

it is well known that `` how fast the heart beats '' and `` how fast one breathes '' provide important physiological information, and we commonly use the terminologies heart rate (hr) and respiratory rate (rr) to refer to that information. however, in general, the hr and rr are not scientifically well defined if the measurement scale is not specified. when the measurement scale is infinitesimal, the hr and rr become the ihr and irr needed for physiological variability analysis. on the other hand, the commonly encountered quantities regarding `` how frequently the heart beats and how frequently one breathes '' are the averaged hr (ahr) and averaged rr (arr), which are derived by counting how many beats or breaths take place over a provided window. in other words, if we view the ihr and irr as continuous time series, the ahr and arr can be viewed as low-pass filtered versions of the ihr and irr obtained by the window smoothing technique. to specifically evaluate the performance of the algorithm, we have to specify the measurement scale.
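to make the first step of the pipeline concrete, the following python sketch generates a synthetic ppg-like signal with a slowly varying cardiac component around 1.6 hz and a respiratory component around 0.3 hz, and computes its stft spectrogram with scipy. it illustrates only the plain spectrogram and a crude ridge extraction; the nonlinear mask and the phase-based de-shaping are not implemented here, and all parameter values and names are ours, chosen for illustration.

```python
import numpy as np
from scipy.signal import stft

fs = 100.0                       # sampling rate (hz), illustrative
t = np.arange(0, 300, 1 / fs)    # 5 minutes of signal

# slowly varying instantaneous rates (hz)
ihr = 1.6 + 0.1 * np.sin(2 * np.pi * t / 60.0)
irr = 0.3 + 0.03 * np.sin(2 * np.pi * t / 90.0)

# non-sinusoidal oscillations: add harmonics to mimic the ppg wave shape
phi_c = 2 * np.pi * np.cumsum(ihr) / fs
phi_r = 2 * np.pi * np.cumsum(irr) / fs
ppg = (np.cos(phi_c) + 0.4 * np.cos(2 * phi_c)              # cardiac + harmonic
       + 0.5 * np.cos(phi_r) + 0.2 * np.cos(2 * phi_r)      # respiratory + harmonic
       + 0.1 * np.random.default_rng(2).normal(size=t.size))  # measurement noise

f, tt, Z = stft(ppg, fs=fs, nperseg=int(10 * fs), noverlap=int(9.5 * fs))
spectrogram = np.abs(Z) ** 2     # |stft|^2; the ridge near 1.6 hz tracks the ihr
phase = np.angle(Z)              # phase of the stft, the input of the sharpening step

# crude stand-in for the curve extraction: frequency of maximal power below 3 hz
band = f < 3.0
ihr_estimate = f[band][np.argmax(spectrogram[band, :], axis=0)]
print(ihr_estimate[:5])
```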
in this section, we report the analysis results of the algorithm. the numerical implementation details of the algorithm are available in section [section:numericaldetails] in the online si. the capnobase benchmark database (http://www.capnobase.org) consists of recordings of spontaneous or controlled breathing in static patients. the icassp 2015 signal processing cup (http://www.zhilinzhang.com/spcup2015/), instead, contains recordings from a pulse oximeter positioned on the wrist of running subjects. in addition to reporting the ihr and irr estimation results, we also provide an up-to-date summary of existing reported results for a fair comparison. the capnobase benchmark database includes forty-two 8-minute segments from 29 pediatric and 13 adult cases containing reliable recordings of spontaneous or controlled breathing. for each subject, the ecg, capnometry, and ppg signals were recorded at 300 hz, 25 hz, and 100 hz, respectively. all signals were recorded with s/5 collect software (datex ohmeda, finland). the ppg and capnometry were automatically up-sampled to a 300 hz sampling rate. furthermore, the database contains a reference arr as well as information regarding the beginning of each expiration, both derived from the capnogram waveform and identified by experts. moreover, the reference ahr as well as the r-peak locations derived from the ecg waveform are also provided, both determined by the experts. the method provides estimates for the ihr and irr, which are instantaneous in nature. however, we point out here that, to the best of our knowledge, the methods proposed so far in the literature for the capnobase benchmark datasets provide ahr and arr information; that is, they do not focus on computing _ instantaneous rates _, but _ average rates _ over a time window, which is in most cases set to be around 60 seconds. thus, to have a fair comparison, we also provide the results based on the ahr and arr that are evaluated by smoothing the estimated irr and ihr over a 60-second window shifted by 30 seconds each time. we refer to this variation of the method as _ -60s _. denote by $x_i$, $i=1,\dots,n$, where $n$ is the number of observations, the reference information, which is either the ihr or irr (respectively the ahr or arr), either provided directly by the experts and included in the database or evaluated from the r peaks and the beginning timestamps of expirations, and denote by $\hat{x}_i$ the estimated ihr or irr determined by (respectively the estimated ahr or arr determined by -60s). following what has been done in the literature, we assess the performance of the proposed algorithm using the unnormalized root mean square error (rms) and the mean absolute error (mae), which are defined as $\mathrm{rms} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}(x_i-\hat{x}_i)^2}$ and $\mathrm{mae} = \frac{1}{n}\sum_{i=1}^{n}|x_i-\hat{x}_i|$. in addition to simultaneously estimating the ihr and irr, one challenge we face in studying ppg signals is the possible presence of artifacts. the capnobase benchmark database includes in each dataset information regarding potential intervals in the ppg, ecg and capnometry waveforms that contain artifacts. the methods reported in the literature use this information to skip the measurement errors over windows that are considered unreliable, if not even entire datasets; see, for example, . instead, to mimic the real scenario, when we evaluate the proposed method we do not exclude any of the 42 datasets and we do include all their intervals, even those supposed to contain artifacts.
for such intervals the provided ground truth is simply given as an interpolation of the nearby reliable values; see the right panel in figure [fig:capno_artifacts2]. this of course introduces a bias in the error values we compute. in particular, the performance of the proposed method can be consistently improved if we remove the intervals containing artifacts. we discuss this aspect in more detail in section [supp:capnobase] in the online si. the rms and mae of the proposed algorithm and of its -60s variation are then evaluated and reported. in table [tab:capno_stats], we provide the summary statistics of the rms and mae for the respiratory and heart rates obtained from the proposed algorithm. the performance of the other methods proposed so far in the literature, and their chosen windows for averaging, are also included for comparison. it is clear that the proposed algorithm provides a satisfactory ihr and irr estimation, while the ahr and arr provided by its -60s variation perform better than the other methods proposed in the literature.

.summary of root mean square error (rms) and mean absolute error (mae) of the respiratory rate (rr) and heart rate (hr) estimation for the capnobase benchmark database. the unit for the rr is breaths per minute, and the unit for the hr is beats per minute. except for , the methods proposed so far in the literature do not focus on computing _ instantaneous rates _, but _ average rates _ over a time window. n/a: not available. std: standard deviation. $q_1$: first quartile. $q_3$: third quartile. [table body not recovered]
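the error measures reported in the table follow the standard definitions recalled above; a minimal numpy sketch of how per-record rms and mae values can be computed is given below (the function names and the toy numbers are ours).

```python
import numpy as np

def rms_error(reference, estimate):
    """unnormalized root mean square error between two rate series."""
    reference, estimate = np.asarray(reference), np.asarray(estimate)
    return np.sqrt(np.mean((reference - estimate) ** 2))

def mae_error(reference, estimate):
    """mean absolute error between two rate series."""
    reference, estimate = np.asarray(reference), np.asarray(estimate)
    return np.mean(np.abs(reference - estimate))

# toy example: reference vs. estimated heart rate in beats per minute
ref = np.array([72.0, 73.5, 75.0, 74.2, 73.8])
est = np.array([71.5, 74.0, 75.5, 73.0, 74.5])
print(rms_error(ref, est), mae_error(ref, est))
```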
despite the popularity of the noninvasive, economical, comfortable, and easy-to-install photoplethysmography (ppg) sensor, a mathematically rigorous and stable algorithm to simultaneously extract the fundamental physiological information, including the instantaneous heart rate (ihr) and the instantaneous respiratory rate (irr), from the single-channel ppg signal is lacking. a novel signal processing technique, called the de-shape synchrosqueezing transform, is provided to tackle this challenging task. the algorithm is applied to two publicly available batch databases, one of which was collected during intense physical activity, for reproducibility purposes, and state-of-the-art results are obtained compared with existing reported outcomes. the results suggest the potential of the proposed algorithm to analyze signals acquired from widely available wearable devices, even when a subject exercises. * keywords: * de-shape synchrosqueezing transform, photoplethysmography, instantaneous heart rate, instantaneous respiratory rate
graph theory finds many applications in the representation and analysis of complex networked systems . in most cases ,the utility of the graph abstraction comes from its inherent ability to represent binary transitive relations ( i.e. transitive relations between two objects ) , which due to the transitivity property gives raise to key concepts , such as walks , paths , and connectivity .this graph conceptual framework allowed the emergence of basic algorithms , such as breadth first search ( bfs ) and depth first search ( dfs ) .these basic graph algorithms , in their turn , made possible the development of more sophisticated algorithms for the analysis of specific properties of complex networks , such as network centrality or network robustness , and also the analysis of dynamic processes in complex networks , such as network generative processes or information diffusion .several generalizations of the basic graph concept have been proposed for modelling complex systems that can be represented by layers of distinct networks and also complex systems in which the network itself evolves with time . in our previous work ,we formalize the multiaspect graph ( mag ) structure , while also stating and proving its main properties .the adopted adjacency concept in mags is similar to the one found in simple directed graphs , where the adjacency is expressed between two vertices , leading to a structure in which an edge represents a binary relation between two composite objects .moreover , in , we show that mags are closely related to simple directed graphs , as we prove that each mag has a simple directed graph , which is isomorphic to it .this isomorphism relation between mags and directed graphs is a consequence of the fact that both mags and directed graphs share a similar adjacency relation .mags find application in the representation and analysis of dynamic complex networks , such as multilayer or time - varying networks ; or even networks that are both multilayer and time - varying as well as higher - order networks .examples of such networks include face - to - face in - person contact networks , mobile phone networks , gene regulatory networks , urban transportation networks , brain networks , social networks , among many others .in particular , we have previously applied the mag abstraction from to different purposes , such as modeling time - varying graphs , studying time centrality in dynamic complex networks , and investigating social events based on mobile phone networks . to illustrate the mag concept in more details in this paper ,we present in section [ sec : alg_rep ] an example of modeling a simple illustrative multimodal urban transportation network . in this paper, we build upon the basic mag properties presented in and show that mags can be represented by matrices in a form similar to those used for simple directed graphs ( i.e. 
, those with no multiple edges ) .moreover , we here show that any algorithm ( function ) on a mag can be obtained from its matrix representation .this is an important theoretical result since it paves the way for adapting well - known graph algorithms for application in mags , thus easing the effort to develop the analysis and application of mags for modelling complex networked systems .we then present the most common matrix representations that can be applied to mags , although we do not detail all the properties of these matrices , since they are well established in the literature .further , we introduce in detail the construction of mag algorithms for computing degree , bfs , and dfs to exemplify how mag algorithms can be derived from traditional graph algorithms , thus providing an illustrative guideline for developing other more sophisticated mag algorithms in a similar way . as a further contribution, we also make available python implementations of all the algorithms presented in this paper at the following url : http://github.com/wehmuthklaus/mag_algorithms .this paper is organized as follows . section [ sec : mag ] briefly presents the basic mag definitions and properties derived from in order to allow enough background of the current paper .section [ sec : mag ] also presents illustrative examples of mags and its adjacency notion .section [ sec : alg_rep ] shows the representation of mags by means of algebraic structures , such as matrices .emphasis is given to matrix representations , which are derived from the isomorphism relation between mags and simple directed graphs . in particular , we also introduce in section [ sec : comptuple ] the companion tuple , which is a complement to the mag matrix representations . in section[ sec : alg_algth ] , we present basic mag algorithms which are derived from well - known simple graph algorithms . further , in section [ subsec : univ ] , we show that any algorithm ( function ) that can be defined for a mag can be also obtained from its adjacency matrix and companion tuple , establishing the theoretical basis for deriving mag algorithms from well - known simple graph algorithms .finally , section [ sec : fin ] presents our final remarks and perspectives for future work .in this section , we present a formal definition of a mag , as well as some key properties , which are formally stated and proved in .we define a mag as , where is a set of edges and is a finite list of _ aspects_. each aspect is a finite set , and the number of aspects is called the order of .each edge is a tuple with elements .all edges are constructed so that they are of the form , where are elements of the first aspect of , are elements of the second aspect of , and so on , until which are elements of the -th aspect of .note that the ordered tuple that represents each mag edge is constructed so that their elements are divided into two distinct groups , each having exactly one element of each aspect , in the same order as the aspects are defined on the list . as a matter of notation, we say that is the aspect list of and is the edge set of .further , ] is the number of elements in ] . from the definition, it can be seen that the function is not injective .hence , the function for a given sub - determination can be used to define a equivalence relation in , where for any given composite vertices , we have that if and only if . 
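a sub-determination can be pictured as a binary tuple that keeps some aspects of a composite vertex and discards the others; the short python sketch below illustrates this reduction and the induced equivalence (two composite vertices are equivalent when their reduced tuples coincide). representing composite vertices as plain python tuples, and the example values, are our own choices for illustration.

```python
from itertools import product

def sub_determine(composite_vertex, zeta):
    """keep only the aspects marked with 1 in the sub-determination tuple zeta."""
    return tuple(e for e, keep in zip(composite_vertex, zeta) if keep == 1)

# a 3-aspect example: locations, layers, time instants (illustrative values)
aspects = [("loc1", "loc2", "loc3"), ("bus", "subway"), ("t1", "t2", "t3")]
zeta = (1, 0, 1)   # discard the second aspect (the layer)

u = ("loc2", "bus", "t2")
v = ("loc2", "subway", "t2")
print(sub_determine(u, zeta))                            # ('loc2', 't2')
print(sub_determine(u, zeta) == sub_determine(v, zeta))  # True: u and v are equivalent under zeta

# the set of all sub-determined composite vertices
sub_vertices = set(sub_determine(cv, zeta) for cv in product(*aspects))
print(len(sub_vertices))   # 3 * 3 = 9
```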
from the sub - determination of order , we can also construct the set ,\ ] ] where is the order of the sub - determination , and is the set of all possible sub - determined edges according to .we then define the function where and , a_{\zeta_2},2_{\zeta_2 } \in a_{\zeta}(h)[2 ] , \dots , a_{\zeta_m},b_{\zeta_m } \in a_{\zeta}(h)[m] ] of composite vertices and edges , such that and for .it follows from this definition that in a walk , consecutive composite vertices as well as consecutive mag edges are adjacent .we show in that an alternating sequence of composite vertices and edges in a mag is a walk on if and only if there is a corresponding walk in the composite vertices representation of .this means that a walk on a mag has a isomorphic walk on the directed graph .since trails and paths also are walks , we also show that the same isomorphism concept extends to them as well .figure [ fig : mag_edges ] can also exemplify a mag path .the two edges and can also be seen as part of the alternating sequence , which characterizes a two - hops path from the composite vertex to the composite vertex .from the concept that walks , trails , and paths on a mag have a isomorphism relation to their counterparts on the directed graph , it follows that analysis and algorithms based on walks , trails , and paths can be formulated on the directed graph .these properties will be extensively used in the current work .in this section , we discuss ways to represent mags by means of algebraic structures . as a consequence to the isomorphism between mags and traditional directed graphs , it is straightforward to construct matrix - based representations of mags .this section addresses these representations , using the mag depicted in figure [ fig : mag_ex1 ] as an illustrative example . of a simple urban transit system.,scaledwidth=75.0% ]figure [ fig : mag_ex1 ] shows an example of a three aspect mag .it can be seen as the representation of a time - varying multilayer network , showing a small and simplified section of an urban transit system .more specifically , figure [ fig : mag_ex1 ] depicts the mag in its composite vertices representation , , which is the directed graph defined in expression ( [ func : iso ] ) . aligned with this view , the aspects of mag can be interpreted in the following way : the first aspect represents three distinct locations , labeled 1 , 2 and 3 .specifically , location 1 represents a subway station , location 2 a subway station with a bus stop , and location 3 a bus stop .the second aspect represents two distinct urban transit modes depicted as layers , namely bus and subway . finally , the third aspect represents three time instants .the mag edges can be seen with the following meaning : location 1 has no edges on the bus transit mode , since it is a subway station .similarly , location 3 has no edges on the subway mode , since it is a bus stop .the eight black edges represent bus and subway trips between locations . as a simplificationall trips are assumed to have the same duration .the red ( dotted ) edges represent the possibility of staying at a bus stop or subway station and not taking a transit .the six blue ( dashed ) edges show that it is possible to change between bus and subway layers at all times at location 2 . as a simplification, the connection between the bus and subway layers is assumed to take no time .we recognize that the decision of making these edges with time length generates cycles of length in instances of location . 
in real network analysis , length cycles ( and also negative length cycles ) can cause problems .however , we choose to let these cycles present in this toy example since they will cause no harm for the analysis conducted in this thesis , and also , they make the toy example more compact and readable .further , we remark that if desired , these length cycles could be broken by adding new composite vertices , or by making the subway / bus transition to have the same length as a subway / bus trip . in this model ,walks represent the ways the urban transit system can be used to travel from one location to another .for instance , starting at location 1 on the subway layer at time t1 , it is possible to reach location 3 on the bus layer at time t3 .it can be done by taking a subway trip to location 2 at time t2 , switching from subway to bus layer at location 2 , time t2 and finally taking a bus trip from location 2 bus layer arriving at location 3 on the bus layer at time t3 .the presence of unconnected occurrences of location 1 at bus layer and location 3 at subway layer can be viewed as artefacts of the mag construction .we call these vertices trivial components of the mag . this subject will be further addressed in this section .we remark that a python implementation of all the algorithms presented in this section is available at the following url : http://github.com/wehmuthklaus/mag_algorithms .although we show that every mag is isomorphic to a directed graph designated , it is important to note that the set of vertices of this graph is , as shown in expression ( [ func : iso ] ) .since the set is the cartesian product of all the aspects in the mag , it is possible to reconstruct the mag s aspect list from , which is a step necessary to obtain the mag from the directed graph .when the vertices of the directed graph associated with a given mag are not the composite vertices themselves , it is necessary to provide a mechanism to link each vertex of the directed graph to its corresponding composite vertex on the mag .this mechanism can be , for instance , a bijective function between and . in the current work ,we construct representations for , such as matrices , which do not directly carry the tuples that characterize the mag s composite vertices . in this kind of representation ,a vertex is associated with a row or column of a matrix .therefore , additional information has to be provided in order to properly link each row ( column ) of a matrix to its corresponding composite vertex on the mag represented by this matrix .this is done by a bijective function , defined in section [ sec : asp_vt_ord ] , where takes a composite vertex to a natural number , which is the row ( column ) number in the matrix .the implementation of presented in this work is based on the concept of a _ companion tuple _ , which complements the matrix representation of a given mag .for a mag with aspects , its companion tuple has the form | ] |) ] , since each element of each aspect has to be counted .we remark that , in either case , the time complexity for building the companion tuple is less than , which is the order of the set of composite vertices of the mag . 
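a minimal python sketch of the companion tuple construction described above, including the sub-determined variant discussed next; here the mag is assumed to be given by an explicit aspect list, so the tuple is simply the list of aspect sizes (the edge-scanning variant mentioned in the text is not shown).

```python
def companion_tuple(aspect_list):
    """number of elements of each aspect, in the order the aspects are listed."""
    return tuple(len(aspect) for aspect in aspect_list)

def sub_determined_companion_tuple(tau, zeta):
    """entry-wise product with the binary sub-determination tuple zeta."""
    return tuple(t * z for t, z in zip(tau, zeta))

# the illustrative urban transit mag: 3 locations, 2 layers, 3 time instants
aspects = [("1", "2", "3"), ("bus", "subway"), ("t1", "t2", "t3")]
tau = companion_tuple(aspects)
print(tau)                                              # (3, 2, 3)
print(sub_determined_companion_tuple(tau, (1, 0, 1)))   # (3, 0, 3)
```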
for a given mag and a sub - determination , we also define the sub - determined companion tuple , which is obtained by multiplying each entry of the original companion tuple by the equivalent entry of the tuple representation of , as shown in algorithm [ alg : tauzeta ] .the sub - determined companion tuple has the same value as the original companion tuple for the aspects that have value in and otherwise . in general, the order of the composite vertices and aspects on a mag is not relevant . that is , changing the order in which the aspects or their elements are presented does not affect the result of any algorithm or analysis performed on a mag , since the mag obtained by such changes is isomorphic to the original one .the definition of the mag isomorphism adopted in this work can be found in section [ subsec : magiso ] .however , in order to show the mag s algebraic representation in a consistent way , it is necessary to link the mag s composite vertices to rows and columns of matrices , which is achieved by the bijective function , defined in this section at equation ( [ eq : cn_cv ] ) .we now show the preliminary steps necessary for the definition of function , as implemented in this work .the aspect order is adopted as the same in which the aspects are placed on the mag s companion tuple . for the ordering of composite vertices , we define the numerical representation of each composite vertex from its tuple . in order to obtain the composite vertex numerical representation , we first translate the composite vertex into a numerical tuple .this is done by applying a family of indices , one for each aspect on the composite vertex , where for every aspect the corresponding index ranges from to , where is the number of elements on the -th aspect of the mag .since this is a simple index substitution , we do not use a distinct notation for the composite vertex on its numerical tuple form .we , however , reserve the notation ] is the -th component of the composite vertex .figure [ fig : mag_ex1_ids ] shows the mag with its composite vertices , and their numerical representations ranging from to . in order to illustrate how the numerical representations are obtained , we show examples based on the mag . with composite vertices numerical representations.,scaledwidth=75.0% ] for this representation , we adopt aspect indices such that for aspect we have and . for aspect , we have and , while for aspect , and .since the companion tuple of mag is , the weights are , and . therefore , the composite vertex has numerical representation , while and .algorithm [ alg : d ] determines the numerical representation of a composite vertex represented by its numerical tuple .the presented implementation extends the concepts presented in equations ( [ eq : weight_cv ] ) and ( [ eq : cn_cv ] ) , so that this algorithm can also be used to determine the numerical representation of sub - determined composite vertices . 
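the numerical representation of a composite vertex is essentially a mixed-radix encoding driven by the companion tuple; the sketch below gives one possible python implementation of this function and of the inverse reconstruction discussed next. the particular weight convention (first aspect varying fastest) and the skipping of zero entries for sub-determined tuples follow our reading of the algorithms above and should be taken as one consistent choice rather than the exact published convention.

```python
def d(numerical_tuple, tau):
    """numerical representation of a composite vertex given as a tuple of
    0-based aspect indices; entries whose companion-tuple value is 0
    (sub-determined aspects) are skipped."""
    value, weight = 0, 1
    for index, size in zip(numerical_tuple, tau):
        if size == 0:          # aspect discarded by the sub-determination
            continue
        value += index * weight
        weight *= size
    return value

def d_inverse(value, tau):
    """reconstruct the tuple of 0-based aspect indices from the numerical
    representation (only meaningful for a full, non-sub-determined tau)."""
    indices = []
    for size in tau:
        indices.append(value % size)
        value //= size
    return tuple(indices)

tau = (3, 2, 3)
cv = (1, 0, 2)                 # 0-based indices into the three aspects
n = d(cv, tau)
print(n, d_inverse(n, tau))    # round trip: d_inverse(d(cv, tau), tau) == cv
```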
in order to determine the numerical representation of a sub - determined vertex , function in algorithm [ alg : d ] receives the full composite vertex tuple ( not sub - determined ) and the sub - determined companion tuple .the seen at line of algorithm [ alg : d ] makes that the entries found in a sub - determined companion tuple are discarded for the construction of the sub - determined numerical representation of the composite vertex .the time complexity for this algorithm is , where is the number of aspects on the mag in question .given the numerical representation of any composite vertex , it is possible to reconstruct its tuple . in order to do this, we calculate the numerical value of the index of each element on the tuple , as where is the composite numerical representation , is the position of the composite vertex tuple to be calculated , is the mag s companion tuple , is the modulus ( division remainder ) operation and is the floor operator , which for any corresponds to the largest integer such that . note that for calculating for a mag with aspects , it is necessary to calculate . considering the definition of from equation ( [ eq : weight_cv ] ) , it follows that , the number of composite vertices on the mag . for instance, taking the composite vertex with numerical representation of the mag , we have that we can therefore define the inverse of function as which reconstructs the composite vertex tuple in its numerical form . from this , we can see that , for instance , , which corresponds to the composite vertex . algorithm [ alg : invd ] shows the implementation of .the relation between the composite vertex numerical representation and its tuple can also be seen as a consequence of the natural isomorphism between the mag and its composite vertices representation , .the role of this relation will become clear in sections [ sec : adj_mat ] to [ sec : lap_mats ] , where the matrix forms of the mag are presented .in the mag shown in figure [ fig : mag_ex1_ids ] the composite vertices of numerical representation and are trivial components ( i.e. unconnected composite vertices ) .they are created in consequence of the regularity needed on the mag to build the set .this type of padding is not necessary in a directed graph and its algebraic representation .therefore , it is possible to remove the trivial components from the composite vertices representation and its associated matrices .however , it is important to bear in mind that the graph resulting from this transformation may no longer be isomorphic to the mag and neither are the matrices associated with it .the only case in which the isomorphism is preserved is when there are no trivial components on the mag and nothing is removed .nevertheless , this kind of transformation can be helpful for application , by reducing the number of composite vertices present on the graph and so simplifying its construction and manipulation .the same sort of padding is discussed in , where authors suggest that this padding may cause problems in the computing of some metrics , such as mean degree or clustering coefficients , unless one accounts for the padding scheme in an appropriate way . in this subsection , we show that the padding with the trivial components may be eliminated , if desired . anyway ,if needed , it suffices to be cautious in computing the metrics of interest on mags by considering the existence of the padding scheme , as suggested by . 
in particular , the mag algorithms we discuss in section [ sec : alg_algth ] remain unaffected by this padding issue . for a given mag , we define its main components graph as the mag s composite vertices representation with all its trivial components removed .figure [ fig : main_ex1 ] shows the main components graph for the mag .it is worth noting that numerical representations are not defined for . of the examplemag ,scaledwidth=75.0% ] this can be achieved algebraically for any mag with the help of a matrix constructed from the identity , where is the number of composite vertices on the mag .the matrix is obtained from this identity by removing the columns which match the numerical representations of the trivial components of the mag . therefore , assuming that the mag has trivial components , the matrix has rows and columns .in particular , in the cases where the mag has no trivial components , we have that .it is also worth noting that the matrix is a matrix akin to the identity , but the diagonal entries corresponding to the trivial components ( removed in ) have value .therefore , multiplying a matrix by to the left has the effect of turning all entries on the rows corresponding to the trivial components to .similarly , multiplying by to the right has the effect of turning the entries of the columns corresponding to the trivial elements to . as an example, we show the matrix , which is obtained from the identity matrix by removing the columns and that correspond to the trivial components of the mag . as a direct consequence of the isomorphism between mags and traditional directed graphs , it is expected that a mag can be represented in matrix form .in fact , such representations can be achieved directly by the composite vertices representation of mags , presented in section [ subsec : magdef ] . since for any given mag its composite vertices representation is a traditional directed graph , it can be represented in matrix form .one of such representations is the mag s adjacency matrix .this matrix is obtained from the mag s composite vertices representation , , and its companion tuple .in fact , the mag s adjacency matrix is the adjacency matrix of the composite vertices representation , where the order of the rows and columns is given by the numerical representation of the composite vertices of . since the set of composite vertices of a given mag is obtained by the cartesian product of all aspects of the mag ( as shown in expression ( [ eq : comp_verts ] ) ), it follows that the number of composite vertices on a given mag with aspects is where is the -th element of the mag s companion tuple , i.e. the number of elements on the mag s -th aspect . 
the general form of any entry of the matrix is given by where means that is an edge on the composite vertices representation of the mag , so that are composite vertices of .it follows from the definition of and its natural isomorphism to , that if and only if there is an edge such that and .it is important to note , however , that the notation is in fact a shorthand for , where is the row number and the column number of the matrix entry .this ties the construction of the adjacency matrix of a mag with its companion tuple , since it is used in the determination of the numerical representation of a composite vertex ( ) .therefore , the adjacency matrix of any given mag is always presented with its companion tuple .the adjacency matrix of a given mag is constructed by algorithm [ alg : j_h ] , where is the number of composite vertices in , which can be calculated using equation ( [ eq : nro_comp_verts ] ) , and are the numerical representation of the origin and destination composite vertices of edge , respectively , as defined in section [ sec : asp_vt_ord ] . considering that a sparse matrix with all entries can be created in constant time , and that both functions and ( see algorithm [ alg : tau ] and algorithm [ alg : d ] ) have time complexity , we conclude that algorithm [ alg : j_h ] has time complexity , where is the number of aspects of mag and the number of edges . as an example, the adjacency matrix of the mag is shown in expression ( [ eq : j(t ) ] ) .this adjacency matrix has entries , of which just are non - zero .\ ] ] it is important to note that the order of the columns and rows of is given by the numerical representation of the composite vertices .thus , for instance , the at row 2 , column 8 represents the edge between the composite vertices with numerical representations and , witch in turn represents the edge of the mag . in this way , although is presented in matrix form , together with the companion tuple , it fully represents the mag , carrying the proper adjacency notion used to define transitive constructions , such as walks and paths on the mag . for an arbitrary mag ,its main components graph is obtained by removing the mag s trivial components , as stated in section [ sec : triv_comp_elim ] .the matrix is then obtained with the use of the matrix , presented in section [ sec : triv_comp_elim ] . is obtained as where is the adjacency matrix containing only the main components of the mag .it is also possible to obtain the adjacency matrix from .this follows from the fact that on the adjacency matrix the rows and columns corresponding to trivial components are already zero . therefore , where .then , we have that expression ( [ eq : j(m(t ) ) ] ) shows , the adjacency matrix of .this matrix is obtained from the adjacency matrix by removing the rows and columns which represent the trivial components of the mag . in this case , the trivial components are the composite vertices with numerical representations and .this matrix is calculated as , so that .\ ] ] in general , the adjacency matrices associated with a mag are sparse , meaning that for an adjacency matrix the number of non - zero entries of the matrix is of the order .since the non - zero entries on the mag adjacency matrix corresponds to the edges present on the mag , the adjacency matrix being sparse means that the number of edges on the mag is of the same order of the number of composite vertices , i.e. 
is of order .therefore , these matrices can be stored efficiently using sparse matrices representations , such as compressed sparse column ( csc ) or compressed sparse row ( csr ) . assuming that the number of edges is larger than the number of composite vertices , these representations provide a space complexity of for storing the mag s adjacency matrices .further , they also provide efficient matrix operations , which will be explored in the algorithms presented in section [ sec : alg_algth ] .given that every mag is isomorphic to a directed graph , it follows that it can be represented by an incidence matrix ( and its companion tuple ) . for any given mag , this matrix is constructed from the composite vertices and the companion tuple , adopting the vertex order induced by the numerical representation presented in section [ sec : asp_vt_ord ] .the mag s incidence matrix , where is the number of edges in the mag and is the number of composite vertices on the mag , is defined then as where is an edge in mag and is a composite vertex in mag . here, the notation is a shorthand for , where is an numerical index for each edge and is the numerical representation of the composite vertex .note that the use of the composite vertex numerical representation ties the incidence matrix to the mag s companion tuple .although the order of the composite vertices is defined by each composite vertex numerical representation , the order used to represent the mag edges in the incidence matrix is not relevant .the incidence matrix of a directed graph has several well - known properties , among which , the property that the incidence matrix of a directed graph with connected components has rank , where is the number of vertices of the graph .this property is useful for defining other matrices based on the incidence matrix . for a given mag ,the incidence matrix is built by algorithm [ alg : c_h ] , where and are the numerical representation of the origin and destination composite vertices of edge , respectively , as defined in section [ sec : asp_vt_ord ] , and is a unique numerical index for the edge , ranging from to . considering that a sparse matrix with all entries can be created in constant time , and that both functions and ( see algorithm [ alg : tau ] and algorithm [ alg : d ] ) have time complexity of , we conclude that algorithm [ alg : c_h ] has time complexity of , where is the number of aspects of mag and the number of edges . 
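The two constructions just described (the adjacency and incidence matrices) can be sketched with scipy sparse matrices as below. The edge-list format (pairs of composite-vertex tuples with zero-based indices) and the -1/+1 sign convention for the edge origin and destination in the incidence matrix are assumptions made for illustration; the paper's algorithms operate directly on the MAG's edge set.

```python
import numpy as np
from scipy.sparse import lil_matrix

def D(v, tau):
    # numerical representation of a composite vertex (zero-based indices)
    code, acc = 0, 1
    for a, size in zip(v, tau):
        code += a * acc
        acc *= size
    return code

def adjacency_matrix(edges, tau):
    """Sparse adjacency matrix: one entry 1 per edge, rows and columns ordered
    by the numerical representation of the composite vertices."""
    n = int(np.prod(tau))
    J = lil_matrix((n, n), dtype=np.int8)
    for orig, dest in edges:
        J[D(orig, tau), D(dest, tau)] = 1
    return J.tocsr()

def incidence_matrix(edges, tau):
    """Sparse incidence matrix: one row per edge, one column per composite
    vertex, assuming -1 at the edge origin and +1 at the destination."""
    n = int(np.prod(tau))
    C = lil_matrix((len(edges), n), dtype=np.int8)
    for i, (orig, dest) in enumerate(edges):
        C[i, D(orig, tau)] = -1
        C[i, D(dest, tau)] = 1
    return C.tocsr()

# toy usage on a MAG with companion tuple (3, 2, 3) and two edges
tau = (3, 2, 3)
edges = [((1, 0, 0), (1, 1, 0)), ((1, 1, 0), (0, 1, 1))]
J = adjacency_matrix(edges, tau)
C = incidence_matrix(edges, tau)
print(J.shape, C.shape)   # (18, 18) (2, 18)
```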
given the incidence matrix of a mag , it is possible to obtain the incidence matrix of the main components graph using the matrix defined in section [ sec : adj_mat ] .the incidence matrix of is given by further , given the incidence matrix of the mag s main components graph and the matrix , it is possible to recover the mag s incidence matrix , as this is only possible because the columns of , which are forced to by the multiplication by , were already , as the composite vertices represented by them have no edges incident to them .the incidence matrix of the example mag is shown in expression ( [ eq : c(t ) ] ) .the vertices ( columns ) order is determined by the vertices numerical representation , while the edge order remains unconstrained .the trivial components correspond to columns and , which have all entries with value .\ ] ] the main components incidence matrix is depicted in expression ( [ eq : c(m(t ) ) ] ) .it is obtained from the matrix by removing the trivial components , as shown in expression ( [ eq : ct_from_cm ] ) .\ ] ] in general , the incidence matrices related to mags are sparse , and therefore can be efficiently stored using sparse matrices representations , such as csc or csr . assuming that the number of edges on the mag is larger than the number of composite vertices , the use of these representation lead to a memory complexity of , where is the number of edges on the mag .we construct the combinational laplacian matrix of a given mag from its incidence matrix , as since is an matrix , it follows from this construction that , as expected , the laplacian is a positive semidefinite matrix .further , since the rank of is , where is the number of connected components of , it follows that the rank of is also .consequently , the dimension of the nullspace of is , the number of connected components on the mag , a well - known property of the laplacian matrix . in the case of the laplacian , each one of the trivial components of the mag counts as a distinct connected componenttherefore , for a mag with trivial components , we have that , the equality happening in the case where the mag only has trivial components , i.e. when the mag has no edges .the laplacian of the example mag is given by \cdot\ ] ] since the sum of all columns of is and six of the columns are , it follows that the dimension of the nullspace of is , which is the expected value , as the mag has trivial components and a single main component .the entries with value reflect the fact that in this directed graph there are pairs of opposing directed edges , which can be interpreted as a bi - directional connection . as is shown in section [ subsec : wei_lap ] , this can also be seen as the weight associated with this connection .the laplacian can also be constructed for the main components of a given mag . in this case , the laplacian is constructed as or the main component laplacian for the mag is .\ ] ] since the six trivial components were eliminated , the dimension of the nullspace of is .the weighted laplacian matrix of a mag is obtained in a similar way to the combinational laplacian . however, an additional diagonal weights matrix is used to associate a weight to each of the edges of .we denote a weights matrix for a given mag as , where is the number of edges in . given a mag and a weights matrix , the weighted laplacian is defined as in general , the entries on the main diagonal of a weights matrix are positive real values . 
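Because the defining equations are typeset as formulas in the original and are not reproduced here, the sketch below assumes the standard constructions L = C^T C and L_W = C^T W C, which are consistent with the properties stated in the text (symmetry, positive semidefiniteness, and a nullspace whose dimension equals the number of connected components).

```python
import numpy as np
from scipy.sparse import diags

def combinational_laplacian(C):
    """L = C^T C, with C the (edges x composite vertices) incidence matrix."""
    return C.T @ C

def weighted_laplacian(C, edge_weights):
    """L_W = C^T W C, with W a diagonal matrix holding one weight per edge."""
    return C.T @ diags(edge_weights) @ C

def nullspace_dimension(L):
    """Dimension of the nullspace of the Laplacian, i.e. the number of
    connected components (trivial components included); dense rank
    computation, so only intended for small examples."""
    Ld = np.asarray(L.todense(), dtype=float)
    return Ld.shape[0] - np.linalg.matrix_rank(Ld)
```

Applied to the incidence matrix C of the previous sketch, nullspace_dimension(combinational_laplacian(C)) counts the connected components of the MAG, isolated (trivial) composite vertices included.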
in this case, is a symmetric positive - definite matrix and , therefore , is symmetric positive - semidefinite .hence , the rank of is the same as the rank of , so that the nullspace of has the same dimension as the nullspace of .it can be seen that this matrix represents the same object as the supra - laplacian described in .nevertheless , here it is obtained directly from the mag s representation by matrices and further , distinct weights can be directly assigned to each edge if the application needs it . as an example of weighted laplacian for the mag ,consider a weight matrix , where the values of the entries on the main diagonal are given by .\ ] ] this weights matrix assigns weight 0.5 to all six edges that form the bidirectional connection between layers on the example mag .this effectively converts this edge pairs into a undirected edge . by doing this ,the obtained weighted laplacian matrix has the more familiar structure associated with the laplacian of undirected graphs . for this weights matrix, we have \cdot\ ] ] if considering only the main components of a given mag , we have for the case of the example mag and the weights matrix described by equation ( [ eq : diagw ] ) , \cdot\ ] ] another form of applying weights to the laplacian matrix on a given mag leads to the equivalent of the normalized laplacian matrix . in this case, the weights are applied to the composite vertices instead of the edges , as in the weighted laplacian . in order to obtain the normalized laplacian ,weights are applied to the non - zero columns of the incidence matrix , which correspond to the composite vertices of that are not trivial components ( i.e. unconnected composite vertices ) .the weights applied to the non - zero columns are such that the vector represented by each column becomes an unitary vector .this leads to a diagonal weights matrix , where , for which where is the -th column of the incidence matrix and is the euclidean norm of .the normalized laplacian is then obtained by since , where is the degree of the composite vertex corresponding to column , it follows that the formulation for shown in equation ( [ eq : normlap ] ) coincides with the one proposed in . as with the other kinds of laplacian matrices ,the trivial components of can be eliminated using the matrix as mag algorithms covered in this section are based on the mag s adjacency matrix or on its adjacency list . since in generalwe expect the adjacency matrix to be represented using sparse csr ou csc formats , it follows , due to the structure of the csr and csc formats , that the adjacency matrix and adjacency list can be seen as very closely related representations .the algorithms used in mags are directly derived from the basic well - known algorithms used with directed graphs . in this sense ,the purpose of this section is not to propose new algorithms , but to show how known algorithms may be adapted for application in mags .we remark that a python implementation of all the algorithms presented in this section is available at the following url : http://github.com/wehmuthklaus/mag_algorithms .when operating upon a matrix representation , a few auxiliary matrices and vectors are necessary to express the desired operations .we now define these vectors , which are used on the remainder of this section : 1 . all 0s +we denote the column vector with all entries equal to .usually we assume that has the right dimension ( i.e. 
number of rows ) for the indicated operation .when necessary to improve readability , we indicate the dimension by sub - script as in .all 1s + we denote the column vector with all entries equal to .usually we assume that has the right dimension ( i.e. number of rows ) for the indicated operation .when necessary to improve readability , we indicate the dimension by sub - script as in . in all cases , we assume the vectors have the dimension necessary for the operation where they is applied. moreover , specially constructed matrices are used to build sub - determined algebraic algorithms for mags .these matrices provide reduction / aggregation operations needed for sub - determined algorithms .although these matrices are specially constructed for the mag and the sub - determination in question , they have distinct properties and can be constructed by a general algorithm .in fact , the construction of sub - determined algorithms relies on the use of functions to aggregate / reduce results according to the applied sub - determination . in some cases , this function can be as simple as just summing up values obtained in composite vertices , which are reduced to the same sub - determined vertex .however , depending on the algorithm being constructed , this aggregation may need a more elaborate function , which may not be expressed in terms of matrix multiplications . given a mag and a sub - determination , the sub - determination matrix is a rectangular matrix , where is the number of composite vertices of and is the number of composite vertices of the sub - determination applied to the mag .since a sub - determination is a ( proper ) subset of the aspects of a mag , it follows that , i.e. the number of composite vertices of a mag is a multiple of the number of composite vertices in any of its sub - determinations .further , has the property of having exactly one non - zero entry in each column , and the position of this entry is determined by the numerical value of the sub - determined composite vertex .algorithm [ alg : m_zeta ] shows the construction of the sub - determination matrix for a given mag and sub - determination . the function takes a composite vertex to its numerical representation andthe function takes a composite vertex to its sub - determined form , i.e. it drops the aspects not present in the sub - determination . to determine the time complexity of algorithm [ alg : m_zeta ], we consider that the count of composite vertices in line is , the same is the case for the count on line , the construction of companion tuple at line is , the construction of an empty sparse matrix at line is , and , finally , the * for * loop initiated at line is also .since the number of aspects , we conclude that the time complexity of algorithm [ alg : m_zeta ] is . ) for instance , consider the example mag and a sub - determination , which drops the third aspect of .the aspect dropped is the aspect of time instants and , therefore , the two aspects present in are location and transit layers . since in there are 3 locations and 2 transit layers , it follows that .hence , constructed according to algorithm [ alg : m_zeta ] is given by .\ ] ] as a further example , consider the mag and a sub - determination , which drops the location and transit layer aspects , leaving only the time instants aspects . 
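Such sub-determination matrices can be constructed generically as sketched below. The orientation adopted here (rows indexed by sub-determined composite vertices, columns by full composite vertices, exactly one non-zero entry per column) is an assumed convention; with it, multiplying a per-composite-vertex vector by the matrix sums the values of all composite vertices that collapse onto the same sub-determined vertex, which is precisely the aggregation role described above.

```python
import numpy as np
from itertools import product
from scipy.sparse import lil_matrix

def D(v, sizes):
    # numerical representation (zero-based), first aspect least significant
    code, acc = 0, 1
    for a, size in zip(v, sizes):
        code += a * acc
        acc *= size
    return code

def sub_determination_matrix(tau, zeta):
    """Sub-determination matrix for companion tuple tau and a 0/1 tuple zeta
    marking the kept aspects."""
    tau_zeta = tuple(t for t, keep in zip(tau, zeta) if keep)
    n = int(np.prod(tau))
    n_zeta = int(np.prod(tau_zeta))
    M = lil_matrix((n_zeta, n), dtype=np.int8)
    for v in product(*(range(t) for t in tau)):
        sub_v = tuple(a for a, keep in zip(v, zeta) if keep)
        M[D(sub_v, tau_zeta), D(v, tau)] = 1
    return M.tocsr()

tau = (3, 2, 3)
M1 = sub_determination_matrix(tau, (1, 1, 0))   # keep location and transit layer
M2 = sub_determination_matrix(tau, (0, 0, 1))   # keep only the time instants
print(M1.shape, M2.shape)   # (6, 18) (3, 18)
```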
since there are time instants in , it follows that is .\ ] ] note that in these cases the multiplication by the sub - determination matrices performs the sum of the distinct composite vertices that are reduced to a same sub - determined vertex .for instance , given the sub - determination , the matrix is used to aggregate values found in composite vertices into a single sub - determined vertex .the aggregation function in this case is a simple sum .the same is done by the matrix for the sub - determination , where in this case each sub - determined result is the sum of values obtained for composite vertices . in this section ,we show that every function that can be obtained from a mag to a given co - domain set can also be obtained from a matrix representation of the mag . herethe set is the quotient set of finite mags under isomorphism defined in section [ subsec : magdef ] .note that a permutation of a given adjacency matrix , together with the function , represents the same mag as , so that permutations of adjacency matrices are isomorphic .thus , we have the set , which is a quotient set of pairs of adjacency matrices and association functions , under adjacency matrix permutations .therefore , an element of is an equivalence class of adjacency matrices and functions .since we consider the pair as the canonical adjacency matrix representation of the mag , we assign this pair as the class representative of the mag in .the adjacency matrix and companion tuple obtained from the mag by algorithm [ alg : j_h ] are isomorphic to the mag h. [ theo : j ] we show that algorithm [ alg : j_h ] can be seen as a function that takes a given mag to its adjacency matrix and companion tuple , and that this function preserves the adjacency structure of the original mag .further , we show that , from the adjacency matrix and companion tuple , we can construct a mag that is isomorphic to mag .* + given the sets and , algorithm [ alg : j_h ] can be seen as a function considering the loop depicted at lines 5 to 9 in algorithm [ alg : j_h ] , it can be seen that every edge is converted in a pair of composite vertices ( and ) and then represented as an edge on the adjacency matrix .therefore , if the composite vertices and are adjacent in mag , then a entry is present at the intersection of row and column of , indicating the corresponding adjacency in the matrix . hence , the adjacency structure of the mag is preserved by the function .* + given the adjacency matrix and companion tuple , we construct mag , which we then show to be isomorphic to the mag .we obtain from by constructing a list with elements , in which every element of this list is a set such that | = \tau[i] ] are natural numbers ranging from to ] ; ; and .+ since , we know that there is a bijective function from to .further , we also have the bijective function , which takes a composite vertex into a natural number , assigning a unique and distinct natural number to each element of and .moreover , since and by construction of , we have that the range of for and is the same , i.e. . 
from this, we conclude that , for every composite vertex , there is one unique composite vertex such that .we thus define the bijective function + as the function is bijective , for every edge , we have an edge , and also , for every edge , we have the corresponding edge .this fulfils the conditions for isomorphism between and .+ since is a quotient set under the mag isomorphism relation and is isomorphic to , it follows that and correspond to the same element in , making the function bijective . also ,since each entry with value in the adjacency matrix corresponds to an edge in the mag , it follows that also preserves the mags adjacency structure , establishing the isomorphism relation as desired .every function that can be obtained from a mag to a given co - domain set can also be obtained from a matrix representation of the mag .[ theo : univ ] consider the diagram depicted in figure [ fig : diag ] . in this figure, is the set of all mags ( up to isomorphism ) , is the set of pairs of adjacency matrices and companion tuples ( up to permutation ) , is an arbitrary function from to , where is a codomain consistent with the definition of function , and is the identity function in .since the function is arbitrary , it can represent any function or algorithm , such as searches or centrality computations , which take mags to a result expected from this function .\(m ) [ matrix of math nodes , row sep=3em , column sep=4em , minimum width=2em ] & + & + ; ( m-1 - 1 ) edge node [ left ] ( m-2 - 1 ) edge node [ below ] ( m-1 - 2 ) ( m-2-1.east|-m-2-2 ) edge node [ below ] node [ above ] ( m-2 - 2 ) ( m-1 - 2 ) edge node [ right ] ( m-2 - 2 ) ; as both functions ( equation ( [ func : upsilon ] ) in theorem [ theo : j ] ) and represent isomorphisms , it follows that the depicted diagram commutes , so that for every function there is a function , which produces the same result . as a consequence of theorem [ theo : univ ] , it follows that , from the adjacency matrix and companion tuple of a mag , one can obtain any possible outcome that can be obtained from a mag or from any other representation equivalent to it , such as high order tensors , as those presented in recent related works .the definition of degree in a traditional graph stems from the number of edges incident to a given vertex .this concept can be generalized for mags , so that degrees can be defined for composite vertices , sub - determinations , or elements of a given aspect .further , since mag edges are considered to be directed , the degrees are also divided into out - degree and in - degree . in this section ,we present algorithms for calculating these distinct degree definitions .the degree of composite vertices of a given mag can be obtained directly from its composite vertices representation , .since the composite vertices representation is a traditional directed graph isomorphic to the mag , it follows that the degree determination is done with the traditional algorithm for directed graphs with minor changes . 
for a given mag and its companion tuple ,the degrees of the composite vertices can be determined by algorithm [ alg : comp_dg ] , where and stand for the numerical representation of the origin and destination composite vertices of edge , as defined in section [ sec : asp_vt_ord ] .another way for calculating the degrees of the composite vertices is computing it algebraically from the adjacency matrix of the mag , as given by and further , the total degree of the composite vertices can be obtained by summing up their indegrees and outdegrees . to determine the time complexity of algorithm [ alg : comp_dg ], we consider that lines , , and have each time complexity , the determination of the companion tuple at line has complexity , where is the number of aspects of the mag , so that . finally ,since the determination of the numerical representation of vertices has complexity , we have that the * for * loop initiated at line has complexity , so that the time complexity of algorithm [ alg : comp_dg ] is .if we consider that in a given case the order of the mag does not vary , so that is a constant , then the algorithm s time complexity is . in the case of the example mag ( figure [ fig : mag_ex1_ids ] ) ,whose companion tuple is , it can be seen that the composite vertex has outdegree and indegree , while the composite vertex has outdegree and indegree . since and , it follows that = 1 ] , = 2 ] . we can determine the degree for sub - determined composite vertices in a similar way to the degree of composite vertices . given a mag and a sub - determination , the degree of the sub - determined composite vertices can be obtained by algorithm [ alg : subcomp_dg ] , where is the number of sub - determined composite vertices on mag , is the function that takes a composite vertex to its sub - determined form , and is the function that takes the sub - determined composite vertex to its numerical representation . it can be seen that the time complexity of algorithm [ alg : subcomp_dg ] is the same as the time complexity of algorithm [ alg : comp_dg ] .it is important to note that two distinct composite vertices may have the same sub - determined form .this happens when the two composite vertices differ only on aspects which are dropped by the sub - determination . in this case, the degree of each of these composite vertices is summed for obtain the sub - determined degree . from this, it can also be seen that some edges in the sub - determined form may become self - loops .the degrees calculated by algorithm [ alg : subcomp_dg ] include the self - loop edges .this algorithm can be modified to count the self - loops separately , as shown in algorithm [ alg : subcomp_sep_dg ] .this algorithm is similar to algorithm [ alg : comp_dg ] and has the same time complexity .the sub - determined composite vertices degree can also be determined algebraically with and where is the sub - determination matrix and is the all column vector , both defined in section [ sec : auxvect ] .note that the multiplication by adds the degrees of the composite vertices that are collapsed to the same sub - determined vertex .the degrees calculated by equations ( [ eq : subcompindg ] ) and ( [ eq : subcompoutdg ] ) include the self - loop edges . 
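A sketch of the algebraic degree computation, together with its sub-determined aggregation under the matrix orientation assumed in the earlier sketch (rows of the sub-determination matrix indexed by sub-determined vertices), could read as follows; the variable names are illustrative.

```python
import numpy as np

def degrees(J):
    """In-, out-, and total degree of every composite vertex, computed from the
    sparse adjacency matrix J as column sums, row sums, and their sum."""
    ones = np.ones(J.shape[0])
    outdeg = J @ ones
    indeg = J.T @ ones
    return indeg, outdeg, indeg + outdeg

def sub_determined_degrees(J, M):
    """Sub-determined in- and out-degrees, obtained by aggregating the
    composite-vertex degrees with the sub-determination matrix M; self-loops
    created by the aggregation are still included in these counts."""
    indeg, outdeg, _ = degrees(J)
    return M @ indeg, M @ outdeg
```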
to obtain the separate self - loop degrees , first note that this follows from the fact that since is a rectangular matrix andhas the property that each row has exactly one non - zero entry of value .furthermore , note that the matrix is the adjacency matrix of the sub - determined mag .since the composite vertices representation of a sub - determined mag is a multigraph , each non - zero entry shows the number of superposed edges in the sub - determination .therefore , the main diagonal of has the self - loop degree of each vertex .hence , for example , consider the example mag ( figure [ fig : mag_ex1_ids ] ) and the sub - determination defined in section [ sec : auxvect ] .we have that ,\ ] ] ,\ ] ] ,\ ] ] and .\ ] ] this means that , for instance , the sub - determined composite vertex has outdegree , indegree , and self - loops .this sub - determination corresponds to the aggregation of all time instants , which means that the edges in which only the time instant changes become self - loops .these edges are shown in red ( dotted ) in figure [ fig : mag_ex1_ids ] .note that , so that , making it correspond to the second element of the degree column vector .the single aspect degree is a particular case of sub - determined degree in which the sub - determination applied is such that only a single aspect remains .therefore , the determination of single aspect degrees is done in the same way presented in section [ subsec : subdegree ] .we , however , present an additional example illustrating the time instant degree , which is obtained by the sub - determination defined in section [ sec : auxvect ] .this sub - determination has only the third aspect of the mag ( figure [ fig : mag_ex1_ids ] ) , which corresponds to the three time instants present on mag . in this case, we have that ,\ ] ] ,\ ] ] ,\ ] ] and .\ ] ] therefore , we have that , so that , , and . considering the composite vertices representation of mag , depicted in figure [ fig : mag_ex1_ids ], it can be seen that each time instant has self - loop edges ( in blue - dashed ) , which is consistent with equation ( [ eq : dm_zetat(t ) ] ) .further , there are edges from to ( in red - dotted and black ) and edges from to .this is consistent with the adjacency matrix shown in equation ( [ eq : m_zetat(t ) ] ) .further , the indegrees and outdegrees of each time instant are consistent with equations ( [ eq : subindgzt ] ) and ( [ eq : suboutdgzt ] ) .the breadth - first search ( bfs ) is an important graph algorithm that can be seen as a primitive for building many other algorithms .the goal of this section is to illustrate how the bfs algorithm can be adapted for being used in mags , both in its full composite vertices representation and in its sub - determined forms . in the not sub - determined form ,the adaptation is very simple , since the composite vertices representation of a mag is a directed graph . 
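Returning briefly to the degree computations above, the algebraic self-loop separation can be sketched as follows, again under the assumed orientation in which the product of the sub-determination matrix, the adjacency matrix, and the transposed sub-determination matrix plays the role of the sub-determined adjacency matrix:

```python
def sub_determined_self_loops(J, M):
    """Adjacency matrix of the sub-determined MAG and the per-vertex self-loop
    counts read from its main diagonal; each entry of J_zeta counts how many
    original edges are superposed on the corresponding sub-determined edge."""
    J_zeta = M @ J @ M.T
    return J_zeta, J_zeta.diagonal()
```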
in this case , all that is needed is to convert the composite vertices representation from its tuple to numerical form , and then apply the traditional bfs algorithm .the adaptation to the sub - determined forms also does not require major changes on the algorithm .as with many graph algorithms , bfs can be expressed in combinational or in algebraic forms , which are presented in the following related subsections .the non sub - determined bfs in its combinational form is constructed directly upon the mag s adjacency matrix , .considering algorithm [ alg : cvbfs ] and the standard form of the bfs algorithm encountered in , it can be seen that the difference is that the starting composite vertex has to be transformed from its tuple representation to its numerical representation , as shown in lines 8,9 and 10 of algorithm [ alg : cvbfs ] .therefore , from the analysis provided in , we can conclude that the time complexity of algorithm [ alg : cvbfs ] is .bfs is also closely related to matrix multiplication .this stems from the well - known property of the powers of the adjacency matrix , in which the entry of the -th power of the adjacency matrix shows the number of existing walks of length from vertex to vertex . from this, we could think that for a given mag , the series would produce a matrix , such that the entry indicates the number of walks of any length from vertex to vertex .this is indeed the case when happens to be an acyclic mag , making a nilpotent matrix .the existence of cycles in makes that , for some vertices , there will exist walks of arbitrary length connecting them ( namely , the cycles ) , making the series of equation ( [ eq : bfs_non_conv ] ) divergent .however , since the objective is not to know the number of walks between each pair of vertices , but simply to know which vertices are reachable from each other ( i.e. there is at least a path between them ) , this technical problem can be solved by multiplying the adjacency matrix by a scalar , such that where is the spectral radius of the matrix .this leads to the matrix so that the spectral radius of the matrix .this results that equation ( [ eq : bfs_non_conv ] ) constructed with the matrix converges .since the convergence of the series is assured , equation ( [ eq : bfs_non_conv ] ) can be re - expressed as the matrix defined in equation ( [ eq : bfs_conv ] ) has the property that , for any given composite vertex , the row of has non - zero entries in every column that corresponds to a composite vertex , such that is reachable from .hence , for a given composite vertex , the row corresponds to the result of a bfs started at that composite vertex .although the matrix carries the bfs of all composite vertices of the mag , it is important to note that this matrix may not be sparse , which for large mags can lead to difficulties in memory allocation . 
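The memory issue just mentioned is one reason the combinational, queue-based form is often preferable for a single search in practice. A sketch of it on the sparse adjacency matrix is given below, where -1 stands in for the 'infinite' distances and 'nil' predecessors used in the text, and the zero-based conversion helper is repeated so the block is self-contained.

```python
import numpy as np
from collections import deque

def D(v, tau):
    # numerical representation of a composite vertex (zero-based indices)
    code, acc = 0, 1
    for a, size in zip(v, tau):
        code += a * acc
        acc *= size
    return code

def bfs(J, tau, start):
    """BFS over the composite-vertices digraph given by the sparse adjacency
    matrix J, starting from the composite vertex `start` given as a tuple of
    zero-based aspect indices. Returns hop distances and predecessors, with -1
    meaning unreachable / no predecessor."""
    J = J.tocsr()
    n = J.shape[0]
    dist = np.full(n, -1)
    pred = np.full(n, -1)
    s = D(start, tau)
    dist[s] = 0
    queue = deque([s])
    while queue:
        u = queue.popleft()
        for v in J.indices[J.indptr[u]:J.indptr[u + 1]]:   # out-neighbours of u
            if dist[v] < 0:
                dist[v] = dist[u] + 1
                pred[v] = u
                queue.append(v)
    return dist, pred
```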
in order to avoid such difficulties ,it is also possible to express a bfs for a single composite vertex as where is the row vector with entries for which all entries except are and the entry is .considering the example mag , shown in figure [ fig : mag_ex1_ids ] , the result of the bfs using algorithm [ alg : cvbfs ] for the composite vertex , whose numerical representation is , is \\ \notag distances & = [ \infty , 0 , \infty,\infty , 1 , \infty , \infty , 1 , 1 , 2 , 2 , \infty , \infty , 2 , 2 , 3 , 3 , \infty]\\ \notag pred & = [ nil , nil , nil , nil , 2 , nil , nil , 2 , 2 , 5 , 5 , nil , nil , 8 , 8 , 10 , 10 , nil ] , \\\notag\end{aligned}\ ] ] where the list shows the composite vertices accessible from , which in this example represent all locations , transit modals , and time instants reachable from this initial point .the list carries the distances in hops from the initial composite vertex to all possible destinations ( with meaning that a destination is not reachable ) .the list shows the predecessors of each composite vertex , making possible to construct a bfs tree .it is possible to obtain a sub - determined form of the bfs algorithm for mags .it is important , however , to realize that this sub - determined bfs algorithm is not equivalent to applying the bfs algorithms presented in section [ subsec : compbfs ] to a sub - determined mag .a sub - determination is a generalization of the idea of aggregating multilayer and time - varying graphs , as shown in section [ subsec : magsub ] . as with the aggregation process, the sub - determination of a mag can cause the presence of paths and walks on the sub - determined mag that do not actually exist on the original mag . to illustrate this, we present figures [ fig : mag_ex2_ids ] and [ fig : mag_ex2_sub_ids ] , which show a small two aspects mag and its sub - determined form , obtained by the sub - determination .first , note that , in the mag shown in figure [ fig : mag_ex2_ids ] , there is no path originating from the composite vertices or to the composite vertices or .nevertheless , in figure [ fig : mag_ex2_sub_ids ] , there is a path connecting the sub - determined vertex to the sub - determined vertex , even though such connection is not possible on the original mag shown in figure [ fig : mag_ex2_ids ] .therefore , in order to obtain the proper result , the sub - determined bfs should not be evaluated directly using the sub - determined mag .[ fig : mag_r2 ] 0.4 and its sub - determined form.,title="fig : " ] 0.33 and its sub - determined form.,title="fig : " ] such a case can be seen algebraically by noting that given a mag and a sub - determination , in general to see that the inequality ( [ eq : subdif ] ) holds , note that an arbitrary power of the matrix is given by where is multiplied times .note , however , that since is a retangular matrix and , so that the rank of the matrix is less or equal to , while the rank of the identity is . since inequality ( [ eq : neqmz ] ) holds , so does the inequality ( [ eq : subdif ] ) . here , the left hand side of the inequality ( [ eq : subdif ] ) corresponds to the sub - determination of the bfs calculated for the mag , while the right hand side corresponds to the bfs calculated for the sub - determined mag . 
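The inequality can be reproduced numerically on a small constructed example. The MAG below (3 locations, 2 time instants) is hypothetical and chosen only to make the effect visible (it is not the paper's example), and the convergence factor is picked crudely because only the zero/non-zero pattern of the resulting matrices matters.

```python
import numpy as np

tau = (3, 2)                       # 3 locations x 2 time instants, D((a, t)) = a + 3 * t
n = 6
J = np.zeros((n, n))
J[0, 4] = 1                        # (loc0, t0) -> (loc1, t1)
J[1, 5] = 1                        # (loc1, t0) -> (loc2, t1)

M = np.zeros((3, n))               # sub-determination keeping only the location aspect
for code in range(n):
    M[code % 3, code] = 1

def reach(A, beta=0.5):
    """sum_k (beta * A)^k = (I - beta * A)^(-1); beta = 0.5 suffices here
    because both matrices involved are nilpotent."""
    return np.linalg.inv(np.eye(A.shape[0]) - beta * A)

lhs = M @ reach(J) @ M.T           # sub-determination of the reachability of the MAG
rhs = reach(M @ J @ M.T)           # reachability computed on the sub-determined MAG

print((lhs > 1e-12).astype(int))   # entry [loc0, loc2] is 0: no such path in the MAG
print((rhs > 1e-12).astype(int))   # entry [loc0, loc2] is 1: spurious aggregated path
```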
in the case of the mag , shown in figure [ fig : mag_ex2_ids ], we have that the sub - determination is given by and the adjacency and sub - determination matrices are ,\ ] ] ,\ ] ] and .\ ] ] therefore , we have that ,\ ] ] while \ ] ] and .\ ] ] remembering that the entries of the matrices in equations ( [ eq : bfsr ] ) , ( [ eq : bfsr1 ] ) , and ( [ eq : bfsr2 ] ) are to be considered only as zero or non - zero , it can be seen that the matrix at equation ( [ eq : bfsr ] ) has a at entry , while the matrices at equations ( [ eq : bfsr1 ] ) and ( [ eq : bfsr2 ] ) have a non - zero entry at this same position .this illustrates the situation in which a bfs is done on the sub - determined ( aggregated ) mag , as in equations ( [ eq : bfsr1 ] ) and ( [ eq : bfsr2 ] ) , i.e. paths that are not present on the original mag can appear on the sub - determined form , potentially altering the results obtained by algorithms applied to it . for instance , considering the mag , depicted in figure [ fig : mag_ex1_ids ] , for a sub - determination , which drops the time aspect , and considering so that , we have that .\ ] ] algorithm [ alg : subbfs ] shows a combinational version of the sub - determined bfs .this procedure ensures that only paths present on the original mag are considered on the sub - determined bfs .the sub - determination of the results obtained from the bfs is done in the internal * if * , comprising lines to of algorithm [ alg : subbfs ] . after applying algorithm [ alg : subbfs ] to the mag with initial vertex and sub - detemination , the obtained result is \\ \notag distances & = [ 0 , 1 , \infty]\\ \notag pred & = [ nil , 1 , nil ] , \\ \notag\end{aligned}\ ] ] which is consistent with the result obtained by equation ( [ eq : bfsr ] ) .further , applying algorithm [ alg : subbfs ] to mag , shown in figure [ fig : mag_ex1_ids ] , with starting composite vertex and applying the sub - determination , which drops the time aspect , the obtained result is \\ \notag distances & = [ \infty , 0 , 1 , 2 , 1 , \infty]\\ \notag pred & = [ nil , nil , 2 , 5 , 2 , nil ]. \\ \notag\end{aligned}\ ] ] considering that and , , and , this means that disregarding time , starting from it is possible to reach in step , in steps , and in step .it is not possible to reach because there is no bus stop at location , neither because there is no subway station at location . from the predecessor list ( )it is possible to build a bfs tree , where is the root , and are children of , and is a child of .note that and are leaves .it can be seen that the result obtained in equation ( [ eq : bfssubz ] ) is consistent with the results obtained by algorithm [ alg : subbfs ] . comparing algorithm [ alg : subbfs ] to algorithm [ alg : cvbfs ], it can be seen that the main difference is the additional * for * loop at line of algorithm [ alg : subbfs ] . since the time complexity of this loop is , we then conclude that the time complexity of algorithm [ alg : subbfs ] is .the single aspect bfs is a special case of the sub - determined bfs . as such, it is evaluated using the same algorithms presented in section [ subsec : subbfs ] for the sub - determined case .considering the example mag ( figure [ fig : mag_ex1_ids ] ) , a sub - determination , which drops the time and transit mode aspects ( thus leaving only the locations aspect ) , and making so that , we have that ,\ ] ] indicating that disregarding the aspects of time instants and transit modes , all locations can be reached from any location . 
applying algorithm [ alg : subbfs ] to the mag , with starting composite vertex and employing the sub - determination , which drops the aspects of the transit mode and time instants , the obtained result is \\ \notag distances & = [ 0 , 1 , 2]\\ \notag pred & = [ nil , 1 , 2 ] , \\ \notag\end{aligned}\ ] ] which is consistent with the result obtained by equation ( [ eq : bfssubz ] ) . in this section ,we show the adaptation of the depth - first search ( dfs ) algorithm for use with mags .the dfs algorithm exposes many properties of the mag structure and can be used as a primitive for the construction of many other algorithms .we present dfs algorithms for both the full composite vertices representation of the mag as well as for the sub - determined form .we remark that in the sub - determined algorithm the full information of the mag is used , in the sense of preventing the use of paths that may exist in the sub - determined form of the mag , while not actually existing in the original mag .the composite vertices implementation is constructed using the mag s adjacency matrix and companion tuple .the implementation shown is very similar to the traditional implementation presented in , which is expected since the composite vertices representation of the mag is indeed a directed graph , so that the original algorithm applies .the proposed implementation can be seen in algorithm [ alg : dfs_cv ] is similar to the original implementation .therefore , considering the analysis provided in , we conclude that the time complexity of algorithm [ alg : dfs_cv ] is . when applied to mag , shown in figure[ fig : mag_ex1_ids ] , the dfs algorithm generates the result \\ \notag f & = [ 1 , 21 , 23 , 25 , 18 , 27 , 29 , 16 , 20 , 11 , 17 , 31 , 33 , 9 , 15 , 6 , 10 , 35 ] \\\notag pred & = [ nil , nil , nil , nil , 2 , nil , nil , 11 , 2 , 5 , 5 , nil , nil , 17 , 8 , 10 , 10 , nil ] , \\ \notag\end{aligned}\ ] ] where the list carries the discovery time of each composite vertex , the list the respective finish time of each composite vertex , and the predecessor list of each composite vertex .the sub - determined dfs algorithm is presented in algorithm [ alg : dfs_sub ] and is similar to the non sub - determined one .the main differences are at the procedure visit - dfs - sub and the call to a sub - determined bfs at line 15 of the dfs - sub function .this version for a sub - determined bfs is considered in order to determine reachability of sub - determined vertices from the root of each sub - determined dfs tree .this is necessary to prevent including vertices not reachable from the tree root in the non sub - determined mag into the dfs trees constructed by procedure visit - dfs - sub .an example of this is provided in equation ( [ eq : res_dfs_subr ] ) .the difference in procedure visit - dfs - sub is that in addition to the root vertex for the dfs tree it also receives the reachability vector produced by the bfs .this reachability vector has one entry for each sub - determined vertex .this entry has value when corresponding to a reachable vertex , while entries corresponding to unreachable vertices carry value . in order to determine the time complexity of algorithm [ alg : dfs_sub ] , we consider that the sub - determined bfs executed at line 15 of function dfs - sub is done once for the root vertex of each sub - determined dfs tree .since it is executed only once for each dfs tree , we conclude that the total time expended in the sub - determined bfs algorithm is . 
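An iterative sketch of the composite-vertices dfs (discovery time, finish time, predecessor) is shown below. It emulates the recursive textbook formulation with an explicit stack, so the visiting order, and hence the exact time stamps, may differ from those of the paper's recursive version; the paper's algorithm additionally accepts the starting vertex as a tuple and uses the companion tuple for the conversion.

```python
import numpy as np

def dfs(J):
    """Depth-first search over all composite vertices of the digraph given by
    the sparse adjacency matrix J. Returns discovery times, finish times and
    predecessors (-1 = no predecessor); times start at 1 as in the usual
    textbook formulation."""
    J = J.tocsr()
    n = J.shape[0]
    d = np.zeros(n, dtype=int)
    f = np.zeros(n, dtype=int)
    pred = np.full(n, -1)
    color = np.zeros(n, dtype=int)          # 0 white, 1 gray, 2 black
    time = 0
    for root in range(n):
        if color[root] != 0:
            continue
        stack = [root]
        while stack:
            u = stack[-1]
            if color[u] == 0:               # first visit: discover u
                color[u] = 1
                time += 1
                d[u] = time
                neigh = J.indices[J.indptr[u]:J.indptr[u + 1]]
                for v in neigh[::-1]:       # reversed so low-numbered vertices go first
                    if color[v] == 0:
                        pred[v] = u
                        stack.append(v)
            elif color[u] == 1:             # all descendants done: finish u
                color[u] = 2
                time += 1
                f[u] = time
                stack.pop()
            else:                           # duplicate stack entry, already finished
                stack.pop()
    return d, f, pred
```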
since the reachability check included in function visit - dfs - subis done by verifying the content of one entry of the reachability vector , it is done in and therefore does not affect the overall time complexity of the visit - dfs - sub function .therefore , since the dfs is run upon the sub - determined mag , it follows that the time complexity of doing the dfs part of the algorithm is . since and , we conclude that the time complexity is dominated by the bfs used for the reachability determination , making the overall time complexity of algorithm [ alg : dfs_sub ] to be .when applying the sub - determined dfs algorithm to the example mag shown in figure [ fig : mag_ex1_ids ] with a sub - determination , which drops the time aspect , the obtained result is \\ \notag f & = [ 1 , 9 , 4 , 7 , 8 , 11 ] \\\notag pred & = [ nil , nil , 2 , 5 , 2 , nil ] , \\ \notag\end{aligned}\ ] ] where the list carries the discovery time of each sub - determined composite vertex , the list its finish time and its predecessor . considering the mag shown in figure [ fig : mag_ex2_ids ] with a sub - determination ,the result obtained by algorithm [ alg : dfs_sub ] is \\\notag f & = [ 3 , 2 , 5 ] \\\notag pred & = [ nil , 1 , nil ] .\end{aligned}\ ] ] it can be seen that even though in the mag sub - determined by ( see figure [ fig : mag_ex2_sub_ids ] ) there is a path from vertex to , vertex is not in the same dfs tree as vertices and , even with the dfs starting at vertex , as can be seen in $ ] .this occurs because in mag ( with no sub - determination ) there is no path connecting the composite vertex to the composite vertex .the single aspect dfs is a special case of the sub - determined dfs . as such, it is evaluated using the same algorithms presented for the sub - determined case in section [ subsec : subdfs ] .applying algorithm [ alg : dfs_sub ] to mag ( figure [ fig : mag_ex1_ids ] ) , and employing the sub - determination , which drops the aspects of transit modes and time instants , the obtained result is \\ \notag f & = [ 5 , 4 , 3 ] \\\notag pred & = [ nil , 1 , 2 ] , \\\notag\end{aligned}\ ] ] where the list carries the discovery time of each sub - determined composite vertex , the list its finish time , and its predecessor .in this paper , we have presented the algebraic representation and basic algorithms of multiaspect graphs ( mags ) .the key contribution has been to show that models based on the mag abstraction ( formally defined in ) can be represented by a matrix and a companion tuple .furthermore , we have also shown that any possible mag function ( algorithm ) can be obtained from this matrix - based representation .this is an important theoretical result because it paves the way for adapting well - known graph algorithms for application in mags . in this sense , we have presented the adaptation for the mag context of basic graph algorithms , such as computing degree , bfs , and dfs . in particular , we have also presented the sub - determined versions of the same basic algorithms , showing that such versions disregard spurious paths that usually result from the sub - determination process , thus avoiding the pollution of the results with the consideration of such paths . 
as future work, we intend to build upon the results here obtained for the algebraic representation and basic algorithms of mags to analyze mag properties , such as the centrality of edges , composite vertices , and aspects .we also intend to consider the dynamics encountered in these properties in the cases where one of the mag aspects represents time .finally , we are also targeting the application of the mag concept for the better understanding , modeling , and analysis of different complex networked systems found in real - world applications .this work was partially funded by the brazilian funding agencies capes ( stic - amsud program ) , cnpq , finep , and faperj as well as the brazilian ministry of science , technology , innovations , and communications ( mctic ) .k. wehmuth and a. ziviani , `` distributed location of the critical nodes to network robustness based on spectral analysis , '' in _ proc . of the latin american network operations and management symposium ( lanoms ) _ , pp . 18 , ieee , oct .2011 .a. guimares , a. b. vieira , a. p. c. da silva , and a. ziviani , `` fast centrality - driven diffusion in dynamic networks , '' in _ proc . of the workshop on simplifying complex networks for practitioners ( simplex ) , www 2013 _ , pp . 821828 , acm , may 2013 .j. leskovec , j. kleinberg , and c. faloutsos , `` graphs over time : densification laws , shrinking diameters and possible explanations , '' in _ proc . of the acm sigkdd int .conf . on knowledge discovery in data mining ( kdd )177187 , acm , aug .2005 .i. scholtes , n. wider , and a. garas , `` higher - order aggregate networks in the analysis of temporal networks : path structures and centralities , '' _ the european physical journal b _ , vol .89 , pp . 115 , mar .2016 .j. c. lucet , c. laouenan , g. chelius , n. veziris , d. lepelletier , a. friggeri , d. abiteboul , e. bouvet , f. mentre , and e. fleury , `` electronic sensors for assessing interactions between healthcare workers and patients under airborne precautions , '' _ plos one _ , vol . 7 , p. e37893 , may 2012. f. h. z. xavier , l. m. silveira , j. m. almeida , a. ziviani , c. h. s. malab , and h. t. marques - neto , `` analyzing the workload dynamics of a mobile phone network in large scale events , '' in _ proc .of the first workshop on urban networking ( urbane ) , acm conext _ , pp . 3742 , acm , dec .2012 .m. szell , r. lambiotte , and s. thurner , `` multirelational organization of large - scale social networks in an online world , '' _ proceedings of the national academy of sciences ( pnas ) _ , vol .107 , pp . 1363613641 , aug .2010 . c. sarraute , j. brea , j. burroni , k. wehmuth , a. ziviani , and j. i. alvarez - hamelin , `` social events in a time - varying mobile phone graph , '' in _ proc . of the int .conf . on the scientific analysis of mobile phone datasets ( netmob ) _ , ( cambridge , ma , usa ) , apr .2015 .a. sol - ribalta , m. de domenico , n. e. kouvaris , a. daz - guilera , s. gmez , and a. arenas , `` spectral properties of the laplacian of multiplex networks , '' _ physical review e _ ,88 , p. 032807, sept . 2013. m. de domenico , a. sol - ribalta , e. cozzo , m. kivel , y. moreno , m. porter , s. gmez , and a. arenas , `` mathematical formulation of multilayer networks , '' _ physical review x _ , vol . 3 , p. 041022m. d. domenico , c. granell , m. a. porter , and a. arenas , `` the physics of spreading processes in multilayer networks , '' _ nature physics _ , aug .article in press , doi : http://doi.org/10.1038/nphys3865 .
we present the algebraic representation and basic algorithms for multiaspect graphs ( mags ) . a mag is a structure capable of representing multilayer and time - varying networks , as well as higher - order networks , while also having the property of being isomorphic to a directed graph . in particular , we show that , as a consequence of the properties associated with the mag structure , a mag can be represented in matrix form . moreover , we also show that any possible mag function ( algorithm ) can be obtained from this matrix - based representation . this is an important theoretical result since it paves the way for adapting well - known graph algorithms for application in mags . we present a set of basic mag algorithms , constructed from well - known graph algorithms , such as degree computation , breadth first search ( bfs ) , and depth first search ( dfs ) . these algorithms adapted to the mag context can be used as primitives for building other more sophisticated mag algorithms . therefore , such examples can be seen as guidelines on how to properly derive mag algorithms from basic algorithms on directed graphs . we also make available python implementations of all the algorithms presented in this paper .
several investigators have proposed the presence of two temporal clusters of very large earthquakes during the past century , e.g. .the first cluster occurred in the middle of last century and included the 1952 mw 9.0 kamchatka earthquake , the 1960 mw 9.5 chile earthquake and the mw 9.2 alaska earthquake ( ) .the second aparent cluster began with the occurrence of the mw 9.15 sumatra earthquake of 26 december 2004 and has continued with the mw 8.8 chile earthquake on 27 february 2010 and the mw 9.0 tohoku earthquake on 11 march 2011 .this recent cluster has given rise to debate about whether the observed temporal clustering of these very large earthquakes has some physical cause or has occurred by random chance . used three statistical tests to conclude that the global clustering can be explained by the random variability in a poisson process .his first test was an analysis of inter - event times using a one - sided kolmogorov - smirnov test .the second test showed that the occurrence of very large earthquakes is not correlated with the occurrence of smaller events .the third test demonstrated that temporal clustering in seismic moment release occurs in about of the samples when the number of events is drawn from a poisson distribution and is not constrained as in the modeling of . in another article , reach the same conclusions testing for the poissonian hypothesis using a different set of statistical quantities .the purpose of this paper is to discuss the power of traditional statistical tests to establish unequivocally the existence or not of earthquake clusters for catalogues with small numbers of events and not amenable to experimental repeatability . in general , to study the power of statistical tests we need to enunciate an alternative hypothesis and calculate the probability of correctly rejecting a false hypothesis ; this is not the case for most studies of earthquake clusters since , to our knowledge , no stochastic process other than poisson has been widely hypothesized and tested for in the earthquake catalogue .the objective of our study is to determine the probability with which a random sample of a contrived non - poissonian process is rejected in a test in which the null hypothesis is a poisson process . to aid the discussionwe have devised a stochastic process which is clustered by construction and whose samples play the role of earthquake catalogues with a given magnitude threshold and de - clustered to remove aftershocks .to each one of these samples we apply a specific statistical test and use the set of p - values obtained in this way to calculate their probability distribution .this distribution will inform us the probability that any random sample of this process will pass or fail a test for poissonian statistics . 
to justify the merits of our analysis , we start by observing that poisson is the unique discrete stochastic process that satisfies two conditions : lack of memory ( markov assumption ) and a constant probability of event occurrence through time .the exceptional character of this process makes it a valuable tool in the natural sciences since lack of memory and time independence can be inferred either _ a priori _ or _ a posteriori _ - in this case by showing that the observed data fits well a poisson distribution .statistical inference of this sort is usually obtained through consensus of a large number of independent experiments , sometimes aided by theoretical models ) ] on the other hand , there is an infinite number of processes not satisfying one or both conditions .this becomes relevant when the available data is limited and a consensus view can not be established since short data series could be explained adequately by more than one stochastic process .it is granted that , even in such occasions a poisson distribution can be postulated on arguments of simplicity and plausibility , which , while scientific valid does not constitute objectively an explanation for a phenomenon .otherwise , a poisson process should be regarded as only one among many possible explanations .one obvious question that arises from these considerations regards how much data is enough so that an inference exercise can assert beyond reasonable doubt which model explains the data observed .the answer to this question lies in the scale of the stochastic process as compared with the length of the observed , which is illustrated by study .the stochastic process that we use to assess the skill of poissonian properties tests was devised as a theoretical artefact and not as a statistical model for the earthquake catalogue or associated with any particular physical reality .it was designed to convey in a synthetic manner the features of clustered data in which clusters may occur randomly and at relative low frequencies .this process is constructed by generating a poisson series at low event rates ( from 2 to 3 cluster per century ) equivalent to 110 years of observations .a cluster is a period of increased rate of event occurrence ; we express this by inserting in each cluster occurrence a poisson sample with a 10-fold increase in frequency ( 3 to 4 events per decade ) and a duration of 15 years .clusters are not allowed to overlap , but can neighbour each other to form mega - clusters of years .each particular choice of parameter will give rise to a particular distribution of the 110-years average event rate .we have chosen the parameters above to coincide with the general scale of the observed global earthquake catalogue : average event frequencies will range from 1 to 2 evens per decade , with a large variability for the averages derived from any single 110 years sample ( standard deviation ) . as a reference ,the global earthquake catalogue for a cut - off magnitude of 8.3 is approximately 2 events per decade .the samples generated by this process will produce clusters which are aperiodic and could be interpreted either as due to self sustained triggering ( one event increases the probability of another ) or as an overall increase in event rates due to a single underlying cause . 
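A compact numerical sketch of this construction is given below. The specific values (3 cluster onsets per century, 3.5 events per decade inside a 15-year window) are one choice within the ranges quoted above, and overlapping windows are simply merged, which is a simplification of the 'no overlap, but neighbouring allowed' rule; the sketch is an illustration of the recipe, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)

def clustered_sample(T=110.0, clusters_per_year=0.03,
                     cluster_len=15.0, in_cluster_rate=0.35):
    """One synthetic 110-year 'catalogue' of event times: cluster onsets arrive
    as a Poisson process, and events occur only inside the 15-year cluster
    windows, at a roughly ten-fold higher Poisson rate."""
    onsets = np.sort(rng.uniform(0, T, rng.poisson(clusters_per_year * T)))
    windows = []
    for t in onsets:                      # merge overlapping cluster windows
        if windows and t < windows[-1][1]:
            windows[-1][1] = t + cluster_len
        else:
            windows.append([t, t + cluster_len])
    events = []
    for a, b in windows:
        events.extend(rng.uniform(a, b, rng.poisson(in_cluster_rate * (b - a))))
    return np.sort(np.array(events))

sample = clustered_sample()
print(len(sample), "events in 110 years")
```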
in either case, the samples are, on average, clustered enough so that the p-value distribution is non-uniform and skewed to the left. to best represent the variability in the genesis of earthquakes, both clusters and events within clusters are subjected to the full variability of a poisson process - this means a non-zero probability of entire centuries with no clusters. we do not consider the problem of cluster detection for catalogues where true poisson noise or another independent clustered process is added to the background, nor that of a periodic cyclic process - we assume it to be self-evident that this entails a greater similarity with a poissonian process. the approach we take here is conservative insofar as the process we envisage produces samples which are more clustered than a true poissonian process. p-value distributions were obtained by generating 10,000 independent samples of our process and by performing three different statistical tests on each independent sample to obtain the corresponding p-value. the tests we have chosen are three: (a) a kolmogorov-smirnov test on the inter-event time distribution, (b) a pearson chi-square test on event counts, and (c) the same test on the multiple-event inter-event time distribution. test (a) was performed under the null hypothesis that inter-event times follow the exponential distribution characteristic of a poisson process, where is the average event rate per year and is the time measured in years. the annual event rate is re-calculated for each sample, simulating our ignorance of the true event rate. test (b) is the usual pearson chi-square that tests for similarities in the histogram distribution between the samples and a poisson distribution. test (c) performs the same test as (b) on the inter-event time distribution for multiple events, using the corresponding poisson distribution as the null hypothesis. these computations were performed using the mathematica software package. our procedure was tested by performing these same tests with poisson-generated samples, which correctly output uniform distributions of p-values. a sample result is the p-value distribution shown in figure [figure1]. it represents the probability of measuring a given p-value for test (a) for a single 110-year clustered sample. we have adopted the most common convention of rejecting a hypothesis for p-values smaller than 5%, a criterion that we return to in later discussions. we have repeated the same process by varying the parameters of our process to estimate the power of detection of test (a) as the frequency of clusters and of events within clusters vary. the process parameters for the result of figure [figure1] are 3 clusters per century and 4 events per decade over a 15-year cluster period, or 3-by-4 in short. with these parameters, the average annual frequency of events is 0.12 events/year, with 70th and 90th percentiles above the median of approximately 0.15 and 0.2 events/year. in the histogram of figure [figure1] bins are plotted in intervals of 5%, and we can see that the probability of a p-value smaller than is .
if the 5% significance level is strictly adopted, this is the chance that an observer would correctly reject the hypothesis, and it implies a type-ii error probability of 60%. we performed test (a) varying the parameters of the generating poisson processes and show in figure [figure2] the probabilities of obtaining a p-value smaller than 5%. as expected, for a low frequency of events per cluster the process looks more like a poisson process and is less frequently rejected; the samples of this process are maximally non-poissonian for high cluster and event frequencies, with a probability of a correct `` reject '' above 70%. average event frequencies for these values vary from 2 to 1 events per decade in the 5-by-5 and 3-by-4 cases respectively (see figure [figure1]). we will not discuss test (b), which has proven to be the least skilled of all, with a probability of rejection on the order of 20%. the most successful test among those we studied is (c). in it, we generated the null hypothesis distribution from the inter-event time distribution of samples of a poisson process, against which the clustered samples were tested. in this test, the poisson hypothesis distribution is assumed to have a _ known average event frequency _ given by the long-term mean over _ all samples_. a more sensible approach is to take into consideration the probability that a single 110-year average event rate will be that of the long-term average. this can be done, by brute force, by generating one null hypothesis for each sample to be tested against all samples, thus accounting for the chance that a particular 110-year average and the long-term average event rate are the same. in the interest of focusing on the essential points, we show the distributions for this test for the cases where the poisson frequency is the long-term average (over all samples), and the 70% and 90% quantiles above the median, for the same parameters as those shown in figure [figure1] (3 clusters per century, 4 events per decade within clusters). the results are shown in figures [figure3] (a) through (c). in them we see that, if we pick a sample whose value is the same as the average event rate, the test will correctly detect the non-poissonian nature of this process 80% of the time. for samples whose average event rate is above the 90% quantile, the probability of correctly rejecting the null hypothesis drops to about 70%. the true value of the power of this test will depend on the degree of confidence in the estimation of the average event frequency and on our belief of how accurately a single 110-year sample informs on the `` true '' long-term average. this reflects the fact that this process unfolds on time-scales greater than 110 years. we stress that our presentation is not a claim that the stochastic process we devised is a realistic model of the genesis of mega-earthquakes on a global scale. the results we have shown are solely an illustration of the pitfalls of statistical tests and of type ii errors. at the heart of these issues lies the statistical variability of the process we used, which can be plainly expressed by saying that some of the samples are more poissonian than others. it is the assessment of differences _ between _ trajectories that enables us to determine the falsehood of the poissonian hypothesis. the results of a similar study using plausible non-poissonian processes, and the effects of introducing a poissonian background, remain to be seen.
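to make the power estimates above reproducible in spirit, a hedged sketch of test (a) and of the monte-carlo power computation is given below; it reuses the clustered_catalogue generator sketched earlier, replaces the paper's mathematica routines with scipy, and re-estimates the event rate from each sample as described in the text:

```python
import numpy as np
from scipy import stats

def ks_pvalue(events):
    """Test (a): KS test of inter-event times against an exponential
    distribution whose rate is re-estimated from the sample itself."""
    dts = np.diff(events)
    if len(dts) < 2:
        return 1.0                              # too few events: never reject
    scale = dts.mean()                          # 1 / estimated annual rate
    return stats.kstest(dts, "expon", args=(0.0, scale)).pvalue

# monte-carlo power estimate: fraction of clustered samples rejected at 5%
pvals = np.array([ks_pvalue(clustered_catalogue()) for _ in range(10_000)])
print(f"estimated power of test (a): {(pvals < 0.05).mean():.2f}")
```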
as we noted before, arguments of plausibility and simplicity based on markovian and time-independence assumptions provide solid grounds for hypothesising a poisson process as a likely candidate to explain the global earthquake catalogue. however, when viewed _ only on the merits of the observed data _, the probability of type ii statistical errors, such as those we computed here, must be taken into account. our degree of belief in a given premise is explicitly manifest in bayesian inference through the assignment of prior probabilities; any argument that claims to interrogate the data _ alone _ should state its prior clearly, whether equal probabilities or weighted towards a poisson distribution. [ such an argument can be made not only as a matter of scientific clarity but also as an aid to scientific imagination. ] another aspect we discuss here regards the levels of significance commonly used in statistics. we have used 5% as the standard _ par excellence _ for rejecting a hypothesis. economics can provide the basis for a rational approach to choosing levels of significance by considering the costs of taking an erroneous decision based on a failed test. such an argument in the case of earthquake clusters is neither straightforward nor scientifically objective. regarding the unequivocal establishment of a scientific statement, the setting of a level of significance should take into consideration probabilities such as those we have derived here. in figure [figure1], for example, the probability of a p-value above 20% is non-trivial ( ). in light of this discussion, we can consider the recent results of earthquake cluster detection. test (a) corresponds to the first test of , which was applied to the global catalogue at a large cut-off magnitude of , corresponding to a frequency of approximately 0.04 event/year, and he reports p-values as low as 0.12. we have shown that the same test would not be accurate even at a much lower cut-off implying an event frequency of 0.1 - 0.2 event/year (corresponding magnitude thresholds between 8.4 and 8.3). from , we are mostly interested in their multinomial test as it is equivalent to our case (c), which we assessed as the most powerful; in their work, the p-values reported for a magnitude cut-off range from 35% to 25% depending on the de-clustering undertaken. the average event rate for these magnitudes is well above any we have analysed here (0.8 - 0.7 event/year). more relevant to this discussion is the assertion that: `` (...) the null hypothesis that times of large earthquakes follow a homogeneous poisson process would not be rejected by any of the tests ''. based on our discussion, the criterion for accepting (or rejecting) a hypothesis is not a clear-cut line. these considerations go beyond the specifics of cluster detection ( e.g. ).
this work does not suggest that clustering is a real phenomenon considering that our test - process is highly contrived .this is a tentative way to introduce some objectivity into assertions such as `` random variability explains earthquake catalogue '' : what would really be meant by `` explains '' ?the fact that a given series of events has a `` reasonable '' probability according to such a process ( and we have yet to define what we mean by reasonable ) at mostwe could say it is consistent when favouring some prior , but as far a such limited data set is available such strong conclusions must not be taken for granted .the author acknowledges the contributions of profs paul somerville , rob van den honert and john mcaneney to this paper , and the financial support of lloyd s of london .ammon , c. j. , r. c. aster , t. lay and d. w. simpson ( 2011 ) , the tohoku earthquake and a 110 year spatiotemporal record of global seismic strain release , _ seismological society of america meeting , memphis , april 14 , 2011 , _ bufe , c. g. and d. m. perkins ( 2005 ) , evidence for a global seismic moment release sequence , _ bull .am . , _ _ 95 _ , 833-843 bufe , c. g. , and d. m. perkins ( 2011 ) , the 2011 tohoku earthquake : resumption of temporal clustering of earth s megaquakes , _ seismological society of america meeting , memphis , april 14 , 2011 , _ kerr , r. ( 2011 ) .more earthquakes on the way ?_ science , _ _ 332 _ 411 michael , a. ( 2011 ) , random variability explains apparent global clustering of large earthquakes , _ geo .lett . , _ _ 38 _ , l21301 .shearer , p. m. and stark p. b. ( 2011 ) , global risk of big earthquakes has not recently increased , _ pnas _, published ahead of print december 19 , 2011 , doi:10.1073/pnas.1118525109 merril w. c. and k. a. fox , introduction to economic statistics _john wiley _ , 1970 gardiner c. w. , handbook of stochastic methods _ springer verlag _ , 2003 mackay , d. j. c. ( 2003 ) , information theory , inference , and learning algorithms , _ cambridge press _mathematica software documentation _ http://reference.wolfram.com/mathematica/ref/pearsonchisquaretest.html _stumpf , michael p. h. and porter , mason a. ( 2012 ) , critical truths about power laws , _ science _ , _ 335 _ , 665666
testing the global earthquake catalogue for indications of non-poissonian attributes has been an area of intense research, especially since the 2011 tohoku earthquake. the usual approach is to test statistically the hypothesis that the global earthquake catalogue is well explained by a poissonian process. in this paper we analyse one aspect of this problem which has been disregarded in the literature: the power of such tests to detect non-poissonian features if they existed; that is, the probability of type ii statistical errors. we argue that the low frequency of large events and the brevity of our earthquake catalogues reduce the power of the statistical tests, so that an unequivocal answer to this question is not guaranteed. we do this by providing a counter-example of a stochastic process that is clustered by construction and by analysing the resulting distribution of p-values given by the current tests.
neutrinos are fundamental particles that occur mainly in three flavours: electron neutrino, muon neutrino and tau neutrino. while propagating, a neutrino can change its flavour and hence oscillate from one flavour to another. this change of flavour is due to the mass possessed by them. many experiments around the globe are ongoing to measure various neutrino properties. along similar lines, the `` india-based neutrino observatory '' (ino) with its magnetised iron calorimeter detector (ical) will be able to contribute to one such problem, the mass hierarchy, besides complementing the data of long-baseline neutrino experiments. the ino is a proposed underground facility to be built in theni at the bodi west hills of south india. one of the experiments at the ino site for studying atmospheric neutrinos will be ical. the ical detector comprises three identical modules, each having dimensions of 16 m 16 m 14.45 m. the total number of layers in ical will be 150, with 5.6 cm thick iron plates interleaved by resistive plate chambers (rpcs). a magnetic field of 1.3 t is applied in ical, generated by passing current through copper coils that pass through coil slots in the plates; the field is distributed non-uniformly, dividing the whole ical (each module) into three main regions. the ical will have good energy and direction resolution and good reconstruction and charge identification efficiencies. the ical is the target in which neutrinos interact to produce muons and hadrons, so we need efficient tracking in the detector, which is provided by resistive plate chambers (rpcs). these muons leave a track that can be captured in one such detector, the `` resistive plate chamber '' (rpc). so the operational characteristics of rpcs need to be understood to get the maximum efficiency. in the next section we give the details of the rpc detectors. the rpc is a low-cost detector which is being used in various experiments like belle and cms and will be used in the near-future experiment ical at ino. fig. [rpc_view] shows a schematic of the rpc detector. rpcs are parallel plate gas detectors built using electrodes of high bulk resistivity such as glass or bakelite. glass is thus the most important component in rpcs, as any damage to it can affect the properties of the detected particles. based on their excellent position and time resolutions, they have been used for charged particle track detection and time-of-flight experiments. the rpc is fabricated by assembling two glass or bakelite plates having a bulk resistivity of -cm, forming a gas gap filled with a certain gas mixture. across this gap a high voltage is applied. a thin layer of graphite is coated over the external surface of the electrodes to permit uniform application of the high voltage. the electrodes are kept apart by means of small cylindrical spacers having a diameter of around 10 mm and a bulk resistivity greater than -cm (refer to fig. [rpc_view]). a gas mixture could consist of _ argon _, which acts as a target for ionising particles, while _ isobutane _, being an organic gas, helps to absorb the photons that result from recombination processes, thus limiting the formation of secondary avalanches far from the primary ones.
an electronegative gas like _ freon (r134a) _ may serve the purpose of limiting the amount of free charge in the gas and acts as a quenching gas. electric signals are induced on pick-up strips, which are oriented in orthogonal directions and placed on the outer surfaces of the glass electrodes. these pick-up strips are used to read the signal generated by the rpc. modes of operation: there are two modes of operation for an rpc, depending on the operating voltage and gas composition used. * avalanche mode * operates at a lower voltage and the gain factor is . it occurs when the external field opposes the electric field of the ionising particles and the multiplication process stops after some time. the charges then drift towards the electrodes and are collected there. this mode is used in those experiments where the event rate is low. * streamer mode * occurs when the secondary ionisation continues until there is a breakdown of the gas and a continuous discharge takes place. this mode operates at a high voltage and the gain factor is . this mode is used in those experiments where the event rate is high. whether operated in avalanche or streamer mode, it has been reported that the efficiency of the rpc deteriorates over time, and the reason for such a low efficiency is assigned to the aging of glass-based rpcs. to ensure that rpcs work efficiently for a relatively long time period (a couple of years), we need to understand the aging process so as to minimise the factors responsible for it. there are many factors which could lead to the aging effect. there can be different kinds of aging effects: aging of the materials irrespective of the working conditions, aging due to the integrated dissipated current inside the detector, and aging due to irradiation. one of the reasons for aging is the possible contamination of the detector gas with impurities and moisture. another reason is electron-ion recombination, which produces uv photons that cause damage to the electrodes. also, the glass material itself having some internal impurities can deteriorate the surface. the first two factors can be controlled, but the third one has to be taken care of by choosing the best glass electrode. the choice of gases is also important to prevent aging. one of the plausible chemical reactions of the freon gas, producing fluoride radicals that react with moisture, causes damage to the electrodes due to the formation of hydrogen fluoride (hf) inside the gas gap. so, in order to improve the stability of rpcs, the characterisation of the electrode material (glass) is important. now, we describe the characterisation of glass in order to understand and minimise the effect of aging. the fabrication process is outlined in section [virpc]. the glass electrode is one of the crucial components, and a proper choice of electrode helps in minimising the aging of rpcs. so we have done detailed characterisation studies based on various techniques for various glass samples procured from different manufacturers in the domestic market. the following properties contribute to the glass characterisation process. * physical properties * : knowing the mass, length and breadth, we measured the density of all the glass samples. no significant difference in the density was found. the results are given in table [table1] (density measurements of asahi, saint gobain and modi glass samples). * optical properties * : transmittance studies for
various glass samples over the ultraviolet to visible light spectrum was carried out .this will indicate the general bulk quality of the glass and a measure of level of impurities in the glass .uv / vis spectroscopy was used for the optical characteristics . fig .[ fig : bulk ] ( left ) shows the optical transmittance for all the glass samples .asahi and saint gobain glass shows better uv - vis transmittance than modi glass sample . * electrical properties * : the bulk resistivity of the glass samples was calculated using two - probe method .the two probe method is one of the standard and most commonly used method for the measurement of resistivity of very high resistivity samples like sheets / films of polymers .[ fig : bulk ] ( right ) shows the bulk resistivity of asahi , saint gobain and modi ( thickness 2.10 mm ) and was found of the order of -cm . * surface properties * : the surface quality of the electrode is crucial in reducing spontaneous discharges which might affect the rate capability of the detector . atomic force microscopy ( afm ) and scanning electron microscopy ( sem ) has been used to study and compare the surface quality of various glass samples .[ fig : sem ] and [ fig : afm ] shows the sem and afm of all the glass samples .asahi and saint gobain were better than modi glass sample .* elemental and compositional studies * : it is important to study the composition of the glass in order to get the information of elements or ions in the glass .the fractional percentages of weights of various compounds present in the glass samples was obtained using the wavelength dispersive x - ray spectroscopy ( wd - xrf ) technique .pixe ( proton induced x - ray emission spectroscopy ) technique is used as supplementary to wd - xrf to do elemental analysis done using cyclotron . fig . [fig : wdxrf1 ] shows the wd - xrf analysis and fig .[ fig : pixe1 ] shows the pixe analysis .table [ table2 ] shows the composition of all the samples .the percentages of and are important as they are the main compound of the glass and they show a constant percentage . 
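returning to the electrical characterisation above, the two-probe estimate of the bulk resistivity reduces to a one-line calculation; the sketch below uses purely illustrative numbers (the applied voltage, leakage current and contact area are made up), not our measured values:

```python
def bulk_resistivity(voltage_v, current_a, electrode_area_cm2, thickness_cm):
    """Two-probe estimate of bulk resistivity: rho = (V / I) * A / t.

    voltage_v          : applied dc voltage across the glass sample
    current_a          : measured leakage current
    electrode_area_cm2 : area of the contact electrode
    thickness_cm       : glass thickness (0.21 cm for the 2.10 mm samples)
    """
    resistance_ohm = voltage_v / current_a
    return resistance_ohm * electrode_area_cm2 / thickness_cm

# illustrative numbers only: 500 V across a 2.10 mm glass with a 25 cm^2
# contact drawing 10 nA gives a resistivity of about 6e12 ohm-cm
print(f"{bulk_resistivity(500.0, 1e-8, 25.0, 0.21):.2e} ohm-cm")
```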
the impurity content discussed above degrades the quality of the modi sample. thus, it was concluded on the basis of the optical properties, surface properties and elemental composition that asahi and saint gobain were better than the modi glass sample. the surface of the modi glass sample was observed to be of poor quality on the basis of the sem and afm results, and the modi glass sample is more impure than the asahi and saint gobain samples. this will affect the performance of an rpc made of the modi glass sample. now we describe the fabrication and characterisation of the rpcs made of these glass samples. after characterising the glass samples of the different manufacturers, all the different glass samples were used to fabricate rpcs to measure efficiency as a second-order cross check. we describe the process for fabricating one such rpc. the characterisation of the various glass samples of the different manufacturers procured from the domestic market further motivated us to check their performance through characterisation and efficiency measurements of the rpcs made of the tested glass samples. therefore, we fabricated rpcs of asahi, saint gobain and modi glass of 2.10 mm thickness to check their performance. the study of one of the rpc parameters, the `` strip width '', is important in order to do precise physics analysis. depending on the physics goals, the strip width of the read-out boards (pick-up panels) can be optimised. for this, we have studied the efficiency and cross-talk of the rpcs and varied the strip width to check their performance. we have fabricated rpcs of asahi, saint gobain and modi glass using a standardized procedure which is as follows. two glass plates of 30 cm 30 cm size with the four corners chamfered were cleaned with distilled water and ethanol. a drop of glue (dp 125 grey) was applied on one side of the surface at four equidistant positions and one at the center. the button spacers were placed on top of each glue drop and then pressed. all the side spacers were placed covering all the sides of the glass, and nozzles were kept at the corners in such a manner that they all point in either the clockwise or the anti-clockwise direction. all the side spacers and nozzles were glued to one glass plate and left for almost 12 hours to dry at room temperature. after drying, a drop of glue was applied on each button spacer and the second glass plate was placed gently on top of the button spacers. glue was applied again on all the sides between the side spacers, nozzles and glass plates. a weight was put on the rpc and the glue was allowed to harden for more than 12 hours. then the outer surface of the unpainted glass rpc was painted manually with graphite paint using a spray gun. the painted rpc was allowed to dry at room temperature. fig. [fig:sg-asahi-modi] shows the fabricated rpcs (asahi, saint gobain and modi) before and after graphite coating. we measured the surface resistivity of the fabricated rpcs, which was of the order of 600-800 k for the asahi and saint gobain glass rpcs and 600-700 k for the modi glass rpc, as shown in fig. [fig:sr].
the best and the worst resistivity of glassesis shown in the figure .the variation in the bulk resistivity is of order of 1015% .however , the variation in the surface resistivity is due to the manual spray painting .we fabricated pick - up panels also of different strip size : 1.8 cm , 2.8 cm and 3.8 cm for doing our strip width studies .the pick - up panel is made from plastic honeycomb of 5 mm thick with 50 micron aluminium sheet ( for grounding ) on one side and copper strips of 2.8 cm ( or 1.8 or 3.8 cm ) with gap of 0.2 cm on the other side . or a foam interleaved between two 50 micron aluminium sheet is etched from one side to make 2.8 cm strips with gap of 0.2 cm .each strip is terminated with a 50 impedance to match the characteristic impedance of the preamplifier .other end of the strip is soldered to wire to connect with electronics .gas leakage and pressure test on these rpcs were done using standard techniques .these rpcs were characterised for v - i , efficiency and cross - talk which is described in the next subsection .the packed rpcs were characterised for the leakage current with different modes .the two modes that have been used are : avalanche and streamer for three glass rpcs : asahi , saint gobain and modi . in avalanche mode, we tested firstly the rpcs with two gases in the ratio freon(r134a ) : isobutane : : 95.5 : 4.5 and obtained their v - i .later , we added a quenching gas in order to check the performance of the rpcs .the gas composition used for this was freon(r134a ) : isobutane : : : 95.15 : 4.51 : 0.34 .we also characterised rpcs in streamer mode with the gas ratio taken as freon(r134a ) : isobutane : argon : : 62 : 8 : 30 .we have categorized the figures of v - i into three sets for asahi , saint gobain and modi glass rpc with two gases , three gases ( avalanche mode ) and streamer mode respectively as shown in figs .[ fig : vi - asahi ] , [ fig : vi - sg ] , [ fig : vi - modi ] . in the v - i plot , both the ohmic and non - ohmic regions are clearly seen . at the lower voltage , the primary ionizationdoes not produce avalanche .so , the gas gap impedance is infinite , therefore the current through the rpc is proportional to resistance provided by spacer which is less than the gap resistance . while at higher voltages , when avalanches are produced the gas resistance drops down and the current obtained is due to glass plates .table [ table3 ] shows the resistances on the basis of the different voltage regions obtained from v - i plots .resistance is low in the lower voltage region as the resistance is due to spacers but its high in the higher voltage region as the voltage is due to glass plates . in the next subsectionwe describe the efficiency and cross - talk measurements taken for the various rpcs . for performing the cross - talk measurements we used cosmic ray muon test stand .the muon ionizes the gas in rpc and the signal generated is picked up by copper strips of the pick - up panels . 
to transfer this signal to the electronics , we need preamplifiers in case of avalanche mode only .we used charge - sensitive fast preamplifier with gain of 75 .the pre - amplified rpc signals are fed to afe ( analog front - end ) boards in order to convert the amplified analog rpc pulses into logic signals by using a low threshold discriminator circuits .the discriminator signals from the afe boards are further processed by a dfe ( digital front - end ) for multiplexing of these signals and the actual counting was done by scalars .when a trigger signal from scintillator detector was received , the processed signals were latched and recorded by the nim and camac based back - end electronics : control and readout module .these modules were interfaced to a pc through camac controller which regulated the synchronous functioning of all camac modules .* efficiency * fig .[ fig : trigger - rpc ] shows trigger scheme circuit diagram for testing rpc .discriminators are connected to the paddles , , and to make 4-fold coincidence ( to form a trigger signal ) .this 4-fold along with the different rpc strips form a coincidence . = 2.5 cm 30 cm , = 5 cm 35 cm , = 21.5 cm 35 cm , = 21.5 cm 35 cm . and were placed one above the other in a manner to create a window of about 14 cm 2.5 cm .they were aligned with strip number 3 and labelled as `` main strip '' , whereas strip was labelled as `` left strip '' and strip was labelled as `` right strip '' .the data was obtained for x - strip of rpc .the pulse width of scintillators were kept at 60 ns and rpcs at 50 ns .the rpc was placed after to ensure that when the muon passed through all the four paddles it passed through the rpc too .the counters , to are connected to count muon events that passed through scintillator detectors .one preamplifier board with 8 capacity was used to read the pick - up strips .temperature at and relative humidity was maintained .but the source of error was the opening and closing of the door which caused moisture level sometimes to go up and affected the efficiency of the rpcs .all the paddles were anded ... to give a 4-fold signal , which acted as the trigger pulse .we assumed the passage of muon through the set window , if this anded signal was one .the trigger signal and main strip signal from the rpc was then anded together . when the trigger and the respective rpc strip signal was one then only the scalar counter of respective strip incremented its count by one .the number of time this condition was satisfied gave efficiency and is given by , * cross - talk * the fluctuations in the efficiency were due to the `` noise rate '' .it is defined as the rate at which random noise signal hits the rpc strip .it can be due to cosmic ray particles , stray radioactivity and dark current in chamber .expecting that the rpc strip was aligned with window of cosmic ray telescope to pick up the signal .if it is picked up by adjacent strips , it is known as `` cross - talk '' and it can be due to misalignment of strip or due to inadequate amount of quenching gas used .an effort was put to reduce it to improve the efficiency and hence performance of the rpc . using the concept discussed above we obtained the efficiency and cross - talks of asahi , saint gobain and modi glass rpc for strip width = 2.8 cm . 
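the exact expression for the efficiency did not survive extraction, so the sketch below uses the conventional trigger-normalised definitions of efficiency and cross-talk; the scaler counts are invented for illustration only:

```python
def strip_statistics(trigger_count, main_count, left_count, right_count):
    """Efficiency and cross-talk from the scaler counts at one voltage point.

    trigger_count : 4-fold scintillator coincidences (muons through the window)
    main_count    : coincidences of the trigger with the aligned ("main") strip
    left/right    : coincidences of the trigger with the neighbouring strips
    """
    efficiency = main_count / trigger_count
    cross_talk = (left_count + right_count) / trigger_count
    return efficiency, cross_talk

eff, xtalk = strip_statistics(trigger_count=5000, main_count=4650,
                              left_count=120, right_count=95)
print(f"efficiency = {eff:.1%}, cross-talk = {xtalk:.1%}")
```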
figs .[ fig : asahi-2.8 cm ] show the efficiency and cross - talk of asahi glass rpc with the strip width size = 2.8 cm in avalanche mode ( two gases ) , avalanche mode ( three gases ) and streamer mode respectively .from the figures it is observed that the efficiency of the detector increases with voltage reaching a plateau at higher voltage with efficiency greater than 90% .the fluctuations may be due to the noise rate , especially for two gases .the fluctuations in the efficiencies reduces with the addition of third gas in both avalanche mode and streamer mode . figs .[ fig : sg-2.8 cm ] show efficiency and cross - talk of saint gobain glass rpc with the strip width size = 2.8 cm in avalanche mode ( two gases ) , avalanche mode ( three gases ) and streamer mode respectively . figs .[ fig : modi-2.8 cm ] show efficiency and cross - talk of modi glass rpc with the strip width size = 2.8 cm , in avalanche mode ( two gases ) , avalanche mode ( three gases ) and streamer mode respectively .tables [ eff_table ] and [ cross_talk_table ] shows efficiency and cross - talk measurements ( approximate values ) for various glass rpcs operated in different modes at their operating voltages .it is observed that the higher cross - talk was caused in case of two gases only ( avalanche mode ) . due to the poor quality of modi glass rpc a higher cross - talk was observed .it may also be due to the factors like temperature , humidity and noise rate .cross - talk and efficiency improves with the addition of and argon gas . a comparison plot of all the rpcs with strip width size = 2.8 cm in avalanche mode ( three gases ) is shown in fig .[ fig : compall-2.8 cm ] .resistive plate chambers are the main component of whole ical detector and hence a proper r & d of them is absolutely necessary .glass is one of the main component of rpc and before making a choice of electrode it needed detailed studies .we procured glass samples of different manufacturers named as asahi , saint gobain and modi from a local market and compared them on the basis of physical properties , electrical , optical properties , surface characteristics and elemental composition .we tried to find out a comparative scale , which glass sample is best suited as an electrode in rpcs . on the basis of the properties it was concluded that asahi and saint gobain are better than modi glass .we fabricated rpcs of 30 cm 30 cm made of the same material which was characterised for the selection of electrode to be used for rpc .we characterised the three glass rpcs by measuring their cross - talk and efficiency .we conclude that asahi and saint gobain glass rpc gave best results than modi glass rpc .: we thank cil department of pu , cyclotron facility ( nuclear department of pu ) , nitttr , sec 26 , chandigarh for characterisation techniques .we thank engineers and technical staff of pu - ehep lab .r. kanishka acknowledges ugc / dae / dst ( govt . of india ) for funding .a. ghosh , t. thakore and s. choubey , _ determining the neutrino mass hierarchy with ino , t2k , nova and reactor experiments _ , jhep * 1304 * , 009 , http://arxiv.org/abs/1212.1305[arxiv:hep-ph/1212.1305 ] ( 2013 ) .a. chatterjee et al . , _ a simulations study of the muon response of the iron calorimeter detector at the india - based neutrino observatory _ , jinst * 9 * p07001 , http://arxiv.org/abs/1405.7243[[arxiv:1405.7243 ] ] ( 2014 ) .kolahal bhattacharya , et al . 
, ( ino collaboration ) , _ error propagation of the track model and track fitting strategy for the iron calorimeter detector in india - based neutrino observatory _ , computer physics communications ( elsevier ) * 185 * ( 12 ) 32593268 , ( 2014 )
the proposed magnetised iron calorimeter detector (ical) to be built in the india-based neutrino observatory (ino) laboratory aims to detect atmospheric muon neutrinos. in order to achieve improved physics results, the constituent components of the detector must be fully understood through proper characterisation and optimisation of various parameters. resistive plate chambers (rpcs) are the active detector elements in the ical detector and can be made of glass or bakelite. the number of rpcs required for this detector is very large, so detailed r&d is necessary to establish the characterisation and optimisation of these rpcs. these detectors, once installed, will be taking data for 15 - 20 years. in this paper, we report the selection criteria for the glass of various indian manufacturers such as asahi, saint gobain and modi. the choice is made based on factors like aging that deteriorate the quality of the glass. the glass characterisation studies include uv-vis transmission for optical properties, sem and afm for surface properties, wd-xrf and pixe for determining the composition of the glass samples, and electrical properties. based on these techniques a procedure is adopted to establish the best glass sample. we have done a second-order check on the quality of the fabricated glass rpcs. the efficiency and cross-talk of the asahi and saint gobain glass rpcs came out to be the best.
the phenomenon of heart rate variability (hrv) in humans describes the beat-to-beat, apparently random, fluctuation of the heart rate. hrv, measured by the time span between ventricular contractions known as the beat-to-beat rr interval (rri), is also known to share many characteristics found in other natural phenomena. for example, daytime rri in healthy humans exhibits a 1/f-like power spectrum, multifractal scaling, and an increment distribution similar to that observed in fluid turbulence. these characteristics may vary significantly in heart disease patients depending on the severity of the disease. the origin and the generation of hrv remain the biggest challenges in contemporary hrv research. although the respiratory and vascular systems constantly modulate the heart rate, they do not explain the large percentage of the broad-band (multifractal) signal power in hrv. for example, it is unlikely that this broad-band feature results directly from the output of the narrow-band respiratory dynamics. also, it is known that the level and the variability of blood pressure and heart rate can change significantly from upright to supine positions. in a 42-day long bed rest test, fortrat et al. showed that the variations in blood pressure and heart rate before and after the test are qualitatively different, suggesting separate control mechanisms for generating their variability. it is thus believed that a more sophisticated structure may exist, which integrates the feedback from receptors to create the pattern of hrv. apart from its origin, some progress on the hrv generating mechanism may be possible by using the discrete (lattice) multiplicative cascade model. this is a purely phenomenological approach that does not rely on any particular physiological terms. nonetheless, encouraging results were obtained that are consistent with the physiological data in health and in certain heart diseases. the main purpose of this work is to investigate the basis of this modeling strategy. our approach is based on the scale invariant symmetry implied by the hrv phenomenology. since rri cannot be defined between heart beats, it is appropriate to consider discrete scale invariance (dsi) in hrv. it is known that a discrete cascade implies dsi. better characterization of dsi in hrv is thus important since it is the necessary condition for the multifractal scaling observed in hrv. the existence of a cascade is also significant because it represents a very different view of the cardiovascular dynamical system from feedback control, which is additive in principle. the idea supports the previous studies suggesting that a direct influence from the baroreflex on multifractal hrv is unlikely, as well as the need to search for a role of the higher control centers in hrv. the consequence of dsi is an oscillating scaling law with a well-defined power law period. such a scaling law is said to exhibit log-periodicity (lp). in this work, we analyzed dsi in daytime healthy hrv by searching for lp in the scaling of hrv. typically, lp is `` averaged out '' in the process of finding the scaling law. using the technique called `` rephasing '', this problem can be effectively resolved, and evidence of multiple dsi groups in the healthy daytime rri data was found. in light of this new result, a cascade model is constructed using a random branching law to reproduce not only some of the known hrv phenomenology, but also the multiple dsi characteristics. the results of this work are organized in five sections.
in section 2, a brief review of the notion of dsi is given. the numerical procedures for identifying the dsi property from time series are described in section 3. numerical examples and results on daytime heart rate data sets are given in section 4. concluding remarks are given in the last section. a random process is said to possess continuous scale invariant symmetry if its distribution is preserved after the change of variables , , where and are real numbers; i.e., dsi is defined when (1) only holds for a countable set of scale factors. scale invariance implies a power law. the power law in dsi has a log-periodic correction of frequency : i.e., where and . generally, one can consider , being -dependent, and is a complex number for . novikov suggested lp in the small scale energy cascade of turbulent flow. sornette and co-workers showed that lp exists more generally in physical and financial systems, such as turbulence, earthquakes, rupture and stock market crashes. the existence of the discrete scale factor implies a hierarchical structure. this link can be simply illustrated by the middle third cantor set with the scale factor . with proper rescaling, a precise copy of the set is only obtained with a 3-fold magnification of the scale. if denotes the lebesgue measure at scale , the cantor set can be modeled by (1) using and . thus, the power law exponent of (the box dimension of the cantor set) assumes a log-periodic oscillation of frequency about its mean value. the hierarchical structure can be a dynamic object as a result of some time-dependent branching law. such a dynamic hierarchy is believed to exist, for example, in the cascade paradigm of the energy exchange in fluid turbulence, where the break-down or `` branching '' of large-scale vortices into ones of smaller scales can occur randomly in space-time with the energy re-distribution following a multiplication scheme. in data analysis, the dynamic hierarchy poses a technical difficulty for finding the scale factor since lp may be averaged out in the process of obtaining the power law. zhou and sornette proposed to conduct averaging _ after _ rephasing or re-aligning data segments using a central maximum criterion. using this technique, these authors successfully extracted lp in turbulence and proposed the dsi symmetry and cascade. the rephasing technique is adopted in this work. instead of the central maximum criterion, the cross-correlation property of the data segments will be used (see step (d) below). let denote the rri between the and heart beats. based on the turbulence analogy of hrv, we focus on the lp in the scaling exponent of the empirical law , where and is a real number. the implementation of the rephasing follows an 8-step algorithm; see fig. 1.
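before turning to the algorithm, note that the relations referred to above as eqs. (1)-(2) did not survive extraction; in the standard notation of the dsi literature (not necessarily the symbols of this paper) they read:

```latex
% generic dsi relations; standard notation, not necessarily the paper's symbols
\[
  F(\lambda x) = \mu\,F(x)
  \quad\Longrightarrow\quad
  F(x) = x^{\alpha}\,\Pi\!\left(\frac{\ln x}{\ln\lambda}\right),
  \qquad
  \alpha = \frac{\ln\mu}{\ln\lambda},
\]
```

where the periodic function has period 1, so that the scaling exponent oscillates in the logarithm of the scale with a log-frequency set by 1/ln(lambda); for the middle-third cantor set the preferred ratio is lambda = 3 and the mean exponent equals the box dimension ln 2 / ln 3.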
\(a ) divide into nonoverlapping segments .\(b ) for , calculate savitzky - golay ( sg ) filter to and calculate its first derivative to obtain a -dependent for .the sg filter performs a order polynomial fit over samples .it can produce a smoothing effect in the high frequency while preserving the statistical moments of the signal up to the order of the filter .\(d ) randomly select the segment as the base segment and compute the cross - correlation between and for .\(e ) shift the time origin of by , where , so that the cross - correlation between and the shifted has a maximum at zero time lag .note that for the base segment .\(f ) average the shifted , to obtain .\(g ) compute the spectrum of .\(h ) return to ( c ) with different values .a lomb periodogram is used to estimate the spectrum of for its superiority in handling situations where noise plays a fundamental role in the signal , as well as its capability in handling small data set .although the above algorithm provides the systematic steps to estimate the log - periodic component , noise in the empirical data can also generate spurious peaks in the lomb periodogram . for independent gaussian noise process , this problem can be analyzed by the false alarm probability : where is proportional to the number of points in the spectrum .the smaller the value is , the more likely a genuine log - periodic component exists in the signal .thus , a lomb peak with large suggests a chance event .zhou and sornette conducted extensive simulations and showed that ( 2 ) is in fact an upper bound for a number of correlated noise except for those showing long - term _ persistent _ correlation . the fractional brownian motion ( fbm ) of a hurst exponent greater than 0.5 is an example where ( 2 ) does not apply .the multiple scaling exponents in healthy daytime hrv have been found to lie below such a threshold and we will continue to use ( 2 ) in this work .as shown above , dsi is characterized by the frequency of the lp .however , significant lomb peaks may only capture the higher harmonics , .it is therefore necessary to define the relation of the significant peaks .we propose a simple procedure to achieve this .first , we collect the significant peaks satisfying for and for different sg filter parameters .second , we form a significant lomb peak histogram ( slph ) and locate its local maxima .these maxima identifies the most probable frequencies of the log - periodic oscillation of the power law .let such maxima be .the last step of the procedure is to search the smallest to minimize for integers s .we seek the smallest since , with finite precision in numerical computing , can be made arbitrarily small as this minimization step is simple , easy to implement and , as we show below using synthetic data , it is also effective .the rephasing algorithm introduced above was first tested on synthetic data generated by the discrete cascade where the cascade components are discrete - time processes given by for , , , and is a zero - mean gaussian random variable of variance 1 .let . the scale factor in the dsi hierarchy is related to s by to model the bounded rri, we further assume to assure boundedness .this model has been used in the past to simulate hrv phenomenology , including transition of rri increment probability density function and multifractal scaling . 
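before moving to the synthetic tests, a compact sketch of steps (b)-(g) and of the false alarm bound of eq. (2) is given below; it is our own scipy-based reconstruction (the sg filter parameters, the use of the first segment as the base instead of a randomly chosen one, and the circular-shift approximation of step (e) are simplifications, not the paper's implementation):

```python
import numpy as np
from scipy.signal import savgol_filter, lombscargle, correlate

def rephase_and_lomb(segment_exponents, window=51, polyorder=3, n_freq=400):
    """Steps (b)-(g): smooth each segment's local exponent curve with a
    savitzky-golay filter, re-align ("rephase") the segments so that their
    cross-correlation with a base segment peaks at zero lag, average them,
    and return the lomb periodogram of the average.

    segment_exponents : list of equal-length 1-d arrays, one per data segment
                        (e.g. the local scaling exponent sampled on a log grid).
    """
    smoothed = [savgol_filter(np.asarray(s, float), window, polyorder)
                for s in segment_exponents]
    base = smoothed[0]                        # the paper picks the base at random
    aligned = [base]
    for s in smoothed[1:]:
        xc = correlate(s - s.mean(), base - base.mean(), mode="full")
        lag = int(np.argmax(xc)) - (len(base) - 1)
        aligned.append(np.roll(s, -lag))      # circular shift as an approximation
    avg = np.mean(aligned, axis=0)
    avg -= avg.mean()
    t = np.arange(len(avg), dtype=float)
    freqs = np.linspace(0.01, 2.0, n_freq)            # cycles per grid unit
    power = lombscargle(t, avg, 2 * np.pi * freqs)    # scipy expects angular freqs
    return freqs, power

def false_alarm_probability(z_max, n_freq):
    """Upper bound of eq. (2): probability that the highest peak of a
    variance-normalised lomb periodogram of pure noise exceeds z_max."""
    return 1.0 - (1.0 - np.exp(-z_max)) ** n_freq
```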
according to (4), we generated 30 sets of dyadic ( ) and triadic ( ) with the corresponding and , respectively. each has 8192 points and is divided into segments of 1024 points. twenty-four sets of sg filters are defined based on , . for each combination of , steps (c) to (h) in the rephasing algorithm are repeated six times based on six different base segments selected in step (d) of the algorithm. this is implemented to avoid bias from a particular segment. significant lomb peaks are collected based on the false alarm probability or and points of the lomb periodogram. the results for are reported, as no quantitative difference exists for . numerical results for show more variability due to poor statistics. fig. 2a shows the of a particular segment of one of the dyadic s. the log-periodic oscillation with a log-period is clearly seen. the lomb periodogram of (step (f) above) is shown in fig. 2b based on a particular choice of , and the dominant lp is seen to pick up the second harmonic of . the slph estimated for different sg filters over 30 sets of is shown in fig. 3. the clustering of the local maxima at integer multiples of is evident. the minimization (3) identifies the correct scale factor for the dyadic cascade. similar results for the triadic cascade are also found (fig. 3b). these examples demonstrate the effectiveness of the proposed numerical procedures. for hrv, two databases are considered. the first set (db1) consists of 10 ambulatory rri recordings from healthy young adults. these test subjects were allowed to conduct normal daily activities. the second set (db2), available from the public domain, consists of 18 ambulatory rri recordings showing normal sinus rhythm. the parameters used in the numerical analysis are the same as above except that the data segment length has increased to 2048 points. the choice of the segment length is a balance of two factors: a small segment length results in more segments but poorer statistics in the estimation of ; a large segment length results in fewer segments but a better estimate of . we tried 1024 points per segment and found similar results; i.e., the group averaged value is similar to the ones reported in fig. 5 below. the slph in all cases shows well positioned local maxima that can be easily related to the harmonics of some fundamental frequency (fig. 4). the values for db1 and db2 are summarized in fig. 5. it is observed that (a) there are non-integer scale factors and (b) the s cluster in the range of [3.5, 5.5], and the group averaged are .8 and .4 for db1 and db2, respectively. the non-integer unambiguously excludes the possibility of discrete cascades with one scale factor. it implies a more complicated branching law and multiple dsi groups in healthy hrv. although hrv and turbulence exhibit similar phenomenology, it is interesting to point out the rather large value ( ) compared with the in fluid turbulence. from the discrete cascade viewpoint, a larger is compatible with the `` patchiness '' appearance commonly observed in the rri of healthy humans, since the s of the cascade will fluctuate on a longer time scale to create the effect.
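the harmonic-matching step that turns the slph maxima into a scale factor estimate (eq. (3), whose symbols were lost in extraction) can be reconstructed generically as follows; the search grid, the tie-breaking rule and the convention lambda = exp(1/f0) (log-frequency in cycles per unit of ln s) are our assumptions, not the paper's exact prescription:

```python
import numpy as np

def scale_factor_from_peaks(peak_freqs, f0_grid=None):
    """Estimate the fundamental log-frequency f0 from slph maxima by
    minimising the squared mismatch between each peak and its nearest
    integer harmonic of f0; the dsi scale factor is then exp(1/f0)."""
    peak_freqs = np.asarray(peak_freqs, dtype=float)
    if f0_grid is None:
        f0_grid = np.linspace(0.2 * peak_freqs.min(), 1.2 * peak_freqs.min(), 2000)
    costs = np.array([np.sum((peak_freqs - np.round(peak_freqs / f0) * f0) ** 2)
                      for f0 in f0_grid])
    # degenerate fits appear as f0 -> 0; following the text we keep the largest
    # admissible f0 (i.e. the smallest scale factor) among near-optimal fits
    near_min = costs <= costs.min() + 1e-3 * peak_freqs.min() ** 2
    f0 = f0_grid[near_min].max()
    return np.exp(1.0 / f0), f0

# dyadic check: peaks at harmonics of f0 = 1/ln 2 ~ 1.44 recover lambda ~ 2
lam, f0 = scale_factor_from_peaks([1.44, 2.89, 4.33])
print(f"lambda ~ {lam:.2f}")
```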
to model the multiple dsi in cascade hrv , the scale factor used in ( 5 )is set to be a random number so that the log - periodic oscillation of can vary over a range of frequencies .we generated 30 sets of according to ( 4 ) using uniformly distributed in the interval [ 2,6 ] .the simulated exhibits the patchiness " pattern observed in the rri data ( fig .6 ) , and similar scaling characteristics found in the past ( figs .7a 7c ) .the scaling exponent of the power law exhibits log - periodic oscillation that is captured by the well positioned local maxima in slph ( figs . 7d , 7e ) .in addition , the average of the s lies close to the group - averaged values of db1 and db2 ( fig .it is known that discrete cascade leads to dsi and characterized by log - periodic modulation of the scaling property .hence , the lp reported in this work supports the view of a cascade for the multifractal generation in hrv .it implies a more sophisticated process than reflex - based control mechanisms that function on the additive basis .it also suggests the need to search for a role by the higher control centers in hrv .it is conjectured that the cascade describes the process which integrates the regulatory feedbacks in the cardiovascular system to create the pattern of hrv .the non - integer scale factor implies multiple dsi .this property was also reported in the screening competition of the growth of diffusion limited aggregation model . to the best of our knowledge ,this is the first instance of multiple dsi being reported in hrv .we do not have the better knowledge of its origin , except to believe it reflects the multiple time - scale control mechanisms in the cardiovascular dynamical system .it is tempting to search for the physiological correlate of the cascade , for example , the role of the cascade components . based on the spectral analysis , we suggested that the large time scale components ( ) capture mainly the sympatho - vagal interaction and the small time scale components ( ) capture the parasympathetic activity .however , we should caution that cascade is a modeling tool derived from statistical physics .the can therefore represent the range of micro- to macroscopic processes in the cardiovascular dynamical system .a rather narrow range of the scale factor $ ] estimated from the two different databases implies a stable " hierarchical structure of the cascade that does not vary sensitively with the details of the healthy population .the analysis of the identified dsi characteristics in other physiological conditions is currently underway and its result will be reported in the near future .this research is supported by natural science and engineering research council of canada .the author would like to thank many years of valuable comments and suggestions by dr .hughson of the university of waterloo and critical comments by the anonymous referee .[ 4 ] d.c .lin and r.l .hughson , _ phys ._ , * 86 * , 1650 ( 2001 ) ; d.c .lin and r.l .hughson , _ ieee trans . biomed ._ , * 49 * , 97 ( 2002 ) ; d.c .lin , _ fractals _ , * 11 * , 63 ( 2003 ) ; d.c .lin , _ phys ., * 67 * , 031914 ( 2003 ) . [ 16 ] a. johansen , et al . ,_ j. geophys ._ , * 105 * , 28111 ( 2000 ) ; y. huang , et al . ,_ j. geophys ._ , * 105 * , 28111 ( 2000 ) . [ 17 ] y. huang , et al .e _ , * 55 * , 6433 ( 1997 ) .[ 18 ] a. johansen and d. sornette , o. ledoit , _ j. risk _ , * 1 * , 5 ( 1999 ) . fig. 
1 sketch of the numerical procedure for rephasing .the second segment is illustrated as the base segment and rephasing was shown for ( is determined at the maximum of the cross - correlation function between the and the base segments ) .log - periodicity in is estimated from the lomb periodogram .2 ( a ) versus taken from the synthetic dyadic bounded cascade .the solid line is a pure sine wave with a period of .( b ) typical lomb periodogram of ( averaged over all s ) .3 slph estimated from 30 sets of ( a ) synthetic dyadic bounded cascade and ( b ) triadic .the grid lines in ( a ) and ( b ) are drawn according to and , , respectively .4 ( a ) slph of a typical data set from db1 .the local maxima are marked by ( b ) versus , , showing as the harmonics generated by the fundamental frequency .the straight line has the slope .( c ) similar to ( a ) based on a data set taken from db2 .( d ) similar to ( b ) based on the local maxima of ( c ) .the straight line has the slope .note the local maximum between and was not fitted by the harmonics of .5 scale factor s for 10 subjects in db1 , 18 subjects in db2 and 30 sets of synthetic data generated by the cascade model .the group averaged values and standard deviations are superimposed and drawn as " and vertical bar , respectively .7 ( a ) to ( c ) show the -like power spectrum , power law , and the nonlinear of , respectively , of the shown in fig . 5 ; see ref . 4 for the similar characteristics reported for rri data in healthy humans .( d ) and ( e ) show the slph of two typical .well - positioned local maxima in ( d ) and ( e ) capture the harmonics generated by : .4 and .85 , respectively .
evidence of discrete scale invariance ( dsi ) in daytime healthy heart rate variability ( hrv ) is presented based on the log - periodic power law scaling of the heart beat interval increment . our analysis suggests multiple dsi groups and a dynamic cascading process . a cascade model is presented to simulate such a property . 20 true pt
human have the remarkable ability of selective visual attention .cognitive science explains this as the biased competition theory `` that human visual cortex is enhanced by top - down guidance during feedback loops .the feedback signals suppress non - relevant stimuli present in the visual field , helping human searching for ' ' goals " . with visual attention ,both human recognition and detection performances increase significantly , especially in images with cluttered background . inspired by human attention , the recurrent visual attention model ( ram )is proposed for image recognition .ram is a deep recurrent neural architecture with iterative attention selection mechanism , that mimics the human visual system to suppress non - relevant image regions and extract discriminative features in a complicated environment .this significantly improves the recognition accuracy , especially for fine - grained object recognition .ram also allows the network to process a high resolution image with only limited computational resources . by iteratively attending to different sub - regions ( with a fixed resolution ) , ram could efficiently process images with various resolutions and aspect ratios in a constant computational time . besides attention, human also tend to dynamically allocate different computational time when processing different images .the length of the processing time often depends on the task and the content of the input images ( e.g. background clutter , occlusion , object scale ) .for example , during the recognition of a fine - grained bird category , if the bird appears in a large proportion with clean background ( figure [ fig : splash]a ) , human can immediately recognize the image without hesitation .however , when the bird is under camouflage ( figure [ fig : splash]b ) or hiding in the scene with background clutter and pose variation ( figure [ fig : splash]c ) , people may spend much more time on locating the bird and extracting discriminative parts to produce a confident prediction .c|cc & + ( a ) easy & + + inspired by this , we propose an extension to ram named as dynamic time recurrent attention model ( dt - ram ) , by adding an extra binary ( continue / stop ) action at every time step . during each step, dt - ram will not only update the next attention , but produce a decision whether stop the computation and output the classification score .the model is a simple extension to ram , but can be viewed as a first step towards dynamic model during inference , where the model structure can vary based on each input instance .this could bring dt - ram more flexibility and reduce redundant computation to further save computation , especially when the input examples are easy " to recognize .although dt - ram is an end - to - end recurrent neural architecture , we find it hard to directly train the model parameters from scratch , particularly for challenging tasks like fine - grained recognition .when the total number of steps increases , the delayed reward issue becomes more severe and the variance of gradients becomes larger .this makes policy gradient training algorithms such as reinforce harder to optimize .we address this problem with curriculum learning . 
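before describing training, the per-step computation of dt-ram can be sketched as a hypothetical pytorch-style cell; the gru core, the layer sizes and all names below are our illustrative choices, not the authors' implementation:

```python
import torch
import torch.nn as nn

class DTRAMCell(nn.Module):
    """Minimal sketch of one dt-ram step: given the current glimpse feature,
    the recurrent core updates its state and emits three heads -- the next
    attention location, the class scores, and a binary continue/stop action."""

    def __init__(self, feat_dim=256, hidden_dim=256, n_classes=200):
        super().__init__()
        self.core = nn.GRUCell(feat_dim, hidden_dim)
        self.loc_head = nn.Linear(hidden_dim, 2)        # next (x, y) attention
        self.cls_head = nn.Linear(hidden_dim, n_classes)
        self.stop_head = nn.Linear(hidden_dim, 1)       # logit of "stop now"

    def forward(self, glimpse_feat, h):
        h = self.core(glimpse_feat, h)
        loc = torch.tanh(self.loc_head(h))
        logits = self.cls_head(h)
        stop_prob = torch.sigmoid(self.stop_head(h))
        # the discrete stop action is sampled (or thresholded) at inference;
        # being non-differentiable, it is trained with a reinforce-style rule
        stop = torch.bernoulli(stop_prob)
        return h, loc, logits, stop_prob, stop
```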
during the training of ram, we gradually increase the training difficulty by gradually increasing the total number of time steps .we then initialize the parameters in dt - ram with the pre - trained ram and fine - tune it with reinforce .this strategy helps the model to converge to a better local optimum than training from scratch .we also find intermediate supervision is crucial to the performance , particularly when training longer sequences .we demonstrate the effectiveness of our model on public benchmark datasets including mnist as well as two fine - grained datasets , cub-200 - 2011 and stanford cars .we also conduct an extensive study to understand how dynamic time works in these datasets .experimental results suggest that dt - ram can achieve state - of - the - art performance on fine - grained image recognition .compared to ram , the model also uses less average computational time , better fitting devices with computational limitations .visual attention is a long - standing topic in computer vision . with the recent success of deep neural networks , mnih develop the recurrent visual attention model ( ram ) for image recognition , where the attention is modeled with neural networks to capture local regions in the image .ba follow the same framework and apply ram to recognize multiple objects in images .sermanet further extend ram to fine - grained image recognition , since fine - grained problems usually require the comparison between local parts . besides fine - grained recognition , attention models also work for various machine learning problems including machine translation , image captioning , image question answering and video activity recognition .based on the differentiable property of attention models , most of the existing work can be divided into two groups : soft attention and hard attention .the soft attention models define attention as a set of continuous variables representing the relative importance of spatial or temporal cues .the model is differentiable hence can be trained with backpropogation .the hard attention models define attention as actions and model the whole problem as a partially observed markov decision process ( pomdp ) .such models are usually nondifferentiable to the reward function hence use policy gradient such as reinforce to optimize the model parameters .our model belongs to the hard attention since its stopping action is discrete .the visual attention models can be also viewed as a special type of feedback neural networks .a feedback neural network is a special recurrent architecture that uses previously computed high level features to back refine low level features .it uses both top - down and bottom - up information to compute the intermediate layers . besides attention models ,feedback neural networks also have other variants .for example , carreira performs human pose estimation with iterative error feedback .newell build a stacked hourglass network for human pose estimation .hu and ramanan show that network feedbacks can help better locating human face landmarks .all these models demonstrate top - down information could potentially improve the model discriminative ability . however , these models either fix the number of recurrent steps or use simple rules to decide early stopping . 
graves recently introduce _ adaptive computational time _ in recurrent neural networks .the model augments the network with a _ sigmoidal halting unit _ at each time step , whose activation determines the probability whether the computation should stop .figurnov extend to spatially adaptive computational time for residual networks .their approach is similar but define the _ halting units _ over spatial positions .neumann extend the similar idea to temporally dependent reasoning .they achieve a small performance benefit on top of a similar model without an adaptive component .jernite learn a scheduler to determine what portion of the hidden state to compute based on the current hidden and input vectors .all these models can vary the computation time during inference , but the stopping policy is based on the cumulative probability of _ halting units _ , which can be viewed as a fixed policy .as far as we know , odena is the first attempt that learn to change model behavior at test time with reinforcement learning .their model adaptively constructs computational graphs from sub - modules on a per - input basis .however , they only verify on small dataset such as mnist and cifar-10 .ba augment ram with the `` end - of - sequence '' symbol to deal with variable number of objects in an image , which inspires our work on dt - ram .however , they still fix the number of attentions for each target .there is also a lack of diagnostic experiments on understanding how `` end - of - sequence '' symbol affects the dynamics . in this work, we conduct extensive experimental comparisons on larger scale natural images from fine - grained recognition .fine - grained image recognition has been extensively studied in recent years . based on the research focus, fine - grained recognition approaches can be divided into representation learning , part alignment models or emphasis on data .the first group attempts to build implicitly powerful feature representations such as bilinear pooling or compact bilinear pooling , which turn to be very effective for fine - grained problems .the second group attempts to localize discriminative parts to effectively deal with large intra - class variation as well as subtle inter - class variation .the third group studies the importance of the scale of training data .they achieve significantly better performance on multiple fine - grained dataset by using an extra large set of training images . with the fast development of deep models such as bilinear cnn and spatial transformer networks ,it is unclear whether attention models are still effective for fine - grained recognition . in this paper , we show that the visual attention model , if trained carefully , can still achieve comparable performance as state - of - the - art methods .the difference between a dynamic structure model and a fixed structure model is that during inference the model structure depends on both the input and parameter .given an input , the probability of choosing a computational structure is .when the model space of is defined , this probability can be modeled with a neural network . during training , with a given model structure ,the loss is . 
hence the overall expected loss for an input is = \sum_{\mathcal{s } } p(\mathcal{s}|x , \theta ) l_\mathcal{s}(x , \theta ) \label{eq : loss}\ ] ] the gradient of with respect to parameter is : \end{aligned}\ ] ] the first term in the above expectation is the same as reinforce algorithm , it makes the structure leading to smaller loss more probable .the second term is the standard gradient for neural nets with a fixed structure . during experiments ,it is difficult to directly compute the gradient of the over because it requires to evaluate exponentially many possible structures during training .hence to train the model , we first sample a set of structures , then approximate the gradient with monte carlo simulation : , the model could output more confident predictions . ]the recurrent attention model is formulated as a partially observed markov decision process ( pomdp ) . at each time step, the model works as an agent that executes an action based on the observation and receives a reward .the agent actively control how to act , and it may affect the state of the environment . in ram , the action corresponds to the location of the attention region .the observation is a local ( partially observed ) region cropped from the image .the reward measures the quality of the prediction using all the cropped regions and can be delayed .the target of learning is to find the optimal decision policy to generate attentions from observations that maximizes the expected cumulative reward across all time steps .more formally , ram defines the input image as and the total number of attentions as . at each time step , the model crops a local region around location which is computed from the previous time step .it then updates the internal state with a recurrent neural network which is parameterized by .the model then computes two branches .one is the location network which models the attention policy , parameterized by .the other is the classification network which computes the classification score , parameterized by . during inference ,it samples the attention location based on the policy .figure [ fig : ram ] illustrates the inference procedure .when the dynamic structure comes to ram , we simply augment it with an additional set of actions that decides when it will stop taking further attention and output results . is a binary variable with 0 representing continue " and 1 representing stop " .its sampling policy is modeled via a stopping network . during inference , we sample both the attention and stopping with each policy independently . figure [ fig : dt - ram ] shows how the model works .compared to figure [ fig : ram ] , the change is simply by adding to each time step .figure [ fig : dt - ram - illustration ] illustrates how dt - ram adapts its model structure and computational time to different input images for image recognition .when the input image is easy " to recognize ( figure [ fig : dt - ram - illustration ] left ) , we expect dt - ram stop at the first few steps .when the input image is hard " ( figure [ fig : dt - ram - illustration ] right ) , we expect the model learn to continue searching for informative regions .is added to each time step . represents `` continue '' ( green solid circle ) and represents `` stop '' ( red solid circle ) . 
]given a set of training images with ground truth labels , we jointly optimize the model parameters by computing the following gradient : where are the parameters of the recurrent network , the attention network , the stopping network and the classification network respectively . compared to equation [ eq : train ] , equation [ eq : loss - dt - ram ] is an approximation where we use a negative of reward function to replace the loss of a given structure in the first term .this training loss is similar to .although the loss in equation [ eq : train ] can be optimized directly , using can reduce the variance in the estimator . is the sampling policy for structure . is the cumulative discounted reward over time steps for the -th training example .the discount factor controls the trade - off between making correct classification and taking more attentions . is the reward at -th step . during experiments , we use a delayed reward .we set if and only if .* intermediate supervision : * unlike original ram , dt - ram has intermediate supervision for the classification network at every time step , since its underlying dynamic structure could require the model to output classification scores at any time step .the loss of is the average cross - entropy classification loss over training samples and time steps .note that depends on , indicating that each instance may have different stopping times . during experiments ,we find intermediate supervision is also effective for the baseline ram .* curriculum learning : * during experiments , we adopt a gradual training approach for the sake of accuracy .first , we start with a base convolutional network ( e.g. residual networks ) pre - trained on imagenet .we then fine - tune the base network on the fine - grained dataset .this gives us a very high baseline .second , we train the ram model by gradually increase the total number of time steps .finally , we initialize dt - ram with the trained ram and further fine - tune the whole network with reinforce algorithm .we conduct experiments on three popular benchmark datasets : mnist , cub-200 - 2011 and stanford cars .table [ tab : dataset ] summarizes the details of each dataset .mnist contains 70,000 images with 10 digital numbers .this is the dataset where the original visual attention model tests its performance .however , images in mnist dataset are often too simple to generate conclusions to natural images .therefore , we also compare on two challenging fine - grained recognition dataset . cub-200 - 2011 consists of 11,778 images with 200 bird categories .stanford cars includes 16,185 images of 196 car classes .both datasets contain a bounding box annotation in each image .cub-200 - 2011 also contains part annotation , which we do not use in our algorithm .most of the images in these two datasets have cluttered background , hence visual attention could be effective for them .all models are trained and tested without ground truth bounding box annotations ..statistics of the three dataset .cub-200 - 2011 and stanford cars are both benchmark datasets in fine - grained recognition . [ cols="<,^,^,^,^",options="header " , ] * qualitative results : * we visualize the qualitative results of dt - ram on cub-200 - 2011 and stanford cars testing set in figure [ fig : visualization ] and figure [ fig : visualization_car ] respectively . 
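the training signal described above can be made concrete with a short sketch . this is only an illustration written for this text , not the released implementation : the tensor - shape conventions , the helper name dtram_loss , the unit reward and the value of the discount factor are assumptions , and the baseline used for variance reduction is a simple batch average .

```python
import torch
import torch.nn.functional as F

def dtram_loss(class_logits, loc_log_probs, stop_log_probs, stop_steps, labels, gamma=0.97):
    """class_logits: list of T tensors of shape (B, C), one per time step.
    loc_log_probs / stop_log_probs: lists of T tensors of shape (B,), the
    log-probabilities of the sampled attention locations and continue/stop actions.
    stop_steps: (B,) long tensor with the step at which each example emitted "stop"."""
    B, T = labels.size(0), len(class_logits)
    steps = torch.arange(T)
    executed = (steps.unsqueeze(1) <= stop_steps.unsqueeze(0)).float()        # (T, B) mask

    # intermediate supervision: cross-entropy at every executed step
    ce = torch.stack([F.cross_entropy(l, labels, reduction='none')
                      for l in class_logits])                                 # (T, B)
    ce_loss = (ce * executed).sum() / executed.sum()

    # delayed reward: 1 iff the prediction at the stopping step is correct
    logits_at_stop = torch.stack(class_logits)[stop_steps, torch.arange(B)]   # (B, C)
    reward = (logits_at_stop.argmax(dim=1) == labels).float()                 # (B,)
    baseline = reward.mean()                                                  # crude variance reduction

    # REINFORCE term: discounted reward gamma**(T_stop - t) weights both policies
    log_pi = torch.stack(loc_log_probs) + torch.stack(stop_log_probs)         # (T, B)
    discount = gamma ** (stop_steps.unsqueeze(0) - steps.unsqueeze(1)).float()
    returns = (discount * (reward - baseline).unsqueeze(0) * executed).detach()
    reinforce_loss = -(log_pi * returns).sum() / B

    return ce_loss + reinforce_loss
```

in the curriculum described above the same loss could serve the earlier ram stages by simply dropping the stopping term and fixing stop_steps to the final step , before initialising dt - ram from the trained ram and fine - tuning with the full objective .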
from step 1 to step 6 , we observe a gradual increase of background clutter and recognition difficulty , matching our hypothesis of using dynamic computation time for different types of images .in this work we present a simple but novel method for learning to dynamically adjust computational time during inference with reinforcement learning . we apply it on the recurrent visual attention model and show its effectiveness for fine - grained recognition .we believe that such methods will be important for developing dynamic reasoning in deep learning and computer vision. future work on developing more sophisticated dynamic models for reasoning and apply it to more complex tasks such as visual question answering will be conducted .richard s sutton , david a mcallester , satinder p singh , yishay mansour , et al .policy gradient methods for reinforcement learning with function approximation . in _ nips _ , volume 99 , pages 10571063 , 1999 .jonathan krause , michael stark , jia deng , and li fei - fei .3d object representations for fine - grained categorization . in _ proceedings of the ieee international conference on computer vision workshops _ , pages 554561 , 2013 .christian szegedy , wei liu , yangqing jia , pierre sermanet , scott reed , dragomir anguelov , dumitru erhan , vincent vanhoucke , and andrew rabinovich .going deeper with convolutions . in _ proceedings of the ieee conference on computer vision and pattern recognition _ ,pages 19 , 2015 .kaiming he , xiangyu zhang , shaoqing ren , and jian sun .deep residual learning for image recognition . in _ proceedings of the ieee conference on computer vision and pattern recognition _ , pages 770778 , 2016 .kelvin xu , jimmy ba , ryan kiros , kyunghyun cho , aaron c courville , ruslan salakhutdinov , richard s zemel , and yoshua bengio .show , attend and tell : neural image caption generation with visual attention . in _icml _ , volume 14 , pages 7781 , 2015 .huijuan xu and kate saenko .ask , attend and answer : exploring question - guided spatial attention for visual question answering . in _european conference on computer vision _ , pages 451466 .springer , 2016 .zichao yang , xiaodong he , jianfeng gao , li deng , and alex smola .stacked attention networks for image question answering . in_ proceedings of the ieee conference on computer vision and pattern recognition _ , pages 2129 , 2016 .serena yeung , olga russakovsky , greg mori , and li fei - fei .end - to - end learning of action detection from frame glimpses in videos . in _ proceedings of the ieee conference on computer vision and pattern recognition _ , pages 26782687 , 2016 .marijn f stollenga , jonathan masci , faustino gomez , and jrgen schmidhuber .deep networks with internal selective attention through feedback connections . in _ advances in neural information processing systems _ , pages 35453553 , 2014chunshui cao , xianming liu , yi yang , yinan yu , jiang wang , zilei wang , yongzhen huang , liang wang , chang huang , wei xu , et al . look and think twice : capturing top - down visual attention with feedback convolutional neural networks . in _ proceedings of the ieee international conference on computer vision _, pages 29562964 , 2015 .qian wang , jiaxing zhang , sen song , and zheng zhang .attentional neural network : feature selection using cognitive feedback . 
in _ advances in neural information processing systems _ ,pages 20332041 , 2014 .joao carreira , pulkit agrawal , katerina fragkiadaki , and jitendra malik .human pose estimation with iterative error feedback . in _ proceedings of the ieee conference on computer vision and pattern recognition _ , pages 47334742 , 2016 .peiyun hu and deva ramanan .bottom - up and top - down reasoning with hierarchical rectified gaussians . in _ proceedings of the ieee conference on computer vision and pattern recognition _, pages 56005609 , 2016 .thomas berg , jiongxin liu , seung woo lee , michelle l alexander , david w jacobs , and peter n belhumeur .birdsnap : large - scale fine - grained visual categorization of birds . in_ proceedings of the ieee conference on computer vision and pattern recognition _ , pages 20112018 , 2014 .yin cui , feng zhou , yuanqing lin , and serge belongie .fine - grained categorization and dataset bootstrapping using deep metric learning with humans in the loop . in_ proceedings of the ieee conference on computer vision and pattern recognition _ , pages 11531162 , 2016 .shaoli huang , zhe xu , dacheng tao , and ya zhang . part - stacked cnn for fine - grained visual categorization . in _ proceedings of the ieee conference on computer vision and pattern recognition _ ,pages 11731182 , 2016 .jonathan krause , hailin jin , jianchao yang , and li fei - fei .fine - grained recognition without part annotations . in _ proceedings of the ieee conference on computer vision and pattern recognition _ , pages 55465555 , 2015 .aditya khosla , nityananda jayadevaprakash , bangpeng yao , and fei - fei li .novel dataset for fine - grained image categorization : stanford dogs . in _ proc .cvpr workshop on fine - grained visual categorization ( fgvc ) _ , volume 2 , 2011 .maria - elena nilsback and andrew zisserman .automated flower classification over a large number of classes . in _ computer vision , graphics & image processing , 2008 .sixth indian conference on _ , pages 722729 .ieee , 2008 .tsung - yu lin , aruni roychowdhury , and subhransu maji .bilinear cnn models for fine - grained visual recognition . in _ proceedings of the ieee international conference on computer vision _ , pages 14491457 , 2015 .thomas berg and peter belhumeur .poof : part - based one - vs .-one features for fine - grained categorization , face verification , and attribute estimation . in _ proceedings of the ieee conference on computer vision and pattern recognition _ , pages 955962 , 2013 .efstratios gavves , basura fernando , cees gm snoek , arnold wm smeulders , and tinne tuytelaars .fine - grained categorization by alignments . in _ proceedings of the ieee international conference on computer vision _ ,pages 17131720 , 2013 .jonathan krause , benjamin sapp , andrew howard , howard zhou , alexander toshev , tom duerig , james philbin , and li fei - fei. the unreasonable effectiveness of noisy data for fine - grained recognition . in _european conference on computer vision _ , pages 301320 .springer , 2016 .jia deng , wei dong , richard socher , li - jia li , kai li , and li fei - fei .imagenet : a large - scale hierarchical image database . in _ computer vision and pattern recognition , 2009 .cvpr 2009 .ieee conference on _ , pages 248255 .ieee , 2009 .marcel simon and erik rodner .neural activation constellations : unsupervised part model discovery with convolutional networks . in _ proceedings of the ieee international conference on computer vision _ , pages 11431151 , 2015 . 
yuning chai , victor lempitsky , and andrew zisserman .symbiotic segmentation and part localization for fine - grained categorization . in _ proceedings of the ieee international conference on computer vision _ ,pages 321328 , 2013 .ross girshick , jeff donahue , trevor darrell , and jitendra malik .rich feature hierarchies for accurate object detection and semantic segmentation . in _ proceedings of the ieee conference on computer vision and pattern recognition _ , pages 580587 , 2014 . yaming wang , jonghyun choi , vlad morariu , and larry s davis . mining discriminative triplets of patches for fine - grained classification . in _ proceedings of the ieee conference on computer vision and pattern recognition_ , pages 11631172 , 2016 .
we propose a dynamic computational time model to accelerate the average processing time for recurrent visual attention ( ram ) . rather than attending with a fixed number of steps for each input image , the model learns to decide when to stop on the fly . to achieve this , we add an additional continue / stop action per time step to ram and use reinforcement learning to learn both the optimal attention policy and stopping policy . the modification is simple but could dramatically save the average computational time while keeping the same recognition performance as ram . experimental results on the cub-200 - 2011 and stanford cars datasets demonstrate that the dynamic computational time model can work effectively for fine - grained image recognition . the source code of this paper can be obtained from https://github.com/baidu-research/dt-ram
the gibbard satterthwaite theorem proves that , under some simple assumptions , a voting rule can always be manipulated . in an influential paper ,bartholdi , tovey and trick proposed an appealing escape : perhaps it is computationally so difficult to find a successful manipulation that agents have little option but to report their true preferences ?to illustrate this idea , they demonstrated that the second order copeland rule is np - hard to manipulate .shortly after , bartholdi and orlin proved that the more well known single transferable voting ( stv ) rule is np - hard to manipulate .many other voting rules have subsequently been proven to be np - hard to manipulate .there is , however , increasing concern that worst - case results like these do not reflect the difficulty of manipulation in practice . indeed ,several theoretical results suggest that manipulation may often be easy ( e.g. ) .in addition to attacking this question theoretically , i have argued in a recent series of papers that we may benefit from studying it empirically .there are several reasons why empirical analysis is useful .first , theoretical analysis is often restricted to particular distributions like uniform votes .manipulation may be very different in practice due to correlations in the preferences of the agents .for instance , if all preferences are single - peaked then there voting rules where it is in the best interests of all agents to state their true preferences .second , theoretical analysis is often asymptotic so does not reveal the size of hidden constants. the size of such constants may be important to the actual computational cost of computing a manipulation .in addition , elections are typically bounded in size .is asymptotic behaviour relevant to the size of elections met in practice ?an empirical study may quickly suggest if the result extends to more candidates .finally , empirical studies can suggest theorems to prove .for instance , our experiments suggest a simple formula for the probability that a coalition is able to elect a desired candidate .it would be interesting to derive this exactly .my empirical studies have focused on two voting rules : single transferable voting ( stv ) and veto voting .stv is representative of voting rules that are np - hard to manipulate without weights on votes . indeed , as i argue shortly , it is one of the few such rules .veto voting is , on the other hand , a simple representative of rules where manipulation is np - hard when votes are weighted or ( equivalently ) we have uncertainty about how agents have voted .the two voting rules therefore cover the two different cases where computational complexity has been proposed as a barrier to manipulation .stv proceeds in a number of rounds .each agent totally ranks the candidates on a ballot . 
until one candidate has a majority of first place votes , we eliminate the candidate with the least number of first place votes . ballots placing the eliminated candidate in first place are then re - assigned to the second place candidate . stv is used in a wide variety of elections including for the australian house of representatives , the academy awards , and many organizations including the american political science association , and the international olympic committee . stv has played a central role in the study of the computational complexity of manipulation . bartholdi and orlin argued that `` stv is apparently unique among voting schemes in actual use today in that it is computationally resistant to manipulation . '' ( page 341 of ) . by comparison , the veto rule is a much simpler scoring rule in which each agent gets to cast a veto against one candidate . the candidate with the fewest vetoes wins . there are several reasons why the veto rule is interesting to study . the veto rule is very simple to reason about . this can be contrasted with other voting rules like stv . part of the complexity of manipulating the stv rule appears to come from reasoning about what happens between the different rounds . the veto rule , on the other hand , has a single round . the veto rule is also on the borderline of tractability since constructive manipulation ( that is , ensuring a particular candidate wins ) of the veto rule by a coalition of weighted agents is np - hard but destructive manipulation ( that is , ensuring a particular candidate does not win ) is polynomial . empirical analysis requires collections of votes on which to compute manipulations . my analysis starts with one of the simplest possible scenarios : elections in which each vote is equally likely . we have one agent trying to manipulate an election of candidates in which other agents vote . votes are drawn uniformly at random from all possible votes . this is the impartial culture ( ic ) model . in many real life situations , however , votes are correlated with each other . i therefore also considered single - peaked preferences , single - troughed preferences , and votes drawn from the polya eggenberger urn model . in an urn model , we have an urn containing all possible votes .
we draw votes out of the urn at random , and put them back into the urn with additional votes of the same type ( where is a parameter ) .this generalizes both the impartial culture model ( ) and the impartial anonymous culture ( ) model .real world elections may differ from these ensembles .i therefore also sampled some real voting records .finally , one agent on their own is often unable to manipulate the result .i therefore also considered coalitions of agents who are trying to manipulate elections .my experiments suggest different behaviour occurs in the problem of computing manipulations of voting rules than in other np - hard problems like propositional satisfiability and graph colouring .for instance , we often did not see a rapid transition that sharpens around a fixed point as in satisfiability .many transitions appear smooth and do not sharpen towards a step function as problem size increases .such smooth phase transitions have been previously seen in polynomial problems .in addition , hard instances often did not occur around some critical parameter .figures 1 to 3 reproduce some typical graphs from .is fixed and we vary the number of candidates . the y - axis measures the probability that the manipulator can make a random candidate win . ]is fixed and we vary the number of candidates . the y - axis measures the mean number of search nodes explored to compute a manipulation or prove that none exists .median and other percentiles are similar . is the published worst - case bound for the recursive algorithm used to compute a manipulation . ]is fixed and we vary the number of agents . ]similar phase transition studies have been used to identify hard instances of np - hard problems like propositional satisfiability , constraint satisfaction , number partitioning , hamiltonian circuit , and the traveling salesperson problem .phase transition studies have also been used to study polynomial problems as well as higher complexity classes and optimization problems .finally , phase transition studies have been used to study problem structure like small worldiness and high degree nodes .another multi - agent problem in which manipulation may be an issue is the stable marriage problem .this is the well - known problem of matching men to women so that no man and woman who are not married to each other both prefer each other .it has a wide variety of practical applications such as a matching doctors to hospitals . as with voting ,an important issue is whether agents can manipulate the result by mis - reporting their preferences .unfortunately , roth proved that _ all _ stable marriage procedures can be manipulated .we might hope that computational complexity might also be a barrier to manipulate stable marriage procedures . in joint work with pini , rossi and venable ,i have proposed a new stable marriage procedures that is np - hard to manipulate .another advantage of this new procedure is that , unlike the gale - shapley algorithm , it does not favour one sex over the other .our procedure picks the stable matching that is most preferred by the most popular men and women .the most preferred men and women are chosen using a voting rule .we prove that , if the voting rule used is stv then the resulting stable matching procedure is also np - hard to manipulate .we conjecture that other voting rules which are np - hard to manipulate will give rise to stable matching procedures which are also np - hard to manipulate . 
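to make the vote ensembles and the stv rule discussed above concrete , a small self - contained sketch is given below . it is only an illustration , not the code used for the reported experiments : the candidate labels , tie - breaking by lowest index , and the convention that the urn parameter counts the extra copies returned per draw are assumptions .

```python
import math
import random
from collections import Counter

def impartial_culture(n, m):
    """n ballots, each a uniformly random total order of m candidates (the ic model)."""
    return [tuple(random.sample(range(m), m)) for _ in range(n)]

def urn_votes(n, m, b):
    """polya-eggenberger urn: the urn starts with one copy of each of the m! orders,
    and every drawn ballot is returned together with b extra copies of itself.
    b = 0 recovers impartial culture and b = 1 the impartial anonymous culture."""
    base, added, votes = math.factorial(m), [], []
    for _ in range(n):
        if random.random() < base / (base + len(added)):
            vote = tuple(random.sample(range(m), m))   # draw from the original m! ballots
        else:
            vote = random.choice(added)                # draw one of the accumulated copies
        votes.append(vote)
        added.extend([vote] * b)
    return votes

def stv_winner(ballots, m):
    """single transferable vote: repeatedly eliminate the candidate with the fewest
    first-place votes until some candidate holds a strict majority of first places."""
    remaining = set(range(m))
    while True:
        firsts = Counter(next(c for c in b if c in remaining) for b in ballots)
        top, top_count = firsts.most_common(1)[0]
        if 2 * top_count > len(ballots) or len(remaining) == 1:
            return top
        weakest = min(remaining, key=lambda c: (firsts.get(c, 0), c))
        remaining.discard(weakest)

# example: winner of a 16-candidate election with 64 correlated (urn) ballots
# print(stv_winner(urn_votes(64, 16, b=1), 16))
```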
the final domain in which i have studied computational issues surrounding manipulationis that of ( sporting ) tournaments ( joint work with russell ) .manipulating a tournament is slightly different to manipulating an election . in a sporting tournament ,the voters are also the candidates .since it is hard ( without bribery or similar mechanisms ) for a team to play better than it can , we consider just manipulations where the manipulators can throw games .we show that we can decide how to manipulate round robin and cup competitions , two of the most popular sporting competitions in polynomial time .in addition , we show that finding the minimal number of games that need to be thrown to manipulate the result can also be determined in polynomial time .finally , we give a polynomial time proceure to calculate the probability that a team wins a cup competition under manipulation .i have argued that empirical studies can provide insight into whether computational complexity is a barrier to the manipulation .somewhat surprisingly , almost every one of the many millions of elections in the experiments in was easy to manipulate or to prove could not be manipulated .such experimental results increase the concerns that computational complexity is indeed a barrier to manipulation in practice .many other voting rules have been proposed which could be studied in the future .two interesting rules are maximin and ranked pairs .these two rules have only recently been shown to be np - hard to manipulate , and are members of the small set of voting rules which are np - hard to manipulate without weights or uncertainty .these results demonstrate that empirical studies can provide insight into the computational complexity of computing manipulations. it would be interesting to consider similar phase transition studies for related problems like preference elicitation .pini , m. , rossi , f. , venable , k. , t , w. : manipulation and gender neutrality in stable marriage procedures . in : 8th int .joint conf . on autonomous agents and multiagent systems ( aamas 2009 ) .( 2009 ) 665672 xia , l. , zuckerman , m. , procaccia , a. , conitzer , v. , rosenschein , j. : complexity of unweighted coalitional manipulation under some common voting rules . in : proc . of 21st ijcai ,joint conf . on artificial intelligence ( 2009 ) 348353
when agents are acting together , they may need a simple mechanism to decide on joint actions . one possibility is to have the agents express their preferences in the form of a ballot and use a voting rule to decide the winning action(s ) . unfortunately , agents may try to manipulate such an election by mis - reporting their preferences . fortunately , it has been shown that it is np - hard to compute how to manipulate a number of different voting rules . however , np - hardness only bounds the worst - case complexity . recent theoretical results suggest that manipulation may often be easy in practice . to address this issue , i suggest studying empirically if computational complexity is in practice a barrier to manipulation . the basic tool used in my investigations is the identification of computational `` phase transitions '' . such an approach has been fruitful in identifying hard instances of propositional satisfiability and other np - hard problems . i show that phase transition behaviour gives insight into the hardness of manipulating voting rules , increasing doubt that computational complexity is indeed any sort of barrier in practice . finally , i look at the problem of computing manipulations of other , related problems like stable marriage and tournament problems .
emergent properties of artificial or natural complex systems attract growing interests recently .some of them are conveniently modeled with a network , where constituting ingredients and interactions are represented with vertices and links , respectively .watts and strogatz demonstrated that real - world networks display the small - world effect and the clustering property , which can not be explained with the regular and random networks . later on , in the study of the www network , albert _ et al . _ found that the degree , the number of attached links , of each vertex follows a power - law distribution .those works trigger a burst of researches on the structure and the organization principle of complex networks ( see refs. for reviews ) .many real - world networks , e.g. , in biological , social , and technological systems , are found to obey the power - law degree distribution .a network with the power - law distribution is called a scale - free ( sf ) network .one of the possible mechanism for the power law is successfully explained with the barabsi - albert ( ba ) model .the model assumes that a network is growing and that the rate acquiring a new link for an existing vertex is proportional to a popularity measured by its degree .the popularity - based growth appears very natural since , e.g. , creating a new web site , one would link it preferentially to popular sites having many links . with the ba and related network models ,structural and dynamical properties of networks have been explored extensively .on the other hand , there exists another class of networks which have a group structure .consider , for example , online communities such as the `` groups '' operated by the yahoo ( http://www.yahoo.com ) and the `` cafes '' operated by the korean portal site daum ( http://www.daum.net ) .they consist of individual members and groups , gatherings of members with a common interest , and growth of the community is driven not only by members but also by groups .a community evolves as an individual registers as a new member .the new comers can create new groups with existing members or joins existing groups .the online community is a rapidly growing social network .the emerging structure would be distinct from that observed in networks without the group structure . in this paper , we propose a growing network model for the community with the group structure .we model the community with a bipartite network consisting of two distinct kinds of vertices representing members and groups , respectively .a link may exist only between a member vertex and a group vertex , which represents a membership relation .the bipartite network has been considered in the study of the movie actor network consisting of actors and movies , the scientific collaboration network of scientists and articles , and the company director network of directors and boards of directors .usually those networks are treated as unipartite by projecting out one kind of vertices of less interest .some biological and social networks are known to have a modular structure , where vertices in a common module are densely connected while vertices in different modules are sparsely connected .the modular structure is coded implicitly in the connectivity between vertices .unipartite network models with the modular structure were also studied in refs . , where vertices form modules which in turn form bigger modules hierarchically or the modular structure emerges dynamically as a result of social interactions . in ref . 
, each vertex is assigned to a potts - spin - like variable pointing to its module .these studies on the group structures of networks have mainly focused on the groups with finite number of members . however , there are groups in the real - world online community which keep growing as the community evolves . reflecting growing dynamics of the real - world online community, our model takes account of the group structure explicitly with a bipartite network consisting of member and group vertices . upon growing ,both the member and group vertices evolve in time .we study the dynamics of the size of groups and the activity of the members .the size of a group is defined as the number of members in the group and the activity of a member is the number of groups in which the member participates .when the community grows large enough , the group size distribution shows a power law distribution unlike the network models studied previously . to test our model, we analyze the empirical data from on the online communities , the `` groups '' in http://www.yahoo.com and the `` cafe '' in http://www.daum.net and show that both communities indeed show power law group size distributions for wide ranges of group sizes .this paper is organized as follows . in sec .[ sec:2 ] , we introduce the growing network model . depending on the choice of detailed dynamic rules , one may consider a few variants of the model .characteristics such as the group size distribution , the member activity distribution , and the growth of the number of groups are studied analytically in a mean field theory and numerically in sec .[ sec:3 ] .those characteristics are also calculated for the real - world online communities and compared with the model results .we conclude the paper with summary in sec .[ sec:4 ] .we introduce a model for a growing community with the group structure .the community grows by adding a new member at a time , who may open a new group or join an existing group .following notations are adopted : a member entering the community at time step is denoted by .the activity , the number of participating groups , of is denoted by . as members enter the community , new groups are created or existing groups expand .the group is denoted by , its creation time by , and its size by .the total number of members and groups is denoted by and , respectively . initially , at time , the community is assumed to be inaugurated by members , denoted by , belonging to an initial group .that is , we have that , , for , , and . at time , a new individual is introduced into the community and becomes a member by repeating the following procedures until its activity reaches : * * selection * : it selects a partner among existing members with a selection probability . * *creation or joining * : with a creation probability , it creates a new group with the partner .otherwise , it selects randomly one of the groups of with the equal probability and joins it .if is already a member of the selected group , then the procedure is canceled .a specific feature of the model varies with the choice of those probabilities and .regarding to the selection , simplest is the random choice among existing members with the equal probability .note that the selection may be regarded as an invitation of a new member by existing members .then , it may be natural to assume that active members invite more newcomers .such a case is modeled with a preferential selection probability . 
after selecting a partner , the newcomer may create a new group or join one of s groups with the equal probability .in that case the creation probability is variable as . in the other case, it may create a new group with a fixed probability . combining the strategies in the two procedures , we consider the possible four different growth models denoted by rv , rf , pv , and pf , respectively .here , r ( p ) stands for the random ( preferential ) selection , and v ( f ) for the group creation with the variable ( fixed ) probability .for example , the rf model has the selection probability , and the creation probability , .the growth rules are summarized in table [ table1 ] . and with six groups .the symbol and represents a member and a group , respectively . ] and . a square ( circle )symbol stands for a group ( member ) . ]the whole structure of the community is conveniently represented with a bipartite network of two kinds of vertices ; one for the group and the other for the member .a link exists only from a member vertex to a group vertex to which it belongs .the member activity and the group size correspond to the degree of the corresponding vertex .figure [ fig1 ] shows a typical network configuration for the rv model with . to help readers understand the growth dynamics ,we add the indices for members and groups in the figure .it is easily read off that selects and becomes a member of at and that opens a new group with at , and so on .figure [ fig2 ] shows a configuration of a rv network with grown up to members with groups .it is noteworthy that there appear hub groups having a lot of members .the emerging structure of the network will be studied in the next section ..model description and mean field results for the group size distribution exponent . here , and are the group number growth rate given in eqs .( [ eq : theta ] ) and ( [ eq : theta_pv ] ) , respectively .the activity distribution follows a power law only for the pf model with the exponent .[ cols="<,^,^",options="header " , ]the number of groups , the activity of each member , and the size of each group increase as the network grows . with those quantities , we characterize the growth dynamics and the network structure . in the following , we study the dynamics of those quantities averaged over network realizations . for simplicity s sake , we make use of the same notations for the averaged quantities .the network dynamics implies that they evolve in time as follows : where if belongs to or 0 otherwise .the initial conditions are given by , , and with the creation time of .we analyze the equations in a continuum limit and in a mean field scheme , neglecting any correlation among dynamic variables .firstly we consider the rv model .using the corresponding and in table [ table1 ] , eqs .( [ dela],[delm],[dels ] ) become where we approximate in eq .( [ dels ] ) with , the fraction of members of among all members .the solution for is given by } \ .\ ] ] it shows that an older member with smaller has a larger activity and that the activity grows very slowly in time . 
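a minimal simulation sketch of these growth rules is given below for the rv and pf variants . it is written only to illustrate the dynamics ; the function and parameter names , and the reading that a newly created group initially contains both the newcomer and its partner , follow the description above but are otherwise assumptions .

```python
import random
from collections import Counter

def grow_community(T, a=2, rule="rv", p=0.5, m0=2):
    """grow a bipartite member/group network for T time steps.
    rule = "rv": random partner selection, variable creation probability 1/(activity+1).
    rule = "pf": partner selected proportionally to activity, fixed creation probability p."""
    groups = [set(range(m0))]                       # group id -> set of member ids
    activity = {i: 1 for i in range(m0)}            # member id -> number of groups joined
    member_of = {i: [0] for i in range(m0)}         # member id -> list of group ids
    for t in range(m0, m0 + T):
        activity[t], member_of[t] = 0, []
        while activity[t] < a:
            others = [i for i in activity if i != t]
            if rule == "pf":
                partner = random.choices(others, weights=[activity[i] for i in others])[0]
            else:
                partner = random.choice(others)
            p_create = 1.0 / (activity[partner] + 1) if rule == "rv" else p
            if random.random() < p_create:
                gid = len(groups)                   # open a new group with the partner
                groups.append({t, partner})
                for m in (t, partner):
                    activity[m] += 1
                    member_of[m].append(gid)
            else:
                gid = random.choice(member_of[partner])
                if t in groups[gid]:
                    continue                        # already a member: procedure cancelled
                groups[gid].add(t)
                activity[t] += 1
                member_of[t].append(gid)
    return groups, activity

# group-size statistics of a pf community, for example:
# groups, activity = grow_community(20000, a=2, rule="pf", p=0.4)
# size_hist = Counter(len(g) for g in groups)
```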
with the solution for , one can easily show that for large with hence , the average number of groups increases linearly in time as with the group number growth rate .the group size increases algebraically as we have obtained the activity of each member and the size of each group , which allow us to derive the distribution function and for the activity and the group size , respectively .the activity distribution function is given by the relation with the uniform individual distribution , .the differentiation can be done through eq .( [ eq : ai ] ) , which yields that the activity distribution is bounded as .similarly , the group size distribution is given by with the group creation time distribution .we assume that the group creation time is distributed uniformly , which is justified with the linear growth of .then the group size distribution follows a power law with the exponent note that the distribution exponent is determined by the group number growth rate .we now turn to the pf model . with the selection and creation probabilities , eqs .( [ dela],[delm],[dels ] ) are written as we also took the approximation in eq .( [ dels ] ) .trivially we find that the group number grows in time as . for and ,one need evaluate the quantity .summing over all both sides of eq .( [ eq : dela_pf ] ) , one obtains that .note that , which yields that . hence we obtain the algebraic growth of the activity and the group size as results allow us to find the distribution functions and .they follow the power distribution and with the exponents here we also assumed the uniform distribution of in eq .( [ eq : s_pf ] ) , which is supported from the linear growth of .in contrast to the rv model , both distributions follow the power - law .the exponents do not depend on the parameter , but only on the group creation probability . for the pv and the rf model ,the followings can be shown easily : the pv model behaves similarly as the rv model .the group number increases linearly in time as with the group number growth rate .unfortunately , we could not obtain a closed form expression for it .however , if we adopt the assumption that the selection probability is proportional to instead of , it can be evaluated analytically as the approximation would become better for larger values of .the group size grows algebraically as in eq .( [ eq : s_rv ] ) with instead of .therefore , the group size distribution follows the power - law with the exponent presented in table [ table1 ] .the rf model also displays the power - law group size distribution .the distribution exponent is given in table [ table1 ] .note that and are the same . on the other hand, the activity distribution follows an exponential distribution in the rf and the pv model .origin for the power - law distribution of the group size is easily understood . in all modelsconsidered , the size of a group increases when one of its members invites a new member .the larger a group is , the more chance to invite new members it has . therefore there exists the preferential growth in the group size , which is known to lead to the power - law distribution .the activity of a member increases when a newcomer selects it and creates a new group .when the random selection probability is adopted , such a process does not occur preferentially for members with higher activity .it results in the exponential type activity distribution in the rv and rf models . 
in the pv model , although the selection probability is proportional to the activity , the creation probability is inversely proportional to the activity .hence , it does not have the preferential growth mechanism in the member activity either . only in the pf model ,the activity growth rate is proportional to the activity of each member .therefore , the activity distribution follows the power - law only in the pf model .for the rv and the pv model , respectively .the rf model has and , and the pf model has and .the community has grown up to and the distributions are averaged over samples . ] for the rv and the pv model .the solid ( dashed ) curve represents the analytic mean field results for the rv ( pv ) model .( b ) numerical results for ( open symbols ) of the rf and the pf model , and for ( filled symbols ) of the pf model .the solid ( dashed ) curve represents the analytic results for ( ) in table [ table1 ] . ]the analytic mean field results are compared with numerical simulations . in simulations , we chose and all data were obtained after the average over at least 10000 samples .we present the numerical data in fig .[ fig3 ] . in accordance with the mean field results ,the group size distribution follows the power - law in all cases .the activity distribution also shows the expected behavior ; the power - law distribution for the pf model and exponential type distributions for the other models .we summarize the distribution exponents in fig .[fig4 ] .the measured values of the distribution exponents are in good agreement with the analytic results .our network models display distinct behaviors from those bipartite networks such as the movie actor network , the scientific collaboration networks , and the director board network which have been studied previously .for the first two examples , their growth is driven only by the member vertices , the actors and the scientists , respectively .the activity of members may increase in time .however , the group vertices , the movies and the papers , respectively , are frozen dynamically and their sizes are bounded practically . for the last example , both the members ( directors ) and the groups ( boards ) may evolve in time .however , it was shown that the group size distribution is also bounded .our model is applicable to evolving networks with the group structure where the size of a group may increase unlimitedly .the online community is a good example of such networks . to test the possibility ,we study the empirical data obtained from the groups and the cafe operated by the yahoo in http://www.yahoo.com and the daum in http://www.daum.net , respectively . it is found in august , 2004 that there are 1,516,750 ( 1,743,130 ) groups ( cafes ) with 76,587,494 ( 351,565,837 ) cumulative members in the yahoo ( daum ) site .the numbers of members of the groups are available via the web sites .figure [ fig5 ] presents the cumulative distribution of the group size .the distribution has a fat tail .although the distribution function in the log - log scale show a nonnegligible curvature in the entire range , it can still be fitted reasonable well into the power law for a range over two decades ( see the straight lines drawn in fig .[ fig5 ] ) . from the fitting, we obtain the group size distribution exponents and . the power - law scaling suggests that the online community may be described by our network model . 
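the exponents quoted above are read off from straight - line fits to the cumulative group size distributions . a small sketch of such a fit is given below ; the fitting window is an assumption that has to be chosen by inspecting the scaling region , and a maximum - likelihood estimator would be a more careful alternative .

```python
import numpy as np

def ccdf(sizes):
    """complementary cumulative distribution P(S >= s) of a sample of group sizes."""
    s = np.sort(np.asarray(sizes, dtype=float))
    return s, 1.0 - np.arange(len(s)) / len(s)

def fit_exponent(sizes, s_min, s_max):
    """if P(s) ~ s**(-gamma), then P(S >= s) ~ s**(1-gamma); the log-log slope of the
    cumulative distribution over [s_min, s_max] therefore gives gamma = 1 - slope."""
    s, p = ccdf(sizes)
    window = (s >= s_min) & (s <= s_max)
    slope, _ = np.polyfit(np.log(s[window]), np.log(p[window]), 1)
    return 1.0 - slope

# e.g. gamma = fit_exponent(group_sizes, s_min=10, s_max=10**4)
```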
unfortunately , information on the activity distribution is not available publicly .so we could not compare the activity distribution of the communities with the model results .we would like to add the following remark : a real - world online community evolves in time as new members are introduced to and new groups are created . at the same time, it also evolves as members leave it and groups are closed .those processes are not incorporated into the model .our model is a minimal model for the online community where the effects of leaving members and closed groups are neglected .we have introduced the bipartite network model for a growing community with the group structure .the community consists of members and groups , gatherings of members .those ingredients are represented with distinct kinds of vertices . anda membership relation is represented with a link between a member and a group . upon growinga group increases its size when one of its members introduces a new member .hence , a larger group grows preferentially faster than a smaller group . with the analytic mean field approaches and the computer simulations, we have shown that the preferential growth leads to the power - law distribution of the group size .on the other hand , the activity distribution follows the power - law only for the pf model with the preferential selection probability and the fixed creation probability ( see table [ table1 ] ) .we have also studied the empirical data obtained from the online communities , the groups of the yahoo and the cafe of the daum .both communities display the power - law distribution of the group size .it suggests our network model be useful in studying their structure .d. j.watts and s. h. strogatz , nature ( london ) * 393 * , 440 ( 1998 ) .r. albert , h. jeong , and a .-barabsi , nature ( london ) * 401 * , 130 ( 1999 ) .r. albert and a .-barabsi , rev .* 74 * , 47 ( 2002 ) .s.n . dorogovtsev and j.f.f .mendes , adv . phys . * 51 * , 1079 ( 2002 ) . m.e.j .newman , siam rev . * 45 * , 167 ( 2003 ) .barabsi and r. albert , science * 286 * , 509 ( 1999 ) ; a .-barabsi , r. albert , and h. jeong , physica a * 272 * , 173 ( 1999 ) . for instance , the daum cafe has grown up with more than three millions of groups and 28 millions of members since it was first launched in 1999 .m. e. j. newman , s. h. strogatz , and d. j. watts , phys .e * 64 * , 026118 ( 2001 ) .goldstein , s.a .morris , and g.g .yen , cond - mat/0409205 .m. e. j. newman , proc .98 * , 404 ( 2001 ) ; phys .e * 64 * , 016131 ( 2001 ) ; _ ibid ._ * 64 * , 016132 ( 2001 ) .m. e. j. newman , phys .e * 68 * , 026121 ( 2003 ) .m. girvan and m.e.j .newman , proc .sci . * 99 * , 7821 ( 2002 ) .e. ravasz , a.l .somera , d.a .mongru , z.n .oltvai , and a .-barasi , science * 297 * , 1551 ( 2002 ) ; e. ravasz and a .-barabsi , phys .e * 67 * , 026112 ( 2003 ) .d. j. watts , p. s. dodds , and m. e. j. newman , science * 296 * , 1302 ( 2002 ) .motter , t. nishikawa , and y .- c .lai , phys .e * 68 * , 036105 ( 2003 ) .b. skyrms and r. pemantle , proc .sci . * 97 * , 9340 ( 2000 ) .jin , m. girvan , and m.e.j .newman , phys .e * 64 * , 046132 ( 2001 ) .a. grnlund and p. holme , phys .e * 70 * , 036108 ( 2004 ) .kim , g. j. rodgers , b. kahng , and d. kim , cond - mat/0310233 .the dynamics of opening a new group or joining an existing group is analogous to the copy and growth dynamics of the simon model ( see e.g. , ref .s. bornholdt and h. 
ebel , phys .e * 64 * , 035104 ( 2001 ) .the yahoo groups can be divided into 16 categories and 18,165 sub - categories and the daum cafes into 22 categories and 825 sub - categories .we also find that the group size distribution in each category also has the similar fat tail .
we propose a growing network model for a community with a group structure . the community consists of individual members and groups , gatherings of members . the community grows as a new member is introduced by an existing member at each time step . the new member then creates a new group or joins one of the groups of the introducer . we investigate the emerging community structure analytically and numerically . the group size distribution shows a power law for a variety of growth rules , while the activity distribution follows an exponential or a power law depending on the details of the growth rule . we also present an analysis of empirical data from the online communities , the `` groups '' in http://www.yahoo.com and the `` cafe '' in http://www.daum.net , which show a power - law group size distribution over a wide range of group sizes .
the elucidation of hamiltonian chaos and lyapunov instability by poincar and lorenz is familiar textbook material .models which capture aspects of complexity , the logistic and baker maps , the lorenz attractor and the mandelbrot set , combine visual appeal with mechanistic understanding in the bare minimum of spatial dimensions , two for maps and three for flows . mechanical models with only three- or four - dimensional phase spaces are simple enough that the entire phase space can be explored exhaustively.``small systems '' can augment our understanding of nature in terms of numerical models by introducing more complexity .just a few more degrees of freedom make an ergodic exhaustive sampling impossible .for the small systems we treat here we take on the more difficult task of defining and analyzing the time - dependent convergence of `` typical '' trajectories .chaos involves the exponential growth of perturbations .joseph ford emphasized the consequence that the number of digits required in the initial conditions is proportional to the time for which an accurate solution is desired . accordinglya `` typical '' nonexhaustive trajectory or history is the best that we can do . to go beyond the simplest models to those which elucidate macroscopic phenomena , like phase transitions and the irreversibility described by the second law of thermodynamics, we like terrell hill s idea of small - system studies ( in the 1960s he wrote a prescient book , _ thermodynamics of small systems_. ) in what follows we describe two small - system models which are the foci of the ian snook prize problem for 2017 .these models are hamiltonian , both with four degrees of freedom so that their motions are described in eight - dimensional phase spaces .the double pendulum with rigid links is an excellent model for the table - top demonstration of chaos .bill saw one in action at an all - day stanford lecture given by james yorke .an even simpler mathematical model for chaos can be obtained with a single pendulum . for chaosthe single pendulum needs a spring rather than a rigid link .the single springy pendulum moves in a four - dimensional phase space , just as does the double pendulum with rigid links . along with haraldposch we investigated mathematical models for chaos based on chains of pendula , both rigid and springy .we studied many - body instabilities by characterizing the form of the detailed description of many - dimensional chaos , the lyapunov spectrum .we considered two kinds of model hamiltonians describing chains in a gravitational field : [ 1 ] chains composed of particles with equal masses , as in a physical length of chain ; [ 2 ] chains in which only the bottom mass was affected by gravity , as in a light chain supporting a heavy weight .figure 1 shows five snapshots , equally spaced in time , from a chaotic double - pendulum trajectory .initially the motionless chain was placed in the horizontal configuration appearing at the top right of figure 1 .if gravity affects only the lower of the two masses ( as in the type-2 models supporting a heavy weight ) the corresponding hamiltonian is /2 + ( \kappa/2 ) [ \ ( r_1 - 1)^2 + ( r_{12}-1)^2 \ ] + y_2 \ .\ ] ] where and are the lengths of the upper and lower springs . to enhance the coupling between the springs and gravity we choose the force constant here+ the lyapunov exponents making up the spectrum are conventionally numbered in the descending order of their long - time - averaged values .we begin with the largest , . 
the long - time - averaged rate at which the distance between the trajectories of two nearby phase - space points increases .that rate , , is necessarily positive in a chaotic system .a more detailed description of rates of change of lengths and areas , and volumes , and hypervolumes of dimensionality up to that of the phase space itself , leads to definitions of additional lyapunov exponents .the next exponent , , is needed to describe the rate at which a typical phase - space area , defined by three nearby points , increases ( or decreases ) with increasing time , . againan average over a sufficiently long time for convergence is required .likewise the time - averaged rate of change of a three - dimensional phase volume defined by four neighboring trajectories is .this sequence of rates and exponents continues for the rest of the spectrum .there are exponents for a -dimensional phase - space description .the time - reversibility of hamiltonian mechanics implies that all the rates of change change sign if the direction of time is reversed .this suggests , for instance , that all the exponents , and , are `` paired '' , with the rates forward in time opposite to those backward in time .this turns out to be `` true '' for the long - time - averaged exponents but could be `` false '' for the local exponents .local exponents depend upon the recent past history of neighboring trajectories .the global exponents , which describe the growth and decay of the principal axes of comoving hyperellipsoids in phase space are paired , though the time required to show this through numerical simulation can be long .this exponent pairing is the focus of the 2017 snook prize , as we detail in what follows .there is a vast literature describing and documenting the numerical evaluation and properties of lyapunov spectra .the theoretical treatments are sometimes abstruse and lacking in numerical verification .this year s prize problem seeks to help remedy this situation .the numerical foundation for the study of lyapunov exponents is an algorithm developed by shimada and nagashima in sapporo and benettin in italy , along with his colleagues galgani , giorgilli , and strelcyn , beginning in the late 1970s .google indicates hundreds of thousands of internet hits for `` lyapunov spectrum '' .we mention only a few other references here .the internet makes these and most of the rest readily available .+ aoki and kusnezov popularized the model as a prototypical atomistic lattice - based model leading to fourier heat conduction .in addition to a nearest - neighbor hookes - law potential the model incorporates quartic tethers binding each particle to its own lattice site .here we denote the displacements of the particles from their sites as . 
in our one - dimensional casethe spacing between the lattice sites does not appear in the hamiltonian or in the equations of motion .in numerical work it is convenient to choose the spacing equal to zero while setting the particle masses , force constants for the pairs , and those for the tethers all equal to unity .for a four - particle problem in an eight - dimensional phase space the three - part hamiltonian is : + \sum_4^{springs } ( q_{i , j}^2/2 ) \ .\ ] ] the periodic boundary condition includes the spring linking particles 1 and 4 : see figure 2 for two ways of visualizing the periodic boundary conditions of the chain .+ the energy range over which chaos is observed in the model includes about nine orders of magnitude .the chaotic range for a four - body chain includes the two cases we discuss in the present work , .with both the springy pendulum and the models in mind we turn next to a description of their chaotic properties .like most smoothly - differentiable hamiltonian systems the double springy pendulum has infinitely many periodic or quasiperiodic phase - space solutions surrounded by a chaotic sea .dynamics in the sea is exponentially sensitive to perturbations .the dynamics occurs in an eight - dimensional phase space .perturbations oriented along the trajectory or perpendicular to the energy surface , where there is no longtime growth at all , give two zeroes , so that the maximum number of nonzero lyapunov exponents is six .each positive exponent is necessarily paired with its negative twin , with the two changing roles if the direction of time is reversed .it is often stated that this time - reversible pairing links not only the time - averaged rates of the dynamics , but also the `` local '' or `` instantaneous '' rates .because chaotic pendulum problems give different local exponents if cartesian and polar coordinates are used one might think that pairing could be hindered by using a mixture of these coordinates . to check on this idea we considereda mixed - coordinate hamiltonian for the model of figure 1 with polar coordinates for the `` inside '' particle 1 : + y_2 + ( \kappa/2 ) [ \ ( r-1)^2 + ( r_{12}-1)^2 \ ] \ ; \ ] ] formulating and solving the motion equations in mixed cartesian and polar coordinates is an intricate error - prone task .it is useful first to solve the problem in cartesian coordinates .that solution then provides a check for the more complicated mixed - coordinate case .energy conservation is a nearly - infallible check of the programming .we computed spectra of lyapunov exponents averaged over one billion fourth - order and one billion fifth - order runge - kutta timesteps , .this ensures that the numerical truncation errors of order or are of the same order as the double - precision roundoff error .we chose the initial condition of figure 1 with both masses motionless at the support level , , so that the initial potential , kinetic , and total energies all vanished . only the outer cartesian mass interacts with the gravitational field .the simplest numerical method for obtaining lyapunov spectra is first to generate a -dimensional `` reference trajectory '' in the -dimensional phase space .then a set of similar `` offset '' trajectories , an infinitesimal distance away , , are generated in the same space with numerical offset vectors of length or 0.000001 . 
while advancing the resulting -dimensional differential equations the local lyapunov exponents are obtained by `` gram - schmidt '' orthonormalization .this process rescales the vectors to their original length and rotates all but the first of them in order to maintain their orthonormal arrangement .the rescaling operation portion of the gram - schmidt process gives local values for the lyapunov exponents : for the type-2 double pendulum of figure 1 the time - averaged lyapunov spectrum is : the rms fluctuations in these rates are typically orders of magnitude larger than the rates themselves .the uncertainty in the exponents as well as the differences between exponents using fourth - order or fifth - order runge - kutta integrators with are both of order .our numerical work shows that the pairing of the exponents is maintained if one of the pendula is described by polar coordinates with the other pendulum cartesian .the local exponents are different but still paired .the algorithm for generating the lyapunov exponents requires the ordering of offset vectors in the vicinity of a reference trajectory .the first vector follows exactly the same motion equations with the proviso that its length is constant .the second vector , also of constant length , is additionally required to remain orthogonal to the first so that the combination of the two gives the rate of expansion or contraction of two - dimensional areas in the vicinity of the reference trajectory . in general the offset vector satisfies constraints in all , keeping its own length constant while also maintaining its orthogonality to the preceding vectors .although the local rates associated with the vectors are necessarily ordered when time - averaged over a sufficiently long time to give the , this ordering is regularly violated , locally , as figures 3 and 4 show .offhand one would expect that increasing the lyapunov exponents or decreasing the accuracy of the simulation would lead to more rapid convergence of the ordering of the vectors .for this reason we consider a model which is as simple as possible , with a relatively large chaotic range , and is easy to simulate .this model , named for its quartic tethering potential , has proved particularly useful in the simulation of heat flow .we consider the equilibrium version of the model here , an isolated system .the simplest lyapunov algorithm for the model is exactly that used with the springy pendula .we follow trajectories in the -dimensional phase space , rescaling them at every timestep to obtain the complete spectrum of instantaneous lyapunov exponents .this phase - space integration of nine trajectories , followed by gram - schmidt orthonormalization , can be modified by using lagrange multipliers to impose the eight constant - length constraints and the orthogonality constraints .a third approach , particularly simple to implement for the model with its power - law equations of motion , is to linearize the motion equations so that the offset vectors , rather than being small , can be taken as unit vectors in `` tangent space '' . 
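Before comparing the phase-space and tangent-space implementations, a minimal sketch of the straightforward phase-space version just described may be helpful. It advances a reference trajectory together with offset trajectories of length d0, re-orthonormalizes the offsets at every step (Gram-Schmidt, done here via a QR factorization), and accumulates the logarithmic stretching factors. The right-hand side follows from the standard four-particle phi-4 chain Hamiltonian with unit masses and force constants and periodic boundaries; the initial condition, step size, and run length are placeholders only, and far longer runs than shown are needed for converged exponents.

```python
import numpy as np

def phi4_rhs(x):
    """Equations of motion for the periodic four-particle phi^4 chain,
    x = (q1..q4, p1..p4), from H = sum(p^2/2 + q^4/4) + sum((q_i - q_j)^2/2)."""
    q, p = x[:4], x[4:]
    qdot = p
    pdot = -q**3 + np.roll(q, -1) - 2.0 * q + np.roll(q, 1)
    return np.concatenate([qdot, pdot])

def rk4_step(f, x, dt):
    k1 = f(x); k2 = f(x + 0.5 * dt * k1)
    k3 = f(x + 0.5 * dt * k2); k4 = f(x + dt * k3)
    return x + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0

def lyapunov_spectrum(x0, dt=0.001, nsteps=50_000, d0=1e-6, seed=1):
    """Benettin-style estimate: advance a reference trajectory plus 8 offset
    trajectories, re-orthonormalize the offsets each step, and accumulate
    the log stretching factors as local exponents."""
    rng = np.random.default_rng(seed)
    n = x0.size
    x = x0.copy()
    offsets = np.linalg.qr(rng.standard_normal((n, n)))[0] * d0  # orthogonal offsets of length d0
    sums = np.zeros(n)
    for _ in range(nsteps):
        x_new = rk4_step(phi4_rhs, x, dt)
        y_new = np.array([rk4_step(phi4_rhs, x + offsets[:, k], dt) for k in range(n)]).T
        diffs = y_new - x_new[:, None]
        q_mat, r_mat = np.linalg.qr(diffs)            # Gram-Schmidt via QR
        sums += np.log(np.abs(np.diag(r_mat)) / d0)   # local exponents times dt
        offsets = q_mat * d0
        x = x_new
    return sums / (nsteps * dt)

x0 = np.array([0.0, 0.0, 0.0, 0.0, 1.0, 0.5, -0.5, -1.0])  # arbitrary placeholder initial condition
print(lyapunov_spectrum(x0))
```

After sufficiently long runs the time-averaged spectrum should appear in plus/minus pairs, as discussed above; with the short run used here only the qualitative ordering of the exponents is meaningful.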
by using separate integrators for the `` reference trajectory '' and for the eight unit vectorsthe programming is at about the same level of difficulty as is that of the straightforward phase - space approach .we implemented both approaches for the problems and found good agreement for the lyapunov spectra at a visual level , even for calculations using a billion timesteps .this is because the reference trajectories for the phase - space and tangent - space algorithms are identical .fourth - order and fifth - order runge - kutta integrators are particularly useful algorithms for small systems .first , these integrators are easy to program . these integrators are also explicit , a real simplification whenever a variable timestep is desirable .their errors are typically opposite in sign . for the simple harmonic oscillator the fourth - order energy decays while the fifth - order energy diverges . by choosing a sufficiently small timestep , for which the two algorithms agree , one can be confident in the accuracy of the trajectories .another useful technique is adapative integration : comparing solutions with a single timestep to those from two successive half steps with .the timestep is then adjusted up or down by a factor of two whenever it is necessary to keep the root - mean - squared error in a prescribed band , for instance. at the expense of about a factor of fifty in computer time , fortran makes it possible to carry out quadruple - precision simulations with double - precision programming by changing the gnu compiler command : here the fortran program is code.f and the executable is xcode .the springy pendula and problems detailed here show that `` pairing '' is typically present after sufficient time , with that time sensitive to the largest lyapunov exponent as well as to the initial conditions .there are several features of these introductory problems that merit investigation : + [ 1 ] to what extent is there an unique chaotic sea ? can the symmetry of the initial conditions limit the portion of phase space visited when the dynamics is chaotic ? + [ 2 ] within the model s chaotic sea do the time - averaged kinetic temperatures , agree for all the particles ? ( if not , a thermal cycle applying heat and extracting work from the chain could be developed so as to violate the second law. ) + [ 3 ] is the pairing time simply related to the lyapunov exponents and the chain length ? + [ 4 ] is the accuracy of the pairing simply related to the accuracy of the integrator ?+ the next and last question , which motivated this year s prize problem seems just a bit more difficult : [ 5 ] can relatively - simple autonomous hamiltonian systems be devised for which long - time local pairing is absent ?our exploratory work has suggested that dynamical disturbances induced by collisions , with those collisions separated by free flight , could lead to repeated violations of pairing . 
on the other hand dettmann and morrisshave published a proof of pairing for isokinetic systems .a simple gas of several diatomic or triatomic molecules is likely to be enough to settle that question .the 2017 ian snook prize will be awarded to the most interesting paper discussing and elucidating these questions .entries should be submitted to computational methods in science and technology , cmst.eu , prior to 1 january 2018 .the prize award of 500 united states dollars sponsored by ourselves , and the additional ian snook prize award , also 500 , will be awarded to the author(s ) of the paper best addressing this prize problem .we are grateful to the poznan supercomputing and networking center for their support of these prizes honoring our late australian colleague ian snook ( 1945 - 2013 ) .we also appreciate useful comments , suggestions , and very helpful numerical checks of our work furnished by ken aoki , carl dettmann , clint sprott , karl travis , and krzysztof wojciechowski .we particularly recommend aoki s reference 10 for a comprehensive study of the dynamics of one - dimensional equilibrium systems .g. benettin , l. galgani , a. giorgilli , and j .-strelcyn , `` lyapunov characteristic exponents for smooth dynamics systems and for hamiltonian systems ; a method for computing all of them , parts i and ii : theory and numerical application '' , meccanica * 15 * , 9 - 20 and 21 - 30 ( 1980 ) .b. a. bailey , `` local lyapunov exponents ; predictability depends on where you are '' , in _ nonlinear dynamics and economics _ , w. a. barnett , a. p. kirman , and m. salmon , editors ( cambridge university press , 1996 ) pages 345 - 359 .h. a. posch and r. hirschl , `` simulation of billiards and of hard - body fluids '' in _ hard ball systems and the lorentz gas _ , encyclopedia of the mathematical sciences * 101 * , edited by d. szsz ( springer verlag , berlin , 2000 ) , pages 269 - 310 .w. g. hoover and k. aoki , `` order and chaos in the one - dimensional model : n - dependence and the second law of thermodynamics '' , communications in nonlinear science and numerical simulation ( in press , 2017 ) = ar 1605.07721 .w. g. hoover , j. c. sprott , and c. g. hoover , `` adaptive runge - kutta integration for stiff systems : comparing nos and nos - hoover dynamics for the harmonic oscillator '' , american journal of physics * 84 * , 786 - 794 ( 2016 ) .g. hoover and c. g. hoover ,`` what is liquid ?lyapunov instability reveals symmetry - breaking irreversibilities hidden within hamilton s many - body equations of motion '' , condensed matter physics * 18 * , 1 - 13 ( 2015 ) = ar 1405.2485 .
the time - averaged lyapunov exponents , , support a mechanistic description of the chaos generated in and by nonlinear dynamical systems . the exponents are ordered from largest to smallest with the largest one describing the exponential growth rate of the ( small ) distance between two neighboring phase - space trajectories . two exponents , , describe the rate for areas defined by three nearby trajectories . is the rate for volumes defined by four nearby trajectories , and so on . lyapunov exponents for hamiltonian systems are symmetric . the time - reversibility of the motion equations links the growth and decay rates together in pairs . this pairing provides a more detailed explanation than liouville s for the conservation of phase volume in hamiltonian mechanics . although correct for long - time averages , the dependence of trajectories on their past is responsible for the observed lack of detailed pairing for the instantaneous `` local '' exponents , . the 2017 ian snook prizes will be awarded to the author(s ) of an accessible and pedagogical discussion of local lyapunov instability in small systems . we desire that this discussion build on the two nonlinear models described here , a double pendulum with hookes - law links and a periodic chain of hookes - law particles tethered to their lattice sites . the latter system is the model popularized by aoki and kusnezov . a four - particle version is small enough for comprehensive numerical work and large enough to illustrate ideas of general validity .
integrated nested laplace approximations ( inla ) were introduced by as a tool to do approximate bayesian inference in latent gaussian models ( lgms ) .the class of lgms covers a large part of models used today , and the inla approach has been shown to be very accurate and extremely fast in most cases .software is provided through the r - inla package , see http://www.r-inla.org .an important subclass of lgms is the rich family of generalized linear mixed models ( glmms ) with gaussian priors on fixed and random effects .the use of inla for bayesian inference for glmms was investigated by , who reanalyzed all of the examples from . found that inla works very well in most cases , but one of their examples shows some inaccuracy for binary data with few or no replications . in this paper , we introduce a new correction term for inla , significantly improving accuracy while adding negligibly to the overall computational cost . to set the scene , we consider a minimal simulated example illustrating the problem ( postponing more thorough empirical evaluations until section [ sec : examples ] ) .consider the following model : for , let , , and where , iid .let the precision have a prior , while the prior for is .we simulated data from this model , setting , and .figure [ figure1 ] shows the resulting posterior distributions for the intercept and for the log precision , , where the histograms show results from long mcmc runs using jags , the black curves show posteriors from inla without any correction , and the red curves show results using the new correction defined in section [ sec : method ] .while some of our later examples show more dramatic differences between inla and long mcmc runs , these results exemplify quite well our general experience with using inla for `` difficult '' binary response glmms : variances of both random and fixed effects tends to be underestimated , while the means of the fixed effects are reasonably well estimated .one part of the problem is that the usual assumptions ensuring asymptotic validity of the laplace approximation do not hold here ( for details on asymptotic results , see the discussion in section 4 of ) .the independence of the random effects make the effective number of parameters on the order of the number of data points . in more complex models, there is often some amount of smoothing or replication that alleviates the problem , but it may still occur . except in the case of spline smoothing models , there is a lack of strong asymptotic results for random effects models with a large effective number of parameters . in the simulation from model ,the data provide little information about the parameters , with the shape of the likelihood function adding to the problem .figure [ figure2 ] illustrates the general problem . here , the top panel shows the log - likelihood of a single bernoulli observation as a function of the linear predictor , i.e. where . the bottom panel shows the corresponding derivative .we see that the log - likelihood gets very flat ( and the derivative near zero ) for high values of , so inference will get difficult . 
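The flattening of the Bernoulli log-likelihood described above is easy to reproduce numerically. The short sketch below evaluates the single-observation log-likelihood and its derivative on the logit scale for a successful outcome (y = 1); the grid of linear-predictor values is an arbitrary illustration.

```python
import numpy as np

def bernoulli_loglik(eta, y=1):
    """Log-likelihood of one Bernoulli observation as a function of the
    linear predictor eta, with p = 1 / (1 + exp(-eta))."""
    return y * eta - np.log1p(np.exp(eta))

def dloglik(eta, y=1):
    """Derivative y - p, which tends to zero as eta grows (for y = 1)."""
    return y - 1.0 / (1.0 + np.exp(-eta))

for eta in [0.0, 2.0, 4.0, 8.0]:
    print(f"eta={eta:4.1f}  loglik={bernoulli_loglik(eta):8.4f}  slope={dloglik(eta):.4f}")
```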
bayesian and frequentist estimation for glmms with binary outcomes has been given some attention in the recent literature , but a computationally efficient bayesian solution appropriate for the inla approach has been lacking .an alternative to our new approach would be to consider higher - order laplace approximations , other modifications to the laplace approximation , or expectation propagation - type solutions , but we view them as too costly to be applicable for general use in inla .the motivation for using inla is speed , so we see it as a design requirement for any correction that it should add minimally to the running time of the algorithm .we proceed as follows . in section [sec : method ] , we present a derivation of our new correction method .section [ sec : examples ] presents empirical studies , both on real and simulated data , showing that the method works well in practice .finally section [ sec : conclusion ] gives a brief discussion and some concluding remarks .consider a latent gaussian model , with hyperparameters , latent field and observed data ( for ) , where the joint distribution may be written as where is a multivariate gaussian density .we want to approximate the posterior marginals and . the laplace approximation of is where is a gaussian approximation found by matching the mode andthe curvature at the mode of , and is the mean of the gaussian approximation .given and some approximation ( see below ) , the posterior marginals of interest are calculated using numerical integration : in the current implementation of inla , the used in are approximated using skew normal densities , based on a second laplace approximation ; see section 3.2.3 of for details . noticethat , in equation we use a gaussian approximation , with marginals .thus , both and are approximations to the marginals , but the are more accurate since they are based on a second laplace approximation . inwe need to approximate the full joint distribution .our basic idea is to use the improved approximations in order to construct a better approximation to the joint distribution .we aim for an approximation of that retains the dependence structure of the gaussian approximation , while having the improved marginals .this can be achieved by using a gaussian copula .before we describe the copula construction , we need to define some notation . first, for , let and denote the mean and variance of each marginal , and let be the cumulative distribution function corresponding to .second , let and assume that is the distribution of . as usual , denotes the cumulative standard gaussian distribution function .furthermore , let and denote the marginal means and variances of the gaussian approximation , let be the precision matrix of , and let where , and define .note that we have from the definition of ( the construction of the skew normal changes the mean and adds skewness , but keeps the variance unchanged ; again , see section 3.2.3 of for detailed explanations ) , so from here on we denote both simply by .we will now show how to construct a joint distribution having marginals and the dependence structure from , using a gaussian copula ( see e.g. for a general introduction ) .first , note that $ ] by the probability integral transform ( pit ) .let . 
applying the inverse of the pitthen yields that , from which it follows that is distributed as , which is the marginal distribution we want .since we have only done marginal transformations , the dependence structure of the original is still intact .thus , to construct the new approximation to the joint distribution , we define the transformed value as follows : + \tilde\mu _i(\theta ) .\label{eq : copula}\ ] ] we may simplify the construction above by replacing the in by .this means that we do not correct for skewness , but we take advantage of the improved mean from . we denote this as the `` mean only '' correction .( we will later discuss the possibility of retaining as a skew normal ; this we denote as the `` mean and skewness '' correction . ) in the simple `` mean only '' case , the transformation reduces to a shift in mean : the jacobian is equal to one , and the transformed joint density function is a multivariate normal with mean and precision matrix , i.e. in the laplace approximation defined in equation , both the numerator and the denominator should be evaluated in the point , where is the mean of the gaussian approximation .thus , the density functions above should be evaluated in . from equation ,the original ( uncorrected ) log posterior is evaluated at , where comparing equations , , and , we see that the copula approximation can be implemented by adding the term to the already calculated log posterior evaluated at , where the addition of the term does not add significantly to the computational cost of inla this simple operation is essentially free . for the inla implementation of the copula correction, we have found that it is sufficient to only include fixed effects ( including any random effects of length one ) in the calculation of .the effect of the correction is strongest and most consistent for the fixed effects , while the ( often very numerous ) random effects contribute very small individual effects to the correction , mainly adding extraneous noise to the estimation . for these reasons ,including only fixed effects gives better numerical stability and also seems to provide a more accurate approximation , while reducing computational costs .conceptually , including only the fixed effects involves finding , and then again finding ( where is the index set of the fixed effects ) , which might seem computationally costly .however , it can be done cheaply by using the linear combination feature described in section 4.4 of : if is the number of fixed effects , only the ( parallel ) solution of a -dimensional linear system is needed .additionally , to guard against over - correction , we perform a soft thresholding on , as follows : first we define a sigmoidal function : which is increasing , has derivative equal to one at the point , and where as .then we replace by , where for and with the `` correction factor '' parameter determining the degree of shrinkage ( more shrinkage for smaller values of ) .since the function is approximately linear with unit slope around zero , will be close to for small and moderate values of , while larger values will be increasingly shrunk toward zero .note that since for all , .the value of does not have a large impact on the results unless a too small value is chosen .its main purpose is as a safeguard to avoid too large corrections in very difficult cases . 
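Since the exact sigmoid is not reproduced in the text above, the following sketch uses gamma * tanh(c / gamma) as a stand-in with the stated properties (increasing, unit slope at zero, saturating as the argument grows); this functional form is an assumption for illustration, not the paper's definition.

```python
import numpy as np

def soft_threshold(c, gamma):
    """Shrink a correction term c: approximately the identity for small |c|
    (unit slope at zero) and saturating for large |c|, with the saturation
    level set by gamma (more shrinkage for smaller gamma)."""
    return gamma * np.tanh(c / gamma)

for c in [0.05, 0.5, 2.0, 10.0]:
    print(c, soft_threshold(c, gamma=1.0))  # small corrections pass through, large ones are capped
```

With this stand-in, small and moderate corrections pass through essentially unchanged while very large ones are capped, which matches the qualitative safeguard behaviour described above.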
in our experience gives too strong shrinkage , while for example corresponds to no shrinkage , so it seems clear that should be somewhere in between these extremes .we have found that is a good choice , letting the correction do its job while guarding against too large changes , and we have used this value for all of the examples .results appear to be very robust to the exact value chosen for .note also that since the correction effect is scaled with the number of fixed effects , it is less surprising that a single value for could work well in a wide variety of circumstances .as mentioned , we have also investigated a more general case of the copula construction , where we retain as a skew normal distribution , i.e. the cdf of .this results in a more complicated correction term , derived in appendix [ app ] .we have not found any appreciable differences in the accuracy compared to the simpler case without skewness , so we have concluded that the non - skew version is preferable due to its simplicity .we will show both the skew and the non - skew correction for the toenail data discussed in section [ sec : toenail ] , but otherwise we show only results from the simpler non - skew version .we have tried both corrections on many ( both real and simulated ) data sets , and never seen a significant difference in the results .as mentioned in the introduction , studied the use of inla for binary valued glmms , and they showed that the approximations were inaccurate in some cases .we have redone the simulation experiment described on pages 1014 of the supplementary material of , both for inla without any correction , and inla with the `` mean only '' correction described in section [ sec : method ] . in the original simulation study by , are iid , with clusters , observations per cluster , and . given for and otherwise , and sampling times , the following two models were considered : which corresponds to models ( 0.7 ) and ( 0.8 ) on page 11 of the supplementary material of .we first consider model .we only show the results for ( i.e. 
binary data ) , as this is the most difficult case with the largest errors in the approximation .the correction also works well for , but this case is easier to deal with for inla .this is seen empirically , and is also as expected based on considering the asymptotic properties of the laplace approximation : for there is more `` borrowing of strength''/replication , so the original approximation should work better .we use the same settings as : where , the prior for and priors for the .the true values of the fixed effects are .we made 1,000 simulated data sets , running inla both with and without the new correction , as well as very long mcmc chains using jags ( each of the 1,000 datasets were run with 1,000,000 mcmc samples after a burn - in of 100,000 , using every 100th sample ) .@|llrrrrrrrr|@ + & & & & & & & & & + & true values & 1.000 & 1.000 & 0.000 & -2.500 & 1.000 & -1.000 & -0.500 & + & uncorrected inla & 0.705 & 0.722 & 1.133 & -2.494 & 0.998 & -1.052 & -0.486 & + & corrected inla & 0.952 & 0.850 & 0.775 & -2.562 & 1.024 & -1.080 & -0.504 & + & mcmc & 0.946 & 0.849 & 0.773 & -2.537 & 1.017 & -1.081 & -0.482 & + + + & & & & & & & & & + & uncorrected inla & -0.382 & -0.403 & 0.390 & 0.120 & -0.127 & 0.052 & -0.016 & + & corrected inla & -0.003 & 0.000 & 0.002 & -0.073 & 0.046 & -0.002 & -0.101 & + + + & & & & & & & & & + & uncorrected inla & 0.585 & 0.812 & 1.174 & 0.822 & 0.834 & 0.882 & 0.889 & + & corrected inla & 0.933 & 0.956 & 0.998 & 0.904 & 0.871 & 0.943 & 0.908 & + + + & & & & & & & & & + & uncorrected inla & 90.3% & 90.0% & 90.8% & 92.6% & 92.7% & 93.5% & 93.5% & + & corrected inla & 94.2% & 93.9% & 94.4% & 93.5% & 93.1% & 94.3% & 93.7% & + results from the simulation study are shown in table [ table1 ] . note that the aim here is to be as close as possible to the mcmc results , not the true values .the upper part of the table shows averages of the posterior means over the 1,000 simulations .we see that inla gets much closer to the mcmc results for all parameters except , which is in any case reasonably close to the mcmc value .the improvement is particularly large for the variance parameter .this is also seen in the second panel , which shows for each parameter ( averaged over the 1,000 simulations ) , i.e. the difference in inla and mcmc estimates scaled by the mcmc standard deviation . here ,the random effects variance and the fixed effects except are also more accurately estimated .the third lower panel shows the ratios ; here the correction improves the estimation of the variance for all parameters . for , and get very close to a ratio of one , and there are also major improvements for the fixed effects variances . finally , the bottom panel shows average coverage of 95% ( i.e. , ) credible intervals from inla over the mcmc samples for each simulated data set . 
clearly , coverage is improved considerably by the correction .table [ table2 ] reports summary statistics for the computation times in seconds over the 1,000 data sets .note that in this case computational times are abnormally high due to somewhat extreme parameter settings inla will usually be much faster .however , ratios of computing times for the corrected vs the uncorrected versions should stay approximately the same .appendix [ appsim ] contains additional simulation studies : results from simulations for model for different values of the covariance matrix of are shown in appendix [ sec : model - with - two ] .furthermore , in appendix [ sec : model - with - very ] we consider the effect of having extremely few observations per cluster , while we in appendix [ sec : simul - with - missp ] study a misspecified model , simulating from model while estimating model .the correction appears to work well for all the different cases considered in appendix [ appsim ] ..summary statistics for computation times in seconds for each data set [ cols="<,^,^,^,^,^,^",options="header " , ] we start by discussing the toenail data , which is a classical data set with a binary response and repeated measures .the data are from a clinical trial comparing two competing treatments for toenail infection ( dermatophyte onychomycosis ) .the 294 patients were randomized to receive either itraconazole or terbinafine , and the outcome ( either `` not severe '' infection , coded as , or `` severe '' infection , coded as ) was recorded at seven follow - up visits .not all patients attended all the follow - up visits , and the patients did not always appear at the scheduled time .the exact time of the visits ( in months since baseline ) was recorded . for individual ,visit , with outcome , treatment and time our model is then \text{logit } \p_{ij } & = & \alpha_0 + \alpha_{\text{trt } } \text{trt}_i + \alpha_{\text{time } } \text{time}_{ij } + \alpha_{tt } \text{trt}_i \text { time}_{ij } + b_i\\[-1pt ] b_i & \sim & n(0 , \sigma^2).\end{aligned}\]]notice that this is the same model as model , except that the time variable here varies over individuals .normal priors with mean zero and variance were used for , , , and .inla underestimates quite severely .the top panel of figure [ figure3 ] shows the different estimates of the posterior distribution of the log precision , .the histogram shows the results from a long mcmc run using jags , the black curve shows the posterior from inla without the correction , the red curve shows the simple ( non - skew ) version of the inla correction , while the green curve shows the inla correction accounting for skewness ( as discussed in appendix [ app ] ) .the bottom panel of figure [ figure3 ] shows the additive correction to the log posterior density , as a function of the hyperparameter ( log precision ) .we see that there is very little difference between the two corrections . for the toenail data , the estimated random effects standard deviation is approximately , which is very high .to investigate how the copula correction works as increases , we studied simulated data sets from the model above , where we set to different values , and where the parameters were fixed to the values from a long mcmc run using the real toenail data ( i.e. , we simulate only the outcome , keeping the covariates unchanged ) .results are shown in figure [ figure4 ] for different values of ranging from to .we clearly get very accurate corrected posteriors for . 
for , we gradually get a tendency of under - correction .( the value of is shown above each histogram ) .the histograms are from long mcmc runs , uncorrected inla are shown as black curves , while the red curves shows inla with the correction . since the goal here is to study the difference between mcmc and inla , we omit axes the relevant scale is given by the mcmc variances , which are evident from the histograms . ]we shall now study the case where the data are poisson distributed .we consider a simple simulated ( extreme ) example in order to investigate how well the correction works in the poisson case .for we generated iid where with .we chose and , a prior for the precision , and a prior for .figure [ figure5 ] shows the results for different values of the intercept .each histogram is based on ten parallell mcmc runs using jags , each with 200,000 iterations after a burn - in of 100,000 . here ,reduction of implies that estimation is more difficult , since negative with large absolute value will imply that the counts are very low , with many zeroes , and the data are uninformative .we see that uncorrected inla tends to get less accurate as moves towards more extreme values , while the correction seems to work well .( the value of is shown above each histogram ) .the histograms are from long mcmc runs , uncorrected inla posteriors are shown as black curves , while the red curves shows inla posteriors with the correction . ] until now , we have considered fairly simple latent structures , where the random effects have been iid ( multivariate ) normally distributed .the reader may perhaps wonder if the generality implied by having `` latent gaussian models '' in the title is really justified what if latent structures are more complicated , with for example temporal or spatial structure ?in fact , the complexity of the latent field is not particularly relevant for the accuracy problems we study here .this can be seen by considering the basic formulation of the latent gaussian model together with the main building blocks of the inla machinery : essentially , the latent structure is contained within the gaussian part , for which the computations are exact ( and fast , since the precision matrix of the gaussian part will usually be sparse ) . in a sense , the lgm approach separates the estimation in an `` easy '' ( gaussian ) part and a `` difficult '' part .it is perhaps somewhat counterintuitive at first sight that the dynamic / time - series / spatial model constitutes the `` easy '' part ! in this paper we have in fact considered the `` difficult '' part , aiming to choose examples at the boundary of what we considered to be feasible .thus , we argue that our general title is indeed justified . we illustrate this with a simple simulated example where the latent structure is auto - regressive of order one ( ar1 ) , using a similar setup as in the `` minimal '' example presented in the introduction . for ,let , , and where the are now given an ar1 model , as follows : , ( where ) for , where is the marginal precision .define and which is the parameterization used internally in inla .we use a gamma prior for , and priors for both and .data was simulated from model , using , and as the true values . 
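A simulation sketch may help clarify the AR(1) setup just described before turning to the results. The snippet below generates data from a logit model with a stationary AR(1) latent field; the linear-predictor form is taken to match the minimal example from the introduction, and all numerical values (sample size, intercept, slope, lag-one correlation, marginal precision) are illustrative placeholders rather than the values used in the paper.

```python
import numpy as np

def simulate_ar1_logit(n, mu, beta, phi, tau_marg, seed=0):
    """Simulate binary outcomes through a logit link, with a stationary AR(1)
    latent field nu of marginal precision tau_marg and lag-one correlation phi,
    plus a Gaussian covariate x.  All parameter values are placeholders."""
    rng = np.random.default_rng(seed)
    sigma_marg = 1.0 / np.sqrt(tau_marg)
    innov_sd = sigma_marg * np.sqrt(1.0 - phi**2)   # innovation sd giving a stationary marginal
    nu = np.empty(n)
    nu[0] = rng.normal(0.0, sigma_marg)
    for t in range(1, n):
        nu[t] = phi * nu[t - 1] + rng.normal(0.0, innov_sd)
    x = rng.normal(size=n)
    eta = mu + beta * x + nu
    p = 1.0 / (1.0 + np.exp(-eta))
    y = rng.binomial(1, p)
    return y, x, nu

y, x, nu = simulate_ar1_logit(n=500, mu=-1.0, beta=1.0, phi=0.85, tau_marg=1.0)
print(y.mean(), nu.std())
```

Data simulated this way can then be fitted both with R-INLA (using its ar1 latent model) and with an MCMC sampler, giving the kind of comparison shown in the figures.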
as in the example in the introduction , we made long mcmc runs and compared the results to inla both with and without the correction .the results are shown in figure [ figure6 ] .again , it is clear that the overall accuracy of inla is improved using the correction ., the middle panel shows results for , and the bottom panel show results for , the `` internal '' of inla .the histograms show posterior distributions from a long mcmc run ( ten chains of one million iterations each ) , the black curves show the posterior from inla , while the red curve shows the posteriors using our new correction to inla . ]the binary ( and , more generally , binomial ) glmms discussed in sections [ sec : simulation - study ] and [ sec : toenail ] are are important in many applications , particularly for biomedical data .poisson glmms are also of great interest , and among the difficult cases here are point processes such as log - gaussian cox processes , where data are typically extremely sparse : essentially there are ones at the observed points , and zeroes everywhere else .our example in section [ sec : poisson ] shows a stylized , extreme case of this type .studying the correction for the full log - gaussian cox process case could be a topic for future work .even though the point process case may be difficult , there will often be some degree of smoothing and/or replication making inference easier , so real data sets should be less extreme than the simulated example in section [ sec : poisson ] . from the results in this paper, it appears that the copula correction is robust and works well .there is no general theory guaranteeing that the method will always work under all circumstances , but we feel that the intuition underlying the method is quite strong .since inla for lgms is quite accurate in most cases , the correction is not needed in general , only for problematic cases such as those discussed in this paper . using the copula correction method, we can stretch the limits of applicability of inla , while maintaining its computational speed .let denote the `` standardized '' skew normal cdf corresponding to .we start by finding the jacobian of the transformation defined in equation .note first that , immediately from letting and denote the density functions corresponding to and , differentiating with respect to then gives so \right)}\ ] ] and the jacobian of the transformation is since for and for all .note that where .collecting the different terms and again substituting ,\ ] ] the transformed joint log density is therefore \phi^{-1 } \left[\tilde f_j \left(\frac{\tilde x_j-\tilde\mu_j(\theta ) } { \sigma_j(\theta)}\right)\right]\\ + \sum_{i=1}^n \log\tilde f_i\left(\frac{\tilde x_i-\tilde\mu_i(\theta ) } { \sigma_i(\theta)}\right ) - \sum_{i=1}^n \log\phi\left(\phi^{-1 } \left[\tilde f_i \left(\frac { \tilde x_i-\tilde\mu_i(\theta)}{\sigma_i(\theta)}\right)\right]\right ) + \text{constant}\end{gathered}\ ] ] the original ( uncorrected ) log posterior is evaluated at where .therefore , the version of the copula correction accounting for skewness amounts to adding a term to the original log joint posterior , where \ ! 
\phi^{-1 }\!\left[\tilde f_j \left(\frac{\mu_j(\theta)-\tilde\mu_j(\theta ) } { \sigma_j(\theta)}\right)\right]\\ + \sum_{i=1}^n \log\tilde f_i\left(\frac{\mu_i(\theta)-\tilde\mu _ i(\theta)}{\sigma_i(\theta)}\right ) - \sum_{i=1}^n \log\phi\left(\phi^{-1 } \left[\tilde f_i \left(\frac{\mu _ i(\theta)-\tilde\mu_i(\theta)}{\sigma_i(\theta)}\right)\right]\right).\end{gathered}\ ] ] calculations of and were done using the functions ` psn ` and ` dsn ` in the r package ` sn ` .here we study model ( 0.8 ) from page 11 of the supplementary material of , where the observations are iid binomial , with clusters , observations per cluster , for and otherwise , and sampling times .the model is where the are iid bivariate normally distributed with mean .following , the prior for the precision matrix of is a wishart distribution with three degrees of freedom and diagonal scale parameter with diagonal elements and .the fixed effects are given priors .as in , we shall consider the case when and are uncorrelated , but we shall here also consider the correlated case with correlation and , respectively .additionally , we consider two different settings of the marginal variances of and : 1 .var , var ( as in ) , 2 .var , var . for each of the two settings of the marginal variances above , we ran the simulation experiment for the three settings of ( , and correlation ) , giving six simulation settings in total . for each simulation setting , we made 200 simulated data sets , and ran two mcmc chains of 200,000 iterations each ( after discarding the first 100,000 iterations ) for each simulated data set .we have yet to specify the number of trials in the binomial distribution .it turns out that this model is nearly unidentifiable for , with very slow mcmc convergence and with numerical instability when running inla ( both with and without the correction ) .therefore , we will here consider , and show the results for .results ( not shown ) are similar also for .as expected , the estimation becomes more accurate as grows , and for large ( say , ) there is no need for the inla correction anymore .results are shown in tables [ table3][table8 ] below , where we use the parameterization used internally by inla , where , and ( note that , and are defined on the whole real line ) .it seems like the correction is working quite well , giving an overall improvement .the coverage probabilities are improved in all cases expect for the with , so we see an improvement for of the combinations of parameters and simulation settings .the variance ratio var(inla)/var(mcmc ) is also improved for nearly all the cases , while the other performance measures show an overall ( though not uniform ) improvement .the method does not seem to deteriorate for higher values of the marginal variances and correlation .@|llcccccccc|@ + & & & & & & & & & + & true values & 0.693 & 1.386 & 0.000 & -2.500 & 1.000 & -1.000 & -0.500 & + & uncorrected inla & 1.527 & 2.130 & 1.566 & -2.664 & 1.021 & -0.692 & -0.428 & + & corrected inla & 1.380 & 2.016 & 1.313 & -2.703 & 1.038 & -0.704 & -0.429 & + & mcmc & 1.449 & 2.022 & 1.707 & -2.694 & 1.032 & -0.707 & -0.421 & + + + & & & & & & & & & + & uncorrected inla & 0.113 & 0.167 & -0.124 & 0.125 & -0.086 & 0.042 & -0.038 & + & corrected inla & -0.086 & -0.028 & -0.329 & -0.028 & 0.045 & 0.007 & -0.047 & + + + & & & & & & & & & + & uncorrected inla & 0.882 & 0.986 & 0.927 & 0.905 & 0.907 & 0.937 & 0.948 & + & corrected inla & 0.943 & 0.974 & 0.996 & 0.994 & 0.982 & 0.977 & 0.987 & + + + & & & & & & & & & + & 
uncorrected inla & 92.4% & 93.3% & 92.8% & 93.7% & 93.7% & 94.2% & 94.3% & + & corrected inla & 92.9% & 93.7% & 90.0% & 93.7% & 93.8% & 94.6% & 94.7% & + @|llcccccccc|@ + & & & & & & & & & + & true values & 0.693 & 1.386 & 1.099 & -2.500 & 1.000 & -1.000 & -0.500 & + & uncorrected inla & 1.444 & 1.866 & 2.051 & -2.623 & 1.139 & -0.721 & -0.590 & + & corrected inla & 1.366 & 1.786 & 1.900 & -2.642 & 1.148 & -0.730 & -0.594 & + & mcmc & 1.363 & 1.790 & 2.176 & -2.649 & 1.148 & -0.731 & -0.582 & + + + & & & & & & & & & + & uncorrected inla & 0.126 & 0.134 & -0.118 & 0.111 & -0.073 & 0.027 & -0.041 & + & corrected inla & 0.013 & -0.031 & -0.252 & 0.035 & -0.005 & 0.002 & -0.066 & + + + & & & & & & & & & + & uncorrected inla & 0.891 & 0.998 & 0.930 & 0.904 & 0.912 & 0.934 & 0.953 & + & corrected inla & 0.948 & 1.001 & 1.043 & 0.942 & 0.953 & 0.952 & 0.979 & + + + & & & & & & & & & + & uncorrected inla & 92.8% & 94.1% & 93.1% & 93.8% & 93.9% & 94.2% & 94.4% & + & corrected inla & 93.7% & 94.8% & 92.8% & 93.9% & 94.1% & 94.4% & 94.6% & + @|llcccccccc|@ + & & & & & & & & & + & true values & 0.693 & 1.386 & 2.944 & -2.500 & 1.000 & -1.000 & -0.500 & + & uncorrected inla & 0.700 & 1.530 & 3.133 & -2.543 & 1.027 & -0.947 & -0.460 & + & corrected inla & 0.662 & 1.471 & 3.062 & -2.553 & 1.031 & -0.954 & -0.465 & + & mcmc & 0.605 & 1.466 & 3.224 & -2.569 & 1.037 & -0.959 & -0.448 & + + + & & & & & & & & & + & uncorrected inla & 0.195 & 0.130 & -0.107 & 0.110 & -0.080 & 0.034 & -0.059 & + & corrected inla & 0.113 & -0.007 & -0.183 & 0.066 & -0.049 & 0.013 & -0.087 & + + + & & & & & & & & & + & uncorrected inla & 0.958 & 1.023 & 0.943 & 0.905 & 0.921 & 0.920 & 0.950 & + & corrected inla & 0.989 & 1.050 & 1.087 & 0.925 & 0.951 & 0.932 & 0.973 & + + + & & & & & & & & & + & uncorrected inla & 93.4% & 94.7% & 93.7% & 93.9% & 94.1% & 94.0% & 94.3% & + & corrected inla & 94.4% & 95.4% & 94.6% & 94.1% & 94.4% & 94.2% & 94.5% & + @|llcccccccc|@ + & & & & & & & & & + & true values & -1.099 & 0.693 & 0.000 & -2.500 & 1.000 & -1.000 & -0.500 & + & uncorrected inla & -0.749 & 0.962 & -0.243 & -2.274 & 0.956 & -0.661 & -0.584 & + & corrected inla & -0.809 & 0.850 & -0.294 & -2.318 & 0.978 & -0.672 & -0.592 & + & mcmc & -0.844 & 0.867 & -0.250 & -2.306 & 0.971 & -0.652 & -0.593 & + + + & & & & & & & & & + & uncorrected inla & 0.335 & 0.317 & 0.017 & 0.101 & -0.106 & -0.021 & 0.049 & + & corrected inla & 0.126 & -0.058 & -0.106 & -0.039 & 0.047 & -0.050 & 0.003 & + + + & & & & & & & & & + & uncorrected inla & 0.953 & 0.986 & 0.984 & 0.902 & 0.908 & 0.886 & 0.903 &+ & corrected inla & 0.951 & 0.969 & 0.950 & 0.950 & 0.978 & 0.933 & 0.976 & + + + & & & & & & & & & + & uncorrected inla & 92.9% & 93.2% & 94.8% & 93.9% & 93.9% & 93.6% & 93.8% & + & corrected inla & 94.2% & 94.7% & 94.3% & 94.3% & 94.6% & 94.2% & 94.7% & + @|llcccccccc|@ + & & & & & & & & & + & true values & -1.099 & 0.693 & 1.099 & -2.500 & 1.000 & -1.000 & -0.500 & + & uncorrected inla & -0.714 & 1.289 & 1.778 & -2.151 & 1.039 & -1.013 & -0.559 & + & corrected inla & -0.816 & 1.106 & 1.348 & -2.229 & 1.080 & -1.041 & -0.570 & + & mcmc & -0.816 & 1.172 & 1.643 & -2.166 & 1.048 & -1.003 & -0.557 & + + + & & & & & & & & & + & uncorrected inla & 0.336 & 0.273 & 0.141 & 0.045 & -0.061 & -0.022 & -0.010 & + & corrected inla & 0.002 & -0.170 & -0.306 & -0.203 & 0.226 & -0.091 & -0.068 & + + + & & & & & & & & & + & uncorrected inla & 0.954 & 1.030 & 0.977 & 0.892 & 0.905 & 0.901 & 0.934 & + & corrected inla & 0.934 & 0.964 & 0.761 & 0.984 & 1.031 & 0.976 & 1.041 & 
+ + + & & & & & & & & & + & uncorrected inla &92.8% & 93.5% & 93.7% & 93.5% & 93.6% & 93.7% & 94.2% & + & corrected inla & 93.3% & 94.2% & 90.5% & 93.7% & 94.2% & 94.6% & 95.3% & + @|llcccccccc|@ + & & & & & & & & & + & true values & -1.099 & 0.693 & 2.944 & -2.500 & 1.000 & -1.000 & -0.500 & + & uncorrected inla & -1.087 & 1.025 & 4.483 & -2.672 & 1.015 & -0.876 & -0.590 & + & corrected inla & -1.160 & 0.940 & 4.466 & -2.699 & 1.018 & -0.902 & -0.606 & + & mcmc & -1.169 & 0.941 & 4.537 & -2.703 & 1.039 & -0.845 & -0.572 & + + + & & & & & & & & & + & uncorrected inla & 0.288 & 0.192 & -0.063 & 0.083 & -0.150 & -0.061 & -0.079 & + & corrected inla & 0.032 & -0.008 & -0.075 & 0.011 & -0.130 & -0.111 & -0.149 & + + + & & & & & & & & & + & uncorrected inla & 0.938 & 0.961 & 0.883 & 0.883 & 0.897 & 0.887 & 0.932 & + & corrected inla & 0.978 & 1.019 & 1.060 & 0.933 & 0.943 & 0.937 & 0.986 & + + + & & & & & & & & & + & uncorrected inla & 92.1% & 92.6% & 91.4% & 93.5% & 93.5% & 93.5% & 93.9% & + & corrected inla & 93.5% & 93.9% & 93.3% & 94.1% & 94.1% & 94.0% & 94.3% & + in the simulation study described in section [ sec : simulation - study ] we followed and used observations per cluster .as suggested by a reviewer , we here consider the effect of having an even smaller value of .we only show the results for the most extreme possible case , which is . using the close to non - informative prior settings of model in section [ sec : simulation - study ] ( priors for the and a gamma prior for ) , the case with is already quite difficult .using the settings described in section [ sec : simulation - study ] , the simulated data are relatively low - informative , making stable and reliable inference non - trivial . in order to study the even more extreme case of , more informative priorsare needed , otherwise both mcmc and inla will fail . 
for non - informative ( or very weakly informative ) priorsthe model is just too close to being singular .therefore , to study the case of , we use the following priors : for the , and also a prior for the log precision , .we used sampling times , and 200 simulated data sets .one million mcmc samples ( after a burn - in of 100,000 ) were used for each data set .the results are shown in table [ table9 ] .the correction seems to work well , giving improved estimates in nearly all cases .note in particular that the 95% coverage is uniformly improved , and that all the coverage values are between 93.7% and 96.2% when using the correction .@|llrrrrrrrr|@ + & & & & & & & & & + & true values & 1.000 & 1.000 & 0.000 & -2.500 & 1.000 &-1.000 & -0.500 & + & uncorrected inla & 0.667 & 0.762 & 0.689 & -1.990 & 0.837 & -1.178 & -0.423 & + & corrected inla & 1.104 & 0.933 & 0.358 & -2.032 & 0.860 & -1.219 & -0.442 & + & mcmc & 0.956 & 0.899 & 0.392 & -2.114 & 0.891 & -1.230 & -0.427 & + + + & & & & & & & & & + & uncorrected inla & -0.337 & -0.362 & 0.355 & 0.250 & -0.295 & 0.079 & 0.020 & + & corrected inla & 0.149 & 0.085 & -0.041 & 0.163 & -0.168 & 0.013 & -0.059 & + + + & & & & & & & & & + & uncorrected inla & 0.393 & 0.592 & 0.841 & 0.876 & 0.823 & 0.902 & 0.873 &+ & corrected inla & 2.030 & 1.360 & 1.127 & 0.920 & 0.896 & 0.933 & 0.907 & + + + & & & & & & & & & + & uncorrected inla & 89.2% & 89.2% & 89.5% & 93.5% & 92.6% & 93.7% & 93.3% & + & corrected inla & 96.2% & 95.9% & 96.0% & 94.2% & 93.8% & 94.1% & 93.7% & + as suggested by a reviewer , we here study the effect of the case of estimation from a misspecified model : we simulate data from the model and estimate using model ( with prior settings as in section [ sec : simulation - study ] ) . as before ,we use extremely long mcmc chains ( one million iterations after discarding 100,000 iterations ) as the `` gold standard '' .we simulated 200 data sets from each of the six configurations described in appendix [ sec : model - with - two ] .the results are shown in tables [ table10][table15 ] .again , the correction improves the results in nearly all cases , so it does not seem like using a misspecified model presents any particular problems for the inla correction. 
@|llrrrrrrrr|@ + & & & & & & & & & + & uncorrected inla & 0.424 & 0.536 & 1.778 & -2.482 & 1.068 & -0.833 & -0.561 & + & corrected inla & 0.591 & 0.640 & 1.440 & -2.541 & 1.093 & -0.847 & -0.579 & + & mcmc & 0.623 & 0.660 & 1.374 & -2.533 & 1.091 & -0.855 & -0.562 & + + + & & & & & & & & & + & uncorrected inla & -0.378 & -0.394 & 0.361 & 0.150 & -0.147 & 0.042 & 0.007 & + & corrected inla & -0.069 & -0.067 & 0.058 & -0.019 & 0.011 & 0.012 & -0.079 & + + + & & & & & & & & & + & uncorrected inla & 0.492 & 0.714 & 1.044 & 0.834 & 0.845 & 0.900 & 0.898 & + & corrected inla & 0.888 & 0.923 & 1.002 & 0.905 & 0.881 & 0.948 & 0.916 & + + + & & & & & & & & & + & uncorrected inla & 87.6% & 87.3% & 88.4% & 92.8% & 92.9% & 93.7% & 93.6% & + & corrected inla & 93.9% & 93.4% & 93.9% & 93.6% & 93.3% & 94.4% & 93.8% & + @|llrrrrrrrr|@ + & & & & & & & & & + & uncorrected inla & 0.983 & 0.890 & 0.585 & -2.667 & 0.973 & -1.060 & -0.220 & + & corrected inla & 1.255 & 1.018 & 0.282 & -2.737 & 0.996 & -1.088 & -0.230 & + & mcmc & 1.362 & 1.066 & 0.172 & -2.746 & 1.004 & -1.111 & -0.214 & + + + & & & & & & & & & + & uncorrected inla & -0.498 & -0.541 & 0.544 & 0.202 & -0.202 & 0.084 & -0.024 & + & corrected inla & -0.147 & -0.150 & 0.140 & 0.017 & -0.050 & 0.038 & -0.069 & + + + & & & & & & & & & + & uncorrected inla & 0.592 & 0.854 & 1.318 & 0.780 & 0.812 & 0.850 & 0.869 & + & corrected inla & 0.851 & 0.943 & 1.059 & 0.851 & 0.842 & 0.905 & 0.888 & + + + & & & & & & & & & + & uncorrected inla & 89.3% & 89.1% & 89.6% & 91.8% & 92.2% & 93.0% & 93.3% & + & corrected inla & 94.0% & 93.8% & 94.1% & 93.1% & 92.9% & 93.8% & 93.6% & + @|llrrrrrrrr|@ + & & & & & & & & & + & uncorrected inla & 1.071 & 0.937 & 0.451 & -2.621 & 1.097 & -1.243 & -0.478 & + & corrected inla & 1.447 & 1.104 & 0.073 & -2.713 & 1.133 & -1.287 & -0.501 & + & mcmc & 1.471 & 1.118 & 0.037 & -2.698 & 1.132 & -1.295 & -0.478 & + + + & & & & & & & & & + & uncorrected inla & -0.496 & -0.538 & 0.542 & 0.192 & -0.208 & 0.081 & 0.004 & + & corrected inla & -0.039 & -0.042 & 0.041 & -0.046 & 0.015 & 0.008 & -0.099 & + + + & & & & & & & & & + & uncorrected inla & 0.596 & 0.854 & 1.308 & 0.775 & 0.783 & 0.842 & 0.855 & + & corrected inla & 0.959 & 0.989 & 1.027 & 0.871 & 0.830 & 0.918 & 0.881 & + + + & & & & & & & & & + & uncorrected inla & 89.5% & 89.4% & 89.8% & 91.7% & 91.6% & 92.9% & 93.0% & + & corrected inla & 95.0% & 94.8% & 95.0% & 93.2% & 92.6% & 94.0% & 93.4% & + @|llrrrrrrrr|@ + & & & & & & & & & + & uncorrected inla & 2.644 & 1.587 & -0.873 & -2.461 & 0.960 & -0.626 & -0.486 & + & corrected inla & 2.938 & 1.671 & -0.976 & -2.508 & 0.977 & -0.637 & -0.497 & + & mcmc & 3.161 & 1.736 & -1.053 & -2.546 & 0.998 & -0.623 & -0.505 & + + + & & & & & & & & & + & uncorrected inla & -0.471 & -0.500 & 0.523 & 0.215 & -0.271 & -0.005 & 0.103 & + & corrected inla & -0.211 & -0.219 & 0.225 & 0.096 & -0.153 & -0.026 & 0.043 & + + + & & & & & & & & & + & uncorrected inla & 0.683 & 0.828 & 1.021 & 0.788 & 0.786 & 0.826 & 0.838 & + & corrected inla & 0.848 & 0.921 & 1.018 & 0.843 & 0.811 & 0.880 & 0.858 & + + + & & & & & & & & & + & uncorrected inla & 91.1% & 91.1% & 91.4% & 91.8% & 91.5% & 92.6% & 92.6% & + & corrected inla & 94.2% & 94.2% & 94.4% & 93.0% & 92.4% & 93.4% & 93.1% & + @|llrrrrrrrr|@ + & & & & & & & & & + & uncorrected inla & 3.246 & 1.749 & -1.056 & -3.025 & 0.936 & -0.642 & -0.182 & + & corrected inla & 3.821 & 1.888 & -1.203 & -3.117 & 0.957 & -0.676 & -0.186 & + & mcmc & 3.990 & 1.938 & -1.261 & -3.144 & 0.978 & -0.663 & -0.183 & + + + & & & & & 
& & & & + & uncorrected inla & -0.506 & -0.541 & 0.570 & 0.251 & -0.277 & 0.032 & 0.009 & + & corrected inla & -0.144 & -0.154 & 0.161 & 0.063 & -0.139 & -0.017 & -0.012 & + + + & & & & & & & & & + & uncorrected inla & 0.643 & 0.802 & 1.020 & 0.761 & 0.793 & 0.806 & 0.830 & + & corrected inla & 0.922 & 0.973 & 1.050 & 0.846 & 0.826 & 0.885 & 0.860 & + + + & & & & & & & & & + & uncorrected inla & 90.4% & 90.4% & 90.7% & 91.2% & 91.6% & 92.3% & 92.6% & + & corrected inla & 94.8% & 94.8% & 95.0% & 93.0% & 92.7% & 93.5% & 93.1% & + @|llrrrrrrrr|@ + & & & & & & & & & + & uncorrected inla & 3.622 & 1.854 & -1.180 & -3.170 & 1.160 &-0.564 & -0.213 & + & corrected inla & 4.118 & 1.973 & -1.302 & -3.253 & 1.185 & -0.582 & -0.220 & + & mcmc & 4.468 & 2.058 & -1.389 & -3.312 & 1.218 & -0.576 & -0.222 & + + + & & & & & & & & & + & uncorrected inla & -0.526 & -0.564 & 0.595 & 0.276 & -0.326 & 0.017 & 0.037 & + & corrected inla & -0.229 & -0.240 & 0.248 & 0.117 & -0.186 & -0.008 & 0.007 & + + + & & & & & & & & & + & uncorrected inla & 0.641 & 0.802 & 1.021 & 0.751 & 0.755 & 0.797 & 0.804 & + & corrected inla & 0.844 & 0.925 & 1.033 & 0.817 & 0.788 & 0.860 & 0.832 & + + + & & & & & & & & & + & uncorrected inla & 90.1% & 90.1% & 90.4% & 90.9% & 90.6% & 92.1% & 92.1% & + & corrected inla & 94.3% & 94.2% & 94.5% & 92.6% & 91.9% & 93.1% & 92.7% & +we thank youyi fong for providing r code relating to the simulation study described in section [ sec : simulation - study ] .we are also very grateful to leonard held and rafael sauter for providing us with a copy of their unpublished paper along with r code relevant for the analysis of toenail data described in section [ sec : toenail ] .we thank janine illian , geir - arne fuglstad , dan simpson and two anonymous reviewers for helpful comments that have led to an improved presentation .this research was supported by the norwegian research council .
we introduce a new copula-based correction for generalized linear mixed models ( glmms ) within the integrated nested laplace approximation ( inla ) approach for approximate bayesian inference for latent gaussian models . while inla is usually very accurate , some ( rather extreme ) cases of glmms with e.g. binomial or poisson data have been seen to be problematic . inaccuracies can occur when there is a very low degree of smoothing or `` borrowing strength '' within the model , and we have therefore developed a correction aiming to push the boundaries of the applicability of inla . our new correction has been implemented as part of the r-inla package , and adds only negligible computational cost . empirical evaluations on both real and simulated data indicate that the method works well .
the understanding of the macroscopic dynamics from the underlying microscopic time evolution is a central issue of non equilibrium statistical mechanics .the massive use of computer simulations over the last years has led to new approaches to this very old problem . among others , we mention legendre integrators , the connfessit method , adaptive mesh refinement and multiscale modeling .the last two methods do not require knowledge of the macroscopic equations .on the other hand , there has been much effort to derive approximate macroscopic equations from the microscopic dynamics , which yield reliable results under various circumstances ( see e.g. ref . for an overview of constitutive equations for polymer liquids ) . here ,we follow the approach proposed in ref . to combine microscopic and macroscopic simulations in a combined integration scheme which recognizes the onset and breakdown of the chosen macroscopic description during the simulation .note , that the breakdown of a chosen macroscopic description does not imply a similar breakdown of other , improved macroscopic descriptions . instead of improving the macroscopic equations , which is the aim of many works on closure approximations ( see e.g. and references therein ) , we here keep the chosen macroscopic description and use it as long and as frequently in the simulation as possible . while this integration scheme was used in ref . to detect the onset of the macroscopic description , we here present the full scheme that switches back and forth between microscopic and macroscopic simulations . we apply this scheme to well known models of ferrofluid dynamics where it decides between direct brownian dynamics simulations and integration of the constitutive equation .in order to keep the paper self contained , we briefly summarize the main ideas of the combined integration scheme based on the invariance principle proposed in refs .we assume a given microscopic description of the system , where the microscopic variables are denoted by .the microscopic dynamics is specified by the vector field , in addition , we assume that the set of macroscopic variables has been chosen . typically , is the distribution function over the set of microscopic coordinates and contains low order moments of . in this case , the macroscopic variables are linear functionals of the microscopic distribution function , . although the method can be applied to more general situations , we here limit ourselves to this case for the sake of clarity .the reduced or macroscopic description assumes not only closed form macroscopic equations , but also a family of canonical distribution functions .the canonical distribution functions satisfy the consistency relation .then , the macroscopic dynamics is given by different routes to the construction of have been proposed . in many applications , the dynamic system eq .( [ kinetic ] ) is equipped with a lyapunov function ( the entropy , free energy , etc . ) , and the canonical distribution functions are conditional maximizers of subject to fixed . in order to estimate the accuracy of the macroscopic description we define the defect of invariance as the difference of the microscopic and macroscopic time derivative , by construction , .if the defect of invariance vanishes for all admissible values of , then the reduced description is called invariant and the family represents the invariant manifold in the space of the microscopic variables . 
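To make the defect of invariance concrete, the following sketch evaluates it numerically for a generic pair of descriptions. It is an illustration rather than code from the paper: `J`, `F` and `m` are placeholders for the microscopic vector field, the canonical closure and the (linear) projection onto the macroscopic variables, and the Jacobian of the closure is replaced by a finite-difference directional derivative.

```python
import numpy as np

def defect_of_invariance(J, F, m, lam, eps=1e-6):
    """Defect Delta(lam) = J(F(lam)) - D_lam F(lam) . m(J(F(lam))).

    J   : callable, microscopic vector field   f   -> df/dt
    F   : callable, canonical closure          lam -> f   (quasi-equilibrium map)
    m   : callable, macroscopic projection     f   -> lam (linear functionals of f)
    lam : array of macroscopic variables
    """
    lam = np.asarray(lam, dtype=float)
    f = F(lam)
    micro_rate = J(f)               # df/dt evaluated on the manifold
    macro_rate = m(micro_rate)      # induced dlam/dt of the reduced description
    # directional derivative of F along macro_rate (central differences)
    dF = (F(lam + eps * macro_rate) - F(lam - eps * macro_rate)) / (2.0 * eps)
    return micro_rate - dF          # vanishes identically on an invariant manifold
```

Monitoring a norm of this quantity against a fixed threshold is exactly the onset/breakdown criterion used by the combined integration scheme.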
the invariant manifold is relevant if it is stable .exact invariant manifolds are known only in very few cases .corrections to the manifold through minimizing is part of the so called method of invariant manifolds . here, we exploit the invariance principle in a different way .let denote the microscopic variables at time for given initial conditions at time .the values of the macroscopic variables at time are given by .on the other hand , the solution of the macroscopic equations ( [ dtmacro ] ) with corresponding initial conditions gives .we denote with the value of the defect of invariance ( [ defect ] ) with respect to some norm and a fixed threshold value .if at time the defect of invariance satisfies it is said that the macroscopic description _ sets on _ , since the reduced description is sufficiently accurate . however , if the macroscopic description _ breaks down _ since the accuracy of the macroscopic dynamics is insufficienttherefore , the evaluation of the defect of invariance ( [ defect ] ) on the current solution either to the macroscopic or to the microscopic dynamics and checking eqs .( [ onset ] ) and ( [ breakdown ] ) we can decide whether integration of the macroscopic dynamics is sufficiently accurate or not .this information is used in the combined integration scheme to switch between microscopic and macroscopic simulations .the scheme is sketched in fig .[ fig_hybridscheme ] .suppose at time the microscopic dynamics is integrated for given initial condition .the integration is continued until at time the inequality ( [ onset ] ) is satisfied . at this point ,the macroscopic dynamics is started with the actual values of the macroscopic variables , .the macroscopic dynamics is integrated until the macroscopic description breaks down at a later time , which is signaled by . at this time it is necessary to switch back from the macroscopic to the microscopic simulations in order to achieve the required accuracy .the initial condition for the microscopic simulation at time is obtained from the macroscopic description , .then , the microscopic dynamics is integrated until the macroscopic description sets on etc . in the sequel, we demonstrate this scheme for the case of ferrofluid dynamics .ferrofluids are stable suspensions of nano sized ferromagnetic colloidal particles in a suitable carrier liquid .these fluids attract considerable interest due to their peculiar behavior , such as the magnetoviscous effect , the dependence of the viscosity coefficients on the magnetic field .we here consider the kinetic model of ferrofluid dynamics proposed in refs . . in this model ,the ferromagnetic particles are assumed to be identical , magnetically hard ferromagnetic monodomain particles .it is further assumed that the particles are of an ellipsoidal shape with axes ratio and that the magnetic moment is oriented parallel to the symmetry axes of the particle .let denote the orientational distribution function to find a ferromagnetic particle with the orientation , where is a vector on the three - dimensional unit sphere . in the general notation of sec .[ hybridscheme ] , the microscopic coordinates are the orientations and is the orientational distribution function . the normalized macroscopic magnetization is given by , where denotes integration over the three dimensional unit sphere .the normalization is performed with the saturation magnetization , where denotes the number density . 
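Before specializing to the ferrofluid model, the sketch below spells out the switching logic summarized above in code. It is a minimal illustration, assuming explicit Euler integrators on both levels; the threshold, the step size and the lifting map `F` (the canonical distribution for given macroscopic variables) are placeholders, and `defect_of_invariance` is the helper from the previous sketch.

```python
import numpy as np

def hybrid_integrate(J, G, F, m, f0, t_end, dt, threshold):
    """Alternate between the microscopic dynamics df/dt = J(f) and the closed
    macroscopic dynamics dlam/dt = G(lam), switching whenever the norm of the
    defect of invariance crosses the given threshold."""
    t, f, on_micro = 0.0, np.asarray(f0, dtype=float), True
    lam = m(f)
    trajectory = []
    while t < t_end:
        if on_micro:
            f = f + dt * J(f)                      # microscopic Euler step
            lam = m(f)
            if np.linalg.norm(defect_of_invariance(J, F, m, lam)) <= threshold:
                on_micro = False                   # macroscopic description sets on
        else:
            lam = lam + dt * G(lam)                # macroscopic Euler step
            if np.linalg.norm(defect_of_invariance(J, F, m, lam)) > threshold:
                f, on_micro = F(lam), True         # breakdown: lift back to micro
        t += dt
        trajectory.append((t, np.array(lam)))
    return trajectory
```

In the ferrofluid application that follows, the microscopic level corresponds to a Brownian dynamics simulation of the orientational distribution and the macroscopic level to the magnetization equation obtained from the effective field approximation.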
in the presence of a local magnetic field and a velocity field ,the dynamics is given by in eq .( [ ffkinetic ] ) we have introduced the rotational operator , the vorticity , the symmetric velocity gradient , and the so called shape factor .the rotational diffusion coefficient defines the rotational relaxation time .the dimensionless magnetic field is defined by , where and denote boltzmann s constant and temperature , respectively .the equilibrium distribution }\ ] ] is the stationary solution to the kinetic equation ( [ ffkinetic ] ) in the absence of flow . from eq .( [ f_eq ] ) , the equilibrium magnetization is found to be given by , where is the langevin parameter and denotes the langevin function .except for special cases , exact solutions to the kinetic equation ( [ ffkinetic ] ) are unknown and closed form equations for the magnetization can not be derived exactly from eq .( [ ffkinetic ] ) . in order to solve the closure problem , the authors of ref . have suggested to use the family of equilibrium distributions ( [ f_eq ] ) , where the magnetic field is replaced by an effective field .thus , the non equilibrium magnetization is given by , where we have introduced the norm of the effective field and .this so called effective field approximation ( efa ) is a particular instance of the quasi - equilibrium or maximum entropy approximation .it is derived from extremizing the entropy functional =-\int\!d^2u\,f({\mathbf{u}})\ln[f({\mathbf{u}})/f_{{\mathbf{h}}}({\mathbf{u}})]u ] , where denotes the total number of integration time steps . for a better comparison , all data shown in fig .[ fig_error_oscilshear ] are obtained with the same pc with a p4 processor . from fig .[ fig_error_oscilshear ] we observe that the relative error decreases with decreasing while the time the microscopic simulation is integrated in the combined scheme increases and thus the required cpu time increases .overall , we observe that the relative error decreases almost linearly with elapsed cpu time .note , that does not correspond to the exact result but to the bd simulation .valuable discussions with a. n. gorban , h.c .ttinger , and s. hess are gratefully acknowledged .this work was supported in part by dfg priority program spp 1104 colloidal magnetic fluids under grant no . he1100/6 - 2 .the coefficients in eq . ( [ uudefect ] ) contain contributions from brownian motion , the magnetic field and the symmetric velocity gradient .in particular , -l_3(\xi)\right){\mathbf{n}}\cdot{\mathbf{h}}\nonumber\\ & & { } + \frac{b}{\xi}\left ( 14l_3(\xi)-4\frac{l_2 ^ 2(\xi)}{l_1(\xi ) } + 9\frac{l_2(\xi)}{l_1(\xi)}\right){\mathbf{d}}\colon{\mathbf{n}}{\mathbf{n}}\end{aligned}\ ] ] where \ ] ] and the total derivative of the langevin function can be expressed by 00 p. ilg , i. v. karlin , h. c. ttinger , canonical distribution functions in polymer dynamics : i. dilute solutions of flexible polymers , physica a 315 ( 2002 ) 367 - 385 .p. ilg , m. krger , i. v. karlin , h. c. ttinger , canonical distribution functions in polymer dynamics : ii . liquid - crystalline polymers ,physica a 319c ( 2003 ) 134 - 150 .a. n. gorban , p. a. gorban , i. v. karlin , legendre integrators , post - processing and quasiequilibrium , j. non - newton .fluid mech ., this issue .m. laso and h. c. ttinger , calculation of viscoelastic flow using molecular models - the connfessit approach , j. non - newton .fluid mech .47 ( 1993 ) 1 - 20 .r. m. jendrejack , j. j. de pablo , m. d. 
graham , a method for multiscale simulation of flowing complex fluids , j. non - newton .fluid mech . 108 ( 2002 ) 123 - 142 . c. i. siettos , m. d. graham , i. g. kevrekidis , coarse brownian dynamics for nematic liquid crystals : bifurcation , projective integration , and control via stochastic simulation , j. chem .phys . 118 ( 2003 ) 10149 - 10156 .r. b. bird , j. m. wiest , constitutive equations for polymeric liquids , ann .fluid mech . 27 ( 1995 ) 169 - 193 .a. n. gorban , i. v. karlin , p. ilg , h. c. ttinger , corrections and enhancements of quasi equilibrium states , j. non - newton .fluid mech .96 ( 2001 ) 203 - 219 . i. v. karlin , p. ilg , h. c. ttinger , invariance principle to decide between micro and macro computations in _ recent developments in mathematical and experimental physics , volume c : hydrodynamics and dynamical systems_. p. 43 - 50 , f. uribe ( ed . ) , kluwer , dordrecht , 2002 .a. n. gorban and i. v. karlin , thermodynamic parameterization , physica a 190 ( 1992 ) 393 - 404 .a. n. gorban , i. v. karlin , a. y. zinovyev , constructive methods of invariant manifolds for kinetic problems , preprint 2003 , available at http://www.ihes.fr/preprints/m03/resu/resu-m03-50.html .e. blums , a. cebers , m. m. maiorov , _ magnetic fluids_. de gruyter , berlin , 1997 . m. krger , p. ilg , s. hess , magnetoviscous model fluids , j. phys .condens .matter 15 ( 2003 ) s1401-s1423 .m. a. martsenyuk , viscosity of a suspension of ellipsoidal ferromagnetic particles in a magnetic field , j. appl .mech . tech .phys . 14 ( 1973 ) 564 - 566 .a. cebers , simulation of the magnetic rheology of a dilute suspension of ellipsoidal particles in a numerical experiment , magnetohydrodynamics 20 ( 1984 ) 349 - 354 .s. hess , fokker - planck - equation approach to flow alignment in liquid crystals z. naturforsch .31a ( 1976 ) 1034 .m. a. martsenyuk , yu . l. raikher , m. i. shliomis , on the kinetics of magnetization of suspension of ferromagnetic particles , sov . phys. jetp 38 ( 1974 ) 413 - 416 .p. ilg , m. krger , s. hess , orientational order parameters and magnetoviscosity of dilute ferrofluids , j. chem .( 2001 ) 9078 - 9088 .p. ilg and m. krger , magnetization dynamics , rheology , and an effective description of ferromagnetic units in dilute suspension , phys .e 66 ( 2002 ) 021501 .p. ilg , m. krger , s. hess , a. yu . zubarev , dynamics of colloidal suspensions of ferromagnetic particles in plane couette flow : comparison of approximate solutions with brownian dynamics simulations , phys . rev .e 67 ( 2003 ) 061401 .h. c. ttinger , _stochastic processes in polymeric fluids_. springer , berlin , 1996 .w. h. press , s. a. teukolsky , w. t. vellering , b. p. flannery , _ numerical recipes in fortran_. cambidge university press , 2nd ed .1 * sketch of the combined integration scheme .the macroscopic dynamics is integrated whenever the norm of the defect of invariance is smaller than some fixed threshold value . otherwise , the microscopic dynamics is integrated . *2 * magnetization dynamics as a function of reduced time in the absence of velocity gradients .a constant magnetic field was applied during the time interval , while the magnetic field was switched off outside this interval .circles and squares are the results of the bd simulation for and , respectively , while solid and dashed line are the corresponding predictions of the efa .the inset shows the comparison for on a finer scale . 
*3 * deviation of normalized magnetization calculated from bd simulation and the efa for ( circles ) and ( squares ) for the same conditions as in fig .[ fig_steph_bdvsefa ] .solid and dashed lines are the defect of invariance as calculated from the matrix norm of eq .( [ uudefect ] ) for and , respectively . for better visibility ,the matrix norm was multiplied by a factor . *4 * magnetization dynamics as a function of reduced time for the same condition as in fig .[ fig_steph_bdvsefa ] .circles and squares are the result of the bd simulation for and , respectively , while solid and dashed lines are the results of the combined integration scheme with for and , respectively . within the boxed regions ( indicated by the shading in the upper part ) , and the bd simulation is performed , otherwise the efa is integrated .the inset shows the comparison for on a finer scale , where the dashed dotted line is the result of the efa . *5 * magnetization dynamics as a function of reduced time in a constant magnetic field for and steady shear flow with shear rate for , where the magnetic field is oriented in the gradient direction .no magnetic field and no shear flow is applied outside the mentioned time intervals .circles and squares represent the results of the bd simulation for and , respectively , while solid and dashed lines are the corresponding result of the efa . * fig .6 * deviation of normalized magnetization ( circles ) and ( squares ) calculated from bd simulation and the efa for the same conditions as in fig .[ fig_stephstepshear_bdvsefa ] .the solid line is the defect of invariance as calculated from the matrix norm of eq .( [ uudefect ] ) . for better visibility ,the matrix norm was multiplied by a factor . ** magnetization dynamics as a function of reduced time for the same conditions as in fig .[ fig_stephstepshear_bdvsefa ] .circles and squares represent the result of full bd simulation , the dashed - dotted line corresponds to the efa and full lines are the result of the combined integration scheme , where the efa is integrated within the boxed regions while otherwise bd simulations are performed . * fig .8 * magnetization dynamics as a function of reduced time for inception of oscillatory shear flow with frequency and amplitude .the magnetic field is oriented in flow direction with .circles and squares represent the results of the bd simulation for and , respectively , while solid and dashed lines are the corresponding result of the efa . * fig .9 * deviation of normalized magnetization ( circles ) and ( squares ) calculated by bd simulation and from efa as functions of time .also shown is the norm of defect of invariance ( solid line ) , which is multiplied by a factor for better visibility .the same flow conditions as in fig .[ fig_oscillshear_bdvsefa ] are considered . *10 * magnetization dynamics as a function of reduced time for the same conditions as in fig .[ fig_oscillshear_bdvsefa ] .symbols are the result of the bd simulation .solid and dashed lines correspond to the combined integration , where the efa is integrated within the boxed regions ( indicated by the shading in the upper part ) and the bd simulation is preformed outside . 
* fig . 11 * magnetization dynamics as a function of reduced time for the same conditions as in fig . [ fig_oscillshear_bdvsefa ] , but where the shear flow was stopped at time . symbols are the result of the bd simulation . solid and dashed lines correspond to the combined integration , where the efa is integrated within the boxed regions ( indicated by the shading in the upper part ) and the bd simulation is performed outside . dashed - dotted lines are the result of the efa . * fig . 12 * relative error defined in the text as a function of cpu time in seconds on a logarithmic scale . the same conditions as in fig . [ fig_oscillshear_bdvsefa ] are considered . the numbers above the filled symbols are the corresponding values of . solid lines are guides to the eye . different values of decreasing from ( efa ) to ( bd simulation ) have been chosen in the combined integration scheme in order to obtain increasingly more accurate results for . [ diagram for fig . 1 : two horizontal time lines labelled `` macro '' and `` micro '' with vertical arrows marking the switching times between the two levels of description . ]
a method for the combination of microscopic and macroscopic simulations is developed which is based on the invariance of the macroscopic relative to the microscopic dynamics . the method recognizes the onset and breakdown of the macroscopic description during the integration . we apply this method to the case of ferrofluid dynamics , where it switches between direct brownian dynamics simulations and integration of the constitutive equation . keywords : multiscale simulation , reduced description , constitutive equation , kinetic theory , magnetic liquids . pacs : 05.10.-a computational methods in statistical physics and nonlinear dynamics ; 83.10.gr constitutive relations ; 05.20.dd kinetic theory ; 75.50.mm magnetic liquids .
broadcast scenarios have been widely studied for video or audio broadcasting .more recently , the multimedia broadcast / multicast service ( mbms ) became a requirement of the long - term evolution ( lte ) specifications to support the delivery of broadcast / multicast data in lte systems .broadcast and multicast downlink transmissions make no significant difference at the physical layer. basically , broadcast services are available to all users without the need of subscribing to a particular service . therefore ,multicasting can thus be seen as `` broadcast via subscription '' , with the possibility of charging for the subscription .mbms is intended to be used for some content , such as streaming transmission of a sport or cultural event , but broadcasting may also be of interest to transmit some signalling such as a beacon for time synchronization or for power control purposes .we consider broadcasting under a green - aware objective aiming at reducing the energy consumption which is an important issue in wireless environments .broadcasting may bring a strong improvement in wireless channels since a common resource ( in frequency and/or time ) may be used for all destinations .the transmission cost for a base station ( bs ) to reach all nodes in a multicast group is assumed to be proportional to the power needed to reach the worst mobile among the group , where the worst refers to the mobile receiving the weaker signal which relies on its distance and on additional shadowing effects .we thus consider the situation where there is one common information that every mobile is interested to receive , and which can be obtained from any one of bss .the objective is then to achieve a mobiles assignment which _ minimizes the total power consumption_. as mentioned above , this problem is relevant not only for streaming data transmission but also for signalling .a corollary question is indeed how many base stations should be kept active to ensure the signalling to a group of mobiles even when they are not transmitting . in low activity periods , such as during the night , it could be relevant to keep active only few bs ensuring the coverage of few active mobiles . indeed , energy consumption and electromagnetic pollution are main societal and economical challenges that developed countries have to handle .the evolution of cellular networks toward smaller cells offering theoretically higher capacity could in turn lead to an unacceptable increase of the energy expenditure of wireless systems .when decreasing the cell size , the energy consumed for data transmission becomes lower compared to the _ operational power costs _( e.g. power amplifiers , cooler , etc . ) of a typical bs . switching off a bsmay then bring significant improvements in energy efficiency .therefore , we take into account the switching on / off operation in the problem formulation . the overarching problem studied in the sequelis then finding _ energy - efficient broadcast transmission techniques _ to reduce spurious energy using distributed schemes .the mobile assignment problem ( map ) in the context of broadcast transmission that we study in this work is actually a special case of the _ simple plant location problem _ ( splp ) .splp lies within _ clustering problems_. in the map , basically , the objective is to assign the points to at most clusters so that the sum of all distances between points in the same cluster ( -clustering ) is minimized . 
in , the typical cost for a bs - mobile pairis assumed to be only a function of distance between the bs and the mobile , formulated as . here, is a_ cluster _ of mobiles assigned to bs , is the set of clusters , is the distance between mobile and bs , is the path loss exponent and is the operational power cost loaded to bs .this formulation is modified in order to consider the effect of shadowing leading to the following total cost where denotes the shadowing effect between mobile and bs .thus , this modification turns the map into the splp .while finding the global minimum of the map may be identified as an np - hard problem from splp literature , the large scale nature of the cellular network further requires to solve it in a decentralized manner .thus , game theory appears as a natural tool to cope with both features : distributed decision and np - hardness .we address this problem by considering the mobiles as players being able to make strategic decisions and the bss as the strategy identifiers : each mobile has to choose the best bs to be served. computational geometric approaches to the map can be found in . in , the authors examined the 1-dimensional version of the map , where the effects of shadowing and operational power cost are not taken into account .polynomial time solutions via dynamic programming are proposed . in , authors suggested approximation algorithms ( and an algebraic intractability result ) for selecting an optimal line on which to place bss to cover mobiles , and a proof of np - hardness for any path loss exponent . the papers focused on source - initiated broadcasting of data in static all - wireless networks .data are distributed from a source node to each node in a network .the main objective is to construct a minimum - energy broadcast tree rooted at the source node .multi - hop routing is not the scope of our paper . in , the combined problem of ( i ) deciding what subset of the mobileswould be assigned to each bs , and then ( ii ) sharing the bss cost of multicast among the mobiles is studied .the subset that is wished to assign to a given bs is said to be its target set of mobiles .this problem can be conceived as a coalitional pricing game played by mobiles which is called _ the association game of mobiles_. we propose algorithmic solutions for mobile assignment in the context of broadcasting , in order to minimize the overall energy consumption related to transmission and operational powers . in this context , switching off some fraction of bss is considered to be a way of decreasing dramatically the total energy consumption .note however that heterogeneous networks include macro and small - cells with or without coordination .it is reasonable to assume that small - cells are subject to switching off operation while macro - cells are always turned on .they can indeed serve moving mobiles in order to decrease the number of hand - offs . further , since the small - cells are deployed intensively , their transmission power is lower than those of macro - cells , while their circuit power dominates . 
comparing the transmission power about few milliwatts with the operational power costs which may approach tens of watts , turning off a fraction of bss is appealing for reducing the total energy footprint of the network .the efforts for turning off some bss will be concentrated on small - cells serving fixed mobiles .the referred literature mostly concentrates on the geometric aspects of the map where basically , the coverage area of a bs is assumed to be a disc which issues from omnidirectional antenna pattern .however , _the effect of shadowing _ , special designed _ antenna patterns _ as well as _ the operational power costs_ may impact the bs - mobile assignments . in this paper , we take into account these effects by introducing a _ power cost matrix _ containing all bs - mobile pairing power costs .furthermore , several papers working on coverage optimization deal with static optimization and planning from a centralized point of view .but the dynamic switching - off process associated to the large - scale nature of the network induces to find distributed solutions .to this end , we deal with this problem through a group formation game formulation .subsequently , we introduce a new algorithm based on group formation games , called _ hedonic decision algorithm_. this formalism is constructive : a new class of group formation games is introduced where the utility of players within a group is separable and symmetric .this is a generalization of parity affiliation games and this hedonic decision algorithm is in fact applicable for any set covering problem . to prove the efficiency of this approach ,we then derive four other methods allowing to solve the initial problem .first , we propose a recursive algorithm called _ the hold minimum algorithm _ which solves the considered problem optimally .however , the hold minimum algorithm operates in a centralized way since it requires the whole knowledge for each bs - mobile pairing power cost .we then adapt an approach from the splp literature to the map : a centralized polynomial - time heuristic algorithm is proposed called the _ the column control _ which produces optimal assignments when taking into account the operational power cost .this algorithm is also extended to a distributed approach , where each mobile gathers the local information from the bss located in its range .on the other hand , _ the nearest base station algorithm _ , a distributed greedy algorithm which runs in polynomial - time is also evaluated .this algorithm is not efficient if the operational power cost is large , but is very efficient for the fast - moving users served by macro bss .the rest of the paper is organized as follows . in section [ sec : genericproblem ] , the map is formulated mathematically as a clustering problem and different formulations are then proposed . in section [ sec : decentralized ] , the game framework and the hedonic decision algorithm are proposed . in section [ sec : efficientalgorithms ] , we derive other algorithmic solutions for the map and their complexity is anlayzed in section [ sec : timecomplexity ] . 
finally , we present simulation results in section [ sec : simulationresults ] and we expose some conclusions in section [ sec : conclusions ] .we consider the coverage problem in the case of broadcast transmission in cellular networks .we assume that each bs transmits simultaneously to the mobiles .the distance between the mobile and bs is represented by .the power needed to receive the transmission is given by .we consider basic signal propagation model capturing path loss as well as shadowing effect formulated as where and denote transmitted power from bs to mobile and path loss exponent , respectively .the random variable is used to model slow fading effects and commonly follows a log - normal distribution .the required transmission power depends on the mobile having the worst signal level from the bs ( figure [ fig : broadcasttransmission ] ) . at this power level , all mobiles are guaranteed to receive a sufficient power .we also consider the operational power cost denoted as which captures the energy expenditure of a typical bs for operational costs ( power amplifiers , cooler , etc . ) .so , the total power cost ( transmission power + operational power cost ) of a typical transmission between bs and mobile is denoted as let and be the sets of mobiles and bss , respectively .representing the _ power cost matrix _ , we assume where if , then ( denotes a maximal power , for instance , in wifi , it is mw ) . clustering is a rich branch of combinatorial problems which have been extensively studied in many fields including database systems , image processing , data mining , molecular biology , etc . . consider the set of mobiles ; a _cluster _ is any non - empty subset of and a _ clustering _ is a partition of .many different clustering problems can be defined .the mostly studied problems are defined through their objective which is to assign the points to at most clusters so that either : * _ k - centre _ : the maximum distance from any point to its cluster centre is minimized , * _ k - median _ : the sum of distances from each point to its closest cluster centre is minimized , * _ k - clustering _ : the sum of all distances between points in the same cluster is minimized .the problem of clustering a set of points into a specific number of clusters so as to minimize the sum of cluster sizes is referred to as min - size -clustering problem . in , the typical cost for a bs - mobile pairis assumed to be only a function of the bs - mobile distance and leads to the formulation : , where is a _ cluster _ of mobiles assigned to bs , is the set of clusters , is the distance between mobile and bs , is the path loss exponent , is the operational power cost loaded to bs . 
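As a concrete illustration of this cost model, the sketch below builds a small power cost matrix, evaluates the total cost of an assignment, and finds the cheapest assignment by exhaustive enumeration for a toy instance. The propagation constants, the multiplicative log-normal shadowing factor and the 100 mW power cap are assumptions chosen for the example rather than values prescribed by the paper.

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)

def power_cost_matrix(mobiles, bss, alpha=3.0, sigma_db=6.0, p_ref=1e-6, p_max=0.1):
    """p[i, j]: power BS j must radiate to reach mobile i, modelled here as
    p_ref * d^alpha times a log-normal shadowing factor; entries above the
    cap p_max are set to infinity (mobile i out of reach of BS j)."""
    d = np.linalg.norm(mobiles[:, None, :] - bss[None, :, :], axis=2)
    shadow = 10.0 ** (sigma_db * rng.standard_normal(d.shape) / 10.0)
    p = p_ref * d**alpha * shadow
    return np.where(p <= p_max, p, np.inf)

def total_cost(assignment, p, c_op):
    """A BS broadcasts at the power needed by the worst mobile assigned to it,
    and pays its operational cost c_op[j] only if it serves at least one mobile."""
    cost = 0.0
    for j in set(assignment):
        members = [i for i, b in enumerate(assignment) if b == j]
        cost += max(p[i, j] for i in members) + c_op[j]
    return cost

def brute_force(p, c_op):
    n_mob, n_bs = p.shape
    return min(itertools.product(range(n_bs), repeat=n_mob),
               key=lambda a: total_cost(a, p, c_op))

mobiles = rng.uniform(0.0, 200.0, size=(5, 2))    # 5 mobiles in a 200 m square
bss = rng.uniform(0.0, 200.0, size=(3, 2))        # 3 candidate base stations
p = power_cost_matrix(mobiles, bss)
c_op = np.full(3, 0.05)                           # 50 mW operational cost per BS
best = brute_force(p, c_op)
print(best, total_cost(best, p, c_op))
```

Enumerating all assignments in this way is only viable for toy instances, which is why the hold minimum, column control and hedonic decision algorithms developed in the paper are needed for realistic deployments.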
since in this paper , we add the effect of shadowing to such a cost , the cost function becomes .note that the shadowing effect breaks the monotonicity with ( see figure [ fig : broadcasttransmission ] ) and then shifts the problem to a _ simple plant location problem _ ( splp ) formulated as follows : * let have potential facility locations .a facility can be opened in any location ; opening a facility location has a non - negative cost corresponding to in the map .each open facility can provide an unlimited amount of commodity corresponding to unlimited number of mobiles served by a bs in the map ; * there are customers that require a service .the goal is to determine a subset of the set of potential facility locations , at which to open facilities and an assignment of all clients to these facilities so as to minimize the overall total cost .however , the splp formulation presents a very high complexity , and we rather propose to turn out the problem as a set covering problem , starting from a binary integer formulation .note that there are at most possible subsets of .each subset can be associated to any bs and the number of combinations is given by .we note the collection of total possibilities .the index set of is denoted by .let be a 0 - 1 matrix with if the node ( i.e. mobile or bs ) belongs to the set .let be an -dimensional vector .the value represents the optimal power of by which we denote a pair which consists of a set of mobiles assigned to bs .clearly the formulation of this problem is given by where the term imposes that only one bs is associated to a mobile . by this way, we do not let a mobile to be assigned to several bss .it follows that the optimal clustering is denoted as such that where is the optimal pairing .thanks to this formulation , we can show that the map may be derived as a set partitioning problem .in the map , the set is associated with another set . therefore , the collection contains those subsets of each of which is associated with every element of the set .consider the following example .let us have a power cost matrix given by the collection of total possibilities : recall that denotes the cluster of mobiles assigned to bs .the optimal values for each possibility is given by .then , we define the following matrix : the optimal total power is thus calculated by the following binary integer program : the values and result in the optimal total power of the example scenario , i.e. , with the optimal clustering . however , set partitioning problems are well known to be np - hard .consequently , the map being a special set partitioning problem is also np-hard.in the next section , we propose to reduce this complexity .the previous formulation stated that a unique bs is allowed to serve a mobile . 
however in terms of pure coverage considerations , the optimal solution may feature some mobiles to be covered by several bs , no matter to which bs the mobile eventually associate with .we then relax the condition of associating only one bs to a cluster of mobiles in such that it is now possible to have a cluster of mobiles covered by more than one bs : .thus , this arrangement turns the map into so called _ set covering _ problem .consider a set of mobiles assigned to bs noted as .when a group of mobiles deviates to another bs then the cost due to becomes additive .let the cost of and be and , respectively .we denote the total cost before deviation of as and after deviation of as , respectively , which can be given by where and are the remaining costs before and after deviation , respectively .there is always a potential ( probability ) increasing the total cost when a deviation occurs , i.e. .for better observation , let us consider the following power cost matrix : let and , respectively .then , , , and resulting in the following total costs : where . utilizing this property , _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ we delete from the collection all those assignments whenever the cost of is equal to the cost of such that . _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ for example , let us consider the last example where and .for , all possible assignments are , , , , , , corresponding to the cost vector .note that the cost of , , , are equal each other .therefore , we remove , , from the collection . the reduced collection of assignments becomes as following : note that this property is an extension of the geometric `` coverage range '' .if the shadowing was not considered , this approach would reduced to defining the maximal coverage range .but with shadowing the mobile requiring the most power is not always the further one .thus , the binary integer program of finding the solution of the problem is given by the solution of this problem is found to be and which fits to the optimal one .by such an elimination , the size of the collection of assignments reduces from to . thisprove that the set - covering formulation is much more efficient than the set - partitioning one .enumerating all possible solutions and choosing the one which produces the lowest cost is known as brute - force search or generate and test .we represent by the _ assignment matrix _ where .if mobile is assigned to bs , then , otherwise .notice that each row of the assignment matrix includes only unique `` 1 '' which means that a mobile is served by only one bs , i.e. 
.this is not in contradiction with our former remark about the possibility of having a mobile covered by several bs . herenow , we decide to associate a mobile to a bs .so if a mobile is covered by several bss , the serving bs can be anyone of this covering set . denoting the collection of the assignment matrices ,actually , we formalize the problem as following : where is the element - wise product .note that the total number of possibilities of assignment matrices can be calculated as .we now turn to the study of decentralized methods for solving the map .our approach is based on _ group formation game _ ( see , , ) .a group formation game is represented by a triple where is the set of _ players _ ( i.e. the mobiles ) , is the set of _ strategies _( i.e. the bss ) shared by all the players and is the _ utility function _ of player .each player chooses exactly one element from the alternatives in .the choices of players are represented by which is called the _ strategy - tuple _( shows the strategy chosen by player ) .a _ partition _ of players according to strategy - tuple is denoted as where is the group of players choosing the strategy .we assume the two following conditions and we will see later how we can define the utilities to achieve these conditions : 1 .* separability * : the utility of any player in any group is said to be _separable _ if its utility can always be split as a sum according to : where may be interpreted as the gain of player from player if chooses strategy .note that is the utility of player when it is the only player choosing strategy .thus , the separability property states that utility transfers among a group of users sharing the same strategy is done such that the utility granted to one user is a sum of utilities granted individually by each partner in the group .* symmetricity * : the utility is said to be _ symmetric _ if the individual gain of from is equal to the gain of from when they both share the same strategy : therefore , this symmetric utility can be referred to as : , . is called the _ symmetric bipartite utility _ of player and while the common strategy is . actually , the game defined above is a straightforward generalization of party affiliation games .[ thm : gisapotentialgame ] is a potential game .a non - cooperative game is a potential game whenever there exists a function meaning that when player switches from strategy to the difference of its utility can be given by the difference of a function .this function is called a _ potential function_. 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ the strategy - tuple that maximizes the potential function is a nash equilibrium in the game ._ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ let us choose as following the potential function : \notag \\ & = \sum_{j\in n } \sum_{a\in g_j^{\sigma } } v(a , a;j ) + \frac{1}{2 } \sum_{j\in n } \sum_{a\in g_j^{\sigma } } \sum_{b\in g_j^{\sigma}:b\neq a } v(a , b;j ) .\label{eq : potentialfunction}\end{aligned}\ ] ] actually , here we take the sum of single player utilities and the half of total symmetric bipartite utilities .let us rewrite the potential function as following : when player switches from to ( the other players do not change their strategies ) , then the strategy - tuple is transformed from to , and the potential becomes note that since the total utility due to the other players is equal both in and .thus , the difference of potentials is given by on the other hand , the difference of the utility of player is calculated as by this result we conclude that which proves that is a potential game .thus , admits a nash equilibrium in pure strategies which results in the partition and which maximizes the potential function . _the proof [ thm : gisapotentialgame ] is constructive : _ any group formation game possessing separable and symmetric utility gain of players within a group always converges to a pure nash equilibrium .recall that the required power for serving the group of mobiles by bs is denoted as .we represent by , as a _ utility _ arising due to group .note that is a monotonically decreasing function .the _ clustering profit _ due to mobile and in bs is given by \ ] ] where is the utility of player when is served alone in bs . therefore , .note that the clustering profit is a useful metric for evaluating a group of mobiles .whenever a group of mobiles are near each other , then the clustering profit is high ; thus , assigning this group to only one bs is almost always efficient . to ensure separability and symmetricity , we propose to choose the symmetric bipartite utilityaccording to : where is called as _ clustering weight _ which is a parameter that must be adjusted according to the environment .we will show later how it can impact the convergence point of the system .thus , the utility function of any player is given by then , let us rewrite the potential function according to defined symmetric bipartite utility .then , using eq . , we may express the corresponding potential function which is equal to .\ ] ] now , let us consider that we can order the mobiles associated to bs , according to the required power .the ordered set of mobiles related to bs is represented by .observe that we can now compute the potential function as following : .\ ] ] according to this result and the description of individual costs , we can state the following properties : 1 . if is very small , i.e. the dominant term in the potential function and in the symmetric bipartite utility is the individual power , then with a very low , each mobile will privilege an association to the nearest bs .this is exactly the case when .2 . 
when increases a mobile may decide to leave its nearest neighbour if the lost in power is compensated by the second term .suppose that the mobile wants to associate to a bs , where all other mobiles currently associated with experience a better channel .then the gain to associate to this second bs for this user will be , i.e. the sum of all powers of mobiles already associated with this bs , weighted by .it is clear that the mobile will be joining either a cell having already strong power terms or a huge number of users .if becomes very large , we can expect that all mobiles will converge to the same bs which means that only one bs is active .consider the setting in which only one player decides its strategy .it is called as _ best - reply dynamics _ when a player chooses the strategy which maximizes its utility .when there is no any player which can improve its utility , then this network topology , i.e. , corresponds to a nash equilibrium .note that any local maximum in the potential function is a nash equilibrium .therefore , the network topology obtained by best - reply dynamics accounts for a local maximum of the potential function .total power cost related to can be given by where is the group of mobiles associated with bs in the case of stable strategy - tuple .assuming that each mobile is capable to discover those bss that can transmit to it , we can produce a scheduler in the following way : each bs generates a random clock - time for all those mobiles that it can transmit ; then each mobile selects randomly a clock - time from those bss that it can discover . we need to produce the clock - times by such a way that the collision of the turns of mobiles is minimal . in case of a collision ,the clock - times of the corresponding mobiles are regenerated by corresponding bss . in algorithm[ alg : thehedonicdecision ] , the pseudo - code of the hd is given .note that this is an algorithm performed in both bs and mobile sides by an exchange of the information in a separated channel .* base station * : check stability send information to each mobile about the current partition check stability * mobile * : determine the preferred bs according to eq .send information to the preferred bs in the literature , the use of game models for set covering problems is called as _ set covering games _ . the hd algorithm is a novel approach for set covering games .this algorithm is suitable for any set covering problem and facility activation problems where the agents are allowed to make strategic decisionsin this section , we propose different algorithmic solutions to evaluate and compare the efficiency of the distributed algorithm .centralized algorithms exploit the set covering problem formulation since the search space is the smallest . however , binary integer linear programs are known to be np - complete and thus we introduce two algorithms based on dynamic programming : _ the hold minimum algorithm _ and _ the column control algorithm_. then , we develop the distributed version of the column control algorithm . a greedy solution of the problem is introduced as _ the nearest base station _ approach .the nearest bs and the column control are known and already used in the literature for splp problems .we adapt these algorithms to the map . 
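As a complement to the pseudo-code of the HD algorithm, the sketch below implements the best-reply dynamics in a centralized, sequential form. The pairwise gain theta * min(p[a,j], p[b,j]) used here is one separable and symmetric choice that reproduces the properties discussed above (nearest-BS behaviour for small theta, and a gain equal to theta times the sum of the partners' powers when the joining mobile is the worst one in the cell), but it is an assumption rather than necessarily the exact utility of the paper; operational power costs are omitted for brevity.

```python
import numpy as np

def utility(a, j, assignment, p, theta):
    """Separable, symmetric utility of mobile a if it selects BS j:
    its own (negative) power requirement plus theta-weighted pairwise gains
    with the mobiles currently choosing j (illustrative choice of gain)."""
    partners = [b for b, k in enumerate(assignment) if k == j and b != a]
    return -p[a, j] + theta * sum(min(p[a, j], p[b, j]) for b in partners)

def hedonic_best_reply(p, theta, max_rounds=1000):
    """Sequential best-reply dynamics; since only strictly improving moves are
    made and the game admits a potential, the loop reaches a Nash equilibrium."""
    n_mob, n_bs = p.shape
    assignment = list(np.argmin(p, axis=1))        # start from nearest-BS choice
    for _ in range(max_rounds):
        changed = False
        for a in range(n_mob):                     # mobiles revise one at a time
            current = utility(a, assignment[a], assignment, p, theta)
            best = max(range(n_bs),
                       key=lambda j: utility(a, j, assignment, p, theta))
            if utility(a, best, assignment, p, theta) > current:
                assignment[a], changed = best, True
        if not changed:                            # no profitable deviation left
            break
    return assignment
```

With the power cost matrix of the earlier sketch, `hedonic_best_reply(p, theta)` returns a Nash-stable assignment; the clustering weight theta then trades transmission power against the number of active base stations, in line with the discussion of its calibration in the simulation section.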
because of the large scale nature of the collection set , we rather develop the algorithms by making all operations on the power cost matrix .this approach foster the iterative removals of elements in the collection set , and ensure a faster convergence .the hm algorithm solves the problem _optimally_. we explain the algorithm by an example .consider the power cost matrix which is given by the power cost matrix can also be shown as , where . in each step , the algorithm removes a group of values of the power cost matrix .removing means that we eliminate those clusters that include the mobile and bs from the collection set .the algorithm compares maximum clusterings and holds only the clustering minimizing the total cost .thus , it terminates in a step where each mobile is assigned to only one bs . in step ,the power cost matrix and collection set is denoted as =(p_1[s ] , p_2[s ] , \ldots , p_n[s]) ] , respectively .let us now turn to the example .in the initial step , we assume that = \mathbf{p} ] given by = \{(1;1 ) , ( 2;1 ) , ( 3;1 ) , ( 1,2;1 ) , ( 1,3;1 ) , ( 2,3;1 ) , \notag\\ ( 1,2,3;1 ) , ( 1;2 ) , ( 2;2 ) , ( 3;2 ) , ( 1,2;2 ) , ( 1,3;2 ) , \notag\\ ( 2,3;2 ) , ( 1,2,3;2)\}.\end{gathered}\ ] ] recall that assigning a cluster of mobiles to bs has a cost . therefore ,if we find the maximum value of , we obtain the total cost in case of all mobiles in column are assigned to bs .for example , .this means that if all mobiles are assigned to bs 1 , then the total cost is .the algorithm runs as following : in step , we find the maximum value of each column of power cost matrix , then eliminate all values in power cost matrix except minimum of the calculated maximum values .namely , and , then is eliminated by putting an = \begin{bmatrix } \infty & 3 \\ 1 & 4 \\ 2 & 8 \end{bmatrix}.\ ] ] thus , the collection set reduces to the following = \{(2;1 ) , ( 3;1 ) , ( 2,3;1 ) , ( 1;2 ) , ( 2;2 ) , ( 3;2 ) , ( 1,2;2 ) , \notag \\ ( 1,3;2 ) , ( 2,3;2 ) , ( 1,2,3;2)\}.\end{gathered}\ ] ] first column contains an which means that mobile must be assigned to another bs ( i.e. in this example , obviously bs ) .in fact , this represents the recursiveness of the algorithm where we run the algorithm for a sub power cost matrix . in this simple example ,the sub power cost matrix is . in general case , the algorithm does the following = \left\ { \begin{array}{ll } \max p_{j}[s ] + \mathfrak{h}(\mathbf{p}^{sub}_{j}[s ] ) , & \hbox{if sub power cost matrix ; } \\\max p_{j}[s ] , & \hbox{otherwise . }\end{array } \right.\ ] ] where ] in step . here, is the function which gives the optimal value and assignments obtained by running hm algorithm . for , we calculate = \max(1,2 ) + \mathfrak{h}(3 ) = 2 + 3=5 ] . 
on the other hand, we do not need to calculate ] .then , the algorithm holds minimum value of ,p_2[2 ] ) = p_1[2 ] = 5 ] , where = ( 2) ] .we remove , since ,p_2[3 ] ) = p_1[3] s ] .thus , the collection set is reduced as following in step : = \left\ { \mathcal{s}[s-1 ] \setminus ( s;k ) : i\in s , \forall i \in r[s ] \textrm { and } \forall k\in n\setminus j\right\}.\ ] ] considering the last example , in step , = ( 1,2,3,4) ] , where denotes some area .moreover , the deployment scenario used to generate figures [ fig : pavvslambdabnbs ] , [ fig : pavvslambdabwithc0 ] , [ fig : meanpowervslambdamsmallcell ] , [ fig : meanpovermaxthetapvsthetasmallcell ] corresponds to small - cells .we assume , being the typical maximum received signal power of a wireless network as well as we set arbitrarily if and we set the path loss exponent .we also assume an equal operational power cost for all bs , .we compare the proposed algorithms for different values of and .the average total power was calculated by monte carlo simulations by running the algorithms for different generated power cost matricesand taking the mean of the results .we first start with macro - cells .as mentioned above , we use a honeycomb model with a cell radius equal to .table [ tab : comperisonofhmnbs ] presents the optimal set - covering ( sc ) result as well as those obtained with the different proposed algorithms for different realization of power cost matrices and when the operational power cost is null , i.e. for all bss .the transmission power if .it turns out that the hd algorithm is very efficient and converges to nearly optimal assignments when operational power costs are neglected .this result indicates that switching off some bs does not decrease the total power significantly .the nbs algorithm also produces near optimal results in many examples . in these practical scenarios ,the idea of switching off some bs is mostly interesting when the circuit power is dominant , which is not the case for macro - cells .further , as fast moving users are associated with macro - cells in priority , it seems reasonable to keep all active .therefore , the nbs algorithm is efficient and the gain achievable with any other optimal algorithm is marginal .table [ tab : comperisonofhmccdccnbs ] compares the sc optimal results with all developed algorithms introduced in the paper for small - cells scenarios .the operational power cost is set to and the bs density is increased .cc and dcc algorithms produce optimal assignments for almost all examples .however , the dcc algorithm naturally performs worse when the number of bss increases .moreover , the nbs algorithm exhibits worst results which highlights the interest of switching off some bss when the operational power cost is higher than the transmission power cost .figure [ fig : pavvslambdabnbs ] illustrates the results achieved with the nbs algorithm , for different densities of bs and mobiles .this figure highlights an intuitive property of the nbs approach . when no operational cost is considered ( see figure [ fig : pavvslambdabnbs ] ) , the power consumption decreases with the bs density , since the average distance between mobiles and bss decreases accordingly . 
on the opposite ,when an operational cost is considered , the nbs algorithm leads to an increased energy consumption since the number of active bs increases accordingly .let us now switch to the cc and dcc algorithms .figure [ fig : pavvslambdabwithc0 ] focuses only on scenarios with circuit power , because they correspond to the more relevant cases .the upper curves show that the cc algorithm achieves much better results than nbs , especially when the bs density is high .this algorithm privileges solutions with larger cells .the distributed version dcc performs worst than the centralised one but still better than the nbs .figure [ fig : meanpowervslambdamsmallcell ] plots the change of the average total power with respect to the intensity of mobiles for small - cells scenario .the assumptions are as following : , ( in figure [ fig : meanpovermaxthetapvsthetasmallcell ] , the optimal is found ) , and area .note that the hd algorithm performs efficiently even though it is decentralized .for example , in case of , the average number of mobiles is given by ; thus , the average power used per mobile is calculated as following : a ) the hd algorithm : , b ) the cc algorithm : , c ) the greedy - sc algorithm : , d ) the sc algorithm : .figure [ fig : meanpovermaxthetapvsthetamacrocell ] depicts the change of the average total power with respect to the intensity of mobiles for macro - cell deployment . , ( area ) , and ( in figure [ fig : meanpovermaxthetapvsthetamacrocell ] , we plot the change of average total power with respect to , and choose the optimal value ) . here, we observe that the hd algorithm produces remarkable results .calibrating properly is significant , otherwise the hd algorithm may not converge to the near optimal results . on the other hand ,the nbs algorithm is also efficient in the macro - cell deployment .the drawback of greedy - sc algorithm reveals here since it works with a mechanism where the larger cells are privileged . in figures [ fig :meanpovermaxthetapvsthetasmallcell ] and [ fig : meanpovermaxthetapvsthetamacrocell ] , the normalized average total power is plotted with respect to . from the figures and our observations in experiments performed in matlab, it might be considered that is mainly affected by the area over which the algorithm runs .for example , in figure [ fig : meanpovermaxthetapvsthetamacrocell ] , the normalized average total power has a minimum in the same value of intensity of bss , but it moves to a higher value when the area is enlarged from to .figure [ fig : meannumofroundsvsarea ] shows the change of the average number of rounds of the hd algorithm for converging to a nash equilibrium with respect to the area .the figure implies that the average number of rounds has a logarithmic characteristic .moreover , when the operational power costs are zero , the average number of rounds increases since smaller cells are formed ; therefore , the hd algorithm needs more rounds to converge to a nash equilibrium .[ tab : comperisonofhmnbs ] [ cols="^,^,^,^,^,^,^,^,^,^",options="header " , ] with respect to intensity of bss for increasing values of intensity of mobiles , . ] with respect to intensity of bss for increasing values of intensity of mobiles , , . ] with respect to the intensity of mobiles . ] with respect to the intensity of mobiles . ] . ] . 
]this paper addressed the map problem in the context of broadcast transmission .we introduced a novel decentralized solution based on group formation games , which we named the hedonic decision ( hd ) algorithm .this formalism is constructive : a new class of group formation games is introduced where the utility of players within a group is separable and symmetric being a generalization of party affiliation games .we proposed a centralized optimal recursive algorithm ( the hm ) as well as a centralized polynomial - time heuristic algorithm ( the cc ) .the results exhibit that the hd algorithm achieves very good results if the parameter is well chosen .the exact value of is not provided and may be used as a setting parameter .the proposed hd algorithm is efficient and may be used in many other set covering problems .for instance , indoor wireless network planning has been studied for several years and the optimal bs activation is an important problem .the proposed hd algorithm could be used for planning purposes and thus optimizing the number of bs , but could also be used to optimize dynamically the number of active bs as a function of actives users .the provider may deploy a high density of bs and then could run dynamically the hd algorithm to optimize then number of active bss .furthermore , it could be interesting to change the game parameters .one can define different clustering weights for each bs . that setting could provide better results .it is also possible to try different symmetric bipartite utility allocations . 1 s. sesia , i. toufik , and m. baker , _ lte the umts long term evolution : from theory to practice_. john wiley & sons , ltd . , 2009 .j. krarup , p.m. pruzan , `` the simple plant location problem : survey and synthesis , '' _european journal of operational research _ , vol .3681 , 1983 . v. bil , i. caragiannis , c. kaklamanis , and p. kanellopoulos , `` geometric clustering to minimize the sum of cluster sizes , '' _ in proc .13th european symp .algorithms , vol.3669 of lncs _ , pp.460471 , 2005 .n. lev - tov and d. peleg , `` polynomial time approximation schemes for base station coverage with minimum total radii , '' _ computer networks _ ,vol.47 , no.4 , pp.489501 , mar .h. alt , e. m. arkin , h. brnnimann , j. erickson , s. p. fekete , c. knauer , j. lenchner , j. s. b. mitchell , and k. whittlesey , `` minimum - cost coverage of point sets by disks , '' _ in proceedings of acm symposium on computational geometry ( scg 06 ) _ , pp . 449458 , new york , ny , usa , 2006 .s. funke , s. laue , z. lotker , and r. naujoks , `` power assignment problems in wireless communication : covering points by disks , reaching few receivers quickly , and energy - efficient travelling salesman tours , '' _ in proceedings of ad hoc networks _ ,pp.10281035 , 2011 .v. rodoplu , t.h .meng , `` minimum energy mobile wireless networks , '' _ ieee journal on selected areas in communications _ , vol.17 , no.8 , pp.13331344 , aug . 1999 . j. e. wieselthier , g. d. nguyen , and a. ephremides , `` on the construction of energy - efficient broadcast and multicast trees in wireless networks , '' ._ ieee infocom 2000 _, pp.586-594 , tel aviv , israel , 2000 .. eeciolu , and t. gonzalez , `` minimum - energy broadcast in simple graphs with limited node power , '' _ iasted international conference on parallel and distributed computing and systems ( pdcs 2001 ) _ , pp.334-338 , anaheim , ca , aug . 2001 .m. cagalj , j. p. hubaux , and c. 
enz , `` minimum - energy broadcast in all - wireless networks : np - completeness and distribution issues , '' _ in proceedings of acm international conference on mobile computing and networking ( mobicom ) _ , new york , ny , usa , pp.172182 , 2002 . c. hasan , e. altman , and j.m .gorce , `` a coalition game approach to the association problem of mobiles in broadcast transmission , '' _ wiopt 2011 _ pp.236240 , 913 may 2011 .a. p. bianzino , c. chaudet , d. rossi , j. l. rougier , `` a survey of green networking research , '' _ ieee communications surveys and tutorials _ , vol.14 , no.1 , pp.320 , 2012. h. claussen , `` the future of small cell networks , '' _ ieee commsoc mmtc e - lett ._ [ online ] , pp.3236 , sept .available : http://committees.comsoc.org/mmc/e-news/e-letter-september10.pdf e. balas and m. w. padberg , `` set partitioning : a survey , '' _ siam review _ , vol.18 , pp.710760 , 1976 . c. lund , m. yannakakis , `` on the hardness of approximating minimization problems , '' _ journal of the acm _vol.41 , no.5 , pp.960981 , sept . 1994 . c. k. singh and e. altman , `` the wireless multicast coalition game and the non - cooperative association problem , '' in _30th ieee infocom _ , shanghai , china , 1015 , apr .chvtal , v. , `` a greedy heuristic for the set - covering problem , '' _ mathematics of operations research _ , vol .3 , pp . 233235 , 1979 .g. hollard , `` on the existence of a pure strategy nash equilibrium in group formation games , '' _ elsevier economics letters _ , vol .283287 , 2000 .konishi , h. , le breton , m. , weber , s.,``pure strategy nash equilibrium in a group formation game with positive externalities , '' _ games and economic behavior _ vol .21 , 161182 .i. milchtaich , `` stability and segregation in group formation , '' _ games and economic behavior _ , vol .38 , pp . 318346 , 2002 .d. monderer and l. s. shapley , `` potential games , '' _ games and economic behavior _ vol .14 , pp . 124143 , 1996 .r. w. rosenthal , `` a class of games possessing pure - strategy nash equilibria , '' _ international journal of game theory _ vol .2 , 6567 , 1973 .j. h. aldrich , w. t. bianco , `` a game - theoretic model of party affiliation of candidates and office holders , '' _ mathematical and computer modelling _8 - 9 , pp .103 - 116 , aug .- sept . , 1992 .d. leibovic and e. willett , `` selfish set covering , '' _ in midstates conference for undergraduate research in computer science and mathematics _ , nov .2009 . x .- y .li , z. sun , w. wang , `` cost sharing and strategyproof mechanisms for set cover games , '' _ lecture notes in computer science _, vol . 3404 , pp .218 - 230 , 2005 .li , z. sun , w. wang , x. chu , s. j. tang , p. xu , `` mechanism design for set cover games with selfish element agents , '' _ theoretical computer science _ ,1 , pp . 174187 , 2010 . m. blum , r. w. floyd , v. pratt , r. l. rivest , and r. e. tarjan , `` time bounds for selection , '' _ j. comput . syst ._ vol.7 , no.4 , pp.448461 , aug . 1973 . t. feder , and d. greene , `` optimal algorithms for approximate clustering , '' _ proceedings of the twentieth annual acm symposium on theory of computing _ , pp . 434444 , 1988 .a. fabrikant , c. papadimitriou , and k. talwar , `` the complexity of pure nash equilibria , '' _ in proceedings of the thirty - sixth annual acm symposium on theory of computing ( stoc 04 ) _, new york , ny , usa , 604612 , 2004 . f. baccelli , and b. 
baszczyszyn , `` stochastic geometry and wireless networks : volume i theory , '' _ foundations and trends in networking _ , vol.3 , pp.249449 , 2009 .j. g. andrews , f. bacelli , and r. k. ganti , `` a tractable approach to coverage and rate in cellular networks , '' _ ieee trans .on commun ._ , vol.59 , no.11 , pp.31223134 , nov . 2011 . j. hoydis , m. debbah , `` green , cost - effective , flexible , small cell networks , '' _ ieee comsoc mmtc e - letter special issue on `` multimedia over femto cells '' _ , 2010 .
this paper addresses the mobile assignment problem in multi-cell broadcast transmission, seeking minimal total power consumption by considering both transmission and operational powers. while the large-scale nature of the problem calls for distributed solutions, game theory appears to be a natural tool. we propose a novel distributed algorithm based on group formation games, called _the hedonic decision algorithm_. this formalism is constructive: a new class of group formation games is introduced in which the utility of players within a group is separable and symmetric, a generalized version of party affiliation games. the proposed hedonic decision algorithm is also suitable for any set-covering problem. to evaluate its performance, we propose other approaches to which our algorithm is compared. we first develop a centralized optimal recursive algorithm called _the hold minimum_, able to find the optimal assignments. however, because of the np-hard complexity of the mobile assignment problem, we also propose a centralized polynomial-time heuristic algorithm called _the column control_, producing near-optimal solutions when the operational power costs of base stations are taken into account. starting from this efficient centralized approach, a _distributed column control algorithm_ is also proposed and compared to _the hedonic decision algorithm_. we further implement the nearest base station algorithm, which is very simple and intuitive and efficiently manages fast-moving users served by macro bss. extensive simulation results are provided and highlight the relative performance of these algorithms. the simulated scenarios are generated according to poisson point processes for both mobiles and base stations. broadcast transmission, green networking, combinatorial optimization, game theory
a city is a highly complex system where a large number of agents interact , leading to a dynamics seemingly difficult to understand .many studies in history , geography , spatial economics , sociology , or physics discuss various facets of the evolution of the city . from a very general perspective, the large number and the diversity of agents operating simultaneously in a city suggest the intriguing possibility that cities are an emergent phenomenon ruled by self - organization . on the other hand, the existence of central planning interventions might minimize the importance of self - organization in the course of evolution of cities .central planning here understood as a top - down process controlled by a central authority plays an important role in the city , leaving long standing traces , even if the time horizon of planners is limited and much smaller than the age of the city .one is thus confronted with the question of the possiblity of modelling a city and its expansion as a self - organized phenomenon .indeed central planning could be thought of as an external perturbation , as if it were foreign to the self - organized development of a city .the recent digitization and georeferentiation of old maps will enable us to test quantitatively this effect , at least at the level of the structure of the road network .such a transportation network is a crucial ingredient in cities as it allows individuals to work , transport and exchange goods , etc . ,and the evolution of this network reflects the evolution of the population and activity densities .these network aspects were first studied in the 1960s in quantitative geography , and in the last decade , complex networks theory has provided significant contributions to the quantitative characterization of urban street patterns . in this article, we will consider the case of the evolution of the street network of paris over more than 200 years with a particular focus on the 19th century , period when paris experienced large transformations under the guidance of baron haussmann .it would be difficult to describe the social , political , and urbanistic importance and impact of haussmann works in a few lines here and we refer the interested reader to the existing abundant literature on the subject ( see , and and references therein ) .essentially , until the middle of the 19th century , central paris has a medieval structure composed of many small and crowded streets , creating congestion and , according to some contemporaries , probably health problems . in 1852 ,napoleon iii commissioned haussmann to modernize paris by building safer streets , large avenues connected to the new train stations , central or symbolic squares ( such as the famous place de letoile , place de la nation and place du panthon ) , improving the traffic flow and , last but not least , the circulation of army troops .haussmann also built modern housing with uniform building heights , new water supply and sewer systems , new bridges , etc ( see fig . [fig : map_h ] where we show how dramatic the impact of haussmann transformations are ) .the case of paris under haussmann provides an interesting example where changes due to central planning are very important and where a naive modelling is bound to fail .we analyze here in detail the effect of these planned transformations on the street network . 
by introducing physical quantitative measures associated with this network, we are able to compare the effect of the hausmann transformation of the city with its ` natural ' evolution over other periods . by digitizing historical maps ( for details on the sources used to construct the maps ,see the methods section ) into a geographical information system ( gis ) environment , we reconstruct the detailed road system ( including minor streets ) at six different moments in time , , respectively corresponding to years : . for each time, we constructed the associated primal graph ( see the methods section and ) , i.e. the graph where the nodes represent street junctions and the links correspond to road segments .in particular , it is important to note that we have thus snapshots of the street network before haussmann works ( 1789 - 1836 ) and after ( 1888 - 2010 ) .this allows us to study quantitatively the effect of such central planning . in fig .[ fig : maps](a ) , we display the map of paris as it was in 1789 on top of the current map ( 2010 ) . in order to use a single basis for comparison , we limited our study over time to the portion corresponding to 1789 .we note here that the evolution of the outskirts and small villages in the surroundings has certainly an impact on the evolution of paris and even if we focus here ( mainly because of data availability reasons ) on the structural modifications of the inner structure of paris , a study at a larger scale will certainly be needed for capturing the whole picture of the evolution of this city .we then have 6 maps for different times and for the same area ( of order ) .we also represent on fig .[ fig : maps](b ) , the new streets created during the haussmann period which covers roughly the second half of the 19th century .even if we observe some evolution outside of this portion , most of the haussmann works are comprised within this portion .* simple measures . * in the following we will study the structure of the graph at different times ( see the methods section for precise definitions ) , having in mind that our goal is to identify the most important quantitative signatures of central planning during the evolution of this road network .first basic measures include the evolution of the number of nodes , edges , and total length of the networks ( restricted to the area corresponding to 1789 ) . in fig .[ fig : general_results ] we show the results for these indicators which display a clear acceleration during the haussmann period ( 1836 - 1888 ) .the number of nodes increased from about 3000 in 1836 to about 6000 in 1888 and the total length increase from about 400 kms to almost 700kms , all this in about 50 years .it is interesting to note that this node increase corresponds essentially to an important increase in the population . in particular , we note ( see the supplementary information for more details ) that the number of nodes is proportional to the population and that the corresponding increase rate is of order , similar to what was measured in a previous study about a completely different area .the rapid increase of nodes during the haussmann period is thus largely due to demographic pressure .now , if we want to exclude exogeneous effects and focus on the structure of networks , we can plot the various indicators such as the number of edges and the total length versus the number of nodes taken as a time clock .the results shown fig . 
[ fig : general_results](d - f ) display a smoother behavior .in particular , is a linear function of , demonstrating that the average degree is essentially constant since 1789 .the total length versus also displays a smooth behavior consistent with a perturbed lattice . indeed , if the segment length is roughly constant and equal to where is the density of nodes ( is the area considered here ) , we then obtain for the total length a fit of the type is shown in fig .[ fig : general_results](d ) and the value of measured gives an estimate of the area , in agreement with the actual value ( for the 1789 portion ) .this agreement demonstrates that all the networks at different times are not far from a perturbed lattice .we also plot the average route distance defined as the average over all pairs of nodes of the shortest route between them ( see methods for more details ) . for a two dimensional spatial network, we expect this quantity to scale as and thus increases with .the ratio is thus better suited to measure the efficiency of the network and we observe ( fig .[ fig : general_results](c , f ) ) that it decreases with time and .this result simply demonstrate that if we neglect delays at junctions , it becomes easier to navigate in the network as it gets denser .* typology of new links * we can have three different types of new links depending on the number of new nodes they connect .we denote by ( ) the number of new links appearing at time connecting new nodes .for example counts the new links appearing at time connecting two nodes existing at time . in order to categorize more precisely these new links, we use the betweenness centrality impact defined in and which measures how a new link ( absent at time and present at time ) affects the average betweenness centrality ( see methods section for definitions of the betweenness centrality impact ) . in ,the distribution of this quantity displays two peaks which corresponds to two types of links belonging to two distinct processes : densification and exploration .we first observe ( see figure 2 of si ) that in the first period , the majority of new links are of the type and correspond to construction of new streets with new nodes . we see that the haussmann transition period ( 1836 - 1888 ) is not particularly different from the other previous periods . in the modern period ( after 1999 ) , becomes dominant and consistent with the idea of a mature street network where densification dominates the evolution of the urban tissue .obviously , this is also an effect of limiting ourselves to the 1789 portion : in a wider area , many new roads were created and both densification and exploration coexist .we note here that the structure of the street network of central paris remained remarkably stable from 1888 until now ( and in this period also , densification was the main process in this area ) .we then plot the distribution of this quantity for the different transition periods and the result is shown in fig .[ fig : bcimpact ] .these figures show that for all periods most new links belong to the densification process with a small peak of exploration in the period 1836 - 1888 .in well - developed , mature systems , it is expected that densification is the dominant growth mechanism . 
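a minimal sketch of how such a betweenness centrality impact can be evaluated is given below, assuming a networkx graph whose edges carry a 'length' attribute; the impact is taken here as the relative change of the graph-averaged edge betweenness when the link is removed, and the exact averaging and sign conventions of the original definition may differ.

```python
"""
Sketch (hypothetical helper) of the betweenness-centrality impact of a link:
the relative variation of the average edge betweenness of the graph when the
link is removed.  Written with networkx; edges carry a 'length' weight.
"""
import networkx as nx

def mean_edge_bc(G):
    bc = nx.edge_betweenness_centrality(G, weight="length")
    return sum(bc.values()) / G.number_of_edges()

def bc_impact(G, edge):
    g_full = mean_edge_bc(G)
    H = G.copy()
    H.remove_edge(*edge)                     # graph without the link of interest
    return (g_full - mean_edge_bc(H)) / g_full

# toy example: a ring of streets plus one shortcut ("exploration"-like link)
G = nx.cycle_graph(8)
nx.set_edge_attributes(G, 1.0, "length")
G.add_edge(0, 4, length=1.0)
print(round(bc_impact(G, (0, 4)), 3))
```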
here also , we see that the haussmann period is not significantly different from previous periods .* evolution of the spatial distribution of centrality * the betweenness centrality ( bc ) of a node is defined in the methods section and essentially measures the fraction of times a given node is used in the shortest paths connecting any pair of nodes in the network , and is thus a measure of the contribution of a link in the organisation of flows in the network . in our case where we consider a limited portion of a spatial network , two important effects need to be taken into consideration .first , as we consider a portion , only paths within this portion are taken into account in the calculation of the bc and this usually does not reflect the reality of the actual origin - destination matrix . in particular , flows with the exterior of the portion and surrounding villages are not taken into account . as a result , the bc will be able to detect important routes and nodes in the internal structure of the network but will miss large - scale communication roads such as a north - south or east - west road connecting the portion with the surroundings of paris . in ,the scale of the network was large enough so that the bc could recover important central roads such as roman streets .the bc in the present case has then to be used as a structural probe of the network , enabling us to track the important modifications .the second point concerns the spatial distribution of the bc which will be important in the following . for a lattice the most central nodes( see the discussion in for example ) are close to the barycenter of the nodes : spatial centrality and betweenness centrality are then usually strongly correlated . in and itis shown that the most central points display interesting spatial structures which still need to be understood , but which represent an important signature of the networks topology .we first consider the time evolution of the node betweenness centrality ( with similar results for the edge bc ) . in the si( see figure 3 of si ) , we show the distribution of the node bc at different times . apart from the fact that the average bc varies, we see that the tail of the distribution remains constant in time , showing that the statistics of very central nodes is not modified . from this point of view, the evolution of the road network follows a smooth behavior , even in the haussmann period .so far , most of the measures indicate that the evolution of the street network follows simple densification and exploration rules and is very similar to other areas studied . at this point, it appears that haussmann works did nt change radically the structure of the city .however , we can suspect that haussmann s impact is very important on congestion and traffic and should therefore be seen on the spatial distribution of centrality . 
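before turning to the maps, the selection of the most central nodes can be sketched as follows (an assumed helper, not the authors' code): compute the node betweenness centrality on the length-weighted street graph and keep the nodes whose centrality exceeds a fraction delta of the maximum value.

```python
"""
Assumed helper: node betweenness centrality on a length-weighted street graph
and extraction of the 'most central' nodes, i.e. those whose centrality is
larger than a fraction delta of the maximum.
"""
import networkx as nx

def most_central_nodes(G, delta=0.1):
    bc = nx.betweenness_centrality(G, weight="length")
    g_max = max(bc.values())
    return {n for n, g in bc.items() if g > delta * g_max}, bc

# toy example: a 5 x 5 lattice of 100 m street segments
G = nx.grid_2d_graph(5, 5)
nx.set_edge_attributes(G, 100.0, "length")
central, bc = most_central_nodes(G, delta=0.5)
print(sorted(central))   # on a lattice the most central nodes hug the barycentre
```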
in the figure [ fig : maxvbc ] , we show the maps of paris at different times and we indicate the most central nodes ( such that their centrality is larger than with see the si , for a discussion on the effect of the value of ) .we can clearly see here that the spatial distribution of the bc is not stable , displays large variations , and is not uniformly distributed over the paris area ( we represented here the node centrality , and similar results are obtained for the edge centrality , see the si for plots for the edge centrality and more details ) .in particular , we see that between 1836 and 1888 , the haussmann works had a dramatical impact on the spatial structure of the centrality , especially near the heart of paris .central roads usually persist in time , but in our case , the haussmann reorganization was acting precisely at this level by redistributing the shortest paths which had certainly an impact on congestion inside the city .after haussmann we observe a large stability of the network until nowadays .it is interesting to note that these maps also provide details about the evolution of the road network of paris during other periods which seems to reflect what happened in reality and which we can relate to specific local interventions .for example , in the period 1789 - 1826 between the french revolution and the napoleonic empire , the maps shown in fig .[ fig : maxvbc ] display large variations with redistribution of central nodes which probably reflects the fact that many religious and aristocratic domains and properties were sold and divided in order to create new houses and new roads , improving congestion inside paris . during the period 1826 - 1836 which corresponds roughly to the beginning of the the july monarchy , the maps in fig .[ fig : maxvbc ] suggests an important reorganization on the east side of paris .this seems to correspond very well to the creation during that period of a new channel in this area ( the channel ` saint martin ' ) which triggered many transformations in the eastern part of the network . in order to analyse the spatial redistribution effectmore quantitatively , we compute various quantities inside a disk of radius centered on the barycenter of all nodes ( which stays approximately at the same location in time ) .we first study the number of nodes ( fig .[ fig : ripley ] ) , its variation between and , and the number of central nodes ( such that ) .we see that the largest variation of the number of nodes ( see [ fig : ripley](b ) ) is indeed in the haussmann period 1836 - 1888 , especially for distance meters .more interesting , is the variation of the most central nodes ( fig .[ fig : ripley]d ) .in particular , we observe that during the pre - haussmann period , even if in the period 1789 - 1826 there was an improvement of centrality concentration , there is an accumulation of central nodes both at short distances ( meters ) and at long distances ( meters ) in the following period ( 1826 - 1836 ) . as a result , visually clear in fig .[ fig : maxvbc ] , there is a large concentration of centrality in the center of paris until 1836 at least .the natural consequence of this concentration is that the center of paris was very probably very congested at that time . 
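the disk-based counts used here can be written in a few lines (an assumed numpy helper): given the node coordinates and their betweenness values, count the nodes, and the most central nodes, lying within a distance r of the barycentre of all nodes.

```python
"""
Assumed helper for the radial profiles: number of nodes N(r) and number of
most-central nodes (bc above a fraction delta of the maximum) inside a disk
of radius r centred on the barycentre of all nodes.
"""
import numpy as np

def radial_counts(xy, bc, radii, delta=0.1):
    xy = np.asarray(xy, dtype=float)              # node coordinates, metres
    bc = np.asarray(bc, dtype=float)              # node betweenness values
    centre = xy.mean(axis=0)                      # barycentre of all nodes
    dist = np.linalg.norm(xy - centre, axis=1)
    central = bc > delta * bc.max()
    n_r = np.array([(dist < r).sum() for r in radii])
    n_central_r = np.array([((dist < r) & central).sum() for r in radii])
    return n_r, n_central_r

# toy data: 1000 random nodes in a 5 km square, bc decaying away from the centre
rng = np.random.default_rng(0)
pts = rng.uniform(0, 5000, size=(1000, 2))
bcv = np.exp(-np.linalg.norm(pts - 2500.0, axis=1) / 1000.0)
print(radial_counts(pts, bcv, radii=[500, 1000, 2000]))
```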
in this respect , what happens under the haussmann supervision is natural as he acts on the spatial organization of centrality .we see indeed that in 1888 , the most central nodes form a more reticulated structure excluding concentration of centrality .a structure which remained stable until now .interestingly , we note that haussmann s new roads and avenues represent approximately of the total length only ( compared to nowadays network ) , which is a small fraction , considered that it has a very important impact on the centrality spatial organization .this reorganization of centrality was undertaken with creation of new roads and avenues destroying parts of the original pattern ( see fig . [fig : map_h ] and fig .[ fig : maps](b ) ) resulting in the modification of the geometrical structure of blocks ( defined here as the faces of the planar street network ) .the effect of haussmann modifications on the geometrical structure of blocks can be quantitatively measured by the distribution of the shape factor ( see methods ) shown in fig .[ fig : phi ] .we see that before the haussmann modifications , the distribution of is stable and is essentially centered around which corresponds to rectangles . from 1888 ,the distribution is however much flatter showing a larger diversity of shapes . in particular, we see that for small values of there is an important increase of demonstrating an abundance of elongated shapes ( triangles and rectangles mostly ) created by haussmann s works .these effects can be confirmed by observing the angle distribution of roads shown on fig .[ fig : angles ] where we represent on a polar plot with the probability that a road segment makes an angle with the horizontal line .before haussmann s modifications , the distribution has two clear peaks corresponding to perpendicular streets and in 1888 we indeed observe a more uniform distribution with a large proportion of various angles such as diagonals .in this paper , we have studied the evolution of the street network of the city of paris .this case is particularly interesting as paris experienced large modifications in the 19th century ( the haussmann period ) allowing us to try to quantity the effect of central planning .our results for central paris reveal that most indicators follow a smooth evolution , dominated by a densification process , despite the important perturbation that happened during haussmann . in our results ,the important quantitative signature of central planning is the spatial reorganization of the most central nodes , in contrast with other regions where self - organization dominated and which did nt experience such a large - scale structure modification .this structural reorganization was obtained by the creation at a large scale of new roads and avenues ( and the destruction of older roads ) which do not follow the constraints of the existing geometry .these new roads do not follow the densification / exploration process but appear at various angles and intersect with many other existing roads . while the natural , self - organized evolution of roads seems in general to be local in space ,the haussmann modifications happen during a relatively short time and at a large spatial scale by connecting important nodes which are far away in the network . 
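for completeness, the block shape factor used above can be computed as in the small sketch below; the diameter of the circumscribed circle is approximated by the largest distance between two vertices of the block, a common shortcut rather than the exact minimum enclosing circle.

```python
"""
Sketch of the block shape factor Phi = A_block / A_circumscribed_circle.
The circumscribed-circle diameter is approximated by the largest pairwise
distance between block vertices (approximation, not an exact enclosing circle).
"""
from itertools import combinations
import numpy as np

def shape_factor(vertices):
    v = np.asarray(vertices, dtype=float)         # block corners, in order
    x, y = v[:, 0], v[:, 1]
    area = 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))  # shoelace
    d = max(np.linalg.norm(a - b) for a, b in combinations(v, 2))
    return area / (np.pi * d ** 2 / 4.0)

square = [(0, 0), (100, 0), (100, 100), (0, 100)]
thin_triangle = [(0, 0), (200, 0), (100, 20)]
print(round(shape_factor(square), 3))          # ~0.64 for a square block
print(round(shape_factor(thin_triangle), 3))   # small value for an elongated block
```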
following the haussmann interventions ,the natural processes take over on the modified substrate .it is unclear at this stage if haussmann modifications were optimal and more importantly , if they were at a certain point inevitable and would have happened anyway ( due to the high level of congestion for example ) .more work , with more data on a larger spatial scale are probably needed to study these important questions .* temporal network data * we denote by the obtained primal graph at time , where and are respectively the set of nodes and links at time .the number of nodes at time is then and the number of links is . using common definitions , we thus have and , where and are respectively the new street junctions and the new streets added in time -1,t] ] we consider the new graph obtained by removing the link from and we denote this graph as .we compute again the average edge betweenness centrality , this time for the graph .finally , the impact of edge on the betweenness centrality of the network at time is defined as } { \overline{g}(g_t)}\ ] ] the bc impact is thus the relative variation of the graph average betweenness due to the removal of the link .* form factor * the shape or form factor of blocks is defined as the ratio of the area of the block and the area of the circumscribed circle of diameter ( see ) the more anisotropic the block and the smaller the factor .in figure [ fig : popu ] , we show the evolution of the number of nodes and of the population of paris ( for the 12 districts delimited by the ` fermiers generaux ' for the period 1789 - 1851 and after for the 20th districts of paris ) .the area under consideration for the calculation of the population is not exactly the same , and only the order of magnitude can be trusted here .we can compute the number of nodes versus the population and we observe a linear dependence with coefficient ( in previous studies , we also found a linear dependence [ 24 ] , but with a linear coefficient equal to ) .it is thus clear that the number of nodes follows the demographic population and that the large increase observed during the haussmann period is largely due to the demographic pressure . in figure[ fig : newlinks1 ] , we show the evolution of the proportion of the different types of new links . we see in this figure that the evolution is rather smooth and that from this point of view , the haussmann period is not radically different from previous ones .we consider here the evolution of the vertex bc with time . in figure[ fig : si_fig2 ] , we see that the average bc decreases slightly and that the overall probability distribution remains constant in time .the most central nodes are such as their centrality is . in the letter we consider and we show in figure [ fig :alpha ] the results for and .a visual inspection shows that the patterns are rather robust versus and that corresponds to an intermediate situation displaying interesting patterns . instead of the most central nodes, we can also represent the most central edges such that their centrality is .if we consider here we obtain for the different dates the results presented in fig .[ fig : ebc ] .we can see that the pattern for the edges is naturally consistent with the one obtained with the node centrality .99 * acknowledgements . 
* mb acknowledges funding from the eu commission through project eunoia (fp7-dg.connect-318367). hb acknowledges funding from the european research council under the european union's seventh framework programme (fp/2007-2013) / erc grant agreement n.321186 - readi - reaction-diffusion equations, propagation and modelling.
interventions of central, top-down planning are serious limitations to the possibility of modelling the dynamics of cities. an example is the city of paris (france), which during the 19th century experienced large modifications supervised by a central authority, the `haussmann period'. in this article, we report an empirical analysis of more than 200 years (1789-2010) of the evolution of the street network of paris. we show that the usual network measures display a smooth behavior and that the most important quantitative signatures of central planning are the spatial reorganization of centrality and the modification of the block shape distribution. such effects can only be obtained by structural modifications at a large-scale level, with the creation of new roads not constrained by the existing geometry. the evolution of a city thus seems to result from the superimposition of continuous, local growth processes and punctual changes operating at large spatial scales.
buckling is a common mode of mechanical failure , and its prevention is key to any successful engineering design . as early as 1759, euler gave an elegant description of the buckling of a simple beam , from which the so - called euler buckling limit was derived .works which cite the goal of obtaining structures of least weight stable against buckling can be found throughout the literature , and much understanding has been gained on optimal structural design .designs of ever increasing complexity have been analysed and recent work suggested that the optimal design of non - axisymmetric columns may involve fractal geometries . with the development of powerful computers , more and more complicated structures can be designed with optimised mechanical efficiency. however , understanding and preventing buckling remains as relevant as ever . in this paper , we consider a simple uniform elastic beam , freely hinged at its ends and subjected to a compressive force and therefore vulnerable to buckling .however , in contrast to euler s original problem , we specify that the beam is stabilized by restoring forces , perpendicular to its length , which are provided by an elastic foundation ( as illustrated in figure [ beam1 ] ) .this represents a simple and practical method of protecting against buckling instabilities . in the simplest case figure[ springs ] , we can imagine this elastic foundation as a finite collection of linear springs at points along the beam .each has a spring constant , and so provides a restoring force at this point , proportional to the lateral deflection of the beam .more generally , the elastic foundation could be distributed as a continuous function along the length of the beam , rather than being concentrated into discrete springs ( figure [ cont ] ) . in this case, there is a spring constant per unit length , which may vary along the beam .we are interested in optimising this elastic support , and so we need to specify a cost function for it .this we take to be the sum of the spring constants ( if there are a discrete collection of springs ) or the integral over the spring constant per unit length along the beam ( if the elastic foundation is continuous ) . by choosing the optimal distribution of these spring constants ,we wish to find the minimum cost of elastic support which will protect against buckling under a given compressive load ( or equivalently , the distribution of an elastic support of fixed cost which will support the maximum force ) .the optimal position of one or two deformable or infinitely stiff supports have been studied in the literature ( see for example ref . and references therein ) , and general numerical approaches established for larger numbers of supports .however , in the present paper , we consider the general case where any distribution of support is in principle permitted .a perturbation analysis shows that in the limit of weak support strength , the optimal elastic foundation is a concentrated delta - function at the centre of the beam , but when stronger supports are permitted , we show that the optimal solution has an upper bound on the proportion of the beam that remains unsupported . in this sense ,the optimum distribution becomes more uniform for higher values of support strength . 
to tackle the problem in more detail , we develop a transfer matrix description for the supported beam , and we find numerically that the optimal supports undergo a series of bifurcations , reminiscent of those encountered in iterated mapshowever , we are only able to proceed a limited distance in the parameter space and we are unable to explore for more complex behaviour ( for example , any possible signature of chaos ) .we obtain analytic expressions for the buckling load in the vicinity of the first bifurcation point and a corresponding series expansion for the optimal placement of elastic support . following this optimizationwe show that a mathematical analogy between the behaviour exhibited in this problem and that found in landau theory of second order phase transitions exists .however , the analogue of free energy is non - analytic , while in landau theory it is a smooth function of the order parameter and the control variable . our results , including critical exponentsare confirmed by computer simulations , and should provide a basis for future analysis on higher order bifurcations .a slender beam of length , hinged at its ends , under a compressive force , is governed by the euler - bernoulli beam equation : where is the young modulus of the beam , is the second moment of its cross sectional area about the neutral plane , is the lateral deflection , the distance along the beam and is the lateral force applied per unit length of beam . the beam is freely hinged at its end points and therefore the deflection satisfies at and . if the lateral force is supplied by an elastic foundation , which provides a restoring force proportional to the lateral deflection , then through rescaling we introduce the following non - dimensional variables , , and .( [ eq : ebu ] ) becomes where and represents the strength of the lateral support ( for example the number of springs per unit length ) at position .we are always interested in the minimum value of that leads to buckling [ in other words , the smallest eigenvalue of eq .( [ eq : eb ] ) ] . for the case of no support ( ) ,the possible solutions to eq .( [ eq : eb ] ) are , and so buckling first occurs when .lateral support improves the stability ( increasing the minimum value of the applied force at which buckling first occurs ) , but we imagine that this reinforcement also has a cost .in particular , for a given value of we seek the optimal function which maximises the minimum buckling force . the simplest choice we can imagine is that takes the uniform value , so that the form of deflection is , for some integer , which represents a wavenumber .this leads immediately to the result that in this case .\label{const}\ ] ] eq .( [ const ] ) has a physical interpretation : the first term comes from the free buckling of the column which is most unstable to buckling on the longest allowed length scales ( i.e. the smallest values of ) , as demonstrated by euler .the second term represents the support provided by the elastic foundation , which provides the least support at the shortest length scales ( largest values of ) .the balance between these two terms means that as , the uniformly supported column buckles on a length scale of approximately and can support a load now , although a uniform elastic support is easy to analyse , it is clear that this is not always optimal .consider the case where is very small , so that provides a small correction in eq .( [ eq : eb ] ) . 
in this case, the eigenvalues remain well - separated , and we can treat the equation perturbatively : let then from eq .( [ eq : eb ] ) , if we multiply through by ( the lowest unperturbed eigenfunction ) and integrate , we have to leading order : + y_{0}\sin^{2}x \left[\rho - f_{1}\right ] \right\ } { \rm d}x=0.\ ] ] repeated integrations by parts with the boundary conditions at establishes the self - adjointness of the original operator , and we arrive at we therefore see that in the limit , the optimal elastic support is , and for this case , . the requirement for optimal supporthas therefore concentrated the elastic foundation into a single point , leaving the remainder of the beam unsupported .in order to proceed to higher values of in the optimization problem , we assume that there are discrete supports at the positions , with corresponding set of scaled spring constants , adding up to the total : these discrete supports divide the beam into ( not necessarily equal ) segments , and for convenience in later calculations , we also define the end points as and . for each segment of the beam given by , the euler - bernoulli equation ( [ eq : eb ] )can be solved in the form + b_n \cos[f^{1/2}(x - x_{n } ) ] \nonumber \\+ c_n ( x - x_{n})+ d_n . \label{piece}\end{aligned}\ ] ] if we integrate eq .( [ eq : eb ] ) over a small interval around , we find that , where and are values infinitesimally greater and less than than respectively .defining , these continuity constraints on the piecewise solution of eq .( [ piece ] ) can be captured in a transfer matrix where is given by and ,\nonumber \\k_{n } & \equiv & \cos[f^{1/2}(x_{n}-x_{n-1})].\nonumber\end{aligned}\ ] ] at the two end - points at , , we have the boundary conditions that and vanish , which leads to the following four conditions if we now define a matrix then eqs .( [ bc1]-[bc3 ] ) lead to where for the beam to buckle , there needs to be non - zero solutions for and/or .therefore , the determinant of , which is a function of , must go to zero .the smallest , , at which , gives the maximum compression tolerated by the beam and its support .the task , thus , is to find the set of and which maximise .any definite choice of provides a lower bound on the maximum achievable value of , so before discussing the full numerical optimization results on , we consider here a simple choice of which illuminates the physics .suppose that consists of equally spaced , equally strong delta - functions : the value of can be found by a straight - forward calculation for each value of , using the transfer matrix formulation above .the results are plotted in figure [ const_and_equal ] , and we see that in general , it is better to concentrate the elastic support into discrete delta functions , rather than having a uniform elastic support . 
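the comparison of figure [ const_and_equal ] can be reproduced, at least qualitatively, with a short numerical sketch: discretizing y'''' + f y'' + rho(x) y = 0 on (0, pi) with hinged ends and solving the resulting generalized eigenvalue problem for the smallest load f. the interval length, the normalization of the support cost and the parameter values below are assumptions made for illustration and need not match the exact conventions used in the paper.

```python
"""
Numerical sketch (not the authors' code): smallest buckling load f_min of
y'''' + f y'' + rho(x) y = 0 on (0, pi), hinged ends, via finite differences
and the generalized eigenvalue problem (D4 + diag(rho)) y = f (-D2) y.
A delta support of strength m at position x0 is modelled as rho = m/h at the
nearest grid node.  Interval length and cost normalization are assumptions.
"""
import numpy as np
from scipy.linalg import eigh

def f_min(rho_of_x=None, deltas=(), n=400):
    h = np.pi / (n + 1)
    x = np.linspace(h, np.pi - h, n)               # interior grid points
    D2 = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
          + np.diag(np.ones(n - 1), -1)) / h ** 2
    D4 = D2 @ D2                                    # valid for hinged (y = y'' = 0) ends
    rho = rho_of_x(x) if rho_of_x else np.zeros(n)
    for m_i, x_i in deltas:
        rho[np.argmin(np.abs(x - x_i))] += m_i / h  # delta support at nearest node
    A = D4 + np.diag(rho)
    return eigh(A, -D2, eigvals_only=True)[0]       # smallest generalized eigenvalue

print(f_min())                                      # no support: ~1 (free euler mode)
m = 0.5
print(f_min(deltas=[(m, np.pi / 2)]))               # ~ 1 + 2*m/pi for small m
M = 100.0                                           # total support cost
print(f_min(rho_of_x=lambda x: np.full_like(x, M / np.pi)))       # uniform foundation
print(f_min(deltas=[(M / 4, (j + 1) * np.pi / 5) for j in range(4)]))  # 4 equal deltas
print(min(k ** 2 + (M / np.pi) / k ** 2 for k in range(1, 20)))   # analytic, uniform case
```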
however , it is important to choose the appropriate number of delta functions : if the number is too few , then there will always be a buckling mode with which threads through the comb of delta functions without displacing them .however , apart from this constraint , it appears to be advantageous to choose a smaller value of ; in other words , to concentrate the support .results of the restricted optimization , obtaining the set with constant .,width=288 ] before we look at the general optimization problem where we will seek the optimal set of and for a given cost , we investigate a simplified problem to give us further insight into the nature of the problem .we set and then find the set which maximises .the results obtained from an exhaustive search are shown in figure [ res_bif ] , where we find two bifurcation points in the range the critical exponent of each has been obtained through simulation as , for the first and second bifurcation respectively .figure [ exponents ] shows the data from which the exponents are taken , where values of and used are , for the first and second bifurcation respectively .the value of for the lower branching event at is related to the upper branch by symmetry about the midpoint of the beam .as discussed previously , the optimal solution must split further at higher values of .we hypothesize that within this restricted problem these splits will take the form of bifurcations similar in nature to those found here .showing the critical exponents for the first and second bifurcation in the restricted problem of .,width=288 ] now we turn to the full optimization problem , where the values as well as the positions of the supports may vary . using the transfer matrix formulation , we seek the optimal elastic support consisting of delta functions .figure [ optimal_f ] shows the best solutions , found from an exhaustive search of four delta functions ( ) , up to .we see in figure 4 that there are two bifurcation events , and one coalescence of the branches . because the optimal solution can not contain long intervals with no support ( see section [ largem ] below ), we expect that if continued to larger values of and , a series of further bifurcation events would lead to a complex behaviour which would eventually fill the interval with closely spaced delta functions as .value of for the optimal form of and also for comparison constant , and for equally spaced , equally strong delta functions . , width=288 ] position of optimal springs as a function of .the area of each circle is proportional to the strength of the relevant support , with the total area of all the circles at each value of chosen to be a constant , independent of ., width=288 ]numerical results ( figure [ optimal_f ] ) indicate that although a single delta function at is the optimal form for in the limit , at some point the optimal support bifurcates .it is clear that this first bifurcation must happen at , since this represents the excitation of the first anti - symmetric buckling mode in the unsupported beam , and the delta function at provides no support against this mode .although the value of at this first branch point is clear , neither the value of at which it occurs , nor the nature of the bifurcation are immediately obvious. three dimensional plot of as a function of the position parameter and . 
] in order to clarify the behaviour at this first branch point , we perform a perturbation expansion : let us suppose that and where and are clearly equivalent , and we will quote only the positive value later . thus are given by and .we wish to evaluate the matrix in eq .( [ m ] ) and seek the smallest giving a zero determinant . on performing a series expansion of the determinant for near , we find that the critical value of is . furthermore , if we define small quantities and through where and and are order quantities and then we can perform a series expansion of in the neighbourhood of , to obtain term by term a series expansion for .we find that there are two solutions , and , which correspond to functions symmetric and anti - symmetric about respectively : \nonumber \\+ |\xi| \left[0 + o(\mu^5 ) \right ] \nonumber \\ + \xi^2 \left[\frac{2\pi}{9}\mu+\frac{\pi^2}{72}\mu^2 -\frac{\pi^3\left(3+\pi^2\right)}{93312}\mu^3+o(\mu^4)\right ] \nonumber \\+ |\xi^3 |\left [ -\frac{128}{9\pi}-\frac{40}{27}\mu -\frac{\pi\left(15-\pi^2\right)}{486}\mu^2 + o(\mu^3)\right ] \nonumber \\+ \xi^{4 } \left [ 0+o(\mu^2)\right ] + |\xi^5 |\left [ -\frac{1024}{135\pi}+o(\mu)\right ] \label{f+}\end{aligned}\ ] ] + \xi^2 \left[\frac{32}{\pi^2}+\frac{2}{\pi}\mu + o(\mu^4)\right ] \nonumber \\+ |\xi^3| \left[0 + o(\mu^3 ) \right]\nonumber\\ + \xi^4 \left [ -\frac{(128\pi^2 + 576)}{3\pi^4}-\frac{(8\pi^2 + 72)}{3\pi^3}\mu + o(\mu^2)\right ] \nonumber \\ + |\xi^5|\left[\frac{512}{3\pi^3}+o(\mu)\right ] .\label{f-}\end{aligned}\ ] ] the final value for in this neighbourhood is then .the results are plotted in figure [ series ] , and we see that the behaviour of around the bifurcation point is not analytic , since the transition between the two branches and leads to a discontinuity in the derivatives of .the maximal value of ( i.e the optimum we are seeking ) , occurs for when , and along the locus when . from eqs .( [ f+ ] ) and ( [ f- ] ) , this leads to the optimal value of being this is shown in figure [ locus ] , together with the regions of the plane in which and apply .curve shows the locus of optimal values for near the first bifurcation point .this divides the plane into three regions , in which is given by either eq .( [ f+ ] ) or ( [ f- ] ) as indicated ., width=288 ]the results of our numerical optimisation suggests that the optimum support continues to take the form of a discrete set of delta - functions .here we investigate the possible form of the optimal support in the limit of large . as increases , the optimal distribution function must become more evenly distributed over the interval . to see inwhat sense this is true , we note that the eigenvalue problem for buckling modes given by eq .( [ eq : eb ] ) can be derived from an energy approach : suppose that is any deformation of the beam , then the energy of our system is given by {\rm d}x.\ ] ] any deformation which results in <0 $ ] means that the beam will be energetically allowed to buckle under this deflection .furthermore , the associated value of which just destabilises the system against this deformation can not be smaller than the lowest buckling mode . consider therefore a particular choice for , namely & x\in(x_{1},x_{2 } ) \\ 0 & x\in(x_{2},1 ) \end{array}\right . , \ ] ] which vanishes everywhere except on the interval , which is of length .then eq . 
( [ u ] ) , together with the observation above about leads to \le\frac{2\pi^{2}}{\lambda^{2}}+\frac{\lambda}{\pi^{2 } } \int_{\omega}\rho(x)\sin^{4}\left[\frac{\pi(x - x_{1})}{\lambda } \right]{\rm d}x.\ ] ] trivially , we note from the definition of , that \le f_{\min}[\rho_{\rm opt}(x)],\ ] ] so that from eqs .( [ uni ] ) , ( [ u2 ] ) and ( [ op ] ) , we finally arrive at a condition for how evenly distributed must be for large : {\rm d}x\ge \frac{2\pi^{3/2}m^{1/2}}{\lambda}-\frac{2\pi^{4}}{\lambda^{3}}.\ ] ] a simple corollary of eq . ( [ gaps ] ) is that if is zero on any interval of length , then it must be the case that the scaling of this length with is the same as the effective buckling length of a uniformly supported beam discussed earlier .the optimal elastic support for our column appears to display complex behaviour : at small values of the support is a single delta function , and even at large values of , it appears to be advantageous for to be concentrated into discrete delta - functions rather than to be a smooth distribution .furthermore , the manner in which the system moves from a single to multiple delta functions is not trivial , and appears to be through bifurcation events . in the full optimization problemwe find that the first bifurcation event occurs with critical exponent of one half .inverting eq .( [ xi_opt ] ) and substituting it into either eq .( [ f+ ] ) or ( [ f- ] ) we find that , while to leading order , in this form , the mathematical similarities to landau theory of second order phase transitions become apparent , with playing the role of the order parameter , the reduced temperature and the free energy to be minimized . however , there is an important difference . in landau theory of second orderphase transitions , the free energy is assumed to be a power series expansion in the order parameter with leading odd terms missing : where , the reduced temperature . in our case , the buckling force has to be first optimised for even and odd buckling .thus ( which is the analogue of ) is a minimum over two intersecting surfaces ( figure [ series ] ) and so non - analytic at the point of bifurcation . nevertheless , the mathematical form of the solution in eq .( [ xiopt ] ) is the same , including the critical exponent .furthermore , our numerical results show that , for the equal support case , the critical exponent is preserved for the next bifurcation , suggesting that the nature of subsequent bifurcations will also remain the same .the details of the behaviour for larger values of is as yet unclear : we speculate that there will be a cascade of bifurcations , as seen in the limit set of certain iterated maps ; it remains an open question whether there is an accumulation point leading to potential chaotic behaviour .further investigation of this regime may shed light on structural characteristics required to protect more complex engineering structures against buckling instabilities .the authors wish to thank edwin griffiths for useful discussions .the figures were prepared with the aid of ` grace ' ( plasma-gate.weizmann.ac.il/grace ) , ` gnuplot ' ( http://www.gnuplot.info ) and ` xfig ' ( www.xfig.org ) .series expansions were derived with the aid of ` maxima ' ( maxima.sourceforge.net ) .
we investigate the buckling under compression of a slender beam with a distributed lateral elastic support, for which there is an associated cost. for a given cost, we study the optimal choice of support to protect against euler buckling. we show that with only weak lateral support, the optimum distribution is a delta-function at the centre of the beam. when more support is allowed, we find numerically that the optimal distribution undergoes a series of bifurcations. we obtain analytical expressions for the buckling load around the first bifurcation point and corresponding expansions for the optimal position of support. our theoretical predictions, including the critical exponent of the bifurcation, are confirmed by computer simulations.
a model serves as a mathematical abstraction of the physical system , providing a framework for system analysis and controller synthesis .since such mathematical representations are based on assumptions specific to the process being modeled , it s important to quantify the reliability to which the model is consistent with the physical observations .model quality assessment is imperative for applications where the model needs to be used for prediction ( e.g. weather forecasting , stock market ) or safety - critical control design ( e.g. aerospace , nuclear , systems biology ) purposes .here it is important to realize that a model can only be validated against experimental observations , not against another model .thus a _ model validation problem _ can be stated as : _ given a candidate model and experimentally observed measurements of the physical system , how well does the model replicate the experimental measurements ? _ it has been argued in the literature that the term ` model validation ' is a misnomer since it would take infinite number of experimental observations to do so .hence the term ` model invalidation ' or ` falsification ' is preferred . in this paper , instead of hard invalidation , we will consider the validation / invalidation problem in a probabilistically relaxed sense. broadly speaking , there have been three distinct frameworks in which the model validation problem has been attempted till now .* one * is a discrete formulation in _ temporal logic framework _ which has been extended to account probabilistic models . *second * is the _ control framework _ where time - domain , frequency domain and mixed domain model validation methods have been studied assuming structured norm - bounded uncertainty in linear dynamics setting .the * third * framework involves deductive inference based on barrier certificates which was shown to encompass a large class of nonlinear models including differential - algebraic equations , dynamic uncertainties described by integral quadratic constraints , stochastic and hybrid dynamics . in statistical setting, model validation has been addressed from system identification perspective where the main theme is to validate an identified nominal model through correlation analysis of the residuals .a polynomial chaos framework has also been proposed for model validation .gevers _ et . have connected the robust control framework with prediction error based identification for frequency - domain validation of linear systems . in another vein , using bayesian conditioning , lee and poolla showed that for _ parametric _ uncertainty models , the statistical validation problem may be reduced to the computation of relative weighted volumes of convex sets .however , for _ nonparametric _ models : the situation is significantly more complicated " and to the best of our knowledge , has not been addressed in the literature .recently , in the spirit of weak stochastic realization problem , ugrinovskii investigated the conditions for which the output of a stochastic nonlinear system can be realized through perturbation of a nominal stochastic _ linear _ system . in practice, one often encounters the situation where a model is either proposed from physics - based reasoning or a reduced order model is derived for computational convenience . 
in either case , the model can be linear or nonlinear , continuous or discrete - time , and in general , it s not possible to make any a - priori assumption about the noise .given the experimental data and such a candidate model for the physical process , our task is to answer : to what extent , the proposed model is valid ? " in addition to quantify such degree of validation , one must also be able to demonstrate that the answer is _ provably correct _ in the face of uncertainty .this brings forth the notion of _ probabilistically robust model validation_. in this paper , we will show how to construct such a _ robust validation certificate _, guaranteeing the performance of probabilistic model validation algorithm . with respect to the literature ,the contributions of this paper are as follows. 1 . instead of interval - valued structured uncertainty ( as in control framework ) or moment based uncertainty ( as in parametric statistics framework ), this paper deals with model validation in the sense of nonparametric statistics .uncertainties in the model are quantified in terms of the probability density functions ( pdfs ) of the associated random variables .we argue that such a formulation offers several advantages . _firstly _ , we show that model uncertainties in the parameters , initial states and input disturbance , can be propagated accurately by spatio - temporally evolving the joint state and output pdfs .since experimental data usually come in the form of histograms , it s a more natural quantification of uncertainty than specifying sets to which the trajectories are contained at each instant of time . however , if needed , such sets can be recovered from the supports of the instantaneous pdfs . _secondly _ , as we ll see in section 5 , instead of simply invalidating a model , our methodology allows to estimate the probability that a proposed model is valid or invalid .this can help to decide which specific aspects of the model need further refinement .hard invalidation methods do nt cater such constructive information ._ thirdly _ , the framework can handle both discrete - time and continuous - time nonlinear models which need not be polynomial .previous work like dealt with semialgebraic nonlinearities and relied on sum of squares ( sos ) decomposition for computational tractability . from an implementation point of view , the approach presented in this paper does nt suffer from such conservatism .2 . due to the uncertainties in initial conditions , parameters , and process noise, one needs to compare output ensembles instead of comparing individual output realizations .this requires a metric to quantify closeness between the experimental data and the model in the sense of distribution .we propose _ wasserstein distance _ to compare the output pdfs and argue why commonly used information - theoretic notions like _ kullback - leibler divergence _ may not be appropriate for this purpose .we show that the uncertainty propagation through continuous or discrete - time dynamics can be done via numerically efficient meshless algorithms , even when the model is high - dimensional and strongly nonlinear .moreover , we outline how to compute the wasserstein distance in such settings .further , bringing together ideas from analysis of randomized algorithms , we give sample - complexity bounds for robust validation inference .the paper is organized as follows . 
in section 2, we describe the problem setup .then we expound on the three steps of our validation framework , viz .uncertainty propagation , distributional comparison and construction of validation certificates in section 3 , 4 and 5 , respectively .we provide numerical examples in section 6 , to illustrate the ideas presented in this paper .the concept of worst - case initial uncertainty related to model discrimination , is addressed in section 7 .section 8 presents some results for discrete - time linear gaussian systems , followed by conclusions in section 9 .we use the superscript to denote matrix transpose , to denote kronecker product , and the symbol to denote minimum of two real numbers .the notation stands for generalized hypergeometric function .the symbols , , and are used for normal , uniform and arcsine distributions , respectively .we use the notation to denote the joint pdf over initial states and parameters . and denote joint pdfs over instantaneous states and parameters , for the true and model dynamics , respectively .similarly , and , respectively denote joint pdfs over output spaces and at time , for the true and model dynamics . the symbol is used to denote the extended state vector obtained by augmenting the state ( ) and parameter ( ) vectors .we use to denote indicator function and # to denote cardinality . unless stated otherwise , stands for dirac delta .the symbol denotes the -by- identity matrix , denotes gradient operator with respect to vector , stands for the vectorization operator , and denotes the frobenius norm . and stand for trace and determinant of a matrix ._ and _ i.p ._ refer to convergence in _ almost sure _ and _ in probability _ sense .the shorthand means partial derivative with respect to variable , denotes support of a function , and stands for error function .the proposed framework is based on the evolution of densities in output space , instead of evolution of individual trajectories , as in the lyapunov framework .intuitively , characteristics of the input to output mapping is revealed by the growth or depletion of trajectory concentrations in the output - space .growth in concentration , or increased density , defines regions in where the trajectories accumulate .this corresponds to regions with slow time scale dynamics or time invariance .similarly , depletion of concentration in a set implies fast - scale dynamics or unstable manifold .we refer the readers to for an introduction to analysis of dynamical systems using trajectory densities .this idea of comparing dynamical systems based on density functions , have been presented before by sun and mehta in the context of filtering , and by georgiou in the context of matching power spectral densities . given the experimental measurements of the physical system in the form of a time - varying distribution ( such as histograms ) , we propose to compare the _ shape _ or _ concentration profile _ of this measured output density , with that predicted by the model . at every instant of time ,if the model - predicted density matches with the experimental one _ reasonably well _ " ( to be made precise later in the paper ) , we conclude that the model is validated with high _ confidence _ ( to be computed for guaranteeing quality of inference ) . 
the rationale behind comparing the distributional shapes for model validation comes from the fact that the presence of uncertainties mask the difference between individual output realizations .uncertainties in initial conditions , parameters and noise result different realizations of the trajectory or integral curve of the dynamical system .regions of high ( low ) concentration of trajectories correspond to regions of high ( low ) probability .thus a model validation procedure should naturally aim to compare concentrations of the trajectories between the measurements and model - predictions , instead of comparing individual realizations of them , which would be meaningful only in the absence of uncertainties .we would like to point out that in some applications , the measurement naturally arises in the form of a distribution .this includes ( i ) * process industry applications * like measurement made at the wet end of papermaking machines that involves the fibre length and filler size distribution sensed via vision sensors , ( ii ) * nuclear magnetic resonance ( nmr ) spectroscopy and imaging ( mri ) applications * where the measurement variable is magnetization distribution , ( iii ) * neuroscience applications * where the measurement variable is the distribution of frequency across a collection of neurons , and ( iv ) * social systems * where the measurement variable could be an ensemble of crowd sensed via cameras or motion detectors .notice that for ( i ) and ( iii ) , distributional measurement is a design choice ; for ( ii ) it is motivated by technological limitations of sensing individual magnetization states where the number of states are of the order of avogadro number ; and for ( iv ) individual measurement may raise privacy concerns . density based model validation provides natural advantages over moment based or set containment methods for the following reasons .moment based methods can be erroneous for nonlinear non - gaussian systems , as two different trajectory densities may provide the same correlation information .this can be circumvented by including higher order moments , but it is not computationally tractable for high dimensional systems .set containment arguments can also be erroneous as it is possible that at a given time , two systems have trajectory densities with identical supports but different concentrations ( fig .[ intuition1 ] ( c ) ) .a proposed model is validated , if the distance " between its predicted density and the measured density , remains below a user - specified tolerance level , which need not be fixed over time .for example , take - off and landing are critical operational segments during the flight of a commercial aircraft , and it s unacceptable to have a controller that does not guarantee the robust performance for these critical time - segments with very high probability .this motivates the computation of probability of validation as part of the model validation oracle . in this section ,we formalize the ideas presented above .[ intuition1 ] and [ blockdiagram ] show the outline of the model validation framework proposed here . 
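as a small aside illustrating the moment-based caveat raised above (an added sketch, not taken from the source), the snippet below builds two scalar ensembles with identical mean and variance — gaussian samples and a symmetric two-point mixture — so a second-moment test cannot separate them, while their empirical 2-wasserstein distance is clearly nonzero.

```python
# illustration: two output ensembles with matched mean and variance can still
# differ markedly in shape; a moment test cannot separate them, W_2 can.
import numpy as np

rng = np.random.default_rng(0)
n = 50_000

samples_gauss = rng.normal(0.0, 1.0, n)      # N(0, 1)
samples_mix = rng.choice([-1.0, 1.0], n)     # two-point mixture: mean 0, var 1

print("means     :", samples_gauss.mean(), samples_mix.mean())
print("variances :", samples_gauss.var(), samples_mix.var())

# empirical 2-Wasserstein distance in 1-D: L2 distance between sorted samples
w2 = np.sqrt(np.mean((np.sort(samples_gauss) - np.sort(samples_mix))**2))
print("empirical W_2 :", w2)   # clearly nonzero despite matched moments
```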
in this formulation ,the systems under comparison are excited with a _ known _ input signal , and an initial pdf , supported over the extended state space , where the states , and the parameters .given the pdf supported over the true output space , and a candidate model , we compute and then compare the model predicted output pdf , with at each instances of measurement availability .thus , one can think of three distinct steps of such a model validation framework .these are : 1 . evolving using the proposed model , to compute , 2 . measuring an appropriate notion of distance , denoted as in fig .[ blockdiagram ] , between and at , 3 .probabilistic quantification of provably correct inference in this framework and providing sample complexity bounds for the same .now we will elicit each of these steps .consider the continuous - time nonlinear model with state dynamics given by the ode , where is the state vector , is the parameter vector , the dynamics , and is at least locally lipschitz .it can be put in an extended state space form the output equation can be written as where is the output vector . if uncertainties in the initial conditions and parameters are specified by the initial joint pdf , then the evolution of uncertainties subject to the dynamics ( [ extendedstatespace ] ) , can be described by evolving the joint pdf over the extended state space .such spatio - temporal evolution of is governed by the _ stochastic liouville equation _ ( sle ) given by ( section 7.6 in ) which is a quasi - linear partial differential equation ( pde ) , first order in both space and time .notice that , the spatial operator is a drift operator that describes the _ advection _ of the pdf in extended state space .the output pdf can be computed from the state pdf as where is the ^th^ root of the inverse transformation of ( [ outputdynamics ] ) with , and is the jacobian of this inverse transformation .consider the continuous - time nonlinear model with state dynamics given by the it sde where is the -dimensional wiener process at time , and the noise coupling .for the wiener process , at all times = 0 , \ ; \mathbb{e}\left[d\beta_{i}d\beta_{j}\right ] = q_{ij } = \alpha_{i } \ : \delta_{ij } \ ; \forall \ : i , j = 1,\hdots,\omega , \label{wienerprocess}\end{aligned}\ ] ] where ] , its symmetrized version , are not metrics .on the other hand , hellinger distance , and the square - root of jensen - shannon divergence ] , where , is the complete beta function , and denotes the gamma function .the differential entropy for beta family can be computed as where , is the digamma function .since ( [ betaentropy ] ) remains invariant under , , and have same entropy , but one is skewed to right and the other to left , as shown in fig .[ betasymmeric ] . fig .[ isoentropybetafamily ] shows the isentropic contours of beta pdfs in space . any pair of * distinct * points chosen on these contours , results two beta pdfs with non - identical shapes , as revealed by fig .[ somethingelse ] and appendix a. ( * shape difference * ) consider two -dimensional homoscedastic gaussian pdfs and , such that .since the only * difference * between the two pdfs is the location of their means , a shape - discriminating distance is expected to be a function of , and should not depend on the covariance matrix i.e. * shapes of the individual pdfs*. in this situation , and . 
if we introduce , then , where is the rayleigh quotient corresponding to the positive semi - definite precision matrix .it s known ( chap .7 , ) that if we denote as the convex hull of the eigenvalues of the precision matrix , then . in particular , and these extrema are attained when respectively coincides with the minimum and maximum eigenvector of .thus the spectrum of governs the magnitude of the ratio , even when is kept fixed .in particular , the ratio assumes unity iff .further discussions on the inadequacy of for capturing shape characteristics and the utility of wasserstein distance for the same , can be found in .( * single output systems*) at time , let and be the cumulative distribution functions ( cdfs ) corresponding to the univariate pdfs and , respectively .then where is the optimizer in ( [ wasserstein ] ) .( * linear gaussian systems * ) consider stable , observable lti system pairs in continuous and discrete time : where . are wiener processes with auto - covariances , , and are gaussian white noises with covariances . if the initial pdf , then the wasserstein distance between output pdfs , is given by where , . for the continuous - time case , and for the discrete - time case , to be solved with , and .deterministic results are recovered from above by setting the diffusion matrix .( * asymptotic wasserstein distance * ) in table [ wasstable ] , we have listed asymptotic wasserstein distances between different pairs of stable dynamical systems .the asymptotic between two deterministic linear systems ( * first row * ) is zero since the origin being unique equilibria for both systems , dirac delta is the stationary density for both . for a pair of deterministic affine systems ( * second row * ) ,asymptotic is simply the norm between their respective fixed points .this holds true even for a pair of nonlinear systems , each having a * unique * globally asymptotically stable equilibrium . for the stochastic linear case ( * third row * ) , , and ; where respectively solve , and . and are process noise covariances associated with wiener processes and . for the * fourth * and * fifth row * , the set of stable equilibria for the true and model nonlinear system , are given by and , respectively .further , we assume that the nonlinear systems have no invariant sets other than these stable equilibria .in such cases , the stationary densities are convex sum of dirac delta densities , located at these equilibria .the weights for this convex sum , denoted as and , depend on the initial pdf .in particular , if we denote as the * region - of - attraction * of the ^th^ equilibrium , then ( see appendix b ) .\label{massfractionformula}\end{aligned}\ ] ] to further illustrate this idea , a numerical example corresponding to the * fourth row * in table [ wasstable ] , will be provided in section 6 .[ massfractionremark ] [ cols="^,^,^,^",options="header " , ] [ wasstable ] computing wasserstein distance from ( [ wasserstein ] ) calls for solving _ monge - kantorovich optimal transportation plan _ . 
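before the general transportation program is set up below, the single-output formula above can be checked directly from samples: for equal-size ensembles the monotone (sorted) coupling is optimal in one dimension, so the quantile integral becomes an l2 distance between sorted values. the following added sketch compares that estimate with the scalar gaussian closed form W_2^2 = (m_1 - m_2)^2 + (s_1 - s_2)^2; sample sizes and parameters are arbitrary.

```python
# empirical check of the single-output Wasserstein formula
#   W_2^2 = \int_0^1 ( F^{-1}(t) - G^{-1}(t) )^2 dt
# against the closed form for scalar gaussians: (m1 - m2)^2 + (s1 - s2)^2.
import numpy as np

def w2_empirical(x, y):
    """2-Wasserstein distance between two equal-size scalar sample sets."""
    x, y = np.sort(x), np.sort(y)   # monotone coupling is optimal in 1-D
    return np.sqrt(np.mean((x - y)**2))

rng = np.random.default_rng(1)
m1, s1, m2, s2 = 0.0, 1.0, 2.0, 0.5
n = 200_000
x = rng.normal(m1, s1, n)
y = rng.normal(m2, s2, n)

print("empirical  :", w2_empirical(x, y))
print("closed form:", np.sqrt((m1 - m2)**2 + (s1 - s2)**2))
```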
in this formulation ,the difference in shape between two statistical distributions is quantified by the minimum amount of work required to convert a shape to the other .the ensuing optimization , often known as _ hitchcock - koopmans problem _ , can be cast as a linear program ( lp ) , as described next .consider a complete , weighted , directed bipartite graph with and .if , and , then the edge weight denotes the cost of transporting unit mass from vertex to .then , according to ( [ wasserstein ] ) , computing translates to subject to the constraints the objective of ( [ hitchcockkoopmanslp ] ) is to come up with an optimal mass transportation policy associated with cost .clearly , in addition to constraints ( c1)(c3 ) , ( [ hitchcockkoopmanslp ] ) must respect the necessary feasibility condition denoting the conservation of mass . in our context of measuring the shape difference between two pdfs, we treat the joint probability mass function ( pmf ) vectors and to be the marginals of some unknown joint pmf supported over the product space .since determining joint pmf with given marginals is not unique , ( [ hitchcockkoopmanslp ] ) strives to find that particular joint pmf which minimizes the total cost for transporting the probability mass while respecting the normality condition .notice that the finite - dimensional lp ( [ hitchcockkoopmanslp ] ) is a direct discretization of the wasserstein definition ( [ wasserstein ] ) , and it is known that the solution of ( [ hitchcockkoopmanslp ] ) is asymptotically consistent with that of the infinite dimensional lp ( [ wasserstein ] ) . for a desired accuracy of wasserstein distance computation ,we want to specify the bounds for number of samples , for a given initial pdf .since the finite sample estimate of wasserstein distance is a random variable , we need to answer how large should be , in order to guarantee that the empirical estimate of wasserstein distance obtained by solving the lp ( [ hitchcockkoopmanslp ] ) , ( c1)(c3 ) with , is close to the true deterministic value of ( [ wasserstein ] ) in probability . in other words ,given , we want to estimate a lower bound of as a function of and , such that similar consistency and sample complexity results are available in the literature ( see corollary 9(i ) and corollary 12(i ) in ) for wasserstein distance of order . from hlder s inequality , for , and hence that sample complexity bound , in general , does not hold for .to proceed , we need the following results .( appendix c ) given random variables , , , such that , then for , we have [ randomvariableinequality ] ( * transportation cost inequality*) a probability measure is said to satisfy the -_transportation cost inequality _( tci ) of order , if there exists some constant such that for any probability measure , . in short , we write .in particular , for , we have .( * rate - of - convergence of empirical measure in wasserstein metric*)(thm .5.3 , ) for a probability measure , , and its -sample estimate , we have and .the optimization takes place over all probability measures of finite support , such that .[ rocempiricalmeasure ] we now make few notational simplifications . 
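an assumed implementation of the transportation lp above is sketched here before returning to the convergence analysis; scipy's linprog with the highs backend stands in for whatever solver was actually used. it computes W_2 between two small weighted point clouds from the squared-distance cost matrix and the two marginal constraints.

```python
# minimal Hitchcock-Koopmans / Monge-Kantorovich LP for the discrete
# 2-Wasserstein distance between two weighted point clouds in R^d.
import numpy as np
from scipy.optimize import linprog

def w2_lp(Y, p, Yhat, q):
    """Y: (m,d) true samples with weights p; Yhat: (n,d) model samples with weights q."""
    m, n = len(Y), len(Yhat)
    # cost c_ij = squared euclidean distance between sample i and sample j
    C = ((Y[:, None, :] - Yhat[None, :, :])**2).sum(axis=-1)   # (m, n)
    c = C.reshape(-1)                                          # row-major vectorization

    # marginal constraints on the joint pmf phi (vectorized row-major):
    #   sum_j phi_ij = p_i   and   sum_i phi_ij = q_j
    A_rows = np.kron(np.eye(m), np.ones((1, n)))   # (m, m*n)
    A_cols = np.kron(np.ones((1, m)), np.eye(n))   # (n, m*n)
    A_eq = np.vstack([A_rows, A_cols])
    b_eq = np.concatenate([p, q])

    res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=(0, None), method="highs")
    return np.sqrt(res.fun)   # optimal cost is W_2^2

rng = np.random.default_rng(2)
Y = rng.normal(size=(30, 2))
Yhat = rng.normal(loc=1.0, size=(40, 2))
p = np.full(30, 1 / 30)
q = np.full(40, 1 / 40)
print("W_2 between the two ensembles:", w2_lp(Y, p, Yhat, q))
```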
in this subsection , we denote and by and , and their finite sample representations by and , respectively .then we have the following result .( * rate - of - convergence of empirical wasserstein estimate * ) ( appendix d ) for true densities and , let corresponding empirical densities be and , evaluated at respective uniform sampling of cardinality and .let , , be the tci constants for and , respectively and fix .then [ wassestimateupperbound ] at a fixed time , , , and are constants in a given model validation problem , i.e. for a given pair of experimental data and proposed model .however , values of these constants depend on true and model dynamics . in particular , the tci constants and depend on the dynamics via respective pdf evolution operators .the constants and depend on and , which in turn depend on the dynamics . for pedagogical purpose , we next illustrate the simplifying case , .( * sample complexity for empirical wasserstein estimate * ) for desired accuracy , and confidence , , the sample complexity , for finite sample wasserstein computation is given by the lp formulation ( [ hitchcockkoopmanslp ] ) , ( c1)(c3 ) , requires solving for unknowns subject to constraints . for , it can be shown that the runtime complexity for solving the lp is .notice that the output dimension enters only through the cost in ( [ hitchcockkoopmanslp ] ) and hence affects the computational time linearly . in actual simulations, we found the runtime of the lp ( [ hitchcockkoopmanslp ] ) to be sensitive on how the constraints were implemented .suppose , we put ( [ hitchcockkoopmanslp ] ) in standard form where , , ^{\top} ] , then the implementation was found to achieve fast offline construction of the constraint matrix . for ,the constraint matrix in ( [ standardlpform ] ) , is a binary matrix of size , whose each row has ones .consequently , there are total ones in the constraint matrix and the remaining elements are zero . hence at any fixed time, the sparse representation of the constraint matrix needs # non - zero elements storage .the pmf vectors are , in general , fully populated .in addition , we need to store the model and true sample coordinates , each of them being a -tuple . hence at any fixed time , constructing cost matrix requires storing values .thus total storage complexity at any given snapshot , is , assuming .however , if the sparsity of constraint matrix is not exploited by the solver , then storage complexity rises to .for example , if we take samples and use double precision arithmetic , then solving the lp at each time requires either megabytes or gigabytes of storage , depending on whether or not sparse representation is utilized by the solver . for ,it is easy to verify that the sparse storage complexity is , and the non - sparse storage complexity is .often in practice , the exact initial density is not known to facilitate our model validation framework ; instead a class of densities may be known .for example , it may be known that the initial density is symmetric unimodal but its exact shape ( e.g. normal , semi - circular etc . ) may not be known . even when the distribution - type is known ( e.g. normal ) , it is often difficult to pinpoint the parameter values describing the initial density function . to account such scenarios ,consider a random variable , that induces a probability triplet on the space of initial densities . 
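the storage argument above is easy to reproduce: the equality-constraint matrix of the standard-form lp is binary with a kronecker structure, so only its nonzeros need to be kept. the snippet below is an added illustration with an arbitrary sample size N; it builds the matrix with scipy's sparse kronecker product and compares the nonzero count, 2N^2, with the 2N^3 entries a dense representation would hold.

```python
# constraint-matrix structure for the transport LP in standard form A x = b,
# built with sparse kronecker products; only the nonzeros need to be stored.
# the sample size N below is an arbitrary illustrative choice.
import numpy as np
import scipy.sparse as sp

N = 500                                   # samples from each ensemble (assumed)
A_rows = sp.kron(sp.identity(N, format="csr"), np.ones((1, N)), format="csr")
A_cols = sp.kron(np.ones((1, N)), sp.identity(N, format="csr"), format="csr")
A = sp.vstack([A_rows, A_cols], format="csr")   # (2N, N^2) binary matrix

print("shape                  :", A.shape)
print("nonzeros (= 2 N^2)     :", A.nnz)
print("dense entries (= 2 N^3):", A.shape[0] * A.shape[1])
# with double precision, dense storage grows like 16 N^3 bytes, while the
# sparse representation needs only O(N^2) index/value storage.
```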
here and .the random variable picks up an initial density from the collection of admissible initial densities according to the law of .for example , if we know with a given joint distribution over the space , then in our model validation framework , one sample from this space will return one distance measure between the instantaneous output pdfs .how many such samples are necessary to guarantee the robustness of the model validation oracle ?the chernoff bound provides such an estimate for finite sample complexity . at timestep , let the _ validation probability _be . here is the prescribed instantaneous tolerance level . if the model validation is performed by drawing samples from , then the _ empirical validation probability _is where .consider as the desired accuracy and confidence , respectively .( * chernoff bound*) for any , if , then .[ chernoffbound ] the above lemma allows us to construct _ probabilistically robust validation certificate _( prvc ) through the algorithm below . the prvc vector , with accuracy , returns the probability that the model is valid at time , in the sense that the instantaneous output pdfs are no distant than the required tolerance level .lemma [ chernoffbound ] lets the user control the accuracy and the confidence , with which the preceding statement can be made .thus the framework enables us to compute a provably correct validation certificate on the face of uncertainty with finite sample complexity . following , one can also define a probabilistic notion of the worst - case model validation performance as , and its empirical estimate .the sample complexity for probabilistically worst - case model validation is given by the lemma below .( * worst - case bound * ) ( p. 128 , ) for any , if , then . [ worstcasebound ] notice that in general , there is no guarantee that the empirical estimate is close to the true worst - case performance .also , the performance bound is obtained _ a posteriori _ while the robust validation framework accounted for _ a priori _ tolerance levels .the corresponding _ probabilistically worst - case validation certificate _( pwvc ) can be computed from the following algorithm . in summary , the algorithm , with high probability , only ensures that the output pdfs are at most far .the preceding statement can be made with probability at least .continuous - time deterministic dynamics consider the following nonlinear dynamical system the system has five fixed points , , , which can be solved by noting the abscissa values of the points of intersection of two curves and , as shown in fig [ fixedptabscissa ] . from linear analysis, it is easy to verify that and are stable foci while are saddles ( fig .[ contdetphaseportrait ] ) . to illustrate our model validation framework ,let s assume that ` true data ' is generated by the dynamics ( [ contdeterministic ] ) .however , this true dynamics is unknown to the modeler , whose proposed model is a linearization of ( [ contdeterministic ] ) about the origin .we emphasize here that the purpose of ( [ contdeterministic ] ) is only to create the synthetic data and to demonstrate the proof - of - concept . in a realistic model validation ,the data arrives from experimental measurements , not from another model . 
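stepping back to the certificate computation described earlier in this section, the randomized procedure is a plain loop: draw initial densities, compute the instantaneous wasserstein gap for each draw, and report the fraction within tolerance. the sketch below assumes the standard two-sided chernoff / hoeffding sample size N >= log(2/delta) / (2 eps^2), since the exact expression was lost in extraction, and uses a placeholder distribution of gaps in place of the expensive propagation step.

```python
# sketch of the randomized validation-certificate computation: draw initial
# densities, compute the instantaneous Wasserstein gap for each, and report
# the empirical probability that the gap stays within tolerance.  the sample
# size uses the standard two-sided Chernoff/Hoeffding bound
#   N >= log(2/delta) / (2 eps^2),
# which is assumed here because the exact expression was lost in extraction.
import numpy as np

def chernoff_samples(eps, delta):
    return int(np.ceil(np.log(2.0 / delta) / (2.0 * eps**2)))

eps, delta = 0.05, 0.01
N = chernoff_samples(eps, delta)
print("required number of sampled initial densities:", N)

rng = np.random.default_rng(3)
gamma = 0.5   # user-specified tolerance on W_2 (assumed)

# stand-in for the expensive step: for each sampled initial density, propagate
# both systems and compute the Wasserstein gap; here a placeholder distribution
# of gaps is drawn just to show the bookkeeping.
gaps = rng.gamma(shape=2.0, scale=0.2, size=N)
p_valid_hat = np.mean(gaps <= gamma)

print(f"empirical validation probability: {p_valid_hat:.3f} "
      f"(within +/- {eps} of the true value with confidence {1 - delta})")
```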
for simplicity , we take the outputs same as states for both true and model dynamics .starting from the bivariate uniform distribution \times \left[-\pi , \pi\right]\right ) = : \xi_{0} ] .continuous - time stochastic dynamics here we assume the true data to be generated by ( [ contdeterministic ] ) with additive white noise having autocorrelation , . letting and , the associated it sde can be written in state - space form similar to ( [ itosde ] ) where is a wiener process with autocorrelation . the stationary fokker - planck equation for ( [ contstochastic ] ) can be solved in closed form ( appendix e ) and one can verify that peaks of ( [ stationarydensitycontstoc ] ) appear at the fixed points of the nonlinear drift . let the proposed model be the linearization of ( [ contstochastic ] ) about the origin .it is well - known that the stationary density of a linear sde of the form , is given by provided is hurwitz and is a controllable pair .the steady - state covariance matrix solves . for the linearized version of ( [ contstochastic ] ) , and satisfy the aforementioned conditions and the stationary densityis obtained from ( [ linearstationarydensity ] ) .taking the initial density same as in example 3.1 , we propagated the joint pdfs for ( [ contstochastic ] ) and the linear sde using the klpf method described in .the _ dashed line _ in fig . [ wcontinuous ] shows the wasserstein trajectory for this case . the dash - dotted line in fig .[ wcontinuous ] shows the asymptotic wasserstein gap between the respective stationary densities ( [ stationarydensitycontstoc ] ) and ( [ linearstationarydensity ] ) . due to randomized sampling ,all stochastic computations are in probabilistically approximate sense .discrete - time deterministic dynamics let the true data be generated by the chebyshev map \mapsto \left[-1 , 1\right] ] , is proposed to model the data generated by ( [ chebyshevmap ] ) : the pf operator , for ( [ logisticmap ] ) is given by , \label{pfoperatorlogistic}\end{aligned}\ ] ] with stationary pdf , and cdf .taking the outputs identical to states , the asymptotic wasserstein distance between ( [ chebyshevmap ] ) and ( [ logisticmap ] ) , becomes given an initial density , the transient pdfs and can be computed from ( [ pfoperatorchebyshev ] ) and ( [ pfoperatorlogistic ] ) ( fig . [ mixedbagexamples](a ) and ( b ) ) .[ mixedbagexamples](c ) shows the transient wasserstein time - histories for various initial pdfs , which converge to its asymptotic value obtained analytically in ( [ asympwasschebyshevlogistic ] ) .discrete - time stochastic dynamics consider the true data being generated from the logistic map with multiplicative stochastic perturbation : where \mapsto \left[0 , 1\right] ] , drawn from noise density .this map has found applications in population dynamics and size - dependent branching processes .the pf operator for ( [ logisticmultiplicativenoise ] ) is given by ( p. 330- 331 , ) with the _ multiplicative _ stochastic kernel .in particular , results . the asymptotic behavior of ( [ logisticmultiplicativenoise ] ) is known to depend on the noise density . specifically , < 0 , = 0 ] .the measurement data are interval - valued sets ] at .a barrier certificate of the form was found in through sum - of - squares ( sos ) optimization where , and .the model was thereby invalidated by the existence of such certificate , i.e. the model , with parameter was shown to be inconsistent with measurements . 
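before returning to the barrier-certificate example, the chaotic-map comparison above can be reproduced by pushing an initial ensemble through each map and measuring the one-dimensional wasserstein gap at every iteration. the sketch below assumes the degree-2 chebyshev map T(x) = 2x^2 - 1 and the fully chaotic logistic map 4x(1-x) — standard choices, since the exact maps were lost in extraction — and shows the gap settling to a constant asymptotic value.

```python
# transient Wasserstein gap between two one-dimensional chaotic maps,
# estimated by pushing samples of an initial density through each map.
# the degree-2 Chebyshev map and the logistic map with r = 4 are assumed.
import numpy as np

def w2_empirical(x, y):
    x, y = np.sort(x), np.sort(y)
    return np.sqrt(np.mean((x - y)**2))

rng = np.random.default_rng(4)
n = 100_000
x_true = rng.uniform(0.0, 1.0, n)     # initial ensemble (same for both maps)
x_model = x_true.copy()

for k in range(1, 16):
    x_true = 2.0 * x_true**2 - 1.0              # "true" dynamics: Chebyshev map
    x_model = 4.0 * x_model * (1.0 - x_model)   # proposed model: logistic map
    print(f"k = {k:2d}   W_2 = {w2_empirical(x_true, x_model):.4f}")
# the printed gap settles to a constant: the asymptotic Wasserstein distance
# between the two invariant densities.
```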
to tackle this problem in our model validation framework , consider the spatio - temporal evolution of the joint pdf over the extended state space ^{\top} ] .our objective then , is to prove that for , the pdf is not finite - time reachable from , subject to the proposed model dynamics on the extended state space .the two - point boundary value problem has no solution for , such that , .[ recoverprajna ] proof .moc ode corresponding to the liouville pde , yields a solution of the form for the model dynamics , we have and .consequently ( [ liouvillemocgeneralformsolution ] ) results in particular , for , , and , ( [ prajnacubicmodelpdf ] ) requires us to satisfy since is an increasing function in both and , we need at least , which is incorrect .thus the pdf is not finite - time reachable from for , via the proposed model dynamics .hence our measure - theoretic formulation recovers prajna s invalidation result as a special case .* ( relaxation of set - based invalidation ) * instead of binary ( in)validation oracle , we can now measure the degree of validation " by computing the wasserstein distance between the model predicted and experimentally measured joint pdfs .more importantly , it dispenses off the conservatism in barrier certificate based model validation by showing that the goodness of a model depends on the measures over the same pair of supports and , than on the supports themselves .indeed , given a joint pdf supported over at , from ( [ prajnacubicmodelpdf ] ) we can explicitly compute the initial pdf supported over that , under the proposed model dynamics , yields the prescribed pdf , i.e. in other words , if the measurements find the initial density given by ( [ kindofinverseproblem ] ) and final density at , then the wasserstein distance at will be zero , thereby perfectly validating the model .this reinstates the importance of considering the * reachability of densities * over sets than * reachability of sets * , for model validation . *( connections with rantzer s density function - based invalidation ) * similar to barrier certificates , rantzer s density functions can provide deductive invalidation guarantees ( cf .theorem 1 in ) by constructing a scalar function via convex program .various applications of these two approaches for temporal verification problems have been reported .it is interesting to note that the main idea of rantzer s density function stems from an integral form of liouville equation , given by ( cf .lemma a.1 in ) where the initial set gets mapped to the set at time , under the action of the flow associated with the nonlinear dynamics .the convex relaxation proposed for invalidation / safety verification ( theorem 1 in ) , strives to construct an artificial density " satisfying three conditions , viz .( i ) , ( ii ) , and ( iii ) . from ( [ liouvilleintegralform ] ), such a construction results a sign - based invalidation " , and is only * sufficient * unless a slater - like condition is satisfied . on the other hand , the validation in probability " framework proposed in this paper ,relies on liouville pde - based exact arithmetic computation of , and is a direct simulation - based non - deductive formulation .in this approach , model invalidation equals violation of ( [ liouvilleintegralform ] ) , not just the sign - mismatch of its left - hand and right - hand side , and hence is * necessary and sufficient*. 
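for the cubic model referenced above (the label 'prajnacubicmodel' suggests the form xdot = -p x^3), the liouville characteristics are available in closed form, so the transported density and the image of the initial support can be written out explicitly. the sketch below is an added illustration with made-up interval endpoints and parameter value; it shows how a prescribed terminal density supported outside the image of the initial set is not finite-time reachable under the model.

```python
# method-of-characteristics propagation of a density through the cubic model
#   xdot = -p * x**3   (form inferred from the 'prajnacubicmodel' label above),
# whose characteristics are available in closed form:
#   x(t) = x0 / sqrt(1 + 2 p x0^2 t).
# interval endpoints and the parameter value are illustrative assumptions only.
import numpy as np

def flow(x0, p, t):
    return x0 / np.sqrt(1.0 + 2.0 * p * x0**2 * t)

def transported_density(x, p, t, xi0, lo, hi):
    """uniform initial density xi0 on [lo, hi], pushed forward to time t."""
    x0 = x / np.sqrt(1.0 - 2.0 * p * x**2 * t)       # inverse flow map
    jac = (1.0 - 2.0 * p * x**2 * t) ** -1.5          # |dx0/dx|
    inside = (x0 >= lo) & (x0 <= hi)
    return np.where(inside, xi0 * jac, 0.0)

p, T = 1.0, 4.0
x0_lo, x0_hi = 0.85, 0.95                             # assumed initial support
img = (flow(x0_lo, p, T), flow(x0_hi, p, T))
print(f"image of the initial support at t = {T}: [{img[0]:.4f}, {img[1]:.4f}]")

xs = np.linspace(0.325, 0.333, 5)
print("transported density on a grid:",
      np.round(transported_density(xs, p, T, xi0=10.0, lo=x0_lo, hi=x0_hi), 1))
# any prescribed terminal density whose support falls outside this image is not
# finite-time reachable from the initial one, so the model would be invalidated.
```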
as shown in this subsection , for liouville - integrable nonlinear vector fields ( not necessarily semi - algebraic ) , our framework can recover the deductive falsification inference while bypassing the * additional conservatism * due to sos - based computation .the inference for probabilistic model validation depends on the initial pdf . to account robust inference in the presence of initial pdf uncertainty , the notion of prvcwas introduced in section 5 . however , for many applications , it is desirable to characterize the sensitivity of the gap on the choice of initial pdf .we motivate this issue from two different perspectives .\(i ) in predictive modeling applications like systems biology , an important problem is of _ model discrimination _ , where one looks for an initial pdf that _ maximizes _ the gap between two models , which seem to exhibit comparable performance .this idea is similar to optimal input design for system identification .\(ii ) in general , ] , is defined as the generalized inverse of the cdf for , i.e. here ] , i.e. the set of all scaled beta pdfs supported on ] , and for , \right) ] , . from theorem [ thscalarlincontdet ] , trajectory for uniform initial pdf , stays below the same for arcsine initial pdf , as shown in fig .[ examplescalarinitunclinearpdf ] .( * discrete - time linear systems * ) consider the true and model maps , where , denotes the discrete time index . from linear recursion, one can obtain a result similar to ( [ wassclosedformscalarlincontdet ] ) : .( * linear gaussian systems * ) for the linear gaussian case , one can verify ( [ wassclosedformscalarlincontdet ] ) without resorting to the qfpe .to see this , notice that if , then the state pdfs evolve as , where and satisfy their respective state and lyapunov equations , which , in the scalar case , can be solved in closed form . since , and between two gaussian pdfs is known to be , the result follows .( * affine dynamics * ) instead of ( [ scalarlincontdet ] ) , if the dynamics are given by , then by variable substitution , one can derive that .hence , we get where , , and . consider two stochastic dynamical systems with linear drift and constant diffusion coefficients , given by where is the standard wiener process . for any initial density , the wasserstein gap between the systems in ( [ scalarlincontstoc ] ) ,is given by where , and ] . for two schur - cohn stable matrices and , fig . [ bounddiscretetimelti ] illustrates ( [ sharperltibound ] ) with .we have presented a probabilistic model validation framework for nonlinear systems .the notion of soft validation allows us to quantify the degree of mismatch of a proposed model with respect to experimental measurements , thereby guiding for model refinement .a key contribution of this paper is to introduce transport - theoretic wasserstein distance as a validation metric to measure the difference between distributional shapes over model - predicted and experimentally observed output spaces .the framework presented here applies to any deterministic or stochastic nonlinearity , not necessarily semialgebraic type .in addition to providing computational guarantees for probabilistic inference , we also recover existing nonlinear invalidation results in the literature .novel results are given for discriminating linear models .this research was supported by nsf award # 1016299 with helen gill as the program manager .the authors would like to thank p. khargonekar at university of florida , and s. 
chakravorty at texas a&m university , for insightful discussions . 1 a. halder , and r. bhattacharya , model validation : a probabilistic formulation " ._ ieee conference on decision and control _ , orlando , florida , 2011 .a. halder , and r. bhattacharya , further results on probabilistic model validation in wasserstein metric " ._ ieee conference on decision and control _ , maui , hawaii , 2012 .k. popper , _ conjectures and refutations : the growth of scientific knowledge_. routledge , second ed . , 2002 .smith , and j.c .doyle , model validation : a connection between robust control and identification " ._ ieee transactions on automatic control _ ,37 , no . 7 , pp .942952 , 1992 .k. poolla , p. khargonekar , a. tikku , j. krause , and k. nagpal , a time - domain approach to model validation " . _ ieee transactions on automatic control _ , vol .5 , pp . 951959 , 1994 .s. prajna , barrier certificates for nonlinear model validation " ._ automatica _ , vol .1 , pp . 117126 , 2006 .brugarolas , and m.g .safonov , a data driven approach to learning dynamical systems " ._ ieee conference on decision and control _ , las vegas , nevada , 2002 . c. baier , and j.p .katoen , _ principles of model checking . _ the mit press , first ed . , 2008 .f. ciesinski , and m. grer , on probabilistic computation tree logic " ._ validation of stochastic systems _ , springer , eds .baier , c. , haverkort , b.r . ,hermanns , h. , katoen , j.p . , and siegle , m. , lecture notes in computer science 2925 , pp . 147188 , 2004 .r. smith , g.e . continuous - time control model validation using finite experimental data " ._ ieee transactions on automatic control _ , vol .8 , pp . 10941105 , 1996 .j. chen , and s. wang , validation of linear fractional uncertain models : solutions via matrix inequalities " ._ ieee transactions on automatic control _ , vol .41 , no . 6 , pp . 844849 , 1996 .b. wahlberg , and l. ljung , hard frequency - domain model error bounds from least - squares like identification techniques " ._ ieee transactions on automatic control _ ,37 , no . 7 , pp . 900912 , 1992 .d. xu , z. ren , g. gu , and j. chen , lft uncertain model validation with time and frequency domain measurements " ._ ieee transactions on automatic control _ , vol .44 , no . 7 , pp . 14351441 , 1999 .campbell , _ singular systems of differential equations . _pitman , first ed . , 1980 .a. megretski , and a. rantzer , system analysis via integral quadratic constraints " ._ ieee transactions on automatic control _ , vol .42 , no . 6 , pp . 819830 , 1997 .b. ksendal , _ stochastic differential equations : an introduction with applications ._ springer , sixth ed . , 2003 .van der schaft , and h. schumacher , _ an introduction to hybrid dynamical systems ._ springer , lncs 251 , first ed . , 1999 .l. ljung , and l. guo , the role of model validation for assessing the size of the unmodeled dynamics " . _ ieee transactions on automatic control _ , vol .9 , pp . 12301239 , 1997 .l. ljung , _ system identification : theory for the user ._ printice - hall inc ., second ed . , 1999 .ghanem , a. doostan , and j. red - horse , a probabilistic construction of model validation " ._ computer methods in applied mechanics and engineering _ , vol . 197 , no .29 - 32 , pp . 25852595 , 2008 .m. gevers , x. bombois , b. codrons , g. scorletti , and b.d.o .anderson , model validation for control and controller validation in a prediction error identification framework part i : theory " ._ automatica _ , vol .3 , pp . 403415 , 2003 .lee , and k. 
poolla , on statistical model validation " ._ journal of dynamic systems , mesurement , and control _ , vol .2 , pp . 226236 , 1996 .j. van schuppen , stochastic realization problems " ._ three decades of mathematical system theory : a collection of surveys at the occasion of the 50th birthday of jan c. willems _ , lecture notes in control and information sciences , springer , vol .135 , pp . 480523 , 1989 .ugrinovskii , risk - sensitivity conditions for stochastic uncertain model validation " ._ automatica _ , vol .11 , pp . 26512658 , 2009 .parrilo , _ structured semidefinite programs and semialgebraic geometry methods in robustness and optimization _ , phd thesis , california institute of technology , pasadena , ca , 2000 .a. lasota , and m. mackey , _ chaos , fractals and noise : stochastic aspects of dynamics ._ applied mathematical sciences , vol .97 , springer - verlag , ny , second ed . , 1994 .y. sun , and p.g .mehta , the kullback - leibler rate pseudo - metric for comparing dynamical systems " ._ ieee transactions on automatic control _ , vol .55 , no . 7 , pp . 15851598 , 2010 .georgiou , distances and riemannian metrics for spectral density functions " . _ ieee transactions on signal processing _ , vol .39954003 , 2007 .wang , h. , robust control of the output probability density functions for multivariable stochastic systems with guaranteed stability . _ieee transactions on automatic control_. vol .44 , no . 11 , 1999 , pp . 21032107 .wang , h. , baki , h. , and kabore , p. , control of bounded dynamic stochastic distributions using square root models : an applicability study in papermaking systems ._ transactions of the institute of measurement and control_. vol .23 , no . 1 , 2001 , pp. 5168 .li , j .- s . , and khaneja , n. ensemble control of bloch equations . _ ieee transactions on automatic control_. vol .54 , no . 3 , 2009 ,528536 .brown , e. , moehlis , j. , and holmes , p. , on the phase reduction and response dynamics of neural oscillator populations ._ neural computation_. vol .16 , no . 4 , 2004 , pp .673715 .wadoo , s.a . , and kachroo , p. , feedback control of crowd evacuation in one dimension , _ ieee transactions on intelligent transportation systems_. vol .11 , no . 1 , 2010 , pp . 182193 .s. meyn , and r.l .tweedie , _ markov chains and stochastic stability . _ cambridge university press , second ed ., 2009 .a. papoulis , _ random variables and stochastic processes . _ mcgraw - hill , ny , second ed . , 1984 .a. halder , and r. bhattacharya , dispersion analysis in hypersonic flight during planetary entry using stochastic liouville equation " ._ journal of guidance , control , and dynamics _34 , no . 2 , 2011 .a. halder , and r. bhattacharya , beyond monte carlo : a computational framework for uncertainty propagation in planetary entry , descent and landing " ._ aiaa guidance , navigation and control conference _ ,toronto , 2010 .hsu , _ cell - to - cell mapping : a method of global analysis for nonlinear systems _, applied mathematical sciences , vol .64 , springer - verlag , ny ; 1987 .h. risken , _ the fokker - planck equation : methods of solution and applications _ , springer - verlag ,ny ; 1989 .m. kumar , s. chakravorty , and j.l .junkins , a semianalytic meshless approach to the transient fokker - planck equation " ._ probabilistic engineering mechanics _ , vol . 25 , no .3 , pp . 323331 , 2010 .p. dutta , a. halder , and r. 
bhattacharya , uncertainty quantification for stochastic nonlinear systems using perron - frobenius operator and karhunen - love expansion " ._ ieee multi - conference on systems and control _ , dubrovnik , croatia , 2012 .a. edelman , and b.d .sutton , from random matrices to stochastic operators " , _ journal of statistical physics _127 , no . 6 , 2007 , pp . 11211165gibbs , and f.e . su , on choosing and bounding probability metrics " ._ international statistical review _ , vol .3 , pp . 419435 , 2002 .i. csiszr , information - type measures of difference of probability distributions and indirect observations " , _ studia scientiarium mathematicarum hungarica _ ,vol . 2 , 1967 , pp .299318 .a. mller , integral probability metrics and their generating classes of functions " , _ advances in applied probability _ , vol . 29 , 1997 , pp . 429443 . c. villani , _ topics in optimal transportation_ american mathematical society , first ed ., 2003 .r. jordan , d. kinderlehrer , and f. otto , the variational formulation of the fokker - planck equation " ._ siam journal of mathematical analysis _ ,1 , pp . 117 , 1998 .s. t. rachev , _ probability metrics and the stability of stochastic models . _john wiley , first ed ., 1991 .q. wang , s.r .kulkarni , s. verd , divergence estimation of continuous distributions based on data - dependent partitions " , _ ieee transactions on information theory _51 , no . 9 , 2005 , pp . 30643074 .x. nguyen , m.j .wainwright , m.i .jordan , estimating divergence functionals and the likelihood ratio by convex risk minimization " , _ ieee transactions on information theory _ ,56 , no . 11 , 2010 , pp . 58475861 .lazo , and p. rathie , on the entropy of continuous probability distributions " . _ ieee transactions on information theory _ , vol .1 , pp . 120122 , 1978 .givens , and r.m .shortt , a class of wasserstein metrics for probability distributions " ._ michigan mathematical journal _ ,2 , pp . 231240 , 1984 .r. kullhav , _ recursive nonlinear estimation : a geometric approach ._ lecture notes in control and information sciences , vol .216 , springer - verlag , 1996 .a. poznyak , _ advanced mathematical tools for automatic control engineers ._ vol . 1 : deterministic techniques , elsevier science , 2008 . b.w . hong , s. soatto , k. ni , and t. chan , the scale of a texture and its application to segmentation " ._ ieee conference on computer vision and pattern recognition _ , anchorage , alaska , 2008 . k. ni , x. bresson , t. chan , and s. esedoglu , local histogram based segmentation using the wasserstein distance " ._ international journal of computer vision _1 , pp . 97111 , 2009 .vallander , calculation of the wasserstein distance between distributions on the line " ._ theory of probability and its applications _ , vol .18 , pp . 784786 , 1973 .rachev , the monge kantorovich mass transference problem and its stochastic applications " ._ theory of probability and its applications _ , vol .29 , pp . 647676 , 1985 .f. hitchcock , the distribution of a product from several sources to numerous localities " ._ journal of mathematics and physics _ ,2 , pp . 224230 , 1941 .koopmans , optimum utilization of the transportation system " ._ econometrica : journal of the econometric society _ , vol .17 , pp . 136146 , 1949 .koopmans , efficient allocation of resources " ._ econometrica : journal of the econometric society _ , vol .4 , pp . 455465 , 1951 .sriperumbudur , b.k . ,fukumizu , k. , gretton , a. , schlkopf , b. 
, and lanckriet , g.r.g ., on the empirical estimation of integral probability metrics , _ electronic journal of statistics_. vol . 6 , 2012 , pp . 15501599 . b.k .sriperumbudur , k. fukumizu , a. gretton , b. schlkopf , and g.r.g .lanckriet , on integral probability metrics , -divergences and binary classification " ._ preprint _ , arxiv:0901.2698v4 , available at http://arxiv.org/abs/0901.2698v4 , 2009 .m. talagrand , transportation cost for gaussian and other product measures " ._ geometric and functional analysis _ , vol . 6 , no .3 , pp . 587600 , 1996 .h. djellout , a. guillin , and l. wu , transportation cost - information inequalities and applications to random dynamical systems and diffusions " . _the annals of probability _ , vol .3b , pp . 27022732 , 2004 .e. boissard , and t. le gouic , exact `` deviations in wasserstein distance for empirical and occupation measures '' ._ preprint _ , arxiv:1103.3188v1 , available at ` http://arxiv.org/abs/1103.3188v1 ` , 2011 .burkard , m. dellamico , and s. martello , _ assignment problems _ , siam , pa ; 2009 .r. julien , g. peyr , j. delon , and b. marc , wasserstein barycenter and its application to texture mixing " , _ preprint _ , available at http://hal.archives-ouvertes.fr/hal-00476064/fr/ , 2010 .r. tempo , g. calafiore , and f. dabbene , _ randomized algorithms for analysis and control of uncertain systems _ , springer - verlag , first ed . , 2004 .p. khargonekar , and a. tikku , randomized algorithms for robust control analysis and synthesis have polynomial complexity " . _ieee conference on decision and control _ , kobe , japan , dec . 1113 , 1996 .r. tempo , e .- w .bai , and f. dabbene , probabilistic robustness analysis : explicit bounds for the minimum number of samples " ._ systems & control letters _ , vol .30 , pp . 237242 , 1997 .x. chen , and k. zhou , order statistics and probabilistic robust control " . _ systems & control letters _ , vol .35 , pp . 175182 , 1998 .h. niederreiter , _ random number generation and quasi - monte carlo methods_. cbms - nsf regional conference series in applied mathematics , siam , 1992 .d. liberzon , and r.w .brockett , nonlinear feedback systems perturbed by noise : steady - state probability distributions and optimal control " ._ ieee transactions on automatic control _ , vol .45 , no . 6 , pp . 11161130 , 2000 .m. vidyasagar , randomized algorithms for robust controller synthesis using statistical learning theory " ._ automatica _ , vol .10 , pp . 15151528 , 2001 .t. geisel , and v. fairen , statistical properties of chaos in chebyshev maps " . _ physics letters a _ , vol . 105 , no . 6 , pp . 263266 , 1984 .m. mackey , and m. tyran - kamiska , deterministic brownian motion : the effects of perturbing a dynamical system by a chaotic semi - dynamical system " . _ physics reports _ , vol .5 , pp . 167222 , 2006 .athreya , and j. dai , random logistic maps ._ journal of theoretical probability _ ,2 , pp . 595608 , 2000 .klebaner , population and density dependent branching processes " . in k.b .athreya , and p. jagers ( eds . ) , _ classical and modern branching processes _ , vol .84 , i m a , springer - verlag , 1997 .s. prajna , a. papachristodoulou , and p.a .parrilo , introducing sostools : a general purpose sum of squares programming solver " . _ ieee conference on decision and control _ , 2002 .a. rantzer , a dual to lyapunov s stability theorem . " _ systems & control letters _ , vol .3 , pp . 161168 , 2001 .a. rantzer , and s. 
prajna , on analysis and synthesis of safe control laws " ._ proceedings of the allerton conference on communication , control , and computing _ , 2004 .s. prajna , and a. rantzer , convex programs for temporal verification of nonlinear dynamical systems " ._ siam journal on control and optimization _ ,3 , pp . 9991021 , 2007 .s. prajna , and a. rantzer , on the necessity of barrier certificates " ._ proceedings of the ifac world congress _ , 2005 .d. georgiev , and e. klavins , model discrimination of polynomial systems via stochastic inputs " ._ 47th ieee conference on decision and control _ , 2008 .a. kremling , s. fischer , k. gadkar , f.j .doyle , t. sauter , e. bullinger , f. allgwer , and e.d .gilles , a benchmark for methods in reverse engineering and model discrimination : problem formulation and solutions " . _ genome research _9 , pp . 17731785 , 2004 .g. steinbrecher , and w.t .shaw , quantile mechanics " , _european journal of applied mathematics _ ,2 , pp . 87112 , 2008 .gilchrist , _ statistical modeling with quantile functions _, crc press ; 2000 .r. motwani , and p. raghavan , _ randomized algorithms _ , cambridge university press , ny ; 1995 .klyatskin , _ dynamics of stochastic systems_. translated from russian by a. vinogradov , first ed . , elsevier , 2005 .bernstein , _ matrix mathematics : theory , facts , and formulas _, second ed ., princeton university press ; 2009 .we denote as the inverse of the beta cdf , as the regularized incomplete beta function , and as the incomplete beta function . , + + .[ wassersteindistancebetweenisentropicbeta ] proof . from ( [ singleoutputw ] ) , we have the following identities , stated without proof , will be useful for the evaluation of ( [ betawassbasicform ] ) . [ indefiniteintegralinverseregularizedincompletebeta ] [ indefiniteintegralsquareofinverseregularizedincompletebeta ] + , and .[ boundaryvaluesofinverseregularizedincompletebeta ] ( gauss theorem ) + .[ hypergeomasratioofgammafunctions ] + .[ derivativeofinverseregularizedincompletebeta ] using properties [ indefiniteintegralsquareofinverseregularizedincompletebeta ] and [ boundaryvaluesofinverseregularizedincompletebeta ] , we get .\nonumber\\ \label{pausehere}\end{aligned}\ ] ] recalling that , property [ hypergeomasratioofgammafunctions ] results substituting the above expressions in ( [ pausehere ] ) , we obtain thus , ( [ betawassbasicform ] ) simplifies to to evaluate the remaining integral in ( [ finalfrontier ] ) , we employ integration - by - parts with as the first function and as the second .now , we know that equals \bigg\rvert_{t=0}^{t=1}}_{\mathcal{i } } - \underbrace{\displaystyle\int_{0}^{1 } \left(f^{\prime}\left(t\right ) \displaystyle\int g\left(t\right ) \ : dt\right ) \ : dt}_{\mathcal{j}}. \label{byparts}\end{aligned}\ ] ] from properties [ indefiniteintegralinverseregularizedincompletebeta ] and [ boundaryvaluesofinverseregularizedincompletebeta ] , we get \bigg\rvert_{t=0}^{t=1 } \nonumber\\ & = & \displaystyle\frac { _ { 2}f_{1}\left(\beta+1 , 1-\alpha ; \beta+2 ; 1\right)}{\left(\beta + 1\right ) b\left(\alpha , \beta\right ) } = \displaystyle\frac{\beta}{\alpha + \beta}. 
\label{nicesimplification}\end{aligned}\ ] ] further , properties ( [ indefiniteintegralinverseregularizedincompletebeta ] ) and ( [ derivativeofinverseregularizedincompletebeta ] ) yield combining ( [ finalfrontier ] ) , ( [ byparts ] ) , ( [ nicesimplification ] ) and ( [ canwesimplifythis ] ) , the result follows .consider a nonlinear dynamical system , having multiple stable equilibria .let us assume that the system does not admit any invariant set other than these stable equilibria . also , let be the region - of - attraction for the ^th^ equilibrium point .if the dynamics evolves from an initial pdf , then its stationary pdf is given by where . proof .since is the unique set of attractors , it is easy to verify that the stationary pdf is of the form ( [ stationarypdf ] ) ; however , it remains to determine the weights .we observe that * either * , for some , * or * intersects multiple . now , recall that .thus , if , then , and consequently , , , , since . in this case , notice that .on the other hand , if intersects multiple , then only for , the integral curves of , will satisfy .in other words , only the set contributes to , i.e. . combining the above two cases , we conclude . , hence we have , . thus , we get , from boole - bonferroni inequality ( appendix c , ) . wasserstein distance is a metric , from triangle inequality combining the above with lemma [ randomvariableinequality ] , we have where each term in the rhs of ( [ almostthere ] ) can be separately upper - bounded using theorem 1 with . hence the resultwe re - write the it sde ( [ contstochastic ] ) as with . an it sde with drift nonlinearity of the form ( [ hamiltonianformcontstoch ] ) ,admits stationary pdf , where the hamiltonian function .it is known ( fact 8.19.21 , ) that for , .taking , we get where the last equality follows from the symmetry of wasserstein distance , and can be separately proved by noting that for .next , recall that square root of a positive definite matrix is unique , and matrix square root commutes with matrix transpose .thus , we have \nonumber\\ & = & \text{tr}\left[\left(p_{k}^{\frac{1}{2}}\right)^{\top } p_{k}^{\frac{1}{2 } } - \left(p_{k}^{\frac{1}{2}}\right)^{\top } \widehat{p}_{k}^{\frac{1}{2 } } - \left(\widehat{p}_{k}^{\frac{1}{2}}\right)^{\top } \widehat{p}_{k}^{\frac{1}{2 } } + \left(\widehat{p}_{k}^{\frac{1}{2}}\right)^{\top } \widehat{p}_{k}^{\frac{1}{2 } } \right ] \nonumber\\ & = & \text{tr}\left[p_{k}\right ] + \text{tr}\left[\widehat{p}_{k}\right ] - 2 \ : \text{tr}\left[p_{k}^{\frac{1}{2 } } \widehat{p}_{k}^{\frac{1}{2}}\right ] \nonumber\\ & \geq & \underbrace{\text{tr}\left[p_{k}\right ] + \text{tr}\left[\widehat{p}_{k}\right ] - 2 \ : \text{tr}\left(p_{k}^{\frac{1}{2 } } \widehat{p } p_{k}^{\frac{1}{2}}\right)^{\frac{1}{2}}}_{\left ( _ { 2}w_{2}(k)\right)^{2 } } \qquad \text{(using ( \ref{theprodterm}))}\nonumber \label{makingthebound } \end{aligned}\ ] ] and hence , . from ( [ theprodterm ] ) ,the equality condition is .
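the appendix manipulations above amount, for gaussians, to the familiar expression W_2^2 = ||m - mhat||^2 + tr P + tr Phat - 2 tr (P^{1/2} Phat P^{1/2})^{1/2}, bounded above (for equal means) by the frobenius distance between the covariance square roots. the added sketch below evaluates both quantities for random covariance matrices as a numerical sanity check.

```python
# numerical check of the gaussian Wasserstein expression used in the appendix:
#   W_2^2 = ||m - mhat||^2 + tr(P) + tr(Phat) - 2 tr( (P^{1/2} Phat P^{1/2})^{1/2} )
# and of the bound  W_2 <= || P^{1/2} - Phat^{1/2} ||_F  for equal means.
import numpy as np
from scipy.linalg import sqrtm

def gaussian_w2(m, P, mhat, Phat):
    Ph = sqrtm(P)
    cross = sqrtm(Ph @ Phat @ Ph)
    w2sq = (np.sum((m - mhat)**2) + np.trace(P) + np.trace(Phat)
            - 2.0 * np.trace(cross))
    return np.sqrt(max(np.real(w2sq), 0.0))

rng = np.random.default_rng(5)
d = 4
A = rng.normal(size=(d, d)); P = A @ A.T + d * np.eye(d)
B = rng.normal(size=(d, d)); Phat = B @ B.T + d * np.eye(d)
m = np.zeros(d); mhat = np.zeros(d)

w2 = gaussian_w2(m, P, mhat, Phat)
frob = np.linalg.norm(np.real(sqrtm(P)) - np.real(sqrtm(Phat)), "fro")
print("W_2            :", w2)
print("Frobenius bound:", frob, "(should be >= W_2)")
```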
this paper presents a probabilistic model validation methodology for nonlinear systems in the time domain. the proposed formulation is simple, intuitive, and accounts for both deterministic and stochastic nonlinear systems with parametric and nonparametric uncertainties. instead of the hard invalidation methods available in the literature, a relaxed notion of validation in probability is introduced. to guarantee provably correct inference, an algorithm for constructing a probabilistically robust validation certificate is given along with its computational complexity. several examples are worked out to illustrate its use. keywords: model validation, uncertainty propagation, optimal transport, wasserstein distance.
massive sediment transport phenomena , such as dust storms and drifting snow , pose a considerable threat to human life .further , the formation of geomorphological patterns on sand - desert and snowfield surfaces as a result of sediment transport , such as dunes and ripples , is of considerable research interest . to elucidate the granular transport that occurs near the surfaces of sand deserts and snow fields , it is necessary to focus on the collisions between wind - blown grains and these surfaces along with the resultant ejection of grains from the surfaces .this approach is merited because , in the case of wind - blown grain transport , the major component of the grain entrainment into the air is caused by both the collision and ejectioncitebagnold , sus .this mechanism is called the `` splash process . ''splash processes have been widely studied using various techniques .for example , werner _et al . _ have simulated grain - bed collision processes in a two - dimensional system , while nishida _et al . _ have performed numerical simulations of granular splash behavior in a three - dimensional ( 3d ) system and analyzed the relation between the impact and ejection angles ( and , respectively ) projected onto the surface of a granular bed .further , xing and he have performed 3d collision simulations with mixed binary grains , and wada _have numerically modeled the impact cratering process on a granular target . in a physical experiment ,katsuragi _ et al ._ created small - scale craters in a laboratory system , whereas sugiura _estimated the splash function of snow grains via wind - tunnel experiments .in addition , ammi _ et al . _ performed a 3d splash experiment and recorded the results using two high - speed cameras , demonstrating that the mean ejection angle of a series of splashed grains is independent of both and the velocity of the incident grains , and it is close to . in their experiment ,a randomly packed ( rp ) bed was considered , and the final result suggests that the behavior at the first instance of impact during a splash process involving a granular bed has no influence on the later behavior . in the present study , we perform numerical simulations in order to investigate the splash processes in more detail . assuming that the packing structure of a granular bed affects the splash behavior , we consider not only an rp bed ( an rp bed corresponds to the scenario examined in the experiment of ammi _et al._ , except for differences in the dimensions of the simulation space and the grain features ) , but also an fcc - structured bed ( hereafter , fcc bed " ) .thus , we analyze the dependence of the splash process on the bed structure . in addition , we investigate the details of the ejection grains for each splash paying attention to their ejection timing ., incident velocity , ejection angle , and ejection velocity .,height=113 ] in this study , the splash processes are examined using the discrete element method ( dem ) . the translational movement of the grains obeys newton s law of motion , and grain rotation is neglected .thus , the equation of motion is where and are the position and the mass of the _ i_-th grain , respectively ; and and are the gravity constant and the vertical unit vector in the upward direction , respectively . and represent the repulsive and dissipative forces acting between the _ i_-th and _ j_-th grains , respectively , as explained in sect . 
2.2 .our `` simulation box '' consists of a fixed bottom and walls , which make up a roofless 3d cubic container ( fig.[f1 ] ) .the walls and the bottom floor are made of the same material as the grains .two types of initial granular bed structures are prepared ( rp and fcc ) , as explained in greater detail in sect . 2.3 .a grain is fired at the bed at a certain incident angle and incident speed ( fig . [ f1](b ) ) . as a result of the collision between the projectile grain and the granular bed ,a number of grains are expelled from the bed . here, we define the initially projected grain as the `` incident grain '' and the expelled grains that reach a certain threshold height ( see sect .2.4 ) as the `` ejected grains '' ( fig .[ f1](a ) ) .we exclude the rebounding incident grain from consideration as an ejected grain . in this study, we consider monodispersed grains only ; therefore , all of the grains comprising the granular bed and the incident grain have the same mass and radius .the parameters used in the simulation are summarized in table [ t1 ] ..simulation parameters [ cols= " < , < " , ] we treat grains as viscoelastic spheres . for the elastic force ,we adopt the hertzian force , with where , and are the young s modulus , the radius of the _ i_-th grain , and the displacement from the natural contact position , respectively .further , \mbox{\boldmath}_{i , j } , & { \rm{(contact ) } } , \end{cases } \label{eq : displacement}\ ] ] where is the unit vector in the normal direction . to represent the energy dissipation , we adopt the friction force , with where , , and , are the relative normal velocity , damping coefficient , and the reduced mass , respectively .note that is relative to the restitution coefficient . in our simulation , the value of fixed at 0.9 .when the grains reach the boundaries , the sum of the hertzian and friction forces acts on the grains such that the work force is expressed as where is the damping coefficient between grains and boundaries .this equation corresponds to eqs .( [ eq : hertzian ] ) and ( [ eq : disspation ] ) , with the plain wall limit : .we construct the initial rp and fcc beds as follows .the rp bed is created through the free falling of 32,768 grains . at first , all grains are placed at random positions in the simulation box , with no overlap .then , they fall to the bottom as a result of the effects of , losing kinetic energy through the dissipative repulsive force of eqs .( [ eq : hertzian])([eq : boundary ] ) .the packing process is completed after a sufficient relaxation time has elapsed . on the other hand ,the initial positions of the grains in the fcc bed are approximately determined , except for the fine tuning of their positions according to and the nonlinear interactions of eqs .( [ eq : hertzian])([eq : boundary ] ) . similar to the previous rp procedure, the packing of the fcc structure is completed after a sufficient relaxation time has elapsed .the volume fractions of the rp and fcc beds are approximately 0.63 and 0.74 , respectively . in previous experiments with monodispersed spherical beads ,the volume fraction of the grains was approximately 0.6 in rp beds .it has been reported for a two - dimensional system that a bed thickness of more than 24 layers is needed to exclude the shockwave effects .the average height of the rp bed surface is approximately 22 grains . to construct the fcc bed , 36,639 grains and a 24-layer pileare used .the pair of and characterize the injection of the incident grain ( fig .[ f1 ] ) . 
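as a hedged illustration of the grain-grain interaction described above, the sketch below evaluates a hertzian repulsion proportional to the overlap to the power 3/2 together with a simple linear dashpot standing in for the dissipative force; the effective stiffness, the damping constant and all variable names are illustrative choices rather than the exact constants of eqs. ([eq:hertzian])-([eq:boundary]).

    import numpy as np

    def pair_contact_force(x_i, x_j, v_i, v_j, r_i, r_j, k_eff, eta):
        # normal contact force on grain i from grain j (zero when the grains do not touch)
        d_vec = x_i - x_j
        dist = np.linalg.norm(d_vec)
        overlap = (r_i + r_j) - dist              # displacement from the natural contact position
        if overlap <= 0.0:
            return np.zeros(3)
        n_hat = d_vec / dist                      # unit normal pointing from j to i
        f_elastic = k_eff * overlap**1.5          # hertzian repulsion ~ overlap^(3/2)
        v_n = np.dot(v_i - v_j, n_hat)            # relative normal velocity
        f_damp = -eta * v_n                       # linear dashpot stand-in for the dissipative term
        return (f_elastic + f_damp) * n_hat

the total force on a grain is then the sum of such pair forces over all contacts minus the gravitational term in the vertical direction, as in the equation of motion quoted earlier; the walls and the bottom floor can be handled by the same routine in the flat-wall limit mentioned in the text.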
for a given and ,the incident velocity is determined from . in this study , is set to 10.0 , 25.0 , or 40.0 m / s while is varied among , and .to obtain sufficient data for statistically meaningful results , 100 splash simulations are conducted for each set of , with different initial positions .the horizontal coordinate of the `` collision point '' between the incident grain and the granular bed surface is given randomly within a central horizontal circle on the bed surface , which we call the `` incident circle . ''the radius of this circle is three times the grain diameter .the center of the incident circle is , where and are the lengths of the and sides of the simulation box , respectively ( fig . [ f1](b ) ) , and is the highest coordinate of the grain surface ( the upper edge of the highest grain ) within the above - mentioned incident circle .the initial position of the incident grain ( grain center ) is , where is the radius of the incident grain , and is the time required for collision with the surface , calculated from the given .we define grains with centers that reach as `` ejected grains '' and record their ejection velocity , where is above the average bed surface height around the contact point .furthermore , we define the rebound of the incident grain ( `` rebound grain '' ) and its velocity with same definition as the ejected grains .this ejected - grain criterion roughly corresponds to those of previous 3d splash experiments . in this paper, we also define as where , and are the components of ( fig .[ f1](b ) ) .for ( a ) various incident angles ( : fixed ) and ( b ) incident speeds ( ( circle ) and ( square ) : fixed ) in rp ( filled symbol ) and fcc ( open symbol ) beds .the error bars indicate standard deviations , and the dashed line is the best fit of the form for the rp bed ( and ).,title="fig : " ] for ( a ) various incident angles ( : fixed ) and ( b ) incident speeds ( ( circle ) and ( square ) : fixed ) in rp ( filled symbol ) and fcc ( open symbol ) beds . the error bars indicate standard deviations , and the dashed line is the best fit of the form for the rp bed ( and ).,title="fig : " ]the mean incident energy , which means the energy transferred from the incident grain to the granular beds , is important to consider for the ejected grains . since is equal to the energy lost by the incident grain , we obtain the following relation : , where , , and is the mean speed of the incident grain at after impact. therefore , we focus on to characterize the incident energy transferred to the bed .figure [ energy ] shows that only depends on the incident angle and does not depend on the incident speed ; these results reproduce those in previous experiments .our result obtained for the rp bed corresponds well with the fitting function which was proposed in a previous study ( fig .[ energy](a ) ) . in our simulation , and for ; these values are close to those from previous collision experiments .the value of obtained for the fcc bed is larger than that for the rp bed for same pair of and . because of the lower roughness of the fcc bed surface , the error bars are very small . 
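a minimal sketch of how the incident grain can be initialised from a chosen incident angle and speed is given below; the approach length, the neglect of gravity during the short approach flight, and all variable names are assumptions made for illustration only.

    import numpy as np

    def launch_incident_grain(theta_deg, v_in, bed_center_xy, z_surface, r_grain,
                              circle_radius, approach_length, rng):
        # choose a random collision point uniformly inside the incident circle
        phi = rng.uniform(0.0, 2.0 * np.pi)
        rad = circle_radius * np.sqrt(rng.uniform(0.0, 1.0))
        target = np.array([bed_center_xy[0] + rad * np.cos(phi),
                           bed_center_xy[1] + rad * np.sin(phi),
                           z_surface + r_grain])
        theta = np.radians(theta_deg)
        velocity = np.array([v_in * np.cos(theta), 0.0, -v_in * np.sin(theta)])
        # start the grain back along its flight path so that it reaches the target
        # after approach_length / v_in (gravity during this short flight is neglected here)
        position = target - velocity * (approach_length / v_in)
        return position, velocity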
from aforementioned results ,the fraction of incident energy increases with and is independent of , and for the fcc bed is smaller than that for the rp bed .( is the mean restitution coefficient of the rebounded incident grain ) versus the incident speed for various incident angles ( circles : , and squares : ) .the filled and open symbols correspond to rp and fcc beds , respectively .the dashed lines are fits based on eq .( [ eq : num ] ) .the error bars indicate standard deviations ., height=170 ] the number of ejected grains after each splash process is related to the amount of kinetic energy transferred from the incident grain to the granular bed .kinetic energy propagates into the granular bed , in which the energy is dissipated via the interactions between the grains . because increases in and produce a high value of , the ensemble averages of the mean number of ejected grains for each splash increase with and . in the previous study by ammi __ , the relation between and was obtained from ,\ ] ] where and are the fitting parameters .our numerical results fit well with eq .( [ eq : num ] ) , where the values of the parameter pair are ( 43,26 ) for the rp bed and for the fcc bed ( fig .[ num ] ) .the value of for the case of the rp bed is more than twice that for the fcc bed .this reflects the facts that the grains in the fcc bed experience a stronger geometrical constraint from the neighboring grains than those in the rp bed because of the higher volume fraction of the former , and for the fcc bed is less than that for the rp bed in all pairs of and ( fig [ energy ] ) . for various ( a ) incident angles ( : fixed ) and ( b ) incident speeds ( : fixed ) in rp ( filled circles ) and fcc ( open squares ) beds .the error bars indicate standard deviations.,title="fig : " ] for various ( a ) incident angles ( : fixed ) and ( b ) incident speeds ( : fixed ) in rp ( filled circles ) and fcc ( open squares ) beds . the error bars indicate standard deviations.,title="fig : " ] figure [ angmean ] shows the ensemble averages of the mean ejection angle for each splash for various values of ( fig .[ angmean](a ) ) and ( fig .[ angmean](b ) ) . according to the previously reported rp bed experiment, remains constant as and are varied .figure [ angmean](a ) shows the dependence of for m / s . in this figure ,our for the rp bed remains almost constant and independent of , which is consistent with the previous experiment . on the other hand ,the for the fcc bed clearly varies with , especially at low ( fig .[ angmean](a ) ) . forthe fcc bed , become small at low ( fig .[ energy ] ) .this means that the fcc bed obtains insufficient energy to break the geometric constraint caused by the presence of the neighboring grains ; hence , the ejection directions are strongly limited to high angles .however , the result for the fcc bed has not been confirmed experimentally .figure [ angmean](b ) shows the dependence of for fixed . for the rp bed ,only a weak dependence is observed at low , although this has not been confirmed experimentally . on the other hand, exhibits an obvious dependence on for the fcc bed .that is , decreases as increases .this is attributed to the magnitude of , as discussed above . for rp ( filled circles ) and fcc ( open squares ) beds .the incident angle and incident speed are fixed ( and , respectively ) . 
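the saturating fit quoted above can be reproduced schematically with scipy; since the exact functional form of eq. ([eq:num]) is not reproduced in this text, a two-parameter form n(v) = a (1 - exp(-v/b)) is assumed here purely as a placeholder, and the data points are invented for illustration.

    import numpy as np
    from scipy.optimize import curve_fit

    def n_ejected_model(v, a, b):
        # assumed saturating dependence of the mean ejected-grain number on incident speed
        return a * (1.0 - np.exp(-v / b))

    v_in = np.array([10.0, 25.0, 40.0])     # incident speeds used in the simulations (m/s)
    n_mean = np.array([9.0, 18.0, 22.0])    # hypothetical mean counts, not data from the paper

    (a_fit, b_fit), _ = curve_fit(n_ejected_model, v_in, n_mean, p0=(25.0, 20.0))
    print(a_fit, b_fit)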
for ( b ) rp and ( c ) fcc beds for various values of ( m / s : fixed ) ., title="fig : " ] for rp ( filled circles ) and fcc ( open squares ) beds .the incident angle and incident speed are fixed ( and , respectively ) . for ( b ) rp and ( c ) fcc beds for various values of ( m / s : fixed ) ., title="fig : " ] for rp ( filled circles ) and fcc ( open squares ) beds .the incident angle and incident speed are fixed ( and , respectively ) . for ( b ) rp and ( c ) fcc beds for various values of ( m / s : fixed ) ., title="fig : " ] the ejection angle distributions are shown in fig . for the rp bed obviously differs from that obtained for the fcc bed .the majority of grains ejected from the fcc bed have greater than those ejected from the rp bed ( fig .[ f2](a ) ) .on the other hand , for the rp bed is independent of , and the shapes and locations of the peaks around exhibit good agreement with the findings of a previous numerical experiment using binary grains ( fig .[ f2](b ) ) . , and groups on the plane .the filled circles and open squares represent grains ejected from the rp and fcc beds , respectively .ejection angle distributions ) for ( b ) rp and ( c ) fcc beds . is the ejection angle of ( , , and ) . ] to investigate each splash process in greater detail , we classify the ejected grains into three groups on the basis of their ejection timing .the first group consists of grains that were ejected in the period between the moment of impact and the first third of the total ejection period of each splash process .the ejection angles of the particles in this group are labeled .similarly , the ejection angles of the grains in groups and , which were ejected within the intermediate period and the last third of each splash process , respectively , are labeled and , respectively .figure [ proborder ] shows scatter plots for grains belonging to the , and groups on the plane , where indicates the projection of onto the bed surface .the ejection angle is defined as the angle between the horizontal axis and the line connecting the origin and each point in fig .[ proborder](a ) , which indicates that the magnitude of varies depending on the ejection timing .the distributions of , and ( , , and , respectively ) for and are also shown for both bed types ( fig .[ proborder](b ) and ( c ) ) .since the peaks of and obtained for the rp bed and those for the fcc bed are at greater angles , these grains seem to be affected by their neighboring grains .this is particularly true in the fcc case ( fig .[ proborder](c ) ) , where the grain movements are obviously restricted to the higher angles : both and have peaks around , but the peak of is higher than that of .the profiles of for both bed types are different than those of and ; the peaks are clearly located within a lower range of angles compared to those of and . 
as supported by the discussion of the results in the next section, these results for the fcc bed suggest that the grain ejection direction is more strongly restricted by geometrical constraints compared to the rp bed ., , , and , for ( a ) various incident angles ( ) and ( b ) incident speeds ( ) in rp ( filled symbols ) and fcc ( open symbols ) beds , where .,title="fig : " ] , , , and , for ( a ) various incident angles ( ) and ( b ) incident speeds ( ) in rp ( filled symbols ) and fcc ( open symbols ) beds , where .,title="fig : " ] figure [ meanvel ] shows the ensamble averages of the mean ejection speed for each splash and its components for various values of ( fig .[ meanvel](a ) ) and ( fig .[ meanvel](b ) ) , where . for all pairs of and ,the greater part of is .figure [ meanvel](a ) shows the dependence of and for m / s .although there is a slight fluctuation within the low - incident - angle region , remains almost constant as is varied for both bed structures . in the rp bed , there is a small gap between and for .in contrast , and are the almost same for .figure [ meanvel](b ) shows the dependency of and for . in this figure ,the mean ejection speed increases as increases .these and dependencies are consistent with a previous study .we next investigate the distributions of each component of , , and , for the different bed structures in fig .for the rp bed , both and have gaussian distributions ( fig .[ vel ] ( a ) and ( b ) ) , whereas has a log - normal appearance ( fig .[ vel ] ( c ) ) .these results are consistent with the findings of previous experimental studies .note that these forms are independent of both and ( fig .[ vel](a ) , ( b ) and ( c ) ) . for the fcc bed, appears to be similar to that obtained for the rp bed ( fig .[ vel ] ( f ) ) , but both and are more concentrated around 0 m / s than those for the rp bed ( fig .[ vel ] ( d ) and ( e ) ) .regarding the difference between the for the rp and fcc beds , the latter has a bump within the large range ( see also fig .[ proborder](a1 ) ) . ,( b ) , and ( c ) obtained for the rp bed and those for ( d ) , ( e ) , and ( f ) obtained for the fcc bed for various incident angles and speeds .all values are normalized by .the dashed lines represent the best fit for each distribution for and m / s ( filled circles ) ., title="fig : " ] , ( b ) , and ( c ) obtained for the rp bed and those for ( d ) , ( e ) , and ( f ) obtained for the fcc bed for various incident angles and speeds .all values are normalized by .the dashed lines represent the best fit for each distribution for and m / s ( filled circles ) ., title="fig : " ] , ( b ) , and ( c ) obtained for the rp bed and those for ( d ) , ( e ) , and ( f ) obtained for the fcc bed for various incident angles and speeds .all values are normalized by .the dashed lines represent the best fit for each distribution for and m / s ( filled circles ) ., title="fig : " ] we also define the timing - dependent ejection velocities in conformity to the groups , , and , as , , and , respectively .figure [ divvel ] shows the vertical ejection speed distributions , , and obtained for and for m / s in the rp bed . for all , and have gaussian - like form , but their forms are different and depend on the ejection timing ; and have large variances , and the others have small variances ( fig .[ divvel](a ) and ( b ) ) . and fit well with the log - normal distributions , but the higher - ejection - speed region of seems to have a power - law form ( fig . 
[divvel](c ) ) .that is , the distributions change from a power - law form to a log - normal form as the ejection speed is decreases ( or with increasing elapsed time since impact ) .as this power - law region is only a small fraction of the total vertical ejection speed distribution , the overall distribution throughout each splash process is fit well with a log - normal distribution .this distribution deformation becomes clear with increasing incident angle .further , these types of distribution transformations have been reported in various fields .for example , fragment experiments have confirmed that the fragment size distribution of glass qualitatively changes from a log - normal distribution to power - law form in accordance with the incident energy .specifically , log - normal and power - law distributions are exhibited at lower and higher energies , respectively . therefore , our results may be related to these findings . and( b ) for ( open symbols ) and ( closed symbols ) , and vertical ejection speed distributions for ( c1 ) and ( c2 ) for the rp bed ( m / s : fixed ) . is the ejection velocity of ( ) .the dashed lines represent the fits obtained for a log - normal distribution . ]we show the energy balances in fig .[ enebalance ] . the energy balance between the incident energy and the total kinetic energy of the ejected grains shown in fig .[ enebalance](a ) .as noted from previous experiments , the relation between and is , where is a constant parameter ( in our result ) . because the rotational motion of a grain is not considered in this study , that is , the obtained kinetic energy reflects only translational motion , in our study may be greater than in the experiment of ammi _ et al _( ) .previously , it was found that depends on the restitution coefficient in a binary collision . .all values are normalized by .energy ratio for ( b ) various incident angles ( : open symbols and : closed symbols ) and ( c ) incident speeds ( : open symbols and : filled symbols ) .all points are obtained for the rp bed ., title="fig : " ] .all values are normalized by .energy ratio for ( b ) various incident angles ( : open symbols and : closed symbols ) and ( c ) incident speeds ( : open symbols and : filled symbols ) . all points are obtained for the rp bed ., title="fig : " ] .all values are normalized by .energy ratio for ( b ) various incident angles ( : open symbols and : closed symbols ) and ( c ) incident speeds ( : open symbols and : filled symbols ) .all points are obtained for the rp bed ., title="fig : " ] figure [ enebalance](b ) and ( c ) show the energy ratio for the rp bed , where is the total kinetic energy of ejected grains belonging to , and is mean number of ejected grains per impact for . figure [ enebalance](b ) shows the dependence of for m / s and m / s , and fig .[ enebalance](c ) shows the dependence of for and . 
in these figures, is almost independent of ; in particular , for larger values of , the values of for and are mostly coincident , and more than 80% of the total ejection energy is used for grains .we performed 3d splash process simulations using the dem for two kinds of granular bed structures : a randomly structured bed and an fcc - structured bed .it was found that the mean number of ejected grains for each collision was related to the injection energy .after renormalization by the energy transferred from the incident grain to the granular bed , a good linear fit was obtained between the mean number of ejected grains and the incident speed , with the rp bed ejecting twice as many grains as the fcc bed . moreover ,the ejection angle distributions obtained from the rp and fcc beds were shown to be clearly different .the peak of the ejection angle distribution for the rp bed was approximately ; on the other hand , the distribution obtained for the fcc bed distinctively shifted to greater ejection angles , with a peak of over .this difference is assumed to originate from the geometrical constraints . in other words ,the grain movement direction is strongly affected by the surrounding grains in the fcc bed .furthermore , the ejection velocity distributions for the rp bed exhibited qualitatively good agreement with the results of previous experiments . on the other hand , coupled with the ejection angle results , the distributions obtained for the fcc bed indicate that the vertical movement of the ejected grains is dominant and that movement in the horizontal direction is significantly smaller than that for the rp bed .in addition , the ejected - grain characteristics , i.e. , the ejection angle and speed , evidently depend on the ejection timing after the initial grain impact . for the ejection angle, the difference between the ejected grain angles at the beginning and end of each splash is apparent .regarding the vertical ejection speed , the ejection timing determines the distribution , and this distribution changes from a power - law form to a log - normal form according to the ejection timing .furthermore , the splashed grains at the beginning of each splash gain retains around 80% of the total kinetic energy of the ejected grains .these results are assumed to be related to the propagation of the impact energy , both along and beneath the surface of the granular bed .the authors thank a. awazu and h. niiya for useful discussions .this research is partially supported by the platform project for supporting in drug discovery and life science research ( platform for dynamic approaches to living system ) from japan agency for medical research and development ( amed ) 20 r. a. bagnold , _ the physics of blown sand and desert dunes _ , methuen , london , ( 1941 ) .r. s. anderson and p. k. haff , science , * 241 * , 820 - 823 , ( 1988 ) .b. t. werner and p. k. haff , sedimentology , * 35 * , 189 - 196 , ( 1988 ) .m. nishida , j. nagamatsu and k. tanaka , journal of solid mechanics and materials engineering , * 5 * , 164 - 178 , ( 2011 ) .m. xing and c. he , geomorphology , * 187 * , 94 - 100 , ( 2013 ) .k. wada , s. senshu and t. matsui , icarus , * 180 * , 528 - 545 , ( 2006 ) .h. katsuragi and d. j. durian , nature physics , * 3 * , 420 - 423 , ( 2007 ) .k. sugiura and n. maeno , boundary - layer meteorology , * 95 * , 123 - 143 , ( 2000 ) .j. n. mcelwaine , n. maeno and k. sugiura , annals of glaciology , * 38 * , 71 - 78 , ( 2004 ) .m. ammi , l. oger , d. beladjine and a. 
valance , phys .e , * 79 * , 021305 , ( 2009 ) .h. j. hertz , reine angrew .math , * 92 * , 156 - 171 , ( 1881 ) .y. tsuji , t. kawaguchi and t. tanaka , powder technology , * 77 * , 79 - 87 , ( 1993 ) .d. beladjine , m. ammi , l. oger and a. valance , phys .e , * 75 * , 061305 , ( 2007 ) .f. rioual , a. valance and d. bideau , phys .e , * 62 * , 2450 , ( 2000 ) . t. ishii and m. matsushita , j. phys .jap . , * 61 * , 3474 - 3477 , ( 1992 ) .h. katsuragi , d. sugino and h. honjo , phys .e , * 70 * , 065103 , ( 2004 ) .j. crassous , d. beladjine , and a. valance , phys .rev.lett . , * 99 * , 248001 , ( 2007 ) .
using the discrete element method (dem), we study the splash processes induced by the impact of a grain on two types of granular beds, namely randomly packed and fcc-structured beds. good correspondence is obtained between our numerical results and the findings of previous experiments, and it is demonstrated that the packing structure of the granular bed strongly affects the splash process. the mean ejection angle for the randomly packed bed is consistent with previous experimental results. the fcc-structured bed yields a larger mean ejection angle; however, the latter result has not been confirmed experimentally. furthermore, the ejection angle distributions and the vertical ejection speeds of individual grains vary depending on the relative timing at which the grains are ejected after the initial impact. clear differences are observed between the distributions of grains ejected during the earlier and later splash periods: the form of the vertical ejection speed distribution changes from a power-law form to a log-normal form with time, and more than 80% of the kinetic energy of all ejected grains is carried by the grains ejected earliest.
it is well known that there are only two central force laws for which all bounded orbits are closed. by `` closed , '' we mean that the orbiting object returns to the same spatial location with the same velocity in a finite amount of time ( specifically , it returns to the same location in phase space ) . by `` bounded , '' we mean that the distance between the orbiting object and the central object always remains between two fixed values , , called the radial turning points , or in the case of elliptical orbits they are called periapsis and apoapsis .this result is known as bertrand s theorem, first obtained in 1873 . if the attractive force is represented by a power law , , then only ( an inverse square force given by newton s law of gravitation or coulomb s law ) and ( a spring - like force given by hooke s law ) admit closed orbits , both of which happen to be elliptical .in fact , the orbits in these two potentials satisfy the additional criterion that they are `` non - crossing . ''a bounded orbit does not cross itself in configuration space if the ratio of its orbital period , , to the period of its radial oscillations , , is an integer . here, we use the parameter to denote this ratio . in the case of newtonian gravity , which means that there is only one periapsis andone apoapsis per orbit , and the central body resides at one focus of the elliptical orbit . in the case of hooke s law , however , , and the central body is located at the center of the elliptical orbit .this means that there are _ four _ turning points ( two close , two far ) in each orbit .the implications of bertrand s theorem have been investigated extensively , ranging from the symmetries inherent in the potentials to the deep connections between classical and quantum mechanics that it reveals. the fact that an orbit is closed means that , besides energy and angular momentum , there must be an additional conserved quantity the runge - lenz vector. also , closely related to the fact that only and admit closed classical orbits is the result that these two potentials result in an exactly solvable schrodinger equation. in addition , these two potentials are `` dual '' in the sense that one problem can be obtained from the other by a change of variable. in addition , many authors have obtained proofs of the theorem that are more elegant and pedagogical than the original, and central potentials other than power - law have been investigated. in this work we focus on analytical methods suitable for the intermediate mechanics student , as well as numerical techniques that can be used to find closed orbits ( especially those with interesting shapes ) in central forces other than inverse square or hooke s law .the types of closed orbits that can be obtained are introduced in sec .[ sec : classify ] . in sec .[ sec : brownbertrand ] , bertrand s proof of his eponymous theorem is briefly outlined , and a more pedagogical proof , first given by brown, is covered in detail . this detailis needed because brown s method includes the mathematical insight necessary to analyze large amplitude perturbations from stable circular orbits . finally , in sec .[ sec : finitepert ] we obtain conditions that must be satisfied so that these large - amplitude orbits are closed , and several representative trajectories are obtained numerically .for all power law central forces other than inverse - square and hooke s law , most orbits , while they may remain bounded , are not closed . 
however , there are three cases in which the orbits _ are _ closed .first , as long as , all power law central forces exhibit a stable , closed , circular orbit at the radial location where the effective force is zero where the second term is the `` centrifugal force , '' is the ( constant ) angular momentum , and is the mass of the orbiting object .this stable radial location is given by .of course , if there are no stable circular orbits . indeed , when , newton showed that the trajectory is a so - called cotes spiral. any deviation from a circular trajectory allows the possibility that the orbit may no longer be closed .however , the second case in which closed orbits arise is when the orbiting object is perturbed only slightly from the stable circular orbit .if is infinitesimally close to , then the effective potential energy ( defined as usual by ) can be expanded about where the effective spring constant is two types of periodic motion are now superposed , the previous orbital motion as well as a radial oscillation in the simple - harmonic effective potential of eq .( [ eq : ueff ] ) .the period of these radial oscillations is . since the orbital period , obtained by taking a ratio of the circumference , , to the orbital velocity , , is given by , the ratio of the two periods is if is such that is a rational fraction , , where and are integers , then this `` almost - circular '' orbit will be closed .hence , for certain forms of the power law that satisfy orbits that are only slightly perturbed from a circular orbit are closed .bertrand was able to show that in two special cases , and , corresponding to and , respectively , orbits with large ( not just infinitesimal ) deviations from a circular trajectory remain closed .this analysis suggests that other solutions of eq .( [ eq : lambdasmall ] ) , e.g. , ( ) and ( ) , admit closed orbits for infinitesimal perturbations from a circular orbit . however , we show in sec . [sec : finitepert ] that for _ all _ values of that exhibit stable circular orbits , _ finite _ perturbations from a circular orbit can result in values of that are rational fractions .this is the third case , mentioned above , in which the orbits are closed.in this situation , most of the energy - angular momentum parameter space results in non closed orbits , but certain discrete values of these two parameters result in closed orbits .further , most of these are `` crossing '' orbits in which the trajectory crosses itself one or more times before returning to the original location , which means that they correspond to a rational fraction where .there are a few instances in which is an integer , though , and these orbits can be triangular ( ) or even square ( ) in shape .these large values of , however , require large positive values of .of course , the previous conclusions , as well as the analysis below , are not limited to power law central forces .gauss s law implies that an arbitrary ( but spherically symmetric ) mass density distribution results in a central force law for any particular density distribution of interest , the effective force and potential energy , the stable circular orbit radius , and the ratio of the orbital and radial oscillation periods , eqs .( [ eq : feff])-([eq : betalambdaplus3 ] ) , can all be obtained .any parameters describing will of course replace . 
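a small helper for the near-circular limit discussed above: for an attractive power-law force f(r) = -k r^lambda with lambda > -3, the period ratio reduces to beta = sqrt(lambda + 3), and the orbit closes whenever this number is (or is tuned to) a rational fraction q/p. the snippet also finds the nearest such fraction with a bounded denominator; the function names are illustrative.

    import numpy as np
    from fractions import Fraction

    def beta_small_amplitude(lam):
        # period ratio for an infinitesimal perturbation of the stable circular orbit
        if lam <= -3.0:
            raise ValueError("no stable circular orbit for lambda <= -3")
        return np.sqrt(lam + 3.0)

    def nearest_rational(beta, max_denominator=20):
        # closest q/p to a measured period ratio: q radial oscillations per p revolutions
        return Fraction(beta).limit_denominator(max_denominator)

    for lam in (-2.0, 1.0, 6.0, 13.0):
        print(lam, beta_small_amplitude(lam))    # 1, 2, 3 and 4 for these special cases
    print(nearest_rational(2.949))               # 59/20 for a ratio just below 3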
for power lawcentral forces , the self - consistent density distribution is in fact , quite a bit of theoretical work has been done on the problem of orbits in the gravitational potentials of galaxies and globular clusters. for example , adams and bloch analyzed orbits in the so - called hernquist potential where is the length scale of the potential , and the potential is due to an extended mass distribution with density .this distribution turns out to be a good approximation for elliptical galaxies and dark matter haloes .the focus in these studies has been on understanding how the orbits affect the dynamics of the system , and not on whether each individual orbit is closed or not .also , struck was able to analytically solve for the orbits using the so - called `` epicycloid '' approximation , which assumes the orbit is a precessing ellipse whose shape can be expressed as a function of the type \ ] ] where is the eccentricity and determines the precession rate . of course , the parameters and , along with the function must be determined from the form of the potential .this technique allowed him to obtain the result in eq .( [ eq : betalambdaplus3 ] ) above , and therefore obtain orbital resonance conditions that can assist understanding galactic dynamics , such as bars in spiral galaxies .bertrand used the well - known orbit equations to express as an integral over the radial excursion , where is the angle swept out by the trajectory . in order for the orbit to be closed, he then required that this integral , when evaluated between two neighboring turning points , be a rational fraction times , or in our notation , - \frac{1}{r^2 } } } , \ ] ] where and are roots of the denominator .he took a global approach , simultaneously expanding the integral for small oscillations about a stable circular orbit as well as letting .he was then able to show that the requirement in eq .( [ eq : thetar2 ] ) means that must be a power law with or .unfortunately , his proof does not easily divulge any physical insight . on the other hand , brown s method, in which he solved for the periodic motion in the anharmonic potential [ see eq .( [ eq : ueffcubic ] ) ] near the radius of the stable circular orbit , not only proves bertrand s theorem , but also allows the derivation of a closed orbit criterion that is valid for any power .here we outline brown s method , and quote the results that are relevant to the present discussion .first , he solved the dynamical equation for radial motion in the potential given by eq .( [ eq : ueff ] ) by assuming that the object is in an initially stable , circular orbit with and orbital speed .then a small radial impulse is imparted to the object ( in order to conserve the angular momentum ) which results in a nonzero radial velocity .of course , the subsequent trajectory consists of a harmonic oscillation of the radial coordinate , , where is just the frequency of small radial oscillations , as we obtained above , and is the amplitude of the radial oscillations . there is a simple relation between the initial radial velocity and the amplitude , which is , or and which comes from elementary simple - harmonic - motion theory. 
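the large-amplitude trajectories discussed in the remainder of the paper follow from direct numerical integration of newton's second law with a fourth-order runge-kutta scheme; a minimal fixed-step sketch is given below, using the convenient (and here assumed) units k = m = 1, so that the stable circular orbit sits at r = 1 with unit orbital speed. the returned arrays are reused by the surface-of-section and shooting-method sketches further below.

    import numpy as np

    def accel(pos, lam, k=1.0, m=1.0):
        # acceleration from the attractive power-law central force f(r) = -k r**lam
        r = np.linalg.norm(pos)
        return -(k / m) * r**(lam - 1) * pos     # -k r^lam along the radial unit vector

    def rk4_orbit(pos0, vel0, lam, dt=1.0e-4, n_steps=200_000):
        pos = np.array(pos0, dtype=float)
        vel = np.array(vel0, dtype=float)
        positions = np.empty((n_steps, 2))
        velocities = np.empty((n_steps, 2))
        for i in range(n_steps):
            k1x, k1v = vel, accel(pos, lam)
            k2x, k2v = vel + 0.5 * dt * k1v, accel(pos + 0.5 * dt * k1x, lam)
            k3x, k3v = vel + 0.5 * dt * k2v, accel(pos + 0.5 * dt * k2x, lam)
            k4x, k4v = vel + dt * k3v, accel(pos + dt * k3x, lam)
            pos = pos + (dt / 6.0) * (k1x + 2 * k2x + 2 * k3x + k4x)
            vel = vel + (dt / 6.0) * (k1v + 2 * k2v + 2 * k3v + k4v)
            positions[i], velocities[i] = pos, vel
        return positions, velocities

    # circular orbit at r = 1 perturbed by a small radial kick v_r0 = 0.05
    positions, velocities = rk4_orbit([1.0, 0.0], [0.05, 1.0], lam=6.0)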
bertrand s theorem , however , is a statement about the character of _ finite _ radial oscillations , and the restriction to infinitesimal amplitudes must therefore be relaxed .it turns out that it is sufficient to retain one more term , the cubic term , in the expansion in eq .( [ eq : ueff ] ) , which becomes and then apply the classic solution to this anharmonic oscillator problem , which was given by landau and lifshitz. the technique consists of seeking a solution that is a series of `` successive approximations . '' the first order approximation is just eq .( [ eq : rone ] ) , while the second and third order approximations include oscillations at harmonics of the fundamental frequency , and . here, is the exact anharmonic oscillation frequency , slightly shifted from by a term that is proportional to .\ ] ] a well - known example of this effect is the large - amplitude pendulum , whose exact restoring force is proportional to , and an inclusion of the cubic term results in an amplitude - dependent period .brown also showed that for large amplitudes the angular velocity of the orbital motion is also slightly shifted by a term that is proportional to . using his notation , \ ] ] where is the angular orbital velocity , the brackets indicate an average over one orbital period , and the subscript denotes the stable circular orbit value in the limit . retaining only terms of lowest order in ,the ratio of the two periods is .\ ] ] this is brown s main result , and it proves bertrand s theorem . for _ all _ orbits to be closed , the ratio of the two periods , , must be _ independent _ of the radial amplitude , and this is only true when the coefficient of is zero .that is , or , as previously stated . as it must , eq . ([ eq : brownresult ] ) also contains the limit given in eq .( [ eq : betalambdaplus3 ] ) , which might be called a `` restricted version '' of bertrand s theorem : `` for infinitesimal perturbations , , eq .( [ eq : brownresult ] ) reduces to eq .( [ eq : betalambdaplus3 ] ) , which means that the condition for closed orbits is eq .( [ eq : lambdasmall ] ) . '' from a practical perspective , however , to integrate newton s second law numerically and obtain a trajectory , it is the initial conditions , and , that must be specified . in addition , it is the parameter that is of primary interest , not the radial amplitude .it is useful , therefore , to eliminate from eq .( [ eq : brownresult ] ) , using eq .( [ eq : vr0andepsilon ] ) , to obtain trajectories that demonstrate the restricted version of bertrand s theorem ( obtained by numerically integrating newton s second law using a runge - kutta 4th order method ) are shown in figs .[ fig : lambda61 ] and [ fig : lambda62 ] for a force law parameter . in fig .[ fig : lambda61 ] , an initial condition of results in an almost circular orbit . since the initial radial velocity is small , the radial amplitude is likewise small , and eq .( [ eq : vr0andepsilon ] ) predicts for the parameters chosen , which agrees with the numerical result shown in fig .[ fig : lambda61](b ) .in addition , eq . 
( [ eq : betalambdaplus3 ] ) predicts , which is also seen clearly in fig .[ fig : lambda61](b ) , even though the radial oscillation is not perceptible in fig .[ fig : lambda61](a ) .all trajectories in this paper share the following initial conditions : , , and .this means that if , then the orbit is stable and circular .it also means that the angular momentum remains fixed .varying the initial radial velocity changes the orbit shape because the total energy varies .when the radial impulse imparts a large radial velocity , say , the closed nature of the orbit is lost , even though it is still bounded .this can be seen in fig .[ fig : lambda62 ] . for , eq .( [ eq : brownresult ] ) becomes or , which shows that the orbital period decreases to _ less _ than three times the radial oscillation period as the radial amplitude increases .this is indicated in fig .[ fig : lambda62](b ) by the fact that the radial position does not quite return to after one complete orbit .we can confirm this mismatch quantitatively using eq .( [ eq : brownresult6 ] ) , which gives , and this means that when , the radial oscillation should have a phase , and a displacement of , and this is just what is observed in fig . [ fig : lambda62](b ) .the amplitude is also consistent , for eq .( [ eq : vr0andepsilon ] ) predicts , which again agrees with the numerical result in fig .[ fig : lambda62](b ) .now that we have confirmed numerically the restricted version of bertrand s theorem , along with the fact that the orbit does not remain closed when the radial amplitude is not infinitesimal , we can now investigate the conditions that allow large amplitude orbits ( in power laws other than ) to be closed .in fact , eq . ( [ eq : brownresultv ] ) is just such a condition . above, we used eq .( [ eq : brownresultv ] ) to predict the value of ( and whether it is a rational fraction or not ) from a knowledge of the initial conditions ( e.g. , ) and it worked as long as was small .now , however , it is clear that eq .( [ eq : brownresultv ] ) also indicates that there can be closed orbits for _ any _ value of , as long as has the correct value . to see this , invert eq .( [ eq : brownresultv ] ) to obtain as a function of in this case , we first choose the force law parameter and then the desired ratio of the periods , . 
then , eq .( [ eq : browninvert ] ) predicts the initial radial velocity needed to obtain that particular closed orbit .of course , the larger that the difference is between and , the larger the radial oscillation , and eq .( [ eq : browninvert ] ) represents a poorer approximation .for example , again considering the force law parameter , eq .( [ eq : browninvert ] ) reduces to it is clear that is the small radial oscillation limit since it predicts an initial radial velocity of .in addition , since must be real , will always be _ less than _ 3 as the orbit deviates from a stable circle .this fact was already clear from eq .( [ eq : brownresult ] ) .as increases from zero , will take on a continuum of real values less than 3 , most of which will not be rational .however , _ will _ pass through an infinite number of discrete values that are rational , implying that the corresponding orbit will be closed .for the trajectory in fig .[ fig : lambda62 ] , , and it is probably not rational , since it was obtained by fixing .a rational fraction near this value is , which means that the orbit will have 59 radial oscillations for every 20 orbits about the center .such an orbit is shown in fig .[ fig : lambda63 ] , which is clearly closed with the correct value of . however , the initial radial velocity needed to obtain this orbit is not quite the prediction of eq .( [ eq : browninvert6 ] ) , which is .this is because although eq .( [ eq : browninvert6 ] ) follows from eq .( [ eq : brownresult ] ) , which is valid for large enough radial amplitudes to prove bertrand s theorem , it represents a poorer approximation as increases . in order to determine the correct value of needed for such an orbit ,a more sophisticated numerical technique is required .there are two methods that can be used to find the necessary value of that results in an orbit with a particular : brute force search and root finding .both methods can successfully utilize the technique of poincar s surface of section, which takes the continuous time evolution of a high - dimensional trajectory and replaces it with a discrete mapping in fewer dimensions , usually two . in the present case , we plot in radial phase space ( i.e. , versus ) the locations where a particular trajectory crosses the positive axis , for example .then , closed orbits can be found when the trajectory returns to the same phase space location after an integral number of orbits .the surface of section for the trajectory in fig .[ fig : lambda63 ] is shown in fig .[ fig : lambda6phase ] . since the initial conditions were and , where is positive , the initial location in fig .[ fig : lambda6phase ] is denoted by a circle . after 20 orbits , andtherefore 20 crossings of the positive axis , the trajectory returns to the same phase space location .this confirms that the orbit is closed .in fact , the trajectory can be followed for several `` recurrence periods , '' i.e. , 40 or 60 orbits , to make sure that the poincar section is periodic in the long term .in addition to confirming that the orbit is closed , the surface of section suggests a technique that works for the second method : root finding _ via _ the shooting method. 
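before turning to the shooting method, the surface-of-section bookkeeping described above can be sketched as follows: each upward crossing of the positive x axis is located by linear interpolation between the bracketing integration steps and recorded as a point (r, v_r) in radial phase space. the arrays are those returned by the rk4 sketch given earlier.

    import numpy as np

    def poincare_section(positions, velocities):
        points = []
        for i in range(len(positions) - 1):
            y0, y1 = positions[i, 1], positions[i + 1, 1]
            if y0 < 0.0 <= y1 and positions[i, 0] > 0.0:
                f = -y0 / (y1 - y0)                                   # interpolation fraction
                pos = positions[i] + f * (positions[i + 1] - positions[i])
                vel = velocities[i] + f * (velocities[i + 1] - velocities[i])
                r = np.linalg.norm(pos)
                v_r = np.dot(vel, pos) / r                            # radial velocity component
                points.append((r, v_r))
        return np.array(points)

a closed orbit with beta = q/p returns to its starting point in this plot after p crossings, which is the criterion applied in the text.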
here , the shooting method works in the standard way , by casting the problem as a two - point boundary value problem .the initial condition is varied in this case ( the other three initial conditions , , , and , remain fixed ) and the equation of motion is integrated until the desired final condition is obtained .the final condition here is that for an orbit with , the distance in phase space between the `` zeroth '' crossing of the positive axis and the crossing be zero , i.e. , they must be identical . of course ,a good initial guess for is needed , and this is supplied by eq .( [ eq : browninvert ] ) .in addition , a robust root - finding method must be employed .since the derivative of our `` function '' ( distance in phase space as a function of ) is not available analytically , and since the tolerance of the root - finding method should not exceed the tolerance of the numerical integration , the simple secant method should work fine . on the other hand , since the phase space distance is a positive definite quantity , the desired distance is not just a root , but also a minimum .for this reason , a minimization method , such as brent s method, can also be used .it turns out that in practice , either method works fine . in principle, orbits with any allowed value of can be found provided the initial guess for is accurate enough . even if eq .( [ eq : browninvert ] ) does not supply a sufficiently accurate first guess , the `` distance function '' _ versus _ can easily be calculated and plotted , and a better first guess obtained . for ,several closed orbits were found using this method , and the values of and for each orbit are shown in fig .[ fig : lambda6beta ] . the small amplitude relationship , eq .( [ eq : browninvert6 ] ) , is also shown , and it can be seen that the two deviate when the radial amplitude becomes large . what do these large amplitude orbits look like ? besides the orbits with large values of , which are close to circular , the crosses in fig .[ fig : lambda6beta ] indicate a few orbits with small values of ( of course with still less than 3 ) . the orbit with the smallest value of is . the initial radial velocity and amplitude predicted by eqs .( [ eq : browninvert6 ] ) and ( [ eq : vr0andepsilon ] ) are , and . since this radial oscillation amplitude is large , the small amplitude result in eq .( [ eq : brownresult ] ) is not applicable .a search of parameter space ( using the secant method explained above ) reveals that the necessary initial radial velocity is , and this orbit is shown in fig .[ fig : lambda64 ] . even though it can be classified as spirograph - like ,because is a ratio of two small integers ( and is greater than unity ) , the orbit has the appearance of being more `` star''-like .other similar values of , for example the cross labeled in fig .[ fig : lambda6beta ] , are consistent with orbits that also have a star - like appearance .the radial displacement of the star - like orbit turns out _ not _ to be centered on , which is to be expected from a large - amplitude , anharmonic oscillator .a rough estimate from the numerical solution gives , which is significantly larger than that predicted by eq .( [ eq : vr0andepsilon ] ) . for a given value of ,what is the range of possible values of ?we have seen that for , must remain less than three . 
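returning to the parameter search just described, a hedged sketch of the procedure is given below: the phase-space mismatch between the starting point and the p-th crossing is treated as a positive-definite function of the initial radial velocity and minimised with a bounded scalar search; the bounds around the analytic first guess, the step count, and the helper names are illustrative assumptions.

    import numpy as np
    from scipy.optimize import minimize_scalar

    def phase_space_mismatch(v_r0, lam, p_orbits):
        # distance in (r, v_r) between the start and the p-th positive-x-axis crossing,
        # using the rk4_orbit and poincare_section sketches given above
        positions, velocities = rk4_orbit([1.0, 0.0], [v_r0, 1.0], lam, n_steps=1_500_000)
        pts = poincare_section(positions, velocities)
        if len(pts) < p_orbits:
            return np.inf                        # integration too short to reach p crossings
        return float(np.linalg.norm(pts[p_orbits - 1] - np.array([1.0, v_r0])))

    guess = 0.3                                  # e.g. from the inverted small-amplitude relation
    result = minimize_scalar(lambda v: phase_space_mismatch(v, lam=6.0, p_orbits=20),
                             bounds=(0.5 * guess, 1.5 * guess), method="bounded")
    print("initial radial velocity of the closed orbit:", result.x)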
for other values of , is also restricted , and this restriction is determined by eq .( [ eq : brownresultv ] ) , which shows that must be either greater than or less than depending on the sign of the coefficient of . for ,the coefficient is negative , , which means that , as we have already discovered .this result divides the parameter space into three regimes , and the boundaries between these regimes are just the two special cases of bertrand s theorem : regime i is actually restricted to because we are only interested in bounded orbits. the character of these orbits can be different from those we have already studied , because will always be less than one .in fact , as we have defined it , must be positive definite , so for regime i it must lie in the range .if , for example , . in this regime , when is a ratio of two fairly large integers , the orbits are similar to the orbit in fig .[ fig : lambda63 ] . to see this , the case of and shown in fig .[ fig : lambda2510 ] .the only difference in character between the two orbits is that in fig . [fig : lambda2510 ] the number of orbits is greater than the number of `` furthest approaches '' where , rather than vice - versa .on the other hand , regime i allows a new type of orbit because when is the ratio of two small integers , the nature of the trajectory radically changes .again , for , the closed orbit where is shown in fig. [ fig : lambda253 ] . because only two values of ( and ) can occur during the course of three orbits , the trajectory looks very different from fig .[ fig : lambda63 ] . in fact , this orbit appears more `` loop''-like than spirograph - like .this character comes from the fact that is the ratio of two small integers and is _ less _ than one .regime i is the only case where can be less than unity .in addition to non - power law forces , struck focused on power - laws in regime ii , because these describe galactic potentials well .he showed that in addition to the criterion in eq .( [ eq : betalimits ] ) , was restricted to .the global analysis of bertrand also reveals this fact , and in particular shows that in the limit that .( for the numerical solutions in this study , this limit corresponds to . )this limit also explains why the exact numerical solutions in fig . [ fig : lambda6beta ] are all in the range .the allowed values of for all three regimes are shown in fig .[ fig : betalambda ] . because of the restriction in regime ii , the types of orbits have the same character as fig .[ fig : lambda63 ] .that is , they are of the spirograph type , and except for and , they can not be non - crossing .the final type of orbit with an interesting character occurs only in regime iii .these are characterized by , which means that the orbit is non - crossing .the case of , studied above , does not admit a large - amplitude , non - crossing orbit , since is restricted to the nearly circular case , and is not accessible with a finite value of .however , if , then a finite amplitude orbit can be consistent with , resulting in a closed , non - crossing , `` triangular''-shaped orbit .this is shown in fig .[ fig : lambda81 ] for the force law parameter . any value of greater than 6 will , of course , admit a triangular orbit if has the proper value .`` square''-shaped orbits can also occur when , i.e. 
, , and one is shown in fig .[ fig : lambda201 ] where .higher order `` polygonal '' orbits are also possible , but they require increasingly larger minimum values of .closed orbit trajectories of several different types have been found for central forces that are of a power - law type . besides the well - known elliptical orbits that arise from coulomb s law ( ) and hooke s law ( ) , we have shown that closed orbits exist for all power law central forces , , when . over the largest part of parameter space , the closed orbits are spirograph - like , with many self crossings before they return to their original location .however , when is a ratio of two small integers , then the orbits become more `` star''-like ( fig . [fig : lambda64 ] ) or `` loop''-like ( fig .[ fig : lambda253 ] ) .finally , non - crossing orbits , when is an integer , occur for large values of , and can be triangular , square , or polygonal .the authors would like to thank j. m. hughes for useful discussions .m. j. bertrand , `` thorme relatif au mouvement dun point attir vers un centre fixe , '' c. r. acad .sci . * 77*(16 ) 849 - 853 ( 1873 ) .h. goldstein , _ classical mechanics _ , 2nd ed .( addison - wesley , menlo park , 1980 ) , 3 - 6 and app . a. the result in eq .( [ eq : vr0andepsilon ] ) can also be obtained from an application of conservation of energy . from a stable circular orbit ,a radial impulse increases the kinetic energy , which becomes potential energy at the maximum radial distance , . , ) for the case , with the initial conditions , , and .( b ) radial location , , as a function of angular position .the three radial excursions of the first orbit can be clearly seen , with an amplitude , as well as the fact that when , returns to .,width=384 ] , except that .this implies that , and .this means that the period of radial oscillations is slightly more than one third of the orbital period , so that when , the radial oscillation should have a phase , and a displacement of , as observed.,width=384 ] ( a ) , except that .the orbit is closed , with .the value that the small amplitude approximation , eq .( [ eq : browninvert6 ] ) , predicts for the radial impulse is .however , the radial amplitude is too large , , for this prediction to be exact .a numerical root finding search was used to obtain the correct value.,width=384 ] . the trajectory s location in phase spaceis plotted each time it crosses the positive -axis .forty revolutions about the center are shown , so that each cross is really two crosses , depicting subsequent passings .the fact that the crossing locations are identical means that the orbit is closed.,width=384 ] and for from eq .( [ eq : browninvert6 ] ) ( solid line ) and exact numerical calculation for closed orbits ( crosses ) .the values of corresponding to the five largest amplitude closed orbits are also indicated.,width=384 ] , except that , . the orbit is closed , with .again , the anharmonicity of the potential means that the radial amplitude is not symmetric about .the numerical result gives and .,width=384 ] for the three regimes .the curve is the linear relationship between and for nearly circular orbits , given by eq .( [ eq : betalambdaplus3 ] ) .the shaded regions indicate allowed values of for large radial oscillations in the three regimes , whose limits are given by eq .( [ eq : betalimits ] ) and in the subsequent text.,width=384 ]
bertrand s theorem proves that inverse square and hooke s law - type central forces are the only ones for which all bounded orbits are closed . similar analysis was used to show that for other central force laws there exist closed orbits for a discrete set of angular momentum and energy values . these orbits can in general be characterized as `` spirograph''-like , although specific orbits look more `` star''-like or `` triangular . '' we use the results of a perturbative version of bertrand s theorem to predict which values of angular momentum and energy result in closed orbits , and what their shapes will be . this article has been submitted to the american journal of physics . after it is published , it will be found at ` http://scitation.aip.org/ajp/ ` .
active optical fibres are becoming more and more important in the field of detection and measurement of ionising radiation and particles .light is generated inside the fibre either through interaction with the incident radiation ( scintillating fibres ) or through absorption of primary light ( wavelength - shifting fibres ) .plastic fibres with large core diameters , i.e. where the wavelength of the light being transmitted is much smaller than the fibre diameter , are commercially available and readily fabricated , have good timing properties and allow a multitude of different geometrical designs .the low costs of plastic materials make it possible for many present day or future experiments to use such fibres in large quantities ( see for a review of active fibres in high energy physics ) . for the construction of the highly segmented tracking detector of the atlas experiment approved for the lhc collider at cern morethan 600,000 wavelength - shifting fibres have been used .our work is also motivated by the fact that spiral fibres embedded in scintillators are being used for calorimetric measurements in long base line neutrino oscillation experiments , most recently in the minos experiment .the treatment of small diameter optical fibres involves electromagnetic theory applied to dielectric waveguides , which was first achieved by snitzer and kapany _ et al _ .although this approach provides insight into the phenomenon of total internal reflection and eventually leads to results for the field distributions and electromagnetic radiation for curved fibres , it is advantageous to use ray optics for applications to large diameter fibres where the waveguide analysis is an unnecessary complication . in ray opticsa light ray may be categorised by its path along the fibre .the path of a meridional ray is confined to a single plane , all other modes of propagation are known as skew rays .the optics of meridional rays in fibres was developed in the 1950s and can be found in numerous textbooks , e.g.in . since then, the scientific and technological progress in the field of fibre optics has been enormous . despite the extensive coverage of theory and experiment in this field , only fragmentary studies on the trapping efficiencies and refraction of skew rays in curved multimode fibrescould be found .we have therefore performed a three - dimensional simulation of photons propagating in simple circularly curved fibres in order to quantify the losses and to establish the dependence of these losses on the angle of the bend .we have also briefly investigated the time dispersion in fibres . for our calculations a common type of fibre in particle physicsis assumed , specified by a polystyrene core of refractive index 1.6 and a thin polymethylmethacrylate ( pmma ) cladding of refractive index 1.49 , where the indices are given at a wavelength of 590 nm .another common cladding material is fluorinated polymethacrylate with .typical diameters are in the range of 0.5 1.5 mm .this paper is organised as follows : section 2 describes the analytical expressions of trapping efficiencies for skew and meridional rays in active , i.e. light generating , fibres .the analytical description of skew rays is too complex to be solved for sharply curved fibres and the necessity of a simulation becomes evident . 
in section 3 a simulation codeis outlined that tracks light rays in cylindrical fibres governed by a set of geometrical rules derived from the laws of optics .section 4 presents the results of the simulations .these include distributions of the characteristic properties which describe light rays in straight and curved fibres , where special emphasis is placed on light losses due to the sharp bending of fibres .light dispersion is briefly reviewed in the light of the results of the simulation .the last section provides a short summary .when using scintillating or wavelength - shifting fibres in charged particle detectors the trapped light as a fraction of the intensity of the emitted light is very important in determining the light yield of the system .for very low light intensities as encountered in many particle detectors the photon representation is more appropriate to use than a description by light rays .whether the fibres are scintillating or wavelength - shifting one is only ever concerned with a few 10 s or 100 s of photons propagating in the fibre and single photon counting is often necessary .the geometrical path of any rays in optical fibres , including skew rays , was first analysed in a series of papers by potter and kapany .the treatment of angular dependencies in our paper is based on that .the angle is defined as the angle of the projection of the light ray in a plane perpendicular to the axis of the fibre with respect to the normal at the point of reflection .one may describe as a measure of the ` skewness ' of a particular ray , since meridional rays have this angle equal to zero .the polar angle , , is defined as the angle of the light ray in a plane containing the fibre axis and the point of reflection with respect to the normal at the point of reflection .it can be shown that the angle of incidence at the walls of the cylinder , , is given by .the values of the two orthogonal angles and will be preserved independently for a particular photon at every reflection along its path . in general for any ray to be internally reflected within the cylinder of the fibre , the inequality must be fulfilled , where the critical angle , , is given by the index of refraction of the fibre core , , and that of the cladding , . in the meridional approximationthe above equations lead to the well known critical angle condition for the polar angle , , which describes an acceptance cone of semi - angle , ] , where the exponential function describes light losses due to bulk absorption and scattering ( bulk absorption length ) , and the second factor describes light losses due to imperfect reflections ( reflection coefficient ) which can be caused by a rough surface or variations in the refractive indices . 
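As a concrete illustration of the trapping condition just described, the following minimal Monte Carlo sketch (not the simulation code of this paper) emits photons isotropically from points distributed uniformly over the core cross-section of a straight fibre and tests for total internal reflection at the first wall hit; since the polar and skew angles are preserved at every subsequent reflection in a straight cylinder, this single test decides whether a photon is trapped. The refractive indices are the nominal values quoted above, while the core radius and the uniform-emission assumption are choices of the sketch (the trapped fraction does not in fact depend on the radius).

```python
import numpy as np

rng = np.random.default_rng(1)
n_core, n_clad, R = 1.60, 1.49, 0.5              # indices and core radius (mm)
cos_crit = np.sqrt(1.0 - (n_clad / n_core) ** 2)  # TIR if cos(theta_i) <= this

def trapped_fraction(n_photons=200_000):
    # emission points uniform over the disc of radius R
    rr = R * np.sqrt(rng.random(n_photons))
    ph = 2 * np.pi * rng.random(n_photons)
    x0, y0 = rr * np.cos(ph), rr * np.sin(ph)
    # isotropic emission directions
    cz = 2 * rng.random(n_photons) - 1.0
    az = 2 * np.pi * rng.random(n_photons)
    st = np.sqrt(1.0 - cz ** 2)
    dx, dy = st * np.cos(az), st * np.sin(az)
    # forward intersection with the cylinder wall in the transverse plane
    a = dx ** 2 + dy ** 2
    b = x0 * dx + y0 * dy
    c = x0 ** 2 + y0 ** 2 - R ** 2
    t = (-b + np.sqrt(b ** 2 - a * c)) / np.where(a > 0, a, 1.0)
    xh, yh = x0 + t * dx, y0 + t * dy
    # angle of incidence with respect to the radial surface normal
    cos_inc = np.abs(dx * xh + dy * yh) / R
    cos_inc = np.where(a > 0, cos_inc, 0.0)       # purely axial rays
    return np.mean(cos_inc <= cos_crit)

print("MC trapped fraction (both ends) :", trapped_fraction())
print("meridional approximation        :", 1.0 - n_clad / n_core)
```

The difference between the two printed numbers is the extra acceptance contributed by skew rays, which is the quantity the full simulation is built to evaluate.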
a comparison of some of our own measurements to determine the attenuation length of plastic fibres with other available data indicates that a reasonable value for the bulk absorption length is m .most published data suggest a deviation of the reflection coefficient , which parameterises the internal reflectivity , from unity between and .a reasonable value of is used in the simulation to account for all losses proportional to the number of reflections .internal reflections being less than total give rise to so - called leaky or non - guided modes , where part of the electromagnetic energy is radiated away .rays in these modes populate a region defined by axial angles above the critical angle and skew angles slightly larger than the ones for totally internally reflected photons .these modes are taken into account by using the fresnel equation for the reflection coefficient , , averaged over the parallel and orthogonal plane of polarisation where is the angle of incidence and is the refraction angle . however , it is obvious that non - guided modes are lost quickly in a small fibre .this is best seen in the fraction of non - guided to guided modes , , which decreases from at the first reflection of the ray over at the second reflection to at further reflections . since the average reflection length of non - guided modes is mm those modes do not contribute to the flux transmitted by fibres longer than a few centimeters .the absorption and emission processes in fibres are spread out over a wide band of wavelengths and the attenuation is known to be wavelength dependent . for simplicity only monochromatic light is assumed in the simulation and highly wavelength - dependent effects like rayleigh scattering are not included explicitly .a question of practical importance for the estimation of the light output of a particular fibre application is its transmission function . in the meridional approximation and substituting by the attenuation length can be written as ^{-1}\ .\ ] ] only for small diameter fibres ( mm ) are the attenuation lengths due to imperfect reflections of the same order as the absorption lengths . because of the large radii of the fibres discussed reflection lossesare not relevant for the transmission function and the attenuation length contracts to . for the simulated bulk absorption length this evaluates to m .the transmission function outside the meridional approximation can be found by integrating over the normalised path length distribution , where represents the number of photons per path length interval , weighted by the exponential bulk absorption factor : figure [ fig : absorption ] shows this transmission function versus the ratio of fibre to absorption length , .a simple exponential fit , ] .a transition in the transmission function should occur at bending angles between , where all photons emitted towards the tensile side have experienced a reflection , and , where this is true for all photons .figure [ fig : bending ] shows the transmission as a function of bending angle , , for a standard fibre as defined before .once a sharply curved fibre with a ratio is bent through angles light losses do not increase any further .the transition region ranges from to and is indicated in the figure by arrows . 
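For the leaky (non-guided) modes mentioned earlier in this section, the polarisation-averaged reflection coefficient can be sketched as below, assuming the standard sine/tangent form of the Fresnel equations for unpolarised light at the core-cladding interface; above the critical angle the routine simply returns unity (total internal reflection). This is a generic textbook expression, not code lifted from the simulation.

```python
import numpy as np

def fresnel_reflectivity(theta_i, n1=1.60, n2=1.49):
    """Polarisation-averaged internal reflectivity for incidence angle theta_i (rad)."""
    s = (n1 / n2) * np.sin(theta_i)          # Snell's law, n1 sin(i) = n2 sin(t)
    if s >= 1.0:                             # beyond the critical angle
        return 1.0
    theta_t = np.arcsin(s)
    if theta_i == 0.0:                       # normal-incidence limit
        return ((n1 - n2) / (n1 + n2)) ** 2
    r_s = (np.sin(theta_i - theta_t) / np.sin(theta_i + theta_t)) ** 2
    r_p = (np.tan(theta_i - theta_t) / np.tan(theta_i + theta_t)) ** 2
    return 0.5 * (r_s + r_p)

crit = np.arcsin(1.49 / 1.60)
for deg in (0.0, 30.0, 60.0, np.degrees(crit) - 0.5, np.degrees(crit) + 0.5):
    print(f"{deg:6.1f} deg   R = {fresnel_reflectivity(np.radians(deg)):.4f}")
```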
at much smaller ratios the model is no longer valid to describe this behaviour .experimental results on losses in curved multimode fibres along with corresponding predictions are best known for silica fibres with core radii m .calculations on the basis of ray optics for a plastic fibre with mm can be found in .our result on the transmission function in the meridional approximation at is in good agreement with the two - dimensional calculation .the larger value of predicted by the simulation is explained by the small loss of skew rays , clearly seen in figure [ fig : rlambda ] .it should be noted that the difference between finite and infinite cladding and the appearance of oscillatory losses in the transition region has not been investigated in the simulation .figure [ fig : phasespace ] shows contours of the angular phase space for photons which were trapped in the straight fibre section but are refracted out of sharply curved fibres with radii of curvature 2 and 5 cm .the contours demonstrate that only skew rays from a small region close to the boundary curve are getting lost. the smaller the radius of curvature , the larger the affected phase space region .the timing resolution of scintillators are often of paramount importance , but a pulse of light , consisting of several photons propagating along a fibre , broadens in time . in active fibres ,three effects are responsible for the time distribution of photons reaching the fibre exit end .firstly the decay time of the fluorescent dopants , usually of the order of a few nanoseconds , secondly the chromatic dispersion in a dispersive medium , and thirdly the fact that photons on different paths have different transit times to reach the fibre exit end , known as inter - modal dispersion .the chromatic dispersion is due to the spectral width , , of the emitter .it is the combination of material dispersion and waveguide dispersion .if the core refractive index is explicitly dependent on the wavelength , , photons of different wavelengths have different propagation velocities along the same path , called material dispersion .the broadening of a pulse is given by .the fwhm of the emission peaks of scintillating or wavelength - shifting fibres is approximately nm . the material dispersion in the used polymers ( mostly polystyrene ) is of the order of ns km and thus negligible for multimode fibres .the transit time in ray optics is simply given by , where is the speed of light in the fibre core .the simulation results on the transit time are shown in figure [ fig : timing ] .the full widths at half maximum ( fwhm ) of the pulses in the time spectrum are presented for four different fibre lengths .the resulting dispersion has to be compared with the time dispersion in the meridional approximation which is simply the difference between the shortest transit time and the longest transit time : , where is the total axial length of the fibre .the dispersion evaluates for the different fibre lengths to 197ps for 0.5 m , 393ps for 1 m , 787ps for 2 m and 1181ps for 3 m .those numbers are in good agreement with the simulation , although there are tails associated to the propagation of skew rays . with the attenuation parameters of our simulationthe fraction of photons arriving later than decreases from 37.9% for a 0.5 m fibre to 32% for a 3 m fibre due to the stronger attenuation of the skew rays in the tail . 
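The quoted meridional-approximation spreads in transit time follow directly from the difference between the axial path and the path at the critical angle; a short check, using only the refractive indices given above, is sketched below.

```python
# Meridional-approximation transit-time spread: the fastest trapped ray runs
# along the axis, the slowest at the critical angle, so
#   delta_t = (L * n_core / c) * (n_core / n_clad - 1).

C = 2.998e8                     # speed of light in vacuum, m/s
N_CORE, N_CLAD = 1.60, 1.49

def meridional_dispersion(length_m):
    return length_m * N_CORE / C * (N_CORE / N_CLAD - 1.0)

for L in (0.5, 1.0, 2.0, 3.0):
    print(f"L = {L:3.1f} m   delta_t = {meridional_dispersion(L) * 1e12:6.0f} ps")
```

This reproduces the 197, 393, 787 and 1181 ps values to within rounding.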
Due to inter-modal dispersion the pulse broadening is quite significant.

We have simulated the propagation of photons in straight and curved optical fibres. The simulations have been used to evaluate the loss of photons propagating in fibres curved in a circular path in one plane. The results show that the loss of photons due to the curvature of the fibre is a simple function of the ratio of radius of curvature to fibre radius and is if the ratio is . The simulations also show that for larger ratios this loss takes place in a transition region ( ) during which a new distribution of photon angles is established. Photons which survive the transition region then propagate without further losses. We have also used the simulation to investigate the dispersion of transit times of photons propagating in straight fibres. For fibre lengths between 0.5 and 3 m we find that approximately two thirds of the photons arrive within the spread of transit times expected from the simple meridional-ray approximation and the refractive index of the fibre core. The remainder of the photons arrive in a tail at later times due to their helical paths in the fibre. The fraction of photons in the tail of the distribution decreases only slowly with increasing fibre length and will depend on the attenuation parameters of the fibre. We find that when realistic bulk absorption and reflection losses are included in the simulation for a straight fibre, the overall transmission cannot be described by a simple exponential function of propagation distance because of the large spread in optical path lengths between the most meridional and most skew rays. We anticipate that these results on the magnitude of transition losses will be of use for the design of particle detectors incorporating sharply curved active fibres.

This research was supported by the UK Particle Physics and Astronomy Research Council (PPARC).

Press W H, Teukolsky S A, Vetterling W T and Flannery B P 1992 Numerical Recipes in Fortran 77: The Art of Scientific Computing (Fortran Numerical Recipes vol 1) 2nd edn (Cambridge: Cambridge University Press)

Davis A J, Hink P, Binns W, Epstein J, Connell J, Israel M, Klarmann J, Vylet V, Kaplan D and Reucroft S 1989 Scintillating optical fiber trajectory detectors Nucl. Instrum. Methods Phys. Res. A 276 347-58

[Figure: flow diagram of the simulation code. Define the fibre parameters (straight and bent section lengths and the remaining fibre parameters). Loop over photons: generate one photon, with position and angle distributions according to the emitter type. Loop over reflections at the core-cladding interface: calculate the photon parameters at the current point (axial and azimuthal angle, skew angle); find the next reflection point; test in turn whether the photon is absorbed on the path, reaches the fibre end face, or is reflected; calculate the propagation parameters (reflection length, total path length, number of reflections) and perform the coordinate transformation. Finally, calculate the flux parameters: bending, absorption and reflection losses, and the trapping efficiency.]
a monte carlo simulation has been performed to track light rays in cylindrical multimode fibres by ray optics . the trapping efficiencies for skew and meridional rays in active fibres and distributions of characteristic quantities for all trapped light rays have been calculated . the simulation provides new results for curved fibres , where the analytical expressions are too complex to be solved . the light losses due to sharp bending of fibres are presented as a function of the ratio of curvature to fibre radius and bending angle . it is shown that a radius of curvature to fibre radius ratio of greater than 65 results in a light loss of less than 10% with the loss occurring in a transition region at bending angles . + * keywords : * fibre optics , propagation and scattering losses , geometrical optics , wave fronts , ray tracing
this paper presents an analysis of the projected capability of a detector design , heron , based on a target material of superfluid helium to make a precise measurement of both the and 7 solar neutrino fluxes ( and , resp . ) in a single , real - time experiment .the detection reaction used would be the elastic scattering of neutrinos by electrons ( es ) .in addition to the novel use of helium , the detector also includes the novel application of a coded aperture technique for accurate measurement of the location and recoil energy of each elastic scattering event and to aid in background discrimination . according to models of the sun , the neutrinos from the -i and -ii branches of the fusion chain ( known as the and 7 neutrinos , respectively )are , when taken together , of the neutrino flux and are associated with the reactions producing a similar fraction of the solar energy . at the present writingthere have been no real - time experiments to measure the flux and spectra of but recently the borexino collaboration has made the first real - time spectral detection of ( ) . as we explain in sec .[ sec : goal ] there are several important physics issues which can be addressed if a detector can be constructed to measure both these fluxes and spectra with sufficient precision . for the neutrinos ( ) there does not yet exist any detector with demonstrated feasibility to measure either their flux or spectra ; however , there are a number of efforts which aim to do so .section 2 of this paper discusses the physics goals motivating the heron detector .briefly presents the requirements for and description of the heron detector design .sec . 4 and 5 provide the details of the heron capability analysis . in an appendixwe discuss an application to measuring the solar orbit eccentricity .a principal goal of heron would be to make an accurate measurement of the luminosity of the sun using precise measurements of active neutrino fluxes . an experiment capable of measuring both and sufficiently well for an accurate luminosity measurement can also address several other interesting topics .these include testing for the relative rates of the 3(4 , 2p)4 and 3(4 , )7 reactions which terminate the -i and -ii branches in the sun and also for testing the msw effect ( after mikheyev , smirnov , wolfenstein ) in the lma ( large mixing angle ) solution to the `` solar neutrino problem '' . additionally , if new measurements of and are successful at the few - percent level then , via new , luminosity - constrained global fits to all neutrino data , some modest improvement can be made in the knowledge of , and limits on sterile neutrinos .lastly , since a real - time low - energy solar neutrino experiment opens a new window in neutrino physics , the possibility of surprises in the physics should not be discounted .the radiant photon energy reaching the earth from the sun ( the irradiance , ) is believed to result from the nuclear fusion reactions of light elements .the energy released in each of the chains producing neutrinos is well known from laboratory experiments . 
consequently ,if the flux of the associated neutrinos can be determined then the photon irradiance and luminosity can be inferred from those fluxes .this is usually formulated as : where is the total solar luminosity in photons , is the average earth - sun distance , is the mean irradiance determined by earth - orbit satellites to be with a systematic uncertainty of about .the s are the coefficients giving the energy provided by and associated with the -th neutrino flux and is the total solar luminosity inferred from the neutrino fluxes .if the sun operates as we presently think it does then the ratio should be unity .significant departure from that expectation would signal the presence of different sources of energy within the sun .another important point is that , from the reaction positions in the solar interior , the energy carried by the photons and by the neutrinos reaches the earth with a huge separation in arrival times .the neutrinos arrive directly in 8 minutes while the thermal photon energy arrives from the solar plasma after approximately years .consequently , finding a disagreement between and would have significant implications for environmental consequences in the long term . because the sum of ( ) and 7 ( ) neutrinos are expected to be associated with of the total flux, it follows that a precision measurement of , either alone or together with , will provide the major direct test of .currently this comparison is only known to about .there are additional reasons to make a more precise determination of : the fact that the average photon irradiance is very well measured has previously led to its use as a constraint in global analyses of solar neutrino experimental data .used first as a demonstration of possible flavor oscillations of neutrinos and more recently , as additional and more precise solar and reactor neutrino experimental results have become available , as a powerful constraint to aid in establishing best present knowledge of solar neutrino mass - mixing parameters and individual fluxes . at the same time , the standard solar models ( ssm ) have continued to make significant improvements so that quite precise predictions for the fluxes have been made .for example , is predicted to and to of the is contributed from the experimental uncertainty in nuclear cross - section factor ; however , new data from the luna collaboration suggests this contribution may be reduced to .private communication c. pea - garay . ] . when these predictions are compared against the fluxes found from global fits to all of the existing solar and reactor data the levels of agreement differ significantly depending on whether the photon luminosity is used as a constraint or not .for example , the ratios of global - fit fluxes to ssm predictions are ( at ) : for , and for 7 _ with the luminosity constraint _ but are and , respectively , _ without the constraint _ and leads to the poor knowledge of noted above .the question of how precisely direct measurements , of either or must be made in future experiments has been cogently addressed in an important paper by bahcall and pea - garay . related andmore recent considerations also have been made by others .the level of precision required depends strongly upon the specific physics questions to be addressed . in all cases ,the demands on experimental techniques are severe .for example , the authors of ref . 
carried out simulations of global analyses utilizing all present data plus inclusion of potential future and 7 experiments with assumed capability of precisions ranging from to ( at ) .they find that a 7 result of could improve knowledge of from to .increased precision on 7 alone would not yield further improvement ; while a on could achieve a remarkable on the luminosity comparison by neutrinos .the authors note that a result of this accuracy would be _ `` a truly fundamental contribution to our knowledge of stellar energy generation and place a bound on all sources of energy other than low energy fusion of light elements ( i.e. , and cno chains ) '' . _the heron detector , as shown in the present paper , is intended to be capable of reaching precisions on both fluxes commensurate with these goals .the relative magnitude of versus is a particularly relevant parameter , on the one hand , bearing on the accuracy of the ssm and , on the other , as evidence for the msw effect in the lma .it is valuable to have an experiment which measures both and since several systematic errors tend to cancel in the ratio . in the ssmthere is a very strong anti - correlation between the two fluxes with a coefficient of to . if the reaction of the -ii chain were the only terminating branch , only one and one 7 neutrino would be produced in each cycleotherwise there would be two and no 7 neutrinos if of the -i branch were the terminating reaction of the full fusion cycle .what the actual relative reaction rates are depends on several not yet accurately known details within the sun such as elemental abundances , temperatures and density .( present versions of the ssm predict a ratio of the two reaction rates as which implies , prior to oscillation , a value of for . )an independent and precise measurement of these relative rates would be an important contribution to the understanding of stellar processes and would permit a refinement of the use of the ssm in global analyses of neutrino data .the physics of the msw effect is embodied in the flavor - dependent interaction differences for neutrinos propagating in matter as opposed to vacuum . due to the differences in neutrino energies and solar density at theirproduction points the , 7 and 8 neutrinos are expected to have quite different survival probabilities .the lma - msw solution specifies what this energy dependence must be .the oscillations of the much higher energy 8 neutrino should be strongly suppressed by matter dominance and the neutrinos much less since they should be vacuum - dominated .the 7 and pep neutrinos , having energy intermediate to and 8 , are in the crucial energy region where the transition between matter dominance and vacuum oscillations is to be expected .the 8 flux is now very well measured by the super kamiokande ( sk ) and sudbury ( sno ) experiments [ in ref . , see e.g. fukuda et al . and ahmed et al . ] ; however , direct experimental evidence for this msw transition is still lacking and could be provided by an experiment such as heron . 
the and measured in es are , by necessity , the fluxes of active neutrinos .consequently , in the measurement of an alternative interpretation of a result consistent , within errors , to unity can be taken as the establishment of a limit on the presence of sterile neutrinos .due largely to the lack of precision experiments on these two major low - energy fluxes , there has been some leeway in the recent analyses of the solar and reactor data which allows for consideration of several well - motivated proposals for `` new physics '' in the neutrino sector . among theseare possibilities for non - standard neutrino interactions with their environments ( nsi ) . within this class of modelsthe additional effects to be expected are strongly constrained by existing data ; however , in some cases they should be most pronounced in the energy dependence of the fluxes of solar neutrinos . in these casesthe effect is qualitatively similar , but differs quantitatively , from that to be expected from the lma - msw transition in the matter- to vacuum - dominated neutrino energy regions .two examples of this class , are models with flavor - non - conserving neutral currents or with mass - varying - neutrinos ( `` mavans '' ) ; the latter inspired by possible insights into understanding `` dark energy '' .confirmed evidence found for nsi would place our knowledge for the mass - mixing parameters in doubt ; alternatively , such precision measurements of the matter - vacuum transition region would also serve to establish new limits on the existence of nsi .new measurements of low energy solar fluxes can play only a rather limited role in improving on present knowledge of and limits . to be useful, the new data would need to be folded into a comprehensive analysis with all the other data ( solar , atmospheric and reactor ) which from present data have already established impressive errors on the parameters .the potential for improvement in these parameters by low energy es or cc experiments has been subjected to a detailed study by the authors of ref . who conclude that without new physics even on would make a negligible improvement on and a result is required to improve the error by more than .the authors conclusions on any improvements to be expected on are similar .there are stringent requirements placed on any detector designed to achieve these goals . in order to be sensitive to all active neutrino flavors ,the detection reaction is that of the elastic scattering from atomic electrons ( es ) in the target : .the es event signature is the occurrence of only a single , low energy recoiling electron in the detector medium .the recoil spectra are continuous from zero with the decreasing monotonically to a maximum energy of while the 7 spectrum is nearly flat up to maximum ( see points labeled input in fig . [fig : spectra ] ) .these recoil spectra place a premium on achieving as low an energy threshold as possible .the most dangerous backgrounds are those which can be created by the appearance of electrons from compton recoils of gamma rays or radioactive decays within the medium of the target .these backgrounds need to be mitigated by a combination of event signatures , use of low activity materials , depth and shielding . in order to achieve the desired few - percent precision ,very large statistics event samples are required and systematic errors arising from effects due to analysis cuts ( e.g. 
, fiducial volume , thresholds , and 7 event separation , calibrations ) must be minimized .although these details emphasize the challenges to be faced in constructing an operating detector , there are two mitigating factors : the es cross - section is very precisely known due to experiment and electro - weak theory and the expected fluxes are high ( and ) .typically , this will result in an es event rate of - which implies that only a modest - size fiducial volume ( e.g. , ) is needed for a high statistics experiment .the geometry of the heron detector design , all dimensions are in centimeters . ]the detector design is shown in ( fig .[ fig : design ] ) and its design is discussed in more detail in ref .. the general approach of heron to these issues is as follows .the target material chosen is in the superfluid state ( density 0.145 g / cc ) which has several beneficial properties .energy deposited in the helium by recoiling particles can be detected by one or all of three processes : scintillation , phonons / rotons or collecting the recoil electron trapped in a bubble .helium has no long - lived isotopes but more importantly it can be made absolutely free of all other atomic species . at superfluid temperatures it is self - cleaning of impurities due to their high mobility and favorable energy minimum at the container walls .even particulate matter quickly attaches to the container walls at our operating temperature of .since the bulk helium volume will be free of background sources , the concern is to counter radiation entering from the cryostat ( 27.3 tonne copper ) and its environment . in a separate study environmental sourceswere modeled with the detector cryostat actively shielded externally by of water at a rock overburden of meter - water - equiv .( m.w.e . ) .helium is virtually immune to creation of long - lived cosmogenic muon activity , capture or decay in it ; muons ( at 4500 m.w.e . )are vetoed externally and internally in any case and greater depth is possible .this was sufficient shielding against environmental neutrons , cosmic muons and gammas that , for purposes of the analysis simulation under consideration here , they would be negligible relative to sources from the cryostat and other detector parts .as a consequence the background issue reduces to controlling conversions of gammas entering the helium volume of 21.6 and 8 tonnes total and fiducial , respectively . however , helium does not have good self - shielding properties and this is partially compensated for by lining the cryostat with a moderator of solid nitrogen enclosed in acrylic cells ( acrylic and solid nitrogen , density ) .the function of the moderator is to absorb or degrade in energy by compton scattering the entering gamma rays which originate 97% and 3% from the cryostat and moderator , respectively .the flux of gammas entering the helium is dominated by low energy ( dominantly ) cosmogenic activity in the copper with the remainder from u , th and other activity in the copper , nitrogen , acrylic and other parts .the background gamma conversions in the helium can not be fully eliminated but are amenable to the development of a distinguishing signature which aids in their separation from signal ; the nature of this separation is detailed in sec . 
4 and 5 .we have chosen to use as the basic elements of detection for both signal and background the collection of scintillation light and also the recoil electron trapped in its bubble .we have carried out studies of these processes using prototype calorimeter sensors / detectors developed for this project and suitable for both signal types ; the processes and results are discussed briefly below and in more detail in the references cited .excited or ionized he atoms along the electron path quickly form dimers in the liquid .the radiative decays to the ground states of these singlet and triplet dimers emit photons in the ultraviolet .the scintillation light is in a narrow band centered at 16 ev and results from the decay of the singlet dimer ; since this energy is lower than the first excited state of he at 20.6 ev , the liquid is self - transparent . of a recoil electron s energy is released ( photons / mev ) in this singlet dimer mechanism ; the energy from the long - lived ( ) triplet dimer escapes or is collision quenched .( an additional of the recoil energy is radiated in phonons and rotons which by quantum evaporation could in principle also be utilized with the calorimetric sensors for discrimination ; however , we find incorporating into event signatures the detection of the recoil electron from the drifted bubble a much stronger discriminant . )when a recoil electron in he has lost most of its energy it forms an electron bubble .the electron experiences a strong , short - range repulsive potential from the bound electrons on surrounding atoms due to pauli exclusion .this repulsion forms a vacant volume , or bubble , of radius in which the electron is confined .the bubble forms in about 10 pico - sec with an effective displacement mass of he - masses and has a hydrodynamic mass of half that ; consequently due to this difference in masses , under gravity and with no electric field , a bubble would experience a buoyant acceleration of . a uniform drift velocity of the bubble can be provided and controlled by a combination of applied electric field and a very low concentration of ; for example , at , 3 and a field of / m provides a drift velocity of m/s . in a worst case example of m ( maximum depth ) , the collisions induced due to the large cross - section of 3 for scattering a bubble leads to an uncertainty in the transit time of and hence to a depth error of mm .in addition to providing a `` drag '' force the 3 also aids in extracting the electron efficiently through the free surface of the liquid by vortex attachment ( surko and reif ) .two grids on either side of the liquid surface ( not shown in fig .[ fig : design ] ) provide the drift and extraction fields .( at the normal operating temperature of the vapor pressure is sufficiently low that the space above the liquid is effectively a vacuum . )the final grid accelerates the electron to thus providing a large and distinguishable pulse in the calorimeter . 
for several reasons , photo - multiplier tubes ( pmt )are not suitable event detectors for heron , among them : the high radioactivity of pmt s , poor he self - shielding , lack of transparency of moderator and the desire to detect the drifted electrons .as mentioned , both scintillation and drifted electrons are detected on the same calorimetric devices .each device constitutes a pixel in a geometric array ( a coded aperture ) and consists of a thin wafer of silicon or sapphire to which is attached a high sensitivity metallic magnetic calorimeter ( mmc ) read out with a squid sensor . for astrophysical x - ray application , versions of mmc have been constructed with resolution .projecting from measurements on wafer prototypes of smaller heat capacity to ones of the heron size , resolution is to be expected . in a full simulation of the response of this large wafer ( , in this example ) with 16 ev photonsit is found that single photons should be detectable at wafer temperature of producing a pulse of rise- and fall - time .this performance capability is assumed in the context of the analysis of sec . 4 and 5 . for each neutrino event we must reconstruct its position within the he and also its recoil electron energy . in addition, we must develop event signatures which aid in separating signal from background events .the maximum track length expected for a neutrino event is cm so that on the scale of the total helium volume ( ; t ) neutrino signal events are effectively point sources of scintillation light . at the dominantly low energies of the gamma - ray background events the conversions in heare overwhelmingly ( ) compton scatters . of these conversions are multiple depositions distributed over an average length of more than cm in the he .consequently , the scintillation from background arises from a distributed , rather than a point , source ; additionally the event most often contains multiple , un - recombined electron recoil bubbles .these latter two features constitute the primary background signature .subsequently , differences among the spectral and spatial distributions of the signal and residual background events facilitate their final separation . in order to create these signatures and to enable the necessary cuts on data samplesthe 2400 wafer calorimeters are arranged into two planes in the vacuum space above the liquid ; the resulting array constitutes a coded aperture and provides the ability for both spatial and energy reconstruction .the concept of coded aperture arrays has been a well established one with arrays being widely used in x - ray astronomy .the role of the array is to accurately determine the _ direction _ of incoming photons from a remote x - ray point source . a coded aperture array consists of two parts , an imaging plane and , separated by a fixed distance , a mask plane . in the x - ray applicationthe image plane consists of a set of active sensors ( or pixels ) while the mask plane is opaque with a pattern of cut - out apertures . for far - distant sources ,nearly parallel rays enter the array and some of them are blocked by the opaque portions of the mask ; thus the image imposed on the sensor plane will resemble the pattern of the apertures in the mask effectively leaving a `` shadow '' ( see fig . [fig : ura ] ) . in principle , to determine the direction of the light source it is simply a matter of comparing the shadow pattern to that of the mask itself . 
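In the HERON application described below, the mask pattern adopted is a uniformly redundant array. A small example of how such a pattern can be constructed is sketched here, assuming the standard twin-prime quadratic-residue recipe; whether this matches the actual HERON mask layout in detail is an assumption, and the sketch is only meant to illustrate how the pattern is built and how transparent it is.

```python
import numpy as np

# Twin-prime URA construction for an r x s array with r, s prime and r = s + 2
# (here 19 x 17).  Cells with value 1 are open apertures, 0 are opaque.

def quadratic_residues(p):
    return {(k * k) % p for k in range(1, p)}

def ura(r=19, s=17):
    qr_r, qr_s = quadratic_residues(r), quadratic_residues(s)
    c_r = np.array([+1 if i in qr_r else -1 for i in range(r)])
    c_s = np.array([+1 if j in qr_s else -1 for j in range(s)])
    a = np.zeros((r, s), dtype=int)
    for i in range(r):
        for j in range(s):
            if i == 0:
                a[i, j] = 0
            elif j == 0:
                a[i, j] = 1
            elif c_r[i] * c_s[j] == 1:
                a[i, j] = 1
    return a

mask = ura()
print("open fraction:", mask.mean())   # roughly one half of the cells are open
```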
in practice ,elegant techniques have been developed for design of mask patterns and image deconvolution taking into account side - lobe effects as well as intrinsic and statistical noise .various classes of mask patterns have been employed in the x - ray field ranging from random apertures to strictly repeating and regular patterns .the choice of one over another depends upon experimental considerations such as strength of photon flux , resolution needed and sidelobe tolerance .ura pattern , in dimensions and in periodicity .dark and light areas represent opaque and open regions , resp . in the heron mask ,the smallest squares represents single wafers . ] in the heron application there are important differences : a ) a non - distant point source location is to be determined within a limited volume in _3-d _ , b ) the event energy must be measured , c ) for reasons ( a ) and ( b ) both planes will consist of active wafer pixels and d ) some discrimination is needed between point and non - point sources . the heron coded aperture array is arranged with the mask plane cm above the liquid surface and the image plane m above the mask . the pixels are arranged in square arrays ( cm each ) with wafer calorimeters in the image plane and in the mask .the pattern of apertures in the mask is that of a uniformly redundant array ( ura ) .this mask pattern was chosen because of its low intrinsic noise and high transparency ( ) .( in the notation of the ura it has a ( 17,19 ) grid spacing as shown in fig .[ fig : ura ] . )the nature of the ura and heron physical properties and goals constrain the choice of wafer pixel size .the he fiducial volume can be chosen and varied during physics analysis but the total mass of he is contained within a cylindrical volume of cm and cm height . in our application , the transverse dimensions of the array should be at least commensurate with those of the fiducial volume containing the sources and , although smaller pixel sizes imply finer spatial resolution , ultimately photon statistics dominate ( typically a few hundred photons for many neutrino events , due to energy and solid angle effects ) . for the heron geometry a choice of cm pixels in a ura gives resolution adequate to the physics goals without unnecessarily increasing the complexity or noise and consistent with the desired single photon performance for a wafer of this size .the deconvolution techniques used for typical x - ray applications are not applicable for our 3-d application ; additionally , they are not easily amenable to developing a background ( distributed source ) signature .instead we have adopted a likelihood approach along with a search algorithm for the most probable position in 3-d space .this approach treats each event in a sample containing both signal and background as if it were a single point source . with that assumption, it finds from the observed photon hit pattern the most probable values of its spatial location and total energy .although no attempt is made on an event - by - event basis to distinguish signal from background , distributions of the likelihood parameter s logarithm can be useful in separating signal and background as we show in sec .5 . 
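A toy version of this likelihood search can be written in a few lines. The sketch below assumes a single plane of wafers, hit probabilities proportional to the solid angle of each wafer as seen from the test point, and a simple multinomial log-likelihood maximised by brute-force grid search; neither the geometry nor the exact estimator should be read as the HERON implementation, whose operational details are described just below.

```python
import numpy as np

rng = np.random.default_rng(7)

PITCH, PLANE_Z = 5.0, 20.0                        # wafer pitch and plane height (cm); toy values
centres = np.array([(ix * PITCH, iy * PITCH, PLANE_Z)
                    for ix in range(-8, 9) for iy in range(-8, 9)])

def solid_angles(point):
    d = centres - point                           # vectors from test point to wafers
    r2 = np.sum(d * d, axis=1)
    cos_t = d[:, 2] / np.sqrt(r2)                 # wafer normals face the liquid
    return PITCH ** 2 * cos_t / r2                # small-wafer approximation

def log_likelihood(point, hits):
    p = solid_angles(point)
    p = p / p.sum()
    return float(np.sum(hits * np.log(p + 1e-30)))

# simulate one point-like event and reconstruct it on a coarse grid
true_pos = np.array([3.2, -7.5, -40.0])
probs = solid_angles(true_pos)
probs /= probs.sum()
hits = rng.poisson(300 * probs)                   # a few hundred detected photons

grid = [np.array([x, y, z])
        for x in np.arange(-20.0, 21.0, 1.0)
        for y in np.arange(-20.0, 21.0, 1.0)
        for z in np.arange(-60.0, -5.0, 2.5)]
best = max(grid, key=lambda g: log_likelihood(g, hits))
print("true:", true_pos, " reconstructed:", best)
```

The reconstructed position agrees with the true one to within the grid spacing and the photon statistics, which is the qualitative behaviour relied upon in the analysis below.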
similarly ,the effective point - like positions and effective energy distributions of the background events are used .operationally , the algorithm initiates with a test - point location in the volume and the probability of this test electron to produce the observed photon hit pattern is calculated .the test photon distribution is taken as isotropic with straight - line propagation ; the probability of hitting the -th wafer is then proportional to the solid angle ( ) subtended by the wafer from the photon current starting point . after a systematic search of points throughout the available volume ,the location in space found to have the highest probability is taken as the final position .if is the solid angle subtended by all wafers in both planes and is the number of photon hits in the pattern then we can define a quantity which is the total number inferred for the test point .then the probability of an event located at producing the recorded photon pattern is evaluated as : where is the number hitting -th wafer and and for computational convenience we use the logarithm ( loglikelihood ) : and select as the final position the one with the largest ( least negative ) .the final energy estimate is scaled from the solid angle subtended from the test point .the process converges rapidly guided by a set of empirically established criteria for avoiding subsidiary maxima and reaching a stable solution . ) .consequently , for simplicity of analysis and discussion we ignore these effects in this paper . ]a test of the reconstruction ability of the coded aperture approach has been done in a way which examines a full range of variable correlations .we have generated samples of , 7 ( , each ) and background events ( ) as they would appear in the configuration of heron described .for the neutrinos , the input recoil electron energy spectra are as shown in fig .[ fig : spectra ] and the events are distributed uniformly throughout the full he volume . the input background sample in the he is generated by propagating gamma rays initiating from sources within the detector s principal components using geant3 .the source activities and concentrations are listed in table [ tab : bg sources ] . within the he accountwas taken for bremstrahlung , very low energy compton recoils and delta - rays . for all input samples , the original position and deposited energy for every recoil electron was retained for use in comparing to reconstructed values .the events were then reconstructed as described in sec ..[tab : bg sources]assumed levels of residual activity in major detector components . [ cols="<,^,^,^",options="header " , ] for comparison , we also show in table [ tab : fitting results ] the resulting errors to be expected when all optical effects are included .the differences are not significant as we have discussed in footnote 2 .we have described that , on the basis of the new knowledge gained in recent years of neutrino properties and of higher energy solar neutrino fluxes , there are excellent reasons to perform precision real - time measurements of the very low - energy neutrino fluxes from the sun .the physics goals outlined in sec .2 include determining the luminosity of the sun in neutrinos , providing checks on some details of the ssm , testing the msw effect in the lma solution and improving constraints on the neutrinos mass - mixing parameters as well as providing discovery opportunities in the new low energy regime . 
to achieve these goals detectors are required which can measure the flux with a precision better than and the flux to better than .such detectors must be capable of collecting very large event samples and maintain good control of systematic errors .we have described the design of such a possible detector , heron , and have simulated its performance .although the heron detector is not presently scheduled or funded for construction , by experimentation in prototypes of several liters we have measured the details of energy loss processes ( scintillation , phonons / rotons , electron bubbles ) for low energy electrons in the superfluid .the development of wafer calorimeters capable of detecting all three channels has been carried out .we have not tested an array of wafers as a coded aperture ; however , given the well tested use of the method in other fields , performance as simulated can be reasonably expected .the superfluid helium target material is itself free of intrinsic internal background and provides two channels ( scintillation and drifted electrons ) to distinguish and separate externally entering background from point - like neutrino es signals via an active coded - aperture array .the simulation has been directed towards establishing the systematic and other errors to be expected for the heron detector in an exposure of 5 years . for that purposelarge samples of both signal and background events were generated and then fully reconstructed according to the physical processes in helium , the detector geometry and the properties of the coded aperture design .the expected signals were based on current best understanding of the neutrino mass - mixing parameters and the well - known electroweak scattering cross - sections .the backgrounds ( gamma - ray conversions in the helium ) were simulated from radioactive sources distributed throughout the major materials surrounding the helium . the level and nature of the activities assumed was in line with current best practice in solar and double beta decay neutrino experimentation . to separate a combined sample of pp , be7 and backgrounds into their respective flux components ,an extended loglikelihood method was used employing probability distribution functions constructed from various samples of the above simulation prescription . 
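A minimal sketch of such an extended maximum-likelihood separation is given below. The probability distribution functions are deliberately crude stand-ins (a falling pp-like recoil spectrum with an endpoint near 0.26 MeV, a nearly flat 7Be-like spectrum up to about 0.66 MeV, and a flat background), and the event numbers are arbitrary; only the structure of the fit, an extended likelihood in the three yields, reflects the method described above.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
E_MAX = 0.80                                      # MeV, toy analysis window

def pdf_pp(e):   return np.where(e < 0.261, 2.0 * (0.261 - e) / 0.261**2, 0.0)
def pdf_be7(e):  return np.where(e < 0.665, 1.0 / 0.665, 0.0)
def pdf_bkg(e):  return np.full_like(e, 1.0 / E_MAX)

def sample(pdf, n, emax):
    out = []
    while len(out) < n:                           # simple accept-reject sampling
        e = rng.uniform(0.0, emax, n)
        keep = rng.uniform(0.0, 10.0, n) < pdf(e)
        out.extend(e[keep])
    return np.array(out[:n])

true_n = {"pp": 4000, "be7": 1500, "bkg": 800}
events = np.concatenate([sample(pdf_pp,  true_n["pp"],  0.261),
                         sample(pdf_be7, true_n["be7"], 0.665),
                         sample(pdf_bkg, true_n["bkg"], E_MAX)])

pdfs = np.vstack([pdf_pp(events), pdf_be7(events), pdf_bkg(events)])

def neg_ext_loglike(n):
    n = np.asarray(n)
    return n.sum() - np.sum(np.log(n @ pdfs + 1e-30))

fit = minimize(neg_ext_loglike, x0=[3000.0, 2000.0, 1000.0],
               bounds=[(0.0, None)] * 3, method="L-BFGS-B")
print("fitted yields:", np.round(fit.x))          # should be close to true_n
```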
by design, the method included all correlations among variables imposed by event reconstruction , various cuts on the data as well as those arising from the properties of the pdf s themselves .the results are quite promising as can be seen in tables 2 - 5 .it appears that should the detector be built and perform as modeled it would be capable of satisfying the criteria necessary for the precision pp and be7 flux measurements .for example , to take the particular choice of the so - called `` standard threshold and fiducial cut '' on a 5-year exposure and combining all errors except energy scale ( statistical , high - level cuts , likelihood method ) a precision on pp flux of ( or including energy scale uncertainty ) results .similarly for , we find errors of and without and with , respectively , the energy scale uncertainty included .the full neutrino flux obtained without attempting to employ the separation of and individually would present a combined error of .the validity of the background model used in these simulations is a key issue .the composition and relative magnitude of the background sources assumed were based on current experience in the field and should therefore be realizable in practice ; nonetheless , it is important to ask to what extent are the simulation results dependent upon the assumed model and to what extent can the model be checked in practice .the decay modes of the sources , branching ratios and energies of the decay products are well known and the method of their propagation through the simulation programs are well established .perhaps most important is the question of whether we may have mis - estimated the total background level and if so how strongly the result would be affected . a mistake in the magnitude would enter principally through the separation procedure using the pdf s; consequently we have tested this effect by varying the assumed size of the background over a wide range .the effect is not drastic ; for example , should the background be 5 times larger , the error would double while a factor 50 larger background would raise the error by six times ( however , this latter rate would introduce prohibitive wafer deadtime ) . in contrast , reducing the background by a factor of only improves the error by . in practice , there are some checks available on the model . due to the good position and energy resolution for the point - like signal events , fiducial volumes of various sizes can be made and the stability of the flux results checked .as we have seen in tables 2 , 3 and 5 , varying the high - level cuts by setting different fiducial volumes and thresholds does not have a strong effect upon the expected errors .similarly , the nature and dependence of the observed spatial and energy distributions as a function of these cuts can also be compared directly to the model . 
in conclusionwe believe that a detector of the heron design utilizing superfluid helium and a coded aperture array could provide the capability to carry out the multiple physics goals achievable through precision , real - time , simultaneous measurements of and .we are grateful to the u.s .department of energy for support of r&d on this project through grant de - fg02 - 8840452 .we are indebted to j.r .klein for his close reading and valuable suggestions , r.b .vogelaar for comments and a.w .poon for assistance in providing additional computing resources at lawrence berkeley national laboratory .in order to examine the effectiveness of our flux fitting method under a more realistic context , we chose to test for the annual solar neutrino flux oscillation due to the earth orbit eccentricity .the orbit of the earth around the sun has an eccentricity of .since the diameter of the sun ( km ) is much smaller than the radius of the orbit ( km ) , the sun can be treated as a point source , thus the neutrino flux observed on earth will oscillate according to .we simulated the number of events observed daily over a span of five years .these event numbers consist of both and 7 events including their statistical fluctuations . a random error according to the systematic uncertainty of the flux fitting methodis then added to each day s flux , to simulate the errors introduced during the reconstruction and flux separation process. then the daily fitted event counts are grouped into consecutive day periods , with days worth of data each year discarded for simplicity .thus over 5 years , each of these periods contain days worth of events .a fit of these event numbers against the model of an elliptic earth orbit can then be performed , using the eccentricity as the fitting parameter .figure [ fig : ecc ] shows a typical set of data for such fitting .the error bars on the `` fitted '' data include both statistical and systematic errors , as discussed in sec .[ sec : error in fluxes ] , where the absolute energy scale uncertainty is taken as .best fit of the particular data set in that figure gives an eccentricity of ; repeating this simulation for times reveals the distribution of the best fitted eccentricity values to be .this exercise demonstrates that our flux fitting method is capable of resolving the solar neutrino flux to the precision of a few percent given 5 years of detector running time .lanou , h.j .maris , g.m .seidel , _ phys .* 58 * ( 1987 ) 2498 ; s.r .bandler , et al . ,_ j. low temp phys . _ * 93 * ( 3/4 ) ( 1993 ) 715 ; r.e .lanou , _ nucl .b. ( proc . suppl . ) _ * 138 * ( 2005 ) 98 .mckinsey , j.m .doyle ( clean ; neon using es ) , _ j. low temp phys . _* 118 * ( 2000 ) 153165 ; y. suzuki ( xmass ; xenon using es ) , in : _ proc .lownu2 _ , world scientific publ . , 2001 ,p. 81 ; r. raghavan ( lens ; indium for flux ) , _ phys .* 78 * ( 1997 ) 3618 ; h. ejiri et al .( moon ; mo for flux ) , _ phys .* 85 * ( 14 ) ( 2000 ) 2917 ; c. amsler et al .( cf4 tpc for es ) _ arxiv:0710.1049v1 [ hep - ex ] _ ; m. chen ( sno+ ; liquid scintilator for pep & cno fluxes ) , _ earth , moon and planets _ * 99 * ( 2006 ) 221 .see e.g. : m. spiro and d. vignaud , _ phys .b _ * 242 * ( 1990 ) 279 ; n. hata , s. bludman and p. langacker , _ phys . rev .d _ * 49 * ( 1994 ) 3622 ; k. m. heeger and r. g. h. robertson , _ phys . rev .* 77 * ( 18 ) ( 1996 ) 37203723 .cleveland , et al .( homestake ) , _ astrophys .j. _ * 496 * ( 1998 ) 505526 ; w. 
hampel , et al .( gallex ) , _ phys .b _ * 447 * ( 1999 ) 127133 ; j.n .abdurashitov , et al .( sage ) , _ nucl .phys . b _ * 118 * ( 2003 ) 39 ; m. g. altmann , et al .( gno ) , _ phys .b _ * 616 * ( 2005 ) 174 ; s.n .ahmed , et al .( sno ) , _ phys ._ * 92 * ( 2004 ) 181301 ; y. fukuda , et al .( superk ) , _ phys .b _ * 539 * ( 2002 ) 179 ; t. araki , et al .( kamland ) , _ phys .* 94 * ( 2005 ) 081801 .r. fardon , a.e .nelson and n. weiner , _j. cosmol .. phys _ * 0410 * ( 2004 ) 005 ; m. cirelli , m.c . gonzalez - garcia and c. pea - garay , _ nucl .b _ * 719 * ( 2005 ) 219 ; v. barger , p. huber and d. marfatia , _ phys . rev .lett . _ * 95 * ( 2005 ) 211802 ; m.c .gonzalez - garcia , p.c .holanda and r. zukanovich funchal , _ phys .* 73 * ( 2006 ) 033008. s.r .bandler et al , _ phys .* 74 * ( 1995 ) 3169 ; s.r .bandler , _ `` detection of charged particles in superfluid helium '' _ , phd . dissertation , brown university ( 1996 ) ; j.s .adams et al , _ phys .b _ * 341(3/4 ) * ( 1995 ) 431 .surko and f. reif , _ phys ._ * 175 * ( 1968 ) 229 ; b. sethumadhavan et al , _ nucl .instr . meth .a _ * 520 * ( 2004 ) 142 ; b. sethumadhavan , _ `` charge gain and breakdown in liquid helium at low temperatures '' _ , phd .dissertation , brown university ( 2007 ) ; b. sethumadhavan et al , _ phys .* 97 * ( 2006 ) 015301 .
results are presented for a simulation carried out to test the precision with which a detector design ( heron ) based on a superfluid helium target material should be able to measure the solar and 7 fluxes . it is found that precisions of and for and 7 fluxes , respectively , should be achievable in a 5-year data sample . the physics motivation for aiming at these precisions is outlined , as are the detector design , the methods used in the simulation , and the sensitivity to the solar orbit eccentricity .
power series methods have proved to yield remarkably accurate eigenvalues of simple one dimensional and central field quantum mechanical models .there are basically two different approaches : on the one hand , the use of dirichlet boundary conditions at the endpoints of a sufficiently wide interval , on the other , the hill determinant method and its variants . in this paperwe focus our attention on the latter that has been applied to a wide variety of problems , including the vibration rotation spectra of diatomic molecules .the success of this method commonly depends on the weight function which in many cases is an exponential function with a width parameter that affects the rate of convergence of the approach .the purpose of this paper is to discuss an alternative approach that is less dependent on the width parameter or scaling factor . in sec .[ sec : powerseries ] we outline a well known weighted power series method . in sec . [ sec : hankel ] we develop an alternative approach based on a rational approximation to that power series approach . in sec .[ sec : anho ] we apply both approaches to the pure quartic anharmonic oscillator . in sec .[ sec : rational ] we consider a rational potential with a singular point on the complex coordinate plane . finally , in sec .[ sec : conclusions ] we draw conclusions .consider the schrdinger equation \psi ( x)=0 \label{eq : schro}\ ] ] where the potential energy function can be expanded as a well known approach for the calculation of eigenvalues and eigenfunctions is based on the ansatz where or for even or odd states , respectively .if this expansion satisfies the schrdinger equation ( [ eq : schro ] ) , then the coefficients are polynomial functions of the energy .it has been shown that one can obtain the allowed energies ( those consistent with square integrable solutions ) from the roots of the rate of convergence of the sequence of roots }\log and for in terms of the number of coefficients required by the calculation .straight lines show the overall trend of the logarithmic sequences .we appreciate that the hankel sequence converges faster than the one for the standard approach .the `` exact '' result is simply a more accurate estimate of the eigenvalue provided by the rpm .fig [ fig : var_a ] shows the variation of the logarithmic error of the roots of with for three values of .we appreciate that the optimal value of the adjustable parameter for the quartic oscillator ( [ eq : v_x^4 ] ) is about .it is not necessary to have the `` exact '' energy in order to estimate an optimal value of the adjustable parameter .if }$ ] is the approximation of order to the eigenvalue , we simply monitor the convergence of the sequence in terms of , for example , rational potential[sec : rational ] -------------------------------- both the straightforward power series method and the hankel pad approach apply successfully to polynomial potentials as illustrated in the preceding section by means of a simple nontrivial example . in what followswe consider the rational potential that has been studied by several authors . among the approaches applied to this model we mention perturbation theory , including the expansion , variational methods , and in particular the rayleigh ritz method .one can easily obtain exact solutions to the schrdinger equation with the potential ( [ eq : v_rational ] ) for some values of the parameters and that prove suitable for testing approximate methods . 
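before turning to the rational potential , the following minimal python sketch illustrates the weighted power - series approach of the preceding sections , applied to the pure quartic oscillator . it assumes the reduced form of the schrödinger equation , psi'' + ( e - x^4 ) psi = 0 , and the gaussian weight exp( - a x^2 / 2 ) ; the four - term recursion for the coefficients is derived directly from this ansatz , and approximate eigenvalues are read off as real roots of the truncated coefficient . the width parameter a , the truncation orders and the search window are illustrative choices and may need tuning , in line with the dependence on the adjustable parameter discussed above .

```python
# A minimal sketch of the weighted power-series (Hill-determinant-type) approach,
# applied to the pure quartic oscillator in the reduced form assumed here:
#   psi''(x) + (E - x^4) psi(x) = 0,   even states (s = 0),
# with psi(x) = exp(-a x^2 / 2) * sum_j c_j x^(2j + s).
# Substituting the ansatz yields the four-term recursion coded below; approximate
# eigenvalues are obtained as real roots of c_N(E) = 0 for increasing order N.
import sympy as sp

E = sp.Symbol('E')

def series_coefficients(n_max, a, s=0):
    """Polynomials c_0(E), ..., c_{n_max}(E) of the weighted power series."""
    c = [sp.Integer(1)]                          # c_0 = 1 fixes the normalisation
    c_m1, c_m2 = sp.Integer(0), sp.Integer(0)    # c_{-1}, c_{-2}
    for j in range(n_max):
        new = (a * (4 * j + 2 * s + 1) - E) * c[-1] - a**2 * c_m1 + c_m2
        new = sp.expand(new / ((2 * j + 2 + s) * (2 * j + 1 + s)))
        c_m2, c_m1 = c_m1, c[-1]
        c.append(new)
    return c

def eigenvalue_candidates(n, a, s=0, window=(0.0, 3.0)):
    """Real roots of c_n(E) = 0 inside `window` (candidate low eigenvalues)."""
    poly = sp.Poly(series_coefficients(n, a, s)[-1], E)
    roots = [complex(r) for r in poly.nroots()]
    return sorted(r.real for r in roots
                  if abs(r.imag) < 1e-8 and window[0] < r.real < window[1])

if __name__ == '__main__':
    a = 3                                        # adjustable width parameter (illustrative)
    for n in (10, 20, 30, 40):
        print(n, eigenvalue_candidates(n, a))
```

the printed sequences should approach the lowest even - parity eigenvalue as the order grows , provided a is chosen reasonably ; their rate of convergence is precisely what the figures discussed above compare with the hankel sequences .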
the power series ( [ eq : v_series ] ) converges only for , where are the two poles of the potential energy function on the imaginary axis of the complex .if the eigenfunction is negligible for and the hill determinant method may yield reasonable results for the lowest energies and only after judicious truncation of the sequences of roots . on the other hand, the expansion of the wavefunction in a power series of the variable leads to a successful approach for all values of and . since the hankel pad method is based on a rational approximation to the wavefunction , one expects that it takes into account the singularities properly , succeeding even for moderate values of . in what follows we compare it with the weighted power series method ( [ eq : psi_series ] ) .first of all , notice that as so that we expect to be optimal and choose this width parameter value from now on .we have verified that the hankel pad approach yields reasonable results for other values of such as , for example , , and .table [ tab : rational1 ] shows the results of the hankel pad calculation of the ground state eigenvalue of the schrdinger equation ( [ eq : schro ] ) with the rational potential ( [ eq : v_rational ] ) for and three values of .notice that present hankel pad results are more accurate than those obtained earlier by means of the rayleigh ritz variational method , and comparable to those provided by a kind of iterative solution of the rayleigh ritz secular equation with an adjustable parameter .there are much more accurate results in the literature ; for example , stubbins and gornstein obtained and for and , respectively .table [ tab : rational2 ] shows results from the hill determinant method .a lack of entry means that we did not find any root in the interval .notice that while the hankel pad approach converges smoothly the hill determinant method does not , even for . besides, the latter approach does not give any reasonable result for .for the roots of the hill determinant oscillate about the exact eigenvalue , giving the tightest bounds for , before the sequence begins to diverge .averaging the roots for and one estimates that is quite close to the exact eigenvalue .however , this strategy is only practical for sufficiently small values of as discussed above .we have thus verified our earlier supposition that the hankel pad method should correct a possible failure of the power series approach caused by singular points of the potential energy function in the complex coordinate plane .clearly , the results of the preceding sections show that * the sequence of roots of the hankel determinants ( [ eq : hankel_c ] ) converges more smoothly than the sequence of roots of equation ( [ eq : cm=0 ] ) for polynomial potentials .* the rate of convergence of the sequence of roots of the hankel determinants ( [ eq : hankel_c ] ) is not so strongly dependent on the value of as the sequence of roots of the standard approach ( [ eq : cm=0 ] ) .in fact , the former converges where the latter does not ( even when ) . 
* the hankel pad method is preferable for the treatment of potential energy functions with singularities in the complex coordinate plane that limit seriously the range of applicability of the power series .* however , from a purely practical point of view it is worth noticing that when both approaches are successful , the calculation of the roots of the hankel determinants typically requires more cpu time .* there is more than one sequence of roots of the hankel determinant ( [ eq : hankel_c ] ) that converges towards a given eigenvalue .present approach shares this curious phenomenon with the rpm and appears to be a feature of the hankel determinants constructed from the coefficients of the power series coming from either the riccati equation or the schrdinger one .the schrdinger equation with the simple potential energy functions discussed above can easily be treated by means of the rayleigh ritz variational method and the basis set of eigenfunctions of the harmonic oscillator , where is an adjustable parameter .the problem reduces to the diagonalization of the hamiltonian matrix with elements .the main advantage of this approach is that it provides upper bounds to all the eigenvalues .besides , in some cases for all , and the resulting secular equation with a band matrix can be treated as a recurrence relation . in this wayone does not have to diagonalize a large matrix but simply to find the roots of a determinant of much smaller constant dimension .this is precisely the case for the simple examples discussed above .however , this variational method may not be practical if the calculation of the matrix elements of the potential energy function is too difficult . in that case the power series methods and its variants may be preferable .mitra a k 1978 _ j. math .* 19 * 2018 .kaushal r s 1979 _ j. phys . a _ * 12 * l253 .bessis n and bessis g 1980 _ j. math ._ * 21 * 2780 .flessas g p 1981 _ phys .a _ * 83 * 121 .hautot a 1981 _ j. comput .* 39 * 72 .varma v s 1981 _ j. phys .* 14 * l489 .flessas g p 1982 _ j. phys .* 15 * l97 . heading j 1982 _ j. phys .* 15 * 2355 .lai c s and lin h e 1982 _ j. phys .* 15 * 1495 .whiteheadt r r , watt a , flessas g p , and nagarajan m a 1982 _ j. phys .* 15 * 1217 .bessis n , bessis g , and hadinger g 1983 _ j. phys . a _ * 16 * 497 .chaudhuri r n and mukherjee b 1983 _ j. phys . a _ * 16 * 4031 . heading j 1983 _ j. phys .a _ * 16 * 2121 . znojil m 1983 _ j. phys .a _ * 16 * 293 .znojil m 1983 _ j. phys .a _ * 16 * 279 .cohen m 1984 _ j. phys .a _ * 17 * 2345 .znojil m 1984 _ j. phys . a _ * 17 * 3441 .fack v and berghe g v 1985 _ j. phys . a _ * 18* 3355 . handy c r 1985 _ j. phys . a _ * 18 * 3593 .marcilhacy g and pons r 1985 _ j. phys . a _ * 18* 2441 . fack v , de meyer h , and vanden berghe g 1986 _ j. math ._ * 27 * 1340 .blecher m h and leach p g l 1987 _ j. phys .* 20 * 5923 .fack v and berghe g v 1987 _ j. phys .* 20 * 4153 .roy p and roychoudhury r 1987 _ phys .a _ * 122 * 275 .varshni y p 1987 _ phys .a _ * 36 * 3009 .gallas j a c 1988 _ j. phys .* 21 * 3393 .hodgson r j w 1988 _ j. phys . a _ * 21 * 1563 .roy p , roychoudhury r , and varshni y p 1988 _ j. phys .* 21 * 1589 .roy b , roychoudhury r , and roy p 1988 _ j. phys .* 21 * 1579 .scherrer h , risken h , and leiber t 1988 _ phys .a _ * 38 * 3949 .berghe g v and meyer h e d 1989 _ j. phys .* 22 * 1705 .bose s k and varma n 1989 _ phys .a _ * 141 * 141 .lakhtakia a 1989 _ j. phys . a _ * 22 * 1701 .hislop d , wolfaard m f , and leach p g l 1990 _ j. phys . 
a _ * 23 * l1109 . roy p and roychoudhury r 1990 _ j. phys . a _ * 23 * 1657 . adhikari r , dutt r , and varshni y p 1991 _ j. math . _ * 32 * 447 . fernández f m 1991 _ phys . lett . a _ * 160 * 116 . pons r and marcilhacy g 1991 _ phys . a _ * 152 * 235 . witwit m r m 1991 _ j. phys . _ * 24 * 5291 . agrawal r k and varma v s 1993 _ phys . a _ * 48 * 1921 . handy c r , hayes h , stephens d v , joshua j , and summerour s 1993 _ j. phys . _ * 26 * 263 . stubbins c and gornstein m 1995 _ phys . a _ * 202 * 34 . ishikawa h 2002 _ j. phys . a _ * 35 * 4453 . hall r l and ciftci h 2006 _ j. phys . a _ * 39 * 7745 . fernández f m , ogilvie j f , and tipping r h 1986 _ j. chem . phys . _ * 85 * 5850 .

table [ tab : rational1 ] . ground - state eigenvalue of the rational potential ( [ eq : v_rational ] ) from the hankel - padé calculation ; the first column is the order of the approximation and the remaining three columns correspond to the three parameter choices considered in the text .

 2 | 1.385                 | 1.353120              | 1.21
 3 | 1.380525              | 1.353123              | 1.23
 4 | 1.3805318             | 1.3529481             | 1.232
 5 | 1.3805322             | 1.3529489             | 1.2323
 6 | 1.38053181            | 1.352948023           | 1.23234
 7 | 1.3805318009377       | 1.352952              | 1.232348
 8 | 1.380531800938043     | 1.352948022755        | 1.2323502
 9 | 1.3805318009380452    | 1.352948037359        | 1.2323506
10 | 1.380531800938045232  | 1.352948022753577     | 1.23235069
11 | 1.3805318009380452345 | 1.352948022753566     | 1.23235072
12 | 1.3805318009380452344 | 1.35294802275357088   | 1.232350721
13 | 1.3805318009380452344 | 1.35294802275357081   | 1.232350723
14 |                       | 1.3529480227535708289 | 1.2323507233
15 |                       | 1.3529480227535708284 | 1.23235072337
16 |                       | 1.3529480227535708285 | 1.23235072339
17 |                       | 1.3529480227535708284 | 1.232350723403
18 |                       | 1.3529480227535708284 | 1.232350723405
19 |                       |                       | 1.2323507234057
20 |                       |                       | 1.23235072340595
21 |                       |                       | 1.23235072340602
22 |                       |                       | 1.232350723406047
23 |                       |                       | 1.232350723406054
24 |                       |                       | 1.2323507234060566
25 |                       |                       | 1.2323507234060574

table [ tab : rational2 ] . the same eigenvalue obtained from the hill determinant method ; the first column is the order of the approximation , and a lack of entry means that no root was found in the interval ( see text ) .

 2 | 1.59  | 1.59
 3 | 1.32  | 1.26
 4 | 1.41  | 1.43
 5 | 1.37  | 1.30
 6 | 1.389 | 1.42
 7 | 1.375 | 1.29
 8 | 1.385 | 1.46
 9 | 1.377 | 1.22
10 | 1.384 | 1.82
11 | 1.377 | 1.03
12 | 1.384 |
13 | 1.376 |
14 | 1.386 |
15 | 1.373 |
16 | 1.391 |
17 | 1.364 |
18 | 1.409 |
19 | 1.337 |
20 | 1.48  |
21 | 1.25  |
an appropriate rational approximation to the eigenfunction of the schrödinger equation for anharmonic oscillators enables one to obtain the eigenvalue accurately as the limit of a sequence of roots of hankel determinants . the convergence rate of this approach is greater than that of a well established method based on power series expansions weighted by a gaussian factor with an adjustable parameter ( the so - called hill determinant method ) .
pricing financial derivatives is a main subject in mathematical finance with multiple implications in physics . in 1900 ,five years before einstein s classic paper , bachelier proposed the _arithmetic _ brownian motion for the dynamical evolution of stock prices with the aim of obtaining a formula for option valuation .samuelson noticed the failure of such market model that allowed negative values in stock prices , and introduced the _geometric _ brownian motion which corrects this unwanted feature . within his log - normal model ,samuelson obtained the fair price for perpetual options , although he was unable to find a general solution for expiring contracts .the answer to this question must wait until the publication of the works of black and scholes , and merton .the celebrated black - scholes - merton formula has been broadly used by practitioners since then , mainly due to its unambiguous interpretation and mathematical simplicity .this mathematical simplicity in the black - scholes - merton scheme has nevertheless a drawback : the model poorly adapts to those evolving conditions that affect actual derivatives and real markets present . in particular , empirical analyses conclude that volatility , roughly the diffusion parameter , must be considered as a changing ( random ) magnitude rather than a mere constant , as in the black - scholes - merton formula .many models have been developed in this direction , but among them a few deserve special emphasis because their historical imprint : the works of hull and white , wiggins , scott , stein and stein , or heston belong to this selected group .they are collectively termed as _ stochastic volatility _models because volatility becomes a continuous process that follows its own stochastic differential equation .quite in the opposite direction we find another possible approach to the issue : the markov - modulated geometrical brownian motion models . within these modelsthe market coefficients change in a deterministic way but at random times , and according to that they are generically known as regime - switching models . behind these liesthe theory that the value of the parameters depends on the state of the economy , that suffers from seasonal changes .seeds of this idea appeared in barone - adesi and whaley , but the first time a model of this kind was properly settled is in the work of naik .since then such models have been used to discuss european , russian or american option prices , but also in asset allocation and portfolio optimisation problems .once we have mentioned different option flavours , we must also recall that the black - scholes - merton formula is only is applicable to european options |derivatives that can be exercised at maturity alone| , whereas most of the exchange - traded options are american |they can be exercised anytime during the life of the contract .kim provided an integral representation of the american _ vanilla _ option price , but the explicit solution remains still unknown .the real difficulty behind pricing american options lies in the fact that one has to solve free boundary problems for partial differential equations , equations which sometimes have clear physical interpretation : mckean , in what is the earliest approach to the issue , unambiguously speaks about the _ heat equation _ in an economic publication . 
here the boundary represents the optimal exercise price , the stock price that triggers the early exercise of the option , and in general one can determine it only once the pricing expression is known , which leads to a circular problem . as a consequence , only restricted circumstances allow for closed formulas in american - like problems , whereas in the most general scenario analytical or numerical approximate methods must be used instead . one of these favourable instances corresponds to a major simplification of the american problem : the assumption that the option never expires . non - expiring or _ perpetual _ options serve as a good starting point in the resolution of the complete problem because the absence of maturity usually removes any explicit temporal dependence from the differential equations involved . in this article we are going to tackle the problem of pricing perpetual american vanilla calls and puts within a market model that can be considered as a degenerate instance of a regime - switching model with only two states and where one of the transition probabilities is set to zero : we will let the volatility and the dividend rate perform a _ regime change _ at some unknown instant in the future . the values of these magnitudes before and after the regime change are assumed to be known in advance , which indeed determines whether the change in the dynamics has taken place or not . part of the present analysis is formally included as a limiting case in the paper by guo and zhang cited above , where the authors studied perpetual american vanilla put options with regime switching in the absence of dividends , but there are differences as well . on the one hand , we will also pay attention to the call case . since dividend - paying assets are considered , a perpetual call may differ in price from its underlying , even though the option must not be exercised early . on the other hand , their study of puts is complete and very general , but few explicit details about the closed - form solution found are given . this is especially relevant in the analysis of the properties of the equations that determine the optimal exercise price : in essence one obtains different pricing formulas , each of them coming from an extreme value problem , and must elucidate the right choice at every moment . in spite of the multiplicity of parameters that our model involves , we will show how the value of a single magnitude answers the dilemma . there is therefore a formal resemblance between this behaviour and a phase transition in thermodynamics . the paper is structured as follows : in we present the market model , the securities traded in it and their general properties . in we introduce the concept of a hedging portfolio and show how it can be used for pricing derivatives . is specifically devoted to perpetual american vanilla options ; there we settle the properties of these ideal derivatives , and stress their interest from the point of view of finance and physics as well . in we quote the analytic expressions found , and emphasize the interpretation of the optimal exercise price as the outcome of an extremal problem .
inwe discuss the most appealing properties and particularities of the formulas that conform the solution to the problem , by illustrating our inferences through graphical examples .conclusions are drawn in , and the paper ends with a lengthy appendix where we present step - by - step calculations for call options .let us begin with the general description of the stochastic properties of the securities that are traded in the market .the first security to be considered is a zero - coupon bond , a riskless monetary asset with a market price with deterministic evolution : where , the risk - free interest rate , is assumed to be constant and positive .the second security present in the market is the stock , whose time evolution for fulfils the following it stochastic differential equation ( sde ) : where is a wiener process , a one dimensional brownian motion with zero mean and variance equal to .the drift , , and the volatility , , are stochastic processes as well : we will assume that their initial values are and , and after that moment they may simultaneously change to some other ( different ) fixed values , and , known in advance .such a turnover can take place only once in a lifetime : will denote the indicator function , which assigns the value 1 to a true statement , and the value 0 to a false statement . ] we state that these magnitudes are stochastic because the instant in which the regime change occurs is a random variable . we will consider that follows an exponential law with =\lambda^{-1}>0 ] : which is a maximum since the above properties compel equation to have at the most one solution for which .the necessary and sufficient condition for the existence of such solution is therefore , condition grants that and , incidentally , that is satisfied in ( [ vplusa1 ] ) .in fact , we can show how leads to as well .let us assume that : in this case we have to solve ( [ odecalla1 ] ) and ( [ odecalla22 ] ) .the general solutions if are : and with the five boundary conditions to be satisfied listed below : after imposing constraints ( [ va2_1])-([va2_4 ] ) one gets \left(\frac{s}{h^+_a}\right)^{\gamma^+_a}\\ + \frac{\lambda}{\lambda+r}\frac{\beta^+_b}{\gamma^+_a-\gamma^-_a } k\bigg[\frac{\gamma^-_a}{(\gamma^+_a-1)(\gamma^+_a-\beta^+_b)}\\-\frac{\gamma^+_a}{(1-\gamma^-_a)(\beta^+_b-\gamma^-_a)}\left(\frac{h^+_b}{h^+_a}\right)^{\gamma^+_a-\gamma^-_a}\bigg]\left(\frac{s}{h^+_b}\right)^{\gamma^+_a}\\ + ( h^{+}_b - k)\frac{\lambda}{\lambda+\ell^{+ } } \left(\frac{s}{h^{+}_b}\right)^{\beta^{+}_b } , \quad ( s\leqslant h^{+}_b),\end{aligned}\ ] ] and \left(\frac{s}{h^+_a}\right)^{\gamma^+_a}\\ + \frac{\lambda}{\lambda+r}\frac{\beta^+_b\gamma^+_a}{(\gamma^+_a-\gamma^-_a)(1-\gamma^-_a)(\beta^+_b-\gamma^-_a ) } k\bigg[\left(\frac{s}{h^+_b}\right)^{\gamma^-_a}-\left(\frac{h^+_a}{h^+_b}\right)^{\gamma^-_a}\left(\frac{s}{h^+_a}\right)^{\gamma^+_a}\bigg]\\ + \frac{\lambda}{\lambda+\delta_a}s - \frac{\lambda}{\lambda+r}k , \quad ( h^{+}_b < s\leqslant h^{+}_a).\end{aligned}\ ] ] condition ( [ va2_5 ] ) leads to a new transcendental equation : and one can define a second auxiliary function in order to analyse the problem : since . as before and bounded and positive constants , defined in the main text , equations ( [ cc ] ) and ( [ cd ] ) .the values at the extremes of the interval of interest are and provided that .when function has a maximum at , since whereas is a decreasing function if . 
in any case , one can conclude that has a single zero , irrespective of the location of the maximum , which , if it exists , may be placed either inside or outside this region . the resolution scheme for perpetual american vanilla puts is very similar , so we have decided not to include it here for brevity .
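as a complement to the closed - form analysis above , the following is a minimal monte carlo sketch of the market dynamics assumed in this paper : a geometric brownian motion whose volatility and dividend rate switch once , at an exponentially distributed random time , from one known pair of values to another . it only illustrates the dynamics ( together with a crude european - style payoff check ) , not the perpetual american prices derived in the text ; all numerical parameters , and the choice of drift r - delta under the pricing measure , are illustrative assumptions made here .

```python
# A minimal Monte Carlo sketch (not the paper's closed-form analysis) of a
# geometric Brownian motion whose volatility and dividend rate switch once,
# from (sigma_a, delta_a) to (sigma_b, delta_b), at an exponentially distributed
# random time with rate lam. Parameter values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def simulate_paths(s0, r, lam, sigma_a, delta_a, sigma_b, delta_b,
                   t_max=10.0, dt=1 / 252, n_paths=10_000):
    n_steps = int(t_max / dt)
    s = np.full(n_paths, s0, dtype=float)
    tau = rng.exponential(1.0 / lam, size=n_paths)     # regime-change times
    for i in range(n_steps):
        t = i * dt
        after = tau <= t                               # paths that have switched
        sigma = np.where(after, sigma_b, sigma_a)
        delta = np.where(after, delta_b, delta_a)
        dw = rng.normal(0.0, np.sqrt(dt), size=n_paths)
        # drift r - delta under an assumed pricing measure; the switch is applied
        # from the first full step after tau (a discretisation simplification)
        s *= np.exp((r - delta - 0.5 * sigma**2) * dt + sigma * dw)
    return s

if __name__ == '__main__':
    s_T = simulate_paths(s0=100.0, r=0.03, lam=0.5,
                         sigma_a=0.2, delta_a=0.02,
                         sigma_b=0.4, delta_b=0.05)
    k = 100.0
    # crude check: discounted payoff of a European-style vanilla call at t_max
    print(np.exp(-0.03 * 10.0) * np.maximum(s_T - k, 0.0).mean())
```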
perpetual american options are financial instruments that can be exercised at any time and never mature . in this paper we study in detail the problem of pricing this kind of derivative , for the most popular flavour , within a framework in which some of the properties of the underlying stock ( volatility and dividend policy ) can change at a random instant of time , but in such a way that we can forecast their final values . under this assumption we can model actual market conditions , because most relevant facts usually entail sharp , predictable consequences . the effect of this potential risk on perpetual american vanilla options is remarkable : the very equation that determines the fair price depends on the solution to be found . sound results are obtained from the standpoint of both finance and physics . in particular , a parallel is established between the overall outcome of this problem and a phase transition .
paper presents a new way to analyze gossip protocols based on random linear network coding that substantially simplifies , extends , and strengthens the results of previous work .gossip is a powerful tool to efficiently disseminate information .its randomized nature is especially well - suited to work in unstructured networks with unknown , unstable or changing topologies .because of this , gossip protocols have found a wide range of applications and have been extensively studied over the past several decades .recently , gossip protocols based on random linear network coding ( rlnc ) have been suggested to cope with the additional complexities that arise when multiple messages are to be distributed in parallel .rlnc gossip has been adopted in many practical implementations and has performed extremely well in practice .these successes stand in contrast to how little rlnc gossip is understood theoretically .since its initial analysis on the complete graph , several papers have tried to give good upper bounds on the stopping time of rlnc gossip in more general topologies . however , none of them address the case of unstable or changing topologies , and , even with the restriction to static networks , the guarantees are far from being general or tight on most graphs .in addition , all existing proofs are quite involved and do not seem to generalize easily . [[ our - results ] ] our results + + + + + + + + + + + this paper has two main contributions .the first is a new analysis technique that is both simpler and more powerful than previous approaches .our technique relates the stopping time for messages to the much easier to analyze time needed to disseminate a single message . for the first time , and in practically all settings , this technique shows that rlnc gossip achieves perfect pipelining , i.e. , it disseminates messages in order optimal time .our results match , and in most cases improve , all previously known bounds and apply to much more general models . to formalize this ,we give a general framework for network and communication models that encompasses and unifies the models suggested in the literature so far .we give concrete results for several instantiations of this framework and give more detailed comparisons with previous results in each section separately . as a second major contribution, our framework extends all models to ( highly ) dynamic networks in which the topology is allowed to completely change at any time .all of our results hold in these networks even if the network dynamics are controlled by a fully adaptive adversary that decides the topology at each time based on the complete network state as well as all previously used randomness . virtually nothing , besides simple sequential flooding protocols , was previously known in such truly pessimistic network dynamics .having optimal `` perfectly pipelined '' stopping times in worst - case adaptive dynamic networks is among the strongest stability guarantees for rlnc gossip that one might hope for . to this end ,our results are the first that formally explain rlnc gossip performance in the dynamic environments it is used in and was designed for . 
while the algorithm works in this wide variety of settings , our analysis remains mostly the same and extremely simple , in contrast with complex proofs that were previously put forward for the static setting .gossip is the process of spreading information via a randomized flooding procedure to all nodes in an unstructured network .it stands in contrast to structured multi - cast in which information is distributed via an explicitly built and maintained structure ( e.g. spanning tree ) .while structured multi - cast can often guarantee optimal use of the limited communication resources it relies heavily on having a know and stable network topology and fails in distributed or uncoordinated settings .gossip protocols were designed to overcome this problem . by flooding information in a randomized fashionthey guarantee to deliver messages with high probability to all nodes with little communication overhead .this stability and distributed nature of gossip makes it an important tool for collaborative content distribution , peer - to - peer networks , sensor networks , ad - hoc networks and wireless networks and literature applying gossip in many areas and for many purposes is vast ( e.g. ) . the gossip spreading of both a single message and multiple messages has been intensely studied .the spreading of one message often follows a comparatively simple epidemic random process in which the message is flooded to a randomly chosen subset of neighbors .spreading multiple messages in parallel is significantly more complicated because nodes need to select which information to forward .the main problem in this context is that widely spread messages get forwarded more often and quickly outnumber rarer messages . in many casesthe slow spread of the rare messages dominates the time needed until all nodes know every message . a powerful and elegant way to avoidthis and similar problems is the use of network coding techniques .network coding as introduced by the seminal work of ahlswede , cai , li and yeung breaks with the traditional concept that information is transported by the network as an unchanged entity .ahlswede at al .show that in many multi - cast scenarios the optimal communication bandwidth can be achieved if and only if intermediate nodes in the network code information together .li , yeung and cai showed that for multi - cast it is enough if intermediate nodes use linear coding , i.e. computing linear combinations of messages . following this ho , koetter , mdard, karger and effros showed that the coefficients for these linear combinations need not be carefully chosen with regard to the network topology but that for any fixed network the use of random linear combinations works with high probability .the strong performance guarantees and the independence of the coding procedure from any global information about the network makes random linear network coding ( rlnc ) the perfect tool for spreading multiple messages .this was first observed and made formal by deb and mdard .they show that using randomized gossip and rlnc in a complete network in which each of the nodes starts with one message all information can be spread to all nodes in linear time , beating all non - coding approaches . 
after the introduction of this protocol in and its follow - up was used in many applications , most notably the microsoft secure content distribution ( mscd ) or avalanche system .there has also been more theoretical work investigating the convergence time of the rlnc - algorithm on general static network topologies .we give a detailed description and comparison to these works in section [ sec : applications ] .[ [ gossip - in - dynamic - networks - models ] ] gossip in dynamic networks models + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + while previous work on rlnc gossip focused on static networks our analysis shows that it works equally well in a wide range of dynamic network topologies .this contributes to ongoing work on modeling dynamic networks and exploring ways to efficiently communicate over them . with more and more modern networks being highly dynamicthis task has recently gained importance .the model for studying these networks is still in flux .substantial work has been devoted to random connectivity models in which a particular graph suffers different random edge faults in each round , or in which each node is connected to other random nodes in each round .other work , e.g. on population protocols ( see for a recent survey ) has been invested in studying networks that eventually stabilize .other models allows for worst - case changes in network connectivity to happen , but only at a slow pace with plenty of time for self - stabilization to adapt to the changes .gossip and broadcasting are among the most frequently considered primitives in these settings .recently , kuhn , lynch , and oshman proposed a truly pessimal model of network connectivity : that an adaptive adversary chooses the network structure in each round , subject only to the requirement that the network be connected in each round , and that nodes _ anonymously broadcast _ some chosen message without knowing who their current neighbors are .the strength of this model means that any algorithms that work in it will be broadly applicable to dynamic networks .kuhn et al .give simple algorithms based on sequentially flooding messages through the network as a proof that computation is at least possible though with strong performance losses compared to static networks ( even a simple consensus takes rounds in which all nodes communicate ) . our network model framework adopts the pessimal dynamics of kuhn et al . and can be seen as extending the model to also include network topologies with different connectivities , asynchronous communication or non - broadcasting behavior .more importantly is that this paper shows that rlnc gossip remains highly efficient in these dynamic networks giving the first improvements over the simple flooding algorithms in .[ [ organization ] ] organization + + + + + + + + + + + + section [ sec : algorithm ] reviews the rlnc algorithm and section [ sec : technique ] gives our new analysis technique . 
in section [ sec : model ]we introduce the network model framework .section [ sec : applications ] shows how to apply our technique in various instantiations of this framework .section [ sec : extensions ] finally discusses several ways in which the intentionally simple proofs from section [ sec : applications ] can be extended or sharpened .in this section , we give a brief description of the rlnc algorithm .the algorithm is simple and completely independent of the network structure or communication protocol .alternative descriptions of the same algorithm can be found in or .the rlnc algorithm sends out packets in the form of vectors over a finite field , where is an arbitrary prime or prime power .we assume that there are messages , , that are vectors from of length .every packet that is sent around during the execution of the algorithm has the form , where is a linear combination of the messages , and is the vector of the coefficients . if enough packets of this form are known to a node , i.e. , the span of the coefficient vectors is the full space , gaussian elimination can be used to reconstruct all messages . for this , only packets with linearly independent coefficient vectors are needed .linearity furthermore guarantees that any `` new packet '' that is created by taking a linear combination of old packets has the same valid format . with this, it is easy to see that a node can produce any packet whose coefficient vector is spanned by the coefficient vectors of the packets it knows .the algorithm is now easily described : each node maintains a subspace that is the span of all packets known to it at the beginning and received so far . if does not know any messages at the beginning , then is initialized to contain only the zero vector .if knows some message(s ) at the beginning , is initialized to contain the packet in which is the standard basis vector . furthermore contains all linear combinations that complete the span of these packet(s ) . whenever node sends out a packet , it chooses a uniformly random packet from . at the end of each round, all received packets are added to and again the span is taken . if the subspace spanned by the coefficient vectors is the full space , a node decodes all messages . throughout the rest of the paper we will solely concentrate onthe `` spreading '' of the coefficient vectors ; the linear combination of the messages implied by a coefficient vector is always sent along with it .we therefore define to be only the coefficient part of , i.e. , the projection onto the first components .* remark : * the parameter is used to trade of a faster running time versus bandwidth . while a larger can lead to faster convergence it increases communication overhead by increasing the size of the -size rlnc - coefficients . in contrast to some of the related papers all results in this paper hold for arbitrary choices of . 
for simplicity we will often restrict ourself to .note that this is the hardest case for running time considerations and it can be safely assumed that convergence times for larger will only be better .the case is furthermore interesting because it leads to the minimal rlnc - coefficients overhead and allows the use of simple xors as a basic arithmetic operation .the crux of multi - message gossip , especially in dynamically changing networks , is that it is not known to a sender who will receive a packet when it is transmitted .this renders exchanging information difficult and typically makes information broadcast protocols work well in the beginning but deteriorate when nodes begin knowing a lot of ( mostly the same ) information .for example , if messages need to be spread , and one node knows all of them and is allowed to transmit to another node that is only missing one message , the chance that the right message is picked is at most . here , network coding can drastically improve the performance .mixing packets over a large enough field makes it highly likely that the node will transmit some new information . in the above example, the first node could transmit a specified random xor ( or random linear combination over ) of messages . whenever the message not known to the second nodeis mixed with a non - zero coefficient , the node can reconstruct the missing message .if the field size is taken to be large enough , the probability that this happens can be made arbitrarily high .when analyzing the rlnc algorithm presented in section [ sec : algorithm ] , sub and mdard were the first to use the notion of dimensionality of the subspaces as a measure of progress .they made the observation that a node can , and most likely will , transmit new information to a node , and thus increase the dimension of , whenever the subspace is not already contained in .for this reason .they call such a node _ helpful _ for .it is easy to see that the vectors that do not extend the dimensionality of , namely those in , form a lower dimensional subspace in .this results in a success probability of at least if a random vector from is chosen as a transmission .this fact and the notion of helpfulness is used as a crucial tool in all further rlnc proofs .unfortunately , it becomes hard and complicated to keep track of helpfulness especially because dimensionality does not accurately capture the progress of a system towards a mixed state well enough .take for example a network in which two cliques or nodes are connected to each other . in this systemthere are two extreme states in which every node has dimension and every node is helpful to all others . in the first one the knowledge of a messageis restricted to one clique , i.e. , messages are known by the first clique and messages are known by the second clique .a message is known to all nodes in its clique but to one .this makes all nodes helpful to each other .it is clear that this state is highly concentrated and not well mixed while the truly mixed state would be the one in which a `` randomly chosen '' half of the nodes knows about each message .we argue that the right way to look at the spreading of information is to look at the orthogonal ( dual ) complement offers additional information on orthogonal complements . ] of the coefficient subspaces . while the coefficient subspaces grow monotonically to the full space their orthogonal complement decreases monotonically to the empty span . 
to seehow quickly this happens we first concentrate on one fixed ( dual ) vector , determine the time that is needed until it disappears from all subspaces with high probability and than take a union bound over all those dual vectors .to formalize this we introduce the following crucial notion of knowing : a node knows about if its coefficient subspace is not orthogonal to , i.e. , if there is a vector with .note that a node knowing a vector does not imply or anything about being able to decode a message associated with the coefficients .knowing only indicates that the node is not completely ignorant about the set of packets that have a coefficient vector orthogonal to .counterintuitively , because we are not working over a positive - definite inner - product space , it can even be that but does not know .for example , over , if is just ( the span of ) the vector , then since over ( has dot product 0 with itself mod 2 ) , does not know , even though .the next lemma proves the two facts that make this notion of knowledge so useful : [ lem : knowledge - spreads ] if a node knows about a vector and transmits a packet to node then knows about afterwards with probability at least .furthermore if a node knows about all vectors in then it is able to decode all messages .knowledge about a essentially spreads with probability because the vectors in that are perpendicular to form a hyperplane in .for a complete and more elementary proof see appendix [ app : proofs ] . with this , the spreading of knowledge for a vector is a monotone increasing set growing process .it is usually relatively easy to understand this process and to determine its expected cover time .because the spreading process can be seen as a monotone markov process , it is easy to prove that the cover time always has an exponentially decaying tail . in most cases this tail kicks in close to the expectation .this allows to pick a ( usually ) such that after time any vector in has spread with probability and then take a union bound over all vectors to complete the proof that with high probability everything has spread .the following theorem summarizes this idea : [ thm : reduction ] fix a prime ( power ) , a probability and an arbitrary network and communication model .+ suppose a single message is initiated at a node and then flooded through the network by the following faulty broadcast : in every round every node that knows the message and is supposed to communicate according to the communication model does forward the message with probability and remains silent otherwise .if for every node the probability that the message reaches all nodes after rounds is at least then messages can be spread in the same model in time with probability using the rlnc gossip protocol with field size .this follows directly from the discussion above and lemma [ lem : knowledge - spreads ] .initially every non - zero vector is known to at least one node namely the one that knows about the message where is a non - zero component of .whenever the network and communication model dictates that a node that knows sends a message to a node lemma [ lem : knowledge - spreads ] shows that with probability the node afterwards knows .the spreading of each vector therefore behaves like a faulty flooding process that floods in every transmission with probability . 
by assumptionwe have that after time steps every vector from fails to spread to all nodes with probability at most .taking a union bound over all vectors gives the guarantee that the probability that after rounds all nodes know about all vectors is at least . according to lemma [ lem : knowledge - spreads ]all nodes can decode in this case and have learned the messages .next we give a typical and easy way to apply theorem [ thm : reduction ] .we show that the cover time for one vector is often dominated by a negative binomial distribution , where is the expected coverage - time , and is a constant probability .such a distribution has a strong enough tail to prove optimal stopping times . in what followswe give a simple template to establish this : what is needed for this template is a definition of a `` successful round '' such that at most such rounds are needed to spread a single vector and such that a round is not a success with ( say for now constant ) probability at most .the appropriate definition of success depends on the network model and is usually centered around its expansion , cuts , or diameter which determine how many additional nodes come to know about the vector in a `` good round '' . since nodes do not forget any information this spreading process is monotone and no progress gets lost in a bad round .thus if the knowledge about has not spread after steps , then there were at least failures , whereas one would only expect .if we choose the constant large enough , a chernoff bound or even simpler methods can now show that the probability for this to happen is at most .this is small enough that , after a union bound over all vectors ( e.g. for ) , the probability that all messages have not spread is at most .this simple template often applies directly and leads to simple proofs of expected and high probability converges times of that are often already order optimal .even when not stated explicitly , all of our results hold furthermore with high probability . in particular as shown here , an optimal additive additional rounds typically suffice to obtain a success probability for any .in this section , we elaborate on our network model framework that encompasses and extends the models suggested in the literature so far .the models and the results are very stable and can easily be extended further .we chose the following description as a trade - off between simplicity and generality .[ [ the - network ] ] the network + + + + + + + + + + + we consider networks that consist of nodes .a network is specified by a ( directed ) graph on these nodes for every time .edges in are links and present potential communication connections between two nodes in round .we will usually assume that the network has , at all times , certain connectivity properties and will express the stopping time in terms of these parameters .( see also section [ sec : modelextensions ] . )[ [ adversarial - dynamics ] ] ( adversarial ) dynamics + + + + + + + + + + + + + + + + + + + + + + in all previous papers that analyzed the rlnc algorithm , the network topology was assumed to be _ static _ , i.e. , .as discussed in the introduction , we allow the network topology to change completely from round to round and allow a fully adaptive adversary to choose the network .because we are dealing with randomized protocols , we have to specify precisely what the adversary is allowed to adapt to . 
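to make the algorithm and the stopping - time discussion concrete , here is a minimal python simulation of rlnc gossip over gf(2 ) ( q = 2 ) with synchronous push on the complete graph , one of the simplest instantiations of the framework : every node keeps the coefficient vectors it has seen and , when it communicates , sends a uniformly random gf(2 ) combination of them . the complete graph , the uniform push , and the initialisation ( node i starts with message i ) are illustrative choices ; the simulation only reports the observed stopping round and does not by itself verify any of the bounds claimed in the paper .

```python
# A toy simulation of RLNC gossip over GF(2): synchronous push on the complete
# graph, node i initially holds message i (one possible initialisation).
import numpy as np

rng = np.random.default_rng(1)

def rank_gf2(rows):
    """Rank of a set of GF(2) row vectors via Gaussian elimination."""
    m = np.array(rows, dtype=np.uint8) % 2
    n_rows, n_cols = m.shape
    rank, col = 0, 0
    while rank < n_rows and col < n_cols:
        pivot = np.nonzero(m[rank:, col])[0]
        if pivot.size:
            p = rank + pivot[0]
            m[[rank, p]] = m[[p, rank]]
            for r in range(n_rows):
                if r != rank and m[r, col]:
                    m[r] ^= m[rank]
            rank += 1
        col += 1
    return rank

def random_from_span(basis, rng):
    """Uniformly random GF(2) linear combination of the stored vectors."""
    coeffs = rng.integers(0, 2, size=len(basis), dtype=np.uint8)
    v = np.zeros(len(basis[0]), dtype=np.uint8)
    for c, b in zip(coeffs, basis):
        if c:
            v ^= b
    return v

def rlnc_complete_graph(n, k, max_rounds=10_000):
    """Return the first round after which every node has full rank k."""
    spans = [[np.eye(k, dtype=np.uint8)[i]] if i < k else [] for i in range(n)]
    for t in range(1, max_rounds + 1):
        incoming = [[] for _ in range(n)]
        for u in range(n):
            if spans[u]:                          # empty span: equivalent to sending zero
                target = rng.integers(0, n - 1)
                target += target >= u             # uniform neighbour different from u
                incoming[target].append(random_from_span(spans[u], rng))
        for v in range(n):
            spans[v].extend(incoming[v])
        if all(spans[v] and rank_gf2(spans[v]) == k for v in range(n)):
            return t
    return None

if __name__ == '__main__':
    print(rlnc_complete_graph(n=64, k=32))
```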
in our models ( similar to )an _ adaptive adversary _ gets to know the complete network state and all previously used randomness when choosing the topology .after that , independent randomness is used to determine the communication behavior and the messages of the nodes on this topology .this means that the adversary can not adapt to who is sending to whom , or which messages are chosen for this round .the first assumption is necessary in many models to not render communication impossible .the second assumption can be weakened to get _ strongly adaptive _ or even _ omniscient _ adversaries who know in advance all future randomness that is used to create messages .the companion paper shows a trade - off between the adaptiveness of the adversary and the field size for these models . in this paperwe restrict our attention to the adaptive adversary .[ [ the - goal - gossip ] ] the goal : gossip + + + + + + + + + + + + + + + + distributed over the network are messages numbered each known to at least one node . throughout this paper , we assume a worst - case starting configuration for all messages including the case in which all messages are exclusively known to only one node ( see also section [ sec : mixed - initial - state ] ) .the goal of gossip protocols is to make all messages known to all nodes in the network using as little time as possible ( in expectation and with high probability ) [ [ communication ] ] communication + + + + + + + + + + + + + nodes communicate along links with each other during transactions that are atomic in time . in each round , one packetis transmitted over a link if this link is activated in this round . from the view of a node , there are four commonly considered types of connections .either a node sends to all its neighbors , which is usually referred to as broadcast , or it establishes a connection to one ( e.g. uniformly random ) neighbor and sends ( push ) or receives ( pull ) a message or both ( exchange ) . in all cases , the packet is chosen without the sender knowing which node(s ) will receive it . [[ message - and - packet - size ] ] message and packet size + + + + + + + + + + + + + + + + + + + + + + + as described in section [ sec : algorithm ] we assume that all messages and packets have the same size , and that a packet exactly contains one encoded message and its rlnc - coefficients . note that the restriction on the message size is without loss of generality , since one can always cut a big message into multiple messages that fit into a packet .we also assume that the message size is large enough that the size of the rlnc - coefficients that are sent along is negligible .this assumption was made by all previous work and is justified by simulations and implementations in which the overhead is only a small fraction ( e.g. ) of the packet size .[ [ synchronous - versus - asynchronous - communication ] ] synchronous versus asynchronous communication + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + we consider two types of timing models . in the synchronous case ,all nodes get activated at the same time and choose their messages independently , and messages get delivered according to the current network and who sends and receives from whom .note that this model is inherently discrete , and we assume that are the times when nodes communicate .we discuss this model in section [ sec : randomphonecall ] . 
for the asynchronous case , we assume that every node communication is triggered independently by a poisson clock .this means that ( with probability one ) at any time only one node sends its message .this model can be directly translated into a discrete time model that defines round as the time such a communication takes place .the model considered in the literature so far assumes that every node is activated uniformly at random to communicate and then chooses a uniformly random neighbor for a push , pull or exchange .they also scale the time in the asynchronous model by a factor of so that each node gets activated once per time unit in expectation .we do not assume uniformity in either of the two distributions , and we present results for this more general model in section [ sec : asynchsingle ] .in this section we take the models from section [ sec : model ] and describe the results that can be obtained for them using our analysis technique .there is a section for each different kind of communication model .we start with the random phone call model that introduced rlnc - gossip .we than cover the extensions to arbitrary underlying network topologies as considered by .section [ sec : asynchsingle ] proves stopping times for a communication model that encompasses all former asynchronous communication protocols ( push , pull , exchange , ) . for this modelwe answer a question of and show that a simple min - cut quantity exactly captures the behavior of gossip of messages .lastly in section [ sec : broadcast ] we give the first bounds for the performance of synchronous and asynchronous broadcast in general networks . in this sectionwe concentrate on showing only simple proofs that solely use the template from section [ sec : simple - template ] . in section [ sec : extensions ] , we revisit the models covered here and show some proof extensions .in this section , we consider the work of deb and mdard and its follow - up and show how to simplify and improve the analysis .the papers use a fairly simple model from our framework , namely the synchronous push or pull model on the complete graph , i.e. , .this means in each round each node picks a random other node to exchange information with .this model is also known as the random phone call model and was introduced by .it is shown that it is possible in this model to spread messages in time if .this beats the time of sequential -phases of flooding just one message .the follow - up papers generalize this result to smaller number of messages and allow to be as small as .they show that the running time of the algorithm is , i.e. , order optimal as long as . in order to prove this result, they have to assume that each node knows initially only one message and that initially the messages are equally spread . even with these assumptionsthe analysis is long and complicated and the authors state themselves in their abstract that `` while the asymptotic results might sound believable , owing to the distributed nature of the system , a rigorous derivation poses quite a few technical challenges and requires careful modeling and analysis of an appropriate time - varying bernoulli process . 
''our next lemma shows that rlnc gossip actually always finishes with high probability in order optimal stopping time .our analysis is much simpler and has many further advantages : it holds for all choices of and allows to be as small as .our proof does also not rely on any assumptions on the initial message distribution .we show in section [ sec : exact - dependence - k ] that the well - mixed initial state assumed in actually provably speeds up the convergence compared to the worst - cast distribution for which our result holds .our proof furthermore gives a success probability of if the algorithm runs for time . in the setting of with ,this is instead of the stated there .lastly it is interesting to note that previous general approaches are unable to prove any running time that beats the simple non - coding non - gossiping sequential flooding approach when applied to the complete graph / network . [lem : randomphonecall ] the rlnc gossip in the random phone call model with spreads messages with high probability in exactly time .this holds independently from the initial distribution of the messages and of the communication model ( e.g. push , pull , exchange ) .after the helpfulness of rlnc gossip was established for the complete graph by , the papers , and generalized it to general static topologies and consider asynchronous and synchronous push , pull and exchange gossip .in this section we first review the previous results and than show how to improve over them giving an exact characterization of the stopping time or rlnc gossip for messages using the template of section [ sec : simple - template ] .the paper `` information dissemination via network coding'' by mosk - aoyama and shah was the first to consider general topologies .they consider a similarly general version of the synchronous and asynchronous gossip as presented here and analyze the stopping times for in dependence on the conductance .their analysis implies that with high probability phases of asynchronous rounds suffice for the complete graph and constant degree expanders and such phases for the ring - graph .while the analysis is very interesting , these results do not beat the simple ( non - coding ) sequential flooding protocol and the stopping time of the ring - graph and many other graphs is even off by a factor of .their running times for the synchronous model are similar but lose another -factor .their dependence is on the success probability is furthermore multiplicative in because it stems from a standard probability amplification argument .two recent papers analyzed rlnc gossip using two completely different approaches . 
the second points out that the analysis of the first is flawed and prove that the asynchronous rlnc gossip on a network with maximum degree takes with high probability time .their proof uses an interesting reduction to networks of queues and applies jackson s theorem .they also give a tight analysis and lower bounds for a few special graphs with interesting behavior ( see below ) .while their analysis is exact for few selected graphs the analysis is far from tight and in most graphs the maximum degree has nothing to do with the stopping time of rlnc gossip .the major question asked in is to find a characterizing property of the graph that determines the stopping time .we give exactly such a characterization for the asynchronous case with assuming a worst - cast message initialization .the model we use is a generalization of the classical push , pull and exchange model : we allow the topology in every round to be specified by a graph with directed and/or undirected edges and a probability weight on every edge , such that the sum over all edges is at most 1 . in every round each edge gets exclusively selected with probability , i.e. , in each round at most one edge gets selected .if the edge is undirected an exchange is performed and if a directed edge gets activated a packet is delivered in the direction of the edge .note that this model is a generalization of the `` classical '' communication models . to obtain the probability graph from the undirected network with push orpull one just has to replace every undirected edge by two directed edges with probability weight and where and are the degrees of and respectively . to obtain the exchange protocoleach undirected edge simply has the probability weight .given such a network graph with probability weights we define the min - cut as : where are all edges leaving a non - empty vertex - subset in .the next two lemmas show that this quantity exactly captures how long rlnc gossip for messages takes .[ lem : cut - asynch - single ] if for every time the min - cut of is at least then the asynchronous single transfer algorithm with spreads messages with probability at least in time .the next lemma proves that is optimal .[ lem : lowerbound - asynch - single ] with high probability , the asynchronous single transfer algorithm takes at least rounds to spread messages if it is used on fixed graph with ( min-)cut on which at least messages are initialized inside this cut . applying lemma [ lem : cut - asynch - single ] to the standard push / pull model gives a stopping time for any dynamic graph whose maximum degree is bounded by , which is the main result of .it also gives for the complete graph ( instead of the worst case of ) and nicely explains the behavior of the barbel graph and the extended barbel - graph that were considered by .the proof of lemma [ lem : cut - asynch - single ] can furthermore easily be extended to show that the dependency on the success probability is only logarithmic and additive in contrast to the previous work . in this sectionwe give convergence results for synchronous and asynchronous broadcast gossip in arbitrary dynamic networks .these are to our knowledge the first results for the rlnc algorithm in such a setting .we think the results in this section are of particular interest for highly dynamic networks .the reason for this is that many of the highly unstable or dynamic networks that occur in practice like ad - hoc- , vehicular- or sensor - networks are wireless and thus have inherent broadcasting behavior . 
to fix a model we first consider the simple synchronous broadcast model .we assume without loss of generality that the network graph is directed because any undirected edge can be replaced by its two anti - parallel directed edges .having wireless networks in mind we also assume that in each round each nodes computes only one packet that is then send out to all neighbors .our results also hold for the less realistic model where a node sends out a different packet to each neighbor .the parameter that governs the time to spread one message in a static setting is ( not surprisingly ) the diameter and it is easy to prove stopping times for messages using our technique . in a dynamic setting this is not true . even for just one message , an adaptive adversary can , for example , always connect both the set of nodes that know about it and the set of nodes that do not know about it to a clique and connect the two cliques by one edge .even though the graph has diameter at all times , it clearly takes at least rounds to spread one message . in order to prove stopping times in the adaptive adversaries model we switch to a parameter that indirectly gives a good upper - bound on the diameter for many graphsthe parameter we use is the isoperimetric number , which is defined as follows : where are the nodes in outside of the subset that are in the directed neighborhood of . to give a few example values : for disconnected graphs is zero and for connected graphs it ranges between and ; for a -vertex - connected graph we have and holds if and only if is a vertex - expander ( or a complete graph ) .we are going to show that the expected time for one message to be broadcasted is at most .this is for a line and for any vertex - expander .our bound is tight in the sense that for any value with there is a static graph that has diameter at least and isoperimetric number . having an upper bound on the time it takes to spread one message we again prove an perfectly pipelined time of for messages : [ lem : synchbroadcast ] the synchronous broadcast gossip protocol takes with high probability at most rounds to spread messages as long as the isoperimetric number of the graph is at least at every time .a similar result to lemma [ lem : synchbroadcast ] can be proven for the asynchronous broadcast model in which at every round each node gets selected uniformly independently at random ( i.e. 
with probability ) to broadcast its packet to its neighbors : [ lem : asynchbroadcast ] the asynchronous broadcast gossip protocol takes with high probability at most rounds to spread messages as long as the isoperimetric number of the graph is at least at any time .in this section we discuss how the simple proofs from section [ sec : applications ] that use only the template from section [ sec : simple - template ] can be extended to give more detailed or sharper bounds .as stated in section [ sec : model ] we assume throughout the paper that messages are to be spread that are initially distributed in a worst - case fashion .all earlier papers restricted themselves to the easier special case that and that each node initially holds exactly one message , or that is arbitrary but the network starts in a similarly well - mixed state in which each message is known by a different node and all messages are equally spread over the network .in many cases the worst - case and any well - mixed initialization take equally long to converge because the running time is lower bounded and bottlenecked by the flooding time for a single message or the time it takes for a node to receive at least packets .nevertheless there are cases where a well - mixed initialization can drastically improve performance .our proof technique explains this and we give a simple way to exploit assumptions about well - mixed initializations to prove stronger performance guarantees : if , e.g. , each node initially holds exactly one of messages then most vectors are already known to most nodes initially .more precisely exactly the vectors with non - zero components are initially known to exactly nodes . with many vectorsalready widely spread initially the union bound over the failure probabilities for all vectors to spread after rounds can decrease significantly .taking the different quantities and probabilities for nodes that are initially known to a certain number of nodes in account one can prove in theses cases that a smaller suffices .one example for a mixed initialization being advantageous is discussed in the next section [ sec : exact - dependence - k ] and another one is the convergence time of the asynchronous push and pull protocol on the star - graph : for both push and pull the network induced by the star - graph has a min - cut of which leads according to lemma [ lem : cut - asynch - single ] and [ lem : lowerbound - asynch - single ] to a stopping time of under a worst - case initialization . to lower boundthe convergence time lemma [ lem : lowerbound - asynch - single ] , which relates the convergence time to the min - cut of the network graph , has to assume that at least a constant fraction of the messages are initialized inside a bad cut .for the `` classical '' initialization in which each node starts with exactly one message this is true for the push model but not in the pull model in which every bad cut only contains few messages . indeed assuming a well - mixed initialization the push protocol takes still time to converge while a much lower stopping time for the pull model can be easily derived using our techniques . 
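returning to the isoperimetric number that drives the broadcast bounds above , here is a minimal brute - force sketch . the restriction to subsets of at most half the nodes follows the standard convention and is an assumption , since the exact normalisation is not legible in the text ; the example graph is illustrative .

```python
from itertools import combinations

def isoperimetric_number(nodes, adj):
    """adj[u] is the set of (out-)neighbours of u.  minimises
    |N(S) \\ S| / |S| over non-empty subsets S with |S| <= n/2
    (standard convention, assumed here)."""
    nodes = list(nodes)
    best = float("inf")
    for r in range(1, len(nodes) // 2 + 1):
        for subset in combinations(nodes, r):
            S = set(subset)
            boundary = set().union(*(adj[u] for u in S)) - S
            best = min(best, len(boundary) / len(S))
    return best

# path (line) on 6 nodes: the worst subset is one half of the line,
# with a single boundary node, giving h = 1/3
path = {i: set() for i in range(6)}
for i in range(5):
    path[i].add(i + 1)
    path[i + 1].add(i)

print(isoperimetric_number(range(6), path))  # 0.333...
```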
in most ( highly connected ) networksthe spreading time for one message is short and becomes the dominant term in the order optimal -type upper bounds presented in this paper .so is , for example , for most expanding networks .while it is clear that at least packets need to be received at each node it becomes an interesting question how large the constant factor hidden by the -notation is .differently stated , we ask how large the fraction of helpful or innovative packets received by a node is over the execution of the protocol . determining and even more optimizing proofs to obtain such constants is usually a big hassle or even infeasible due to involved proofs .simulation is therefore often used in practice to get a good estimation of the constants ( e.g. ) . our template from section [ sec : simple - template ]reduces the question for the stopping time of rlnc gossip to a simple standard question about tail bounds for negative binomial random variables .this makes it often possible to determine and prove ( optimal ) constants ( and lower order terms ) .all that is needed is to replace the chernoff bound in the template from section [ sec : simple - template ] by an argument that gives the correct base in the exponential tail - bound . in section [ sec : tighter - tail ] we give such a bound .we than exemplify then how to apply this bound by two examples : in section [ sec : exact - broadcast ] the synchronous broadcast gossip from section [ sec : broadcast ] and in section [ sec : exact - rumormongering ] the rumor mongering from section [ sec : randomphonecall ] . in both caseswe can show that the constant in the dependency on is arbitrarily close to the absolutely optimal constant , i.e. we can obtain a perfectly pipelined stopping time .the following simple lemma gives a stronger guarantee on the tail of a negative binomial random variable than the chernoff bound used in the template from section [ sec : simple - template ] .the lemma proves that a constant factor away from the expectation the probability drops by a factor of with every additional trial instead of a constant factor drop that would be obtained by a standard chernoff bound : [ lem : tail ] the probability that after independent trials there are less than successes is at most where is the failure probability ( with ) .if we apply this stronger tail bound in the template from section [ sec : simple - template ] we obtain the following corollary : [ cor : tail ] let and .if in order to spread any fixed coefficient vector only successful rounds are needed and if a round fails with probability at most then messages spread in rounds with probability at least . for means a running time of in expectation and with high probability . in this sectionwe use the tighter tail bounds from the last section [ sec : tighter - tail ] to sharpen the bounds on the convergence time of the synchronous broadcast from section [ sec : broadcast ] : [ lem : exact - synchbroadcast ] the synchronous broadcast gossip protocol takes with high probability at most rounds to spread messages where if the isoperimetric number of the graph is at least at any time . 
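the exact constants in lemma [ lem : tail ] are not reproduced here , but its qualitative claim — that beyond a constant factor of the expectation each additional trial multiplies the failure probability by roughly the per - trial failure probability rather than by a fixed constant — can be checked numerically with the exact binomial tail . the parameters below are illustrative .

```python
from math import comb

def tail_below(n, q, k):
    """exact probability of seeing fewer than k successes in n
    independent trials that each succeed with probability q."""
    return sum(comb(n, i) * q**i * (1 - q)**(n - i) for i in range(k))

q, k = 0.5, 5                      # per-round success probability, required successes
prev = None
for n in range(20, 41, 2):
    p_fail = tail_below(n, q, k)
    ratio = p_fail / prev if prev else float("nan")
    print(f"n={n:2d}  P[fail]={p_fail:.3e}  ratio over 2 extra trials={ratio:.3f}")
    prev = p_fail
# the two-trial ratio approaches (1 - q)^2 = 0.25 as n grows, i.e. each extra
# trial eventually costs a factor of roughly the per-trial failure probability
```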
another interesting case in which the exact dependence on the number of messages was considered is the rumor mongering process from section [ sec : randomphonecall ] . the authors of give a theoretical analysis in the regime where the term clearly dominates and prove an upper bound of for the push protocol and for the pull model . they also simulated the protocol and estimated the stopping time to be . both their analytic bounds and the simulation assume that messages start out in separate nodes and are equally spread over the network ( see also section [ sec : mixed - initial - state ] ) . in this section we improve over these findings and show that the pull model in this setting actually converges in time for . interestingly we also show that with a worst - case initialization ( see also section [ sec : mixed - initial - state ] ) the pull model does not achieve this convergence time but has a leading constant between and : determining the correct constants for random communication protocols like the random phone call model is much more delicate than proving order optimal convergence times . the reason for this is that the union of random exchanges over many rounds almost surely forms an expander while the graph in a single round is usually not even connected . this is the case for all of the presented random phone call models . while all these models are very stably order optimal , one must be much more careful to achieve , and even more to prove , optimal -type bounds for large . we exemplify this by describing these concerns in detail for the pull protocol : the worst - case initialization for the pull protocol is when all messages are initially known to only one node . in this case this node is not pulled at all in one round with probability . in order to get pulled at least times it therefore takes in expectation at least rounds . thus for the case that only one node initially knows about all messages , and if this node prepares a message in each round which it sends out to the nodes requesting it , this is an information - theoretic lower bound on the number of rounds . a direct analysis of the protocol using corollary [ cor : tail ] for this case gives a constant of which is for . this can be improved if the start state is a bit more mixed , e.g. , if each message is known to nodes initially . in this case the information - theoretic lower bound becomes and our upper bound becomes . this means that for our proof gives the optimal stopping time . lemma [ lem : better - initialization ] also shows a stopping time for the case where all messages are initiated at different nodes . this contrasts with the upper bound of and the estimate of for this setting . more extensive simulation results than the ones in confirm that the constant for the dependency on should indeed be smaller than the projected . [ lem : better - initialization ] the rlnc algorithm in the random phone call pull model even with spreads messages with high probability in time if all messages are initially known to different nodes . section [ sec : asynchsingle ] proves convergence times for spreading messages using the asynchronous single transfer protocols . these bounds are tight and directly extend to a bound for messages . in what follows we want to generalize this to smaller numbers of messages and discuss the bounds that can be obtained using the technique from section [ sec : technique ] . for a small number of messages , e.g.
, the convergence time of rlnc single transfer gossip can be much faster than but still be .this shows that the min - cut is not the right quantity to look at in this scenario .again , as in section [ sec : broadcast ] , conductance quantities capture much better how fast a small number of messages spreads .the quantity we consider is : the next lemma shows that it takes at most time for one message to spread if the conductance is bounded by .[ thm : one - message - asynch - single ] in the asynchronous single transfer model ( with any ) it takes in expectation at most time for one message to spread .the probability that a set of nodes that know about the message grows from size to is at least .it thus takes at least rounds in expectation for the first success , rounds for the second success and in general rounds in expectation for one message to spread .this is a tight bound for many regular graphs and gives e.g. a flooding time of for the complete graph or any other regular expanders .it is clear that rlnc - gossip for any needs to take at least so much time .the other lower bound that kicks in for large enough is the lower bound from lemma [ lem : lowerbound - asynch - single ] .similar to the results for the other models we want show that the total running time is essentially ( up to at most a factor ) either dominated by the rounds to spread one message or for larger number of messages the rounds coming from the communication lower bound that the messages have to cross the worst case cut .[ lem : exact - asynch - single ] disseminating messages in the asynchronous single transfer model with takes with high probability at most rounds if the graph as a min - cut of at most and a conductance of at least at all times .the idea behind proving performances in the rather strong adaptive adversary model introduced in this paper is that the guarantees directly extend to the widest possible range of dynamic networks including random models .most of our proofs like the ones of lemma [ lem : cut - asynch - single ] , [ lem : synchbroadcast ] or [ lem : asynchbroadcast ] demand that the network graph has a certain connectivity requirement at any time .these requirements might be too strong especially for random network models .we discuss in the following how these requirements can be easily weakened in many ways : the simple fact that no progress in the spreading of knowledge gets lost makes it easy to deal with the case that the connectivity fluctuates ( e.g. , randomly ) .increasing the stopping time by a constant factor easily accounts for models in which the desired connectivity occurs only occasionally or with constant probability .looking at the average connectivity is another possibility .it is furthermore not necessary to require the entire graph to be expanding on average but it suffices to demand that each subset expands with constant probability according to its size .this way convergence can be proven even for always disconnected graphs . especially for random models it can also be helpful to consider the union of the network graphs of consecutive rounds , i.e. .this gives for example directly valid upper bounds for the synchronous or asynchronous broadcast model . 
as a simple example for the usefulness of these approaches we discuss an alternative way to prove lemma [ lem : randomphonecall ] about the stopping time of the rumor mongering process : instead of analyzing the rumor mongering as a synchronous protocol on the complete graph in which each node performs a pull , push or exchange one can alternatively see it as a synchronous broadcast ( see section [ sec : broadcast ] ) on a random network .the network graph in this case is simply formed by a random directed in - edge , directed out - edge or undirected edge at each node depending on whether on looks at the push , pull or exchange model .the results from lemma [ lem : synchbroadcast ] or [ lem : synchbroadcast ] will not directly give any bounds simply because the network graph is with high probability disconnected .using either of the two more advanced extensions solves this problem : with constant probability every set has a constant expansion ; alternatively one can use that the union of a constant number of rounds , as described above , forms with an expander with high probability .we have given a new technique to analyze the stopping times of rlnc - gossip that drastically simplifies , strengthens and extends previous results .most notably all our results hold in highly dynamic networks that are controlled by a fully adaptive adversary .theorem [ thm : reduction ] gives a direct way to transfer results for the single - message flooding / gossip process to the multi - message rlnc - gossip if strong enough tail bounds are provided .one candidate for which this could work is , e.g. , which can be interpreted as giving bounds on a synchronous single transfer gossip for one message .this paper also gives evidence that in most network models rlnc - gossip achieves perfect pipelining , i.e. the bounds for disseminating messages have the form where is the expected time to ( faultily ) flood one message .it is a very intriguing question under which general conditions on the network model one can prove this behavior .it is easy to see that the monotone set - growing process induced by the faulty flooding process of one message always exhibits a strong exponential tail as needed to apply lemma [ thm : reduction ] .this already implies asymptotic convergence times of the form ( see also lemma [ lem : exact - asynch - single ] ) where is the min - cut in the induced markov - chain , i.e. the minimal probability over all sets to inform another node within one round .the main question remaining is therefore to guarantee that this tail kicks in after rounds .in this section we provide a few background facts in linear algebra on vector spaces without ( positive - definite ) inner product , especially the notions involved in orthogonality .even so the section [ sec : technique ] is fully self - containing this section might be helpful in understanding the proofs . for a vector space the _ dual space _ consists of all linear forms on . for any subset the orthogonal ( dual ) complement is defined as all elements from that disappear on .it is easy to see that the orthogonal complement is a subspace in and has co - dimension equal to the dimension of the span of in .the dual space is isomorphic to and in the case of the dot - product is an isomorphism . using this identification the orthogonal complementcan also be defined as the space of all vectors that are perpendicular ( i.e. 
having a zero dot - product ) to all vectors in . this is the standard definition of orthogonality and for inner - product spaces like it matches the geometrical notion of orthogonality . this is not true for in which the dot - product is not positive definite . this leads to counter - intuitive situations , e.g. , the vector ( 1,1 ) is orthogonal to itself over the binary field . but the fact remains that every subspace can be assigned an orthogonal complement subspace for which the dimension relation remains true , and this is the important notion used in section [ sec : technique ] . we give a more basic proof here : for this we define two vectors as equivalent if . this splits into exactly equivalence classes of equal size . to see this note that , because is a subspace , scalar - multiplication is a bijection between any two equivalence classes that correspond to a non - zero dot - product . by assumption furthermore contains a vector that has a non - zero dot - product with ; this gives that -translation is a bijection between the zero dot - product equivalence class and another equivalence class . thus with probability exactly a packet with coefficient vector from a non - zero equivalence class is chosen for transmission . in this case this coefficient vector gets added to and the node now knows . for the second claim we prove that any node that is not able to decode does not know about at least one vector : if can not decode then is not the full space . because is a subspace it is lower - dimensional and we can use gram - schmidt to construct an orthogonal basis of and a vector that is orthogonal to . this vector is then by definition not known to , a contradiction . for the lower bound we note that each node receives in expectation ( and with high probability ) only packets per round . thus if in the beginning at least one node did not already know about a constant fraction of the messages , then the algorithm has to run for at least rounds . it is also clear that even one message takes in expectation time to spread to all nodes . this completes the lower bound . to prove the upper bound , we use the template from section [ sec : simple - template ] : for this we fix a coefficient vector and define a round as successful if the number of nodes that know about it increases by at least a constant factor or if the number of nodes that do not know about decreases by a factor of . there are at most successful rounds needed until at least nodes know about and at most another successful rounds until all nodes know about . it remains to be shown that each round succeeds with constant probability . we first consider the pull model . at first we have nodes that know about and at least nodes pulling for it . each of those nodes has a probability of to hit a knowing node . we expect a fraction of the ignorant nodes , i.e. , at least nodes , to receive a message from a node that knows about . the independence of these successes and lemma [ lem : knowledge - spreads ] prove that with constant probability at least nodes learn about . once there are at least nodes that know , each of the ignorant nodes pulls a packet from a knowing node with probability at least . the proof for the push model is similar . if there are nodes that know about and push out a message , then there are at least ignorant nodes that each receive at least one message from one of the nodes with probability . it is not hard to see that , in total , ignorant nodes receive a message from a node that knows with constant probability .
lemma [ lem : knowledge - spreads ] now guarantees that , with constant probability , the number of ignorant nodes that learn is only a small factor smaller .once there are nodes knowing about and each of these pushes out , each node that does not know has a chance of per round to receive a message from a node that knows . applying lemma [ lem : knowledge - spreads ] again finishes the proof .our proof proceeds along the lines of the simple template from section [ sec : simple - template ] and concentrates on the spreading of one coefficient vector .we define a round as a success if and only if one more node learns about it .it is clear that exactly successes are needed . from the definition of and lemma [lem : knowledge - spreads ] follows that each round is successful with probability at least .thus if we run the protocol for rounds we expect at least successes and by chernoff bound the probability that we get less than is at most .if we choose appropriately this is small enough to end up with after taking the union bound over the vectors . in each round , at most one packet can cross the cut . for this to happen , an edge going out of the cut has to be selected and the probability for this is by definition exactly . in order to be able to decode the messages at least packets have to cross the cut each taking in expectation at least rounds .it takes with high probability at least rounds until packets have crossed the cut .we use the simple template from section [ sec : simple - template ] and concentrate on the spreading of one coefficient vector .we define a round to be a success if and only if the number of nodes that know about grows at least by a fraction or the number of nodes that do not know about shrinks at least by the same factor .+ we want to argue that at most successes are needed to spread completely .note that this is slightly better than the straight forward bound that would lead to .the improvement comes from exploiting the fact that the number of nodes that learn is an integral quantity : in the first successful rounds at least one node learns about .the next successful rounds at least nodes learn about and the following successful rounds it is new nodes and so on .there are such phases until at least nodes know about . the downward progression than follows by symmetry .the total number of successes sums up to : to finish the proof we show that every round has a constant success probability .this follows from lemma [ lem : knowledge - spreads ] if for a success only one node is supposed to learn about .if at least nodes are supposed to learn then by the definition of a success and of there are nodes on the knowledge cut , i.e. 
, at least nodes that do not know about are connected to a node that knows about .we invoke lemma [ lem : knowledge - spreads ] again to see that each of these nodes fails to learn about with probability at most .finally markov s inequality gives that the probability that more than fail to learn is at most .a round is therefore successful with probability at least .the proof is nearly identical to the one of lemma [ lem : synchbroadcast ] but instead of defining a round as a success we define successes for phases of consecutive rounds .using the same definition of success and following the same reasoning as before it is clear that at most successful phases are needed .to finish the proof we have to show that every phase has a constant success probability .for this we note again that at least nodes are on the knowledge - cut of if nodes need to learn about . for each of these nodesthe probability that no neighboring node that knows is activated during rounds is at most . according to lemma [ lem : knowledge - spreads ]the probability for each of the nodes to fail to learn about is thus at most .markov s inequality again implies that the probability for a failed round in which more than fail is at most .we pick and have now that which is exactly the probability for having at least failures in rounds .follows directly by applying theorem [ thm : reduction ] according to the template in section [ sec : simple - template ] and the use of lemma [ lem : tail ] to get the right bound on the tail probability .[ lem : weighted - sum - bernoulli ] let be i.i.d .bernoulli variables with probability .the probability that a positively weighted sum of the variables is at most its expectation is at most : first scale the weights such that and than use the second moment method : now the left - hand side is the variance of a weighted sum of i.i.d .bernoulli variables with probability , and as such its expectation is exactly . using markov s inequality on this expectation , we get that the probability we want to bound is at most : the last transformation holds because and because we can assume that all weights are at most .this is true because if there is a then already leads to an outcome of at least the expectation and the probability for this to happen is .we modify the proof of lemma [ lem : synchbroadcast ] only in the way that we use the stronger tail bound from corollary [ cor : tail ] instead of the simpler template from section [ sec : simple - template ] .we keep the same definition of success but prove that the success probability of a round is at least instead of as in lemma [ lem : synchbroadcast ] : if only one node is supposed to learn for a success this is again clear by lemma [ lem : knowledge - spreads ] .if at least nodes nodes are needed to a success we know also by the definition of a success that at least nodes that do not know about are connected to a node that knows about it .we assign each ignorant node to exactly one node that knows about breaking ties arbitrarily . 
now according to lemma [ lem : knowledge - spreads ] with probability each such node independently sends out a message that is not perpendicular to and all ignorant nodes that are connected to it learn .we can now directly apply lemma [ lem : weighted - sum - bernoulli ] and obtain that we indeed have a success probability of at least per round .this finishes the proof .we assume each message is initially known to exactly one node and all messages are known to different nodes .this implies that exactly the vectors that have non - zero components are initially known to exactly nodes .we will prove that the running time suffices to spread all messages with probability at least .for this we pick a threshold and first look at the vectors that are known to at most nodes initially . from the proof of lemma [ lem : randomphonecall ] we know that after rounds each of these vectors has a probability of at most to not have spread completely .choosing therefore suffices easily to make the contribution of these vectors to the union bound at most .most of the vectors start initially known to at least nodes . for these vectors we choose the same definition of success as in the proof of lemma [ lem : randomphonecall ] : a round is successful if the number of nodes that know about increases by at least a constant factor or if the number of nodes that do not know about decreases by a factor of . we will show that if we choose small enough these vectors have a probability of to spread successfully in one round . while with our initial analysis the start phase was the critical bottleneck we can show that the success probability for this phase can now even be pushed below by choosing small enough .in the first phase we have nodes that know and at least nodes that are pulling for it . each of those nodes has an independent probability of to hit a knowing node . because we have that the probability that none of these nodes pulls from a node knowing about is .lemma [ lem : knowledge - spreads ] shows than that each node that does pull from a node that knows about has a probability of to learn .this means more generally we have at least nodes that have an independent chance of to learn . for a small enough it is clear that the probability that at least nodes learn about can be made an arbitrarily small constant . in the second phasethere are at least nodes that know about and we want that of the remaining nodes at least a -fraction learns .each of these nodes has a probability of at least to pull from a knowing node and learn ( see lemma [ lem : knowledge - spreads ] ) . choosing suffices to guarantee that the probability that at least a -fraction learns is at least .the only reason that this probability can not be reduced is because if only one node remains to learn to learn about a round is successful with probability exactly . using the proof from lemma [ lem : tail ] it is easy to verify that choosing such that suffices to also make a union bound over these vectors at most . 
combining this to a union bound over all vectors finished the proof by showing that the probability that after rounds not all vectors have spread is at most .we want to show that running the protocol for rounds , where suffices to spread messages .note that we always have and can also safely assume that .as a first step we define to be a lower bound for the probability that if nodes know about in the next round one more node learns about .note that by assumption and lemma [ lem : knowledge - spreads ] is lower bounded by and .we now look at phases in which we allow tries for nodes informing the next node about .the number of rounds spend in successful phases sums up to at most .lets now look at the probability that has not spread after steps . in this casewe have at least failures that can occur after any of the phases .the probability that at least errors occur after phase is at most .we thus get a factor for every phase that does not finish `` in time '' .we also get a total factor of from all failures occurring after any round .let be the number of phases that finish not `` in time '' .there are exactly ways of distributing the failures to these phases .putting all this together we get the following upper bound on the probability that the algorithm did not converge after steps : choosing makes this smaller than .applying theorem [ thm : reduction ] now finishes the proof .the author wants to thank jon kelner for his incredible help while finishing this write - up .he also wants to thank an anonymous reviewer of a related paper , david karger and muriel mdard .a. demers , d. greene , c. hauser , w. irish , j. larson , s. shenker , h. sturgis , d. swinehart , and d. terry , `` epidemic algorithms for replicated database maintenance , '' in _ proceedings of the 6th symposium on principles of distributed computing ( podc ) _, 1987 , pp .d. agrawal , a. el abbadi , and r. c. steinke , `` epidemic algorithms in replicated databases ( extended abstract ) , '' in _ proceedings of the 16th symposium on principles of database systems ( pods ) _ , 1997 , pp .161172 .j. aspnes and e. ruppert , `` an introduction to population protocols , '' in _middleware for network eccentric and mobile applications _ , b. garbinato , h. miranda , and l. rodrigues , eds.1em plus 0.5em minus 0.4emspringer - verlag , 2009 , pp .97120 .d. kempe and j. kleinberg , `` protocols and impossibility results for gossip - based communication mechanisms , '' in _ proceedings of 43rd symposium on foundations of computer science ( focs ) _ , 2002 , pp .471480 .f. chierichetti , s. lattanzi , and a. panconesi , `` almost tight bounds for rumour spreading with conductance , '' in _ proceedings of the 42nd acm symposium on theory of computing ( stoc ) _ , 2010 , pp .399408 . t. ho , r. koetter , m. medard , d. karger , and m. effros , `` the benefits of coding over routing in a randomized setting , '' in _ proceedings of the ieee international symposium on information theory ( isit _ , 2003 , pp . 442442 .s. katti , d. katabi , w. hu , h. rahul , and m. medard , `` the importance of being opportunistic : practical network coding for wireless environments , '' in _proceedings 43rd allerton conference on communication , control , and computing _ , 2005 .s. katti , h. rahul , w. hu , d. katabi , m. mdard , and j. crowcroft , `` xors in the air : practical wireless network coding , '' _ ieee / acm transactions on networking ( ton ) _ , vol .16 , no . 3 , pp .497510 , 2008 . c. fragouli , j. widmer , and j. 
boudec , `` a network coding approach to energy efficient broadcasting : from theory to practice , '' in _ proceedings of the 25th international conference on computer communications ( infocom ) _ , 2006 . r. bar - yehuda , o. goldreich , and a. itai , `` on the time complexity of broadcast in radio networks : an exponential gap between determinism and randomization , '' _ journal of computer and system sciences ( jcss ) _ , vol . 45 , no . 1 , pp .104126 , 1992 .a. e. g. clementi , a. monti , and r. silvestri , `` distributed multi - broadcast in unknown radio networks , '' in _proceedings of 20th symposium on principles of distributed computing ( podc ) _ , 2001 , pp. 255263 .bernhard haeupler received the b.sc . and m.sc .degree in mathematics from the technical university munich , germany , and the m.sc .degree in computer science and electrical engineering from the massachusetts institute of technology in 2007 , 2008 and 2010 respectively .he is currently a ph.d .candidate with the computer science department at mit . in 2007 - 2008he was a visiting graduate student at the computer science department of princeton university working with robert tarjan . for his graduate studieshe has received an akamai / mit presidential fellowship .
we give a new technique to analyze the stopping time of gossip protocols that are based on random linear network coding ( rlnc ) . our analysis drastically simplifies , extends and strengthens previous results . we analyze rlnc gossip in a general framework for network and communication models that encompasses and unifies the models used previously in this context . we show , in most settings for the first time , that it converges with high probability in the information - theoretically optimal time . most stopping times are of the form where is the number of messages to be distributed and is the time it takes to disseminate one message . this means rlnc gossip achieves `` perfect pipelining '' . our analysis directly extends to highly dynamic networks in which the topology can change completely at any time . this remains true even if the network dynamics are controlled by a fully adaptive adversary that knows the complete network state . virtually nothing besides simple sequential flooding protocols was previously known for such a setting . while rlnc gossip works in this wide variety of networks its analysis remains the same and extremely simple . this contrasts with more complex proofs that were put forward to give less strong results for various special cases .
the multivariate compound poisson process is an intuitively appealing and natural model for operational risk and insurance claim modelling .the model is intuitively appealing because dependencies between different loss categories are caused by common shocks that apply to multiple loss categories simultaneously .for example , in operational risk modelling , failure of an it system is a common shock that causes losses in multiple lines of business .the multivariate compound poisson process is a natural model for the following two reasons .first , as a lvy process , it is easily applied to any time horizon of interest .second , because a redesign of loss categories results in a loss process that is again multivariate compound poisson , the nature of the model does not depend on the level of granularity .a multivariate compound poisson process can be specified in terms of univariate compound poisson processes and a copula .in essence , a copula provides the relationship between the measure of a multivariate process and the measures of its marginal processes .the copula allows for a parsimonious bottom - up modelling with compound poisson processes . in case of two loss categories , for example ,parameterization with a clayton copula requires two marginal frequencies , two marginal jump size distributions and one copula parameter .in contrast , parameterization without a copula requires three frequencies ( corresponding to losses of the first category only , losses of the second category only and common shocks that apply to both categories ) , two univariate jump size distributions ( corresponding to losses of one of the two categories only ) and , finally , one bivariate jump size distribution ( corresponding to the common shocks ) . the parameters of a copula of a multivariate compound poisson process can be estimated if the process is either observed continuously ( such that common shocks can be identified ) or observed discretely with knowledge about all jump sizes and the common shocks .these two cases have been studied by for a bivariate compound poisson process ( the continuous observation is mimicked in a simulation study , while the discrete observation corresponds to a real data set of insurance claims ) .the objective of this work is to develop a method to estimate the parameters of a copula of a bivariate compound poisson process in case the process is observed discretely with knowledge about all jump sizes , but without knowledge of which jumps stem from common shocks .this situation is relevant to operational risk modelling in which all material losses are registered , but common shocks are typically unknown . with the methodology developed here , the copula becomes a realistic tool of the advanced measurement approach of operational risk .the outline of this paper is as follows . in section [ cpp ], we discuss the bivariate compound poisson process in terms of its common shock representation and measure .this prepares the ground for the two - dimensional copula of section [ levy_copula ] . in section [ mle ] , the new estimation method of the discretely observed bivariate compound poisson processis presented .the method is tested in a simulation study in section [ sim_study ] and applied to a real data set in section [ real_data ] . 
in section [ real_data ], we also develop a goodness of fit test for the copula .finally , we conclude in section [ conclusions ] .a bivariate compound poisson process is defined on a filtered probability space as where is a poisson process with intensity and is a sequence of iid -dimensional random vectors . the process and the sequence are statistically independent . by construction ,given any , the increment is independent of and has the same distribution as .the probability distribution of is such that , which means that a jump of almost surely manifests itself in a jump of at least one of the components of .the lvy - it decomposition of takes the form where is the poisson random measure . with the help of the lvy - it decomposition ,we find that has common shock representation where and the processes , and do not jump simultaneously and are statistically independent .the processes and are called the independent parts of .conversely , the process is called the dependent part of and corresponds to the common shocks .the lvy - khinchin representation of the characteristic function of can be determined from eq .( [ levy_ito ] ) with the exponential formula for poisson random measures .the representation takes the form =\exp { \left[t \int_{\mathbb{r}^{2}}\left(e^{i u \cdot x}-1\right)\nu(dx ) \right ] } , \label{char}\ ] ] where and the lvy measure ,a)]}{t}\ ] ] gives the expected number of jumps per unit of time in each borel set of . the processes and are independent if and only if the support of is contained in the set . in this case , and do not jump simultaneously almost surely and the lvy - khinchin representation factorizes as = \mathbb{e } \left [ e^{iu_{1}s_{1}(t ) } \right ] \mathbb{e } \left [ e^{iu_{2}s_{2}(t ) } \right].\ ] ] on the other hand , the processes and are defined to be comonotonic if their jump sizes and , respectively , are elements of an increasing set of , see .any two elements and of satisfy or for all .an example of an increasing set is .the requirement means that by observing one of the processes or , the other process can be constructed exactly with a positive dependence . in case of comonotonic and ,the measure is concentrated on . 
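before continuing with the characteristic function in terms of these parts , the common - shock representation above can be illustrated with a minimal simulation sketch . the intensities , jump - size distributions and the positive coupling of the common - shock sizes below are purely illustrative assumptions , not parameters from the text .

```python
import numpy as np

rng = np.random.default_rng(0)

def compound_poisson(rate, T, size_sampler):
    """one compound poisson path on [0, T]: poisson number of jumps,
    uniform jump times, jump sizes drawn by size_sampler."""
    n = rng.poisson(rate * T)
    times = np.sort(rng.uniform(0.0, T, n))
    return times, size_sampler(n)

T = 10.0
lam1_perp, lam2_perp, lam_common = 2.0, 1.5, 1.0   # illustrative intensities

# independent parts: jumps in one component only
_, x1 = compound_poisson(lam1_perp, T, lambda n: rng.exponential(1.0, n))
_, x2 = compound_poisson(lam2_perp, T, lambda n: rng.exponential(0.5, n))
# dependent part: one common shock produces a jump in both components
_, z = compound_poisson(lam_common, T, lambda n: rng.exponential(1.0, n))
c1, c2 = z, 0.8 * z + rng.exponential(0.2, len(z))   # positively related sizes

S1_T = x1.sum() + c1.sum()   # S1(T) = independent part + common-shock part
S2_T = x2.sum() + c2.sum()
print(S1_T, S2_T)
```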
in terms of , and ,( [ char ] ) takes the form = \mathbb{e } \left [ e^{iu_{1}^{\vphantom{\perp}}s_{1}^{\perp}(t ) } \right ] \mathbb{e } \left [ e^{iu_{2}^{\vphantom{\perp}}s_{2}^{\perp}(t ) } \right ] \mathbb{e } \left [ e^{iu_{1}^{\vphantom{\perp}}s_{1}^{\parallel}(t)+iu_{2}^{\vphantom{\perp}}s_{2}^{\parallel}(t ) } \right ] , \label{char_detailed}\ ] ] where we have used that , and are independent .the lvy - khinchin representation of the characteristic functions of , and can be determined from their lvy - it decompositions in the same way as eq .( [ char ] ) is determined from eq .( [ levy_ito ] ) .the measures of and are given by , respectively , where is a borel set of .the levy measure of takes the form where the sets and are defined as to conclude our discussion of the bivariate compound poisson process , we consider its components for a measure that is not necessarily concentrated on or an increasing set .the process is compound poisson and by setting in eq .( [ char_detailed ] ) , we find that the characteristic function of takes the form & = \exp{\left[t\int_{\mathbb{r}}\left ( e^{iu_{1}x_{1}}-1 \right)\nu_{1}^{\perp}(dx_{1})\right ] } \exp{\left[t\int_{\mathbb{r}^{2}}\left ( e^{iu_{1}x_{1}}-1 \right)\nu^{\parallel}(dx_{1 } \times dx_{2})\right ] } \\ & = \exp{\left[t\int_{\mathbb{r}}\left ( e^{iu_{1}x_{1}}-1 \right)\left\{\nu_{1}^{\perp}(dx_{1})+\nu^{\parallel}(dx_{1 } \times ( -\infty,\infty))\right\ } \right]}. \label{char_1 } \end{split}\ ] ] from eq .( [ char_1 ] ) , it follows that the measure of is given by if the measure is concentrated on , then and if it is concentrated on ( such as on an increasing set ) , then . in general, is a combination of and cf .( [ nu_1 ] ) . in the same way , is compound poisson with measure consider a bivariate compound poisson process with positive jumps .this means that the measure is concentrated on rather than on .the assumption of positive jumps is reasonable in the context of operational risk modelling and restricts our discussion of copulas to positive copulas . a two - dimensional positive copula ^{2 } \rightarrow [ 0,\infty ] ] and ^{2} ] , if and are continuous , this copula is unique .otherwise it is unique on .conversely , let and be two one - dimensional processes with positive jumps having tail integrals and and let be a two - dimensional positive copula .then there exsists a two - dimensional process with copula and marginal tail integrals and . its tail integralis given by eq .( [ levy_connect ] ) .the definition of the tail integral and its marginal tail integrals imply that and .the singularity at zero is necessary to correctly account for jumps of the independent parts and .consider , for example , on the one hand and , on the other hand the difference for between the tail integrals of eqs .( [ tail1 ] ) and ( [ tail2 ] ) corresponds to the tail integral of .if would have been defined as , the difference of the tail integrals vanishes and does not jump almost surely . 
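as a worked illustration of how a copula couples the marginal tail integrals , the following sketch uses the clayton lévy copula , a standard archimedean - type positive lévy copula ; its functional form and the split of the marginal intensities into a common - shock part and independent parts are taken from the usual lévy - copula framework and are assumptions here , since the text's own formulas are not legible . the marginal intensities are illustrative .

```python
def clayton_levy_copula(u, v, theta):
    """clayton lévy copula F(u, v) = (u**-theta + v**-theta)**(-1/theta),
    theta > 0 (an assumed standard form, not taken from the text)."""
    return (u ** (-theta) + v ** (-theta)) ** (-1.0 / theta)

lam1, lam2 = 3.0, 2.0          # illustrative marginal jump intensities
for theta in (0.1, 1.0, 5.0, 50.0):
    lam_common = clayton_levy_copula(lam1, lam2, theta)  # common-shock intensity
    lam1_perp = lam1 - lam_common                         # independent-part intensities
    lam2_perp = lam2 - lam_common
    print(f"theta={theta:5.1f}  common={lam_common:.4f}  "
          f"perp1={lam1_perp:.4f}  perp2={lam2_perp:.4f}")
# small theta: common-shock intensity tends to 0 (independence copula limit)
# large theta: it tends to min(lam1, lam2) (comonotonic copula limit)
```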
in case of independent and ,the measure is concentrated on the set and the tail integral takes the form with the help of eq .( [ levy_connect ] ) , we find that the independence copula is given by in case of comonotonic and , the measure is concentrated on an increasing set and the tail integral takes the form which implies that the comonotonic copula is given by a copula with a dependence that is between and can be constructed in several ways , such as by an approach similar to the construction of archimedean distributional copulas .given a strictly decreasing convex function \rightarrow [ 0,\infty] ] in intervals of equal length .the partition is chosen such that jumps of separate intervals can realistically be assumed not to stem from common shocks . in the context of operational risk modelling , with either being a month or a quarter ,this is the observation scheme typically assumed in the advanced measurement approach .the objective of this work is to estimate the parameters of the copula in the observation scheme described above .a possible solution is to construct a likelihood function based on all possible combinations of jumps within each interval .if , within a certain interval , there are jumps within loss category one and jumps within loss category two , one can distinguish between possibilities for the number of common jumps .given a certain , there are possibilities of distributing the common jumps over the observed jump sizes . due tothe large number of possibilities , a likelihood function based on all combinations of jumps is not feasible .an alternative approach is to construct a likelihood function based on the number of jumps and the expected jump sizes within the intervals .this approach , however , is also not feasible because the convolutions involved typically have no closed - form expressions . in the method proposed here, we use a sample consisting of the number of jumps and the maximum jump sizes within the intervals .for such a sample , we derive a closed - form likelihood function .alternatively , a closed - form likelihood function based on the minimum jump sizes can also derived . in the context of operational risk modelling ,however , one can expect the likelihood function based on maximum losses to me more variable with respect to model parameters than the likelihood function based on minimum losses .we consider a partition of ] . in the second step ,the estimates of the marginal parameters are substituted in and the resulting likelihood function is maximized with respect to the copula parameters .the ifm approach seems particularly suitable in the observation scheme of this work because the method makes use of all jump sizes ( rather than the maximum jump sizes and the number of jumps in the intervals ) in estimating the marginal parameters .the quality of the estimation method of section [ mle ] is tested in a bootstrap analysis .the analysis consists of sampling many times from on a period ] distribution .the resulting draws are the jump times of .the jump times of are determined similarly . *draw times from a uniform ] distribution and apply the inverse of the marginal distribution function defined as to each draw .the resulting numbers with are the jump sizes of .( note that the marginal distribution function defined here has one entry .in contrast , the function defined in eq .( [ partials ] ) with two entries denotes the partial derivative of with respect to the first entry . 
we will use to denote both functionsthe number of entries indicates to which function it refers . ) * draw times from a uniform ] in intervals of equal length and determine and for all .this results in an matrix of maximum jump sizes . also determine and for all .this results in an matrix of number of jumps . *determine the vector of all jump sizes of on ] is divided in 100 intervals of equal length and the parameters are estimated with the ifm approach in 100 bootstrap samples . [ cols="<,>,>,>,>,>",options="header " , ] [ results6 ]in summary , we have developed a method to estimate a copula of a bivariate compound poisson process in case the process is observed discretely with knowledge about all jump sizes , but without knowledge of which jumps stem from common shocks .the method is tested in a simulation study with a clayton copula .the results indicate that the method is unbiased in small samples and that the bootstrap standard deviation of the clayton copula parameter is approximately proportional to its bootstrap mean .a goodness of fit test for the copula is developed and applied to monthly log - losses of the danish fire loss data set .the results indicate that the clayton copula provides a good fit to the data set .the method developed in this work is particularly useful in the context of operational risk modelling in which common shocks are typically unknown . to model dependencies between operational losses of different loss categories , the common practice in the banking industryis to use a distributional copula between either the number of losses or the aggregate losses within a certain time window .a disadvantage of this approach is that the distributional copula depends non - trivially on the length of the time window .if one has , for example , estimated a distributional copula between monthly losses , the distributional copula between yearly losses is typically unknown .a second disadvantage of the approach is that the nature of the model depends on the level of granularity .if one combines , for example , two loss categories connected by a distributional copula , the new loss category is typically not compound poisson .these two issues are resolved by a multivariate compound poisson process , which can be parsimoniously modelled with a copula in a bottom - up approach .in this appendix , we relate the measures , and to the measures , and the copula . on a borel set with , the measure is given by which is equivalent to in terms of and the copula , takes the form similarly , on a borel set for , the measure takes the form finally , on a borel set with , the measure is given by
a method is developed to estimate the parameters of a copula of a discretely observed bivariate compound poisson process without knowledge of common shocks . the method is tested in a small sample simulation study . also , the method is applied to a real data set and a goodness of fit test is developed . with the methodology of this work , the copula becomes a realistic tool of the advanced measurement approach of operational risk .
fault tolerant quantum computing involves encoding one or more logical qubit(s ) into a plurality of physical qubits and performing measurements on those physical qubits to detect and control error rates .a profound drawback of all such encodings is it is impossible to unitarily implement a universal set of operations on the logical qubits ( i.e. gates ) without risking the amplification of existing errors .however we must of course achieve a universal set of operations in order to perform general quantum computing .this problem can be circumvented a number of ways to perform quantum computation fault - tolerantly .universality can be achieved by allowing a limited amplification of noise or introducing additional redundancy into the code .these approaches make considerable sacrifices and are not expected to tolerate as much noise as high - threshold codes , for instance the surface code .alternatively one can exploit the fact that , while the high - threshold codes do not support a complete set of fault - tolerant operations directly on our logical qubits , we can perform a more limited set of operations .for example , in the surface code we can perform a cnot gate between two encoded logical qubits simply by performing cnots between each physical qubit in one logical qubit and the corresponding physical qubit in another logical qubit .such a procedure is called transversal. we can also perform certain other gates transversally , but crucially there are operations which we can not achieve in this way , for example the gate .while these allowed operations do give us the ability to perform limited computations , unfortunately they do not take us beyond the algorithms that can efficiently performed on a classical computer .therefore we need some means of upgrading the limited set of computations to a universal set , while retaining fault tolerance .the solution for achieving universality with the surface code is the use of magic states .suppose that we have a logical qubit encoded in a surface code composed of hundreds of physical qubits .we wish to perform a gate on in a way that is fault tolerant .now suppose that we are _ given _ a second ancillary logical qubit , this time in the magic state . if we now perform a cnot controlled by targeted on , and then measure out in the computational basis ( which we can do transversally ) , then the input state on will be transferred to with the gate applied . given a free supply of magic states , we could consume them as - needed and thus upgrade our machine to perform full universal computing .the question then becomes , how can we create a supply of magic states for our computer given that they are precisely the states which we _ can not _ reach by fault tolerant operations on simple states ( like logical zero ) .an answer is to go ahead and create ` raw ' magic states as well as we can , recognising that they will contain errors at an unacceptable level , and then _ distil _ those states until they are acceptable .distillation involves taking a large number of raw magic states and deriving a smaller number of improved states , and then repeating this as necessary until the target fidelity is reached .crucially , this process can be performed using only the limited set of allowed fault tolerant operations .however the process is costly in resources and the cost depends on the fidelity of the initial magic states . consequently the distillation process may occupy the majority of the machine s hardware ( in ref . 
it is estimated that implementing shor s algorithm would require a machine with over of qubits dedicated to magic state distillation ) . to minimise the hardware cost , one could think of either designing more efficient distillation algorithms or simply improving the initial fidelity of encoded magic states . in this paper, we describe a highly efficient protocol for creating ` raw ' magic states in the surface code . as with previous authors ,we take a single physical qubit in the desired magic state , and then perform a procedure that yields the same state in an encoded form .there are many such protocols for encoding a state into various topological codes .the basic idea is to grow the magic state from the physical - qubit level to the full - size - encoding level by increasing the code distance .similar ideas can be used to encode entanglement into quantum networks or unknown states into various topological codes .the aim of our new protocol is to minimise the noise in the ` raw ' encoded magic state before any distillation is performed .such noise is potentially induced by any imperfect operation in the encoding circuit .we begin by considering the case that the error rate for single - qubit operations is far lower than the two - qubit errors .we note that this is indeed the case in many real implementations ; even in the system with the highest fidelity ever reported for a two - qubit gate , i.e. between two trapped ions , the fidelity of single ion operations has been reported at far higher levels , reaching . in our new protocol , under this practical condition we find that * the infidelity in the encoded magic state is less than half the infidelity of even a single cnot gate*. this is despite the fact that a large number of such gates are involved in the creation of the magic state .more specifically : when single - qubit operations are perfect and two - qubit gate noise is depolarizing , the rate of logical errors on the encoded magic state , where is the error rate of cnot gates , i.e. two - qubit gates .this observation is verified by numerical simulations .presently we will also consider the effect of single - qubit noise , and we find that the logical error rate is still below the two - qubit error rate after switching on weak single - qubit noise .we use post - selection to suppress logical errors , but importantly the cost of doing so is modest : the success rate when and single - qubit operations are perfect ( and it becomes more deterministic as error rates fall ) .stabilisers ; and green squares ( or triangles ) with dashed perimeter represent stabilisers .see the main text for details . ]the protocol has two phases . in the first phase , the magic state initialised on a single physical qubitis encoded into the surface code with distance . in this stage ,post selection is used to reduce the logical error rate . in the second phase ,the code distance is enlarged from to , the target code distance , to complete the encoding . from then on ,the logical qubit is protected by correcting errors with normal syndrome detection and pairing algorithms of surface code .the detailed protocol reads : * the whole lattice of the distance- surface code is divided into five sets ( fig .[ scheme ] ) : the top - left corner itself , two triangular areas ( i and ii ) , and two areas ( iii and iv ) with trapezoidal shapes . 
the top - left corner , area - i , and area - ii form the lattice of the distance- surface code .* * first phase * : * a magic state is initialised on the data qubit at the top - left corner of the lattice ( magic - state qubit ) ; data qubits in area - i ( area - ii ) are initialised in the state ( ) ; data qubits in area - iii and area - iv are not included in the first phase . *stabiliser measurements are performed on the lattice of the distance- surface code for _ two _ full rounds , each involving both and stabilizer measurements .circuits of stabiliser measurements in the first phase are shown in fig .[ circuit ] , where the order of cnot gates is designed to minimise logical errors ( selecting the correct order proves to be vital to achieving a high fidelity result ) .* error syndromes ( as we define later ) are detected from outcomes of stabiliser measurements . in the event that an error syndrome is found, the magic state is discarded , and all data qubits are reinitialised according to step-1 . * * second phase * : * if no error syndrome is detected in the first phase , data qubits in area - iii ( area - iv ) are initialised in the state ( ) . *stabiliser measurements are performed on the entire lattice of the distance- surface code with any valid circuits ( i.e. cnot gates can be arranged in any convenient order that leads to valid stabilizer measurements ) for one full round to complete the encoding . regardless of whether error syndromes are found , the encoded magic state proceeds for further error correction , employing pairing algorithms and state distillation etc . except the magic - state qubit , all other qubits on the left ( top ) side of the lattice ( fig .[ scheme ] ) are initialised in the state ( ) .hence , the logical qubit is an eigenstate of with the eigenvalue if the magic - state qubit is initialised as an eigenstate of with the eigenvalue . here , , , and are pauli operators of the logical qubit , which commute with stabiliser measurements and are conserved quantities .therefore , the logical qubit is now in the magic state which was previously represented by the lone physical qubit in the top left .an error syndrome is an event indicating errors . without error ,outcomes of stabiliser measurements coincide with the initialisation pattern : in the first phase , values of stabilisers in area - i and stabilisers in area - ii ( stabilisers with slash lines in fig .[ scheme ] ) are all ; similarly , in the second phase , values of stabilisers in area - i and area - iii and stabilisers in area - ii and area - iv are also . without error ,the outcome of a stabiliser in later measurements is always the same as it is in the first - round measurement .therefore , in the first phase two types of events are recognised as error syndromes : mismatches i ) between the initialisation pattern and the first round of stabiliser measurements and ii ) between the first and second rounds of stabiliser measurements .error syndromes in the second phase and following stabiliser measurements are similar . .we have taken as an example .the logical error rate converges to the analytical limit when the two - qubit error rate .this result is obtained by assuming single - qubit operations are perfect , i.e. .logical errors are detected after performing rounds of full - size stabiliser measurements ( including the one in the second phase ) and correcting errors with edmonds s minimum weight matching algorithm , so that short error chains are sufficiently considered in our simulations . 
in the second phase ,we have used stabiliser - measurement circuits proposed in ref . . ] optimal circuits of stabiliser measurements ( fig .[ circuit ] ) in the first phase are obtained by minimising logical errors on the encoded magic state .generally , a stabiliser measurement includes an ancillary qubit and four ( or three ) cnot gates between the ancillary qubit and relevant data qubits . classifying cnot gates by stabilisers and their orientations ,there are eight sets of cnot gates in each full round of stabiliser measurements .after searching in all valid stabiliser - measurement circuits restricted to those implementing each set of cnot gates in parallel , we find that the logical error rate ranges from to depending on ordering . the circuit shown in fig .[ circuit ] is one of the circuits providing the minimised logical errors .given this optimal circuit we then allow for finite error rates in other operations besides the two - qubit cnot gates ; the consequences are described in the following section .and ( b ) single - qubit errors . with these ( )single - qubit errors , the logical error rate converges to ( ) consistent with eq .( [ eq : pl ] ) . ]operations on physical qubits which are included in our protocol are : initialisation in the state ; measurement in the computational basis ( and ) ; single - qubit gates ; and cnot gate .we assume a qubit may be initialised in the incorrect state with the probability ; the measurement may report an incorrect outcome with the probability ; and each single - qubit gate and cnot gate ( i.e. two - qubit gate ) may induce an error with the probability and respectively .a noisy gate is modelled as a perfect gate followed by single - qubit depolarizing noise for single - qubit gates and two - qubit depolarizing noise for the cnot gate .the logical qubit is sensitive to noise when the code distance is small and more stable when the code distance is larger .therefore , most of logical errors occur in the first phase . utilizing post selection and optimised stabilizer - measurement circuits , errors in the first phaseare well suppressed . with the depolarizing error model ,the rate of logical errors on the encoded magic state is : initialisation errors on the magic - state qubit and the next data qubit on the same horizontal line ( second data qubit ) can cause logical errors occurring with the probability .the single - qubit gate for rotating the magic - state qubit to the magic state may induce a logical error with the probability .cnot gates in the first round of stabiliser measurements may also induce logical errors .there are kinds of cnot - gate errors that can result in logical errors , and each of them occurs with the probability .these errors are ] and ] induced by the cnot gate on the magic - state qubit for measuring the stabiliser , and ] induced by the cnot gate on the second data qubit for measuring the stabiliser . here , and respectively denote the control and target qubits in corresponding cnot gates .all other errors do not cause logical errors solely hence contribute to the logical error rate in second order .this analytical result is verified by numerical simulations , example curves are shown in fig .( [ with1qerrors ] ) where we take as an example . in general , a larger implies a higher fidelity but also a smaller success probability . by choosing , we find that logical errors are well suppressed and at the same time the success probability is still high . 
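to make the noise model above concrete, the following is a minimal sketch (not the simulation code used here) of the two-qubit depolarizing channel assumed to follow every noisy cnot: with probability p2 one of the fifteen non-identity two-qubit pauli operators is applied, chosen uniformly. a density-matrix form and a pauli-frame sampler of the kind typically used in stabiliser simulations are both shown; the function names are illustrative only.

    import numpy as np
    from itertools import product

    PAULI = {'I': np.eye(2, dtype=complex),
             'X': np.array([[0, 1], [1, 0]], dtype=complex),
             'Y': np.array([[0, -1j], [1j, 0]], dtype=complex),
             'Z': np.array([[1, 0], [0, -1]], dtype=complex)}

    def two_qubit_depolarizing(rho, p2):
        """rho -> (1 - p2) rho + (p2 / 15) * sum over the 15 non-identity two-qubit Paulis P of P rho P†."""
        out = (1.0 - p2) * rho
        for a, b in product('IXYZ', repeat=2):
            if a + b == 'II':
                continue
            P = np.kron(PAULI[a], PAULI[b])
            out += (p2 / 15.0) * (P @ rho @ P.conj().T)
        return out

    def sample_cnot_fault(p2, rng):
        """Pauli-frame version: the (control, target) fault following one cnot, or 'II' for no fault."""
        if rng.random() >= p2:
            return 'II'
        faults = [a + b for a, b in product('IXYZ', repeat=2) if a + b != 'II']
        return faults[rng.integers(len(faults))]

in a full monte carlo run one would apply the ideal cnot first and then the sampled fault to its control and target qubits, and similarly append single-qubit depolarizing noise after each single-qubit operation when those error rates are switched on.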
when single - qubit operations are perfect ( ) , the logical error rate converges to as shown in fig. [ no1qerror ] . after switching on all single - qubit noise to ,i.e. , the logical error rate increases to as shown in fig .[ with1qerrors](a ) . thus the logical error rate remains lower than the physical two - qubit error rate even when other error sources are present at a finite level ( and indeed in many physical implementations there is more than an order of magnitude separating the two - qubit and single - qubit error rates ) .ultimately however if all forms of single - qubit operation suffer error rates equal to the two - qubit error rate , then logical error rate does exceed this common physical error rate and reaches [ see fig . [ with1qerrors](b ) ] . with and two - qubit error rate ,the first phase succeeds with a probability in the range depending on the rate of single - qubit errors .however , by adaptive use of hardware resources the protocol s effective success rate is much higher : for practical quantum computation , the target surface code usually has a large distance .if we choose , the entire lattice can be divided into copies of lattice , hence the first phase can be attempted in parallel , and the rate of obtaining at least one success is .although the successful copy may not be the one located at the top - left corner , we still can enlarge the code distance from to by adapting the initialisation pattern in the second phase .because eq .( [ eq : pl ] ) is only determined by the first phase , the overall fidelity will not be affected significantly . finally as an asidewe note that the protocol described here can also be used to encode magic states to a punctured surface code .we have proposed a new protocol for encoding magic states into the surface code with high - fidelity . remarkably , we find that the optimal gate sequence results in noise on the encoded magic state which is _ lower than half of the noise induced by a single physical cnot gate_. compared with the previous protocol , logical errors due to two - qubit noise are reduced by about a factor of ten. this can profoundly reduce the size of the hardware needed for quantum computing : for example with the 15-to-1 distillation protocol the logical error rate can be reduced from for input magic states to ( for small ) for the output magic state for each round of distillation , i.e. the advantage of our protocol is then a factor of after rounds of distillations . we can expect that this will reduce the required number of rounds by one ( as , for example , if and the target error rate of the distillation is anywhere between and ) .the hardware requirements can then be reduced by a factor of . given the anticipated expense and complexity of quantum computing devices , we believe this is an important and very encouraging result .i wish to thank simon benjamin , earl campbell , austin fowler , clare horsman , and naomi nickerson for helpful discussions .i am also grateful to simon benjamin and earl campbell for their help in preparing the introductory parts of this manuscript .in fact one of two different gates on the input state will have occurred , depending on the outcome of the measurement but these gates are related by a rotation that can be performed fault - tolerantly .
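as a concrete illustration of the gadget referred to in the footnote, here is a small statevector sketch of consuming one magic state to apply the non-transversal gate, assuming ideal operations, the magic state (|0> + e^{i pi/4}|1>)/sqrt(2), and a specific choice of which outcome needs a correction; all variable names are illustrative.

    import numpy as np

    def normalize(v):
        return v / np.linalg.norm(v)

    ket0 = np.array([1, 0], dtype=complex)
    ket1 = np.array([0, 1], dtype=complex)
    T = np.diag([1, np.exp(1j * np.pi / 4)])          # the gate we cannot apply transversally
    S = np.diag([1, 1j])                              # Clifford correction, available fault-tolerantly
    X = np.array([[0, 1], [1, 0]], dtype=complex)
    magic = normalize(ket0 + np.exp(1j * np.pi / 4) * ket1)   # |A> = T|+>

    rng = np.random.default_rng(1)
    amps = rng.normal(size=2) + 1j * rng.normal(size=2)
    psi = normalize(amps[0] * ket0 + amps[1] * ket1)  # arbitrary input state on the data qubit Q

    # joint state |M>|Q>; cnot controlled by the magic-state qubit M, targeted on the data qubit Q
    CNOT = np.array([[1, 0, 0, 0],
                     [0, 1, 0, 0],
                     [0, 0, 0, 1],
                     [0, 0, 1, 0]], dtype=complex)
    state = CNOT @ np.kron(magic, psi)

    for outcome in (0, 1):                            # measure Q in the computational basis
        residual = normalize(state.reshape(2, 2)[:, outcome])   # post-measurement state of M
        if outcome == 1:
            residual = S @ (X @ residual)             # outcome-dependent correction mentioned in the footnote
        fidelity = abs(np.vdot(T @ psi, residual)) ** 2
        print(outcome, round(fidelity, 10))           # both outcomes give fidelity 1 with T|psi>

with noisy gates the same circuit propagates cnot faults onto the consumed state, which is why one normally expects the fidelity of an encoded magic state to be bounded by the cnot fidelity, and why the result quoted above is notable.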
the leading approach to fault-tolerant quantum computing requires a continual supply of _magic states_. when a new magic state is first encoded, its initial fidelity will be too poor for use in the computation. this necessitates a resource-intensive _distillation_ process that occupies the majority of the computer's hardware; creating magic states with a high initial fidelity minimises this cost and is therefore crucial for practical quantum computing. here we present the surprising and encouraging result that raw magic states can have a fidelity significantly better than that of the two-qubit gate operations used to construct them. our protocol exploits post-selection without significantly slowing the rate of generation and tolerates finite error rates in initialisations, measurements and single-qubit gates. this approach may dramatically reduce the size of the hardware needed for a given quantum computing task.
a diverse and exciting array of scientific possibilities , whose exploration are enhanced by the existence of a virtual observatory , are detailed elsewhere in this volume .certain lines of scientific inquiry , however , are not just enhanced by a virtual observatory , but are actually enabled by it .for example , a panchromatic study of active galactic nuclei ( see , _e.g. _ , boroson , these proceedings ) , studies of the low surface brightness universe ( see , _ e.g. _ , schombert , these proceedings ) , a study of galactic structure ( see , _e.g. _ , kent , these proceedings ) , or a panchromatic study of galaxy clusters , are all extremely interesting projects that are facilitated by a virtual observatory . in this article, i will discuss some specific technical challenges which must be overcome in order to fully enable this new type of scientific inquiry .this is not as difficult as it may first appear , as many of these challenges are already being tackled , as is evidenced by the prototype services which are currently available at many of the leading data centers . in order to truly make revolutionary , and not merely evolutionary ,leaps forward in our ability to answer the important scientific questions of our time , we need to `` think outside the box '' , not just in the design and implementation of a virtual observatory , but in the actual scientific methodology we wish to employ ( see , _e.g. _ , figure 1 , which demonstrates this concept by combining large image viewing with the ability to selectively mark objects in the image based on their statistical properties ) .while many of the technical challenges are rather self - evident upon a cursory examination , such as the federation of existing archival centers , other challenges are considerably more difficult to elucidate .this effect is primarily a result of the difficulty in designing scientific programs for the , as yet unavailable , virtual observatory .this is exactly the time where `` thinking outside the box '' applies , as one needs to ask not `` _ what can i do right now ? _ '' , but `` _ what would i like to be able to do ? _ '' .the first step in this process is to consider , in its entirety , all of the data which might be available for ingestion into a virtual observatory .this includes the obligatory data catalogs , which are the most often used derivative of survey programs , and perhaps more importantly , the original imaging data and any associated metadata ( that is , data which describes the data ) . similar extensions likewise apply to other types of astronomical data , including spectral and temporal . after taking this revolutionary leap, we can now consider querying not just catalogs , but also the data from which the catalogs were extracted .this would allow for new techniques to be applied , which might , for example , perform source extraction using multiple wavelength images simultaneously ( _ e.g. _ , detection , szalay _ et al . _1999 ) , or perhaps to extract flux limits for objects detected in other wavelengths , or , finally , to extract matched parameters ( _ e.g. _ , matched aperture photometry ) .this is demonstrated in figure 2 , where the multiwavelength nature of nearby galaxies is explored , from the optical , extracted from the dposs survey ( djorgovski _ et al . _1998 ) , to the near - infrared , extracted form the 2mass survey ( skrutskie _ et al . 
_as this example demonstrates , multiwavelength image processing is a pressing need , since objects bright in one wavelength are often much fainter , if even detected at all , at other wavelengths .another example of the need for image reprocessing is shown in figure 3 , where the detection of a nearby , low surface brightness dwarf spheroidal galaxy is demonstrated .the vast majority of survey pipelines are designed to detect the dominant source population , namely high surface brightness point - type sources . as a result, an implicit surface - brightness selection effect exists in nearly all catalogs ( see , _e.g. _ , schombert , these proceedings for a more detailed account ) . in the future, one would ideally like to be able to reprocess survey data in an effort to find objects at varying spatial scales and surface brightnesses .as a demonstration of how a virtual observatory can enable new science , consider the specific science use case of understanding high velocity clouds ( hvcs ) . hvcs are defined as systems consisting of neutral hydrogen which have velocities that are incompatible with simple models of galactic rotation ( wakker and van woerden 1997 ) .their origin , however , remains uncertain , with various arguments being made in support of a wide range of hypothesis , including that they are galactic constituents , that they are the remnants of galaxy interactions , or that they are fragments from the hierarchical formation of our local group of galaxies . in an effort to truly understand these systems, we also would like to understand their composition .although these systems are , by definition , found in neutral hydrogen surveys , we can perform either follow - up observations at other wavelengths , or else correlate the hi data with existing surveys at other wavelengths in order to learn more about them ( see , _ e.g. _ , figure 4 for a demonstration of multiwavelength image correlation for a known hvc ). this process can often require the construction of large image mosaics involving multiple poss - ii photographic plates in order to map structures that span several tens of square degrees .this service should clearly be one of the principal design requirements for a virtual observatory .the most powerful method for understanding the composition of hvcs , however , is to study their absorption effects on the spectra of background sources , most notably quasars . in order to find suitable targets , we need to be able to dynamicallycorrelate the hvc images with published quasar catalogs in order to determine the optimal line - of - sights for quantifying the composition of the intervening hvcs with follow - up spectral observations .finally , we also would like to understand the evolution of these systems , which has obvious implications for understanding their origin .this can optimally be done by comparing the predictions of theoretical models to our correlated multiwavelength observations .this implies a need for a virtual observatory to allow seamless access to not only astronomical data but also the results of dynamical analysis , either through persisted calculations or a real - time process .to accomplish these ambitious scientific goals , we need powerful tools , which should be implemented as part of a virtual observatory . first , we need the ability to process and visualize large amounts of imaging data .this should be done in both a manner which is suitable for public consumption ( _ i.e. 
_ , the virtualsky.org project ) and also a manner which preserves scientific calibrations .these services will also need to provide coordinate transformations , overlays and arbitrary re - pixelizations .ideally , these operations occur as part of a service which can also accept user - defined functions to further process the data , minimizing the size of the data stream which must be established with the end - user .next , we need the ability to federate an arbitrary collection of catalogs , selected from geographically diverse archives , a prime computational grid application . to completely enable the discovery process , we also need intelligent display mechanisms to explore the high - dimensionality spaces which will result from this federation process. we also will need to allow the user to post - process these federations using user - defined tools or functions ( _ e.g. _ , statistical analysis ) as well as combine these processes with image operations and visualizations .finally , a complete census and subsequent description of science use cases ( _ e.g. _ , the previous section , see also , boroson , these proceedings ) , inevitably leads one to the formulation of a new paradigm for doing astronomy with a virtual observatory . in the future , anyone , anywhere , will be able to do cutting edge science , as researchers will only be limited by their creativity and energy , not their access to restricted observations or telescopes .not only will this revolutionize the scientific output of our community , but it will also have an important effect on the sociology of our field as well , since students will need to be trained in these new tools and techniques .this work was made possible in part through the npaci sponsored digital sky project and a generous equipment grant from sun microsystems .rjb would like to acknowledge the generous support of the fullam award for facilitating this project .access to the dposs image data stored on the hpss , located at the california institute of technology , was provided by the center for advanced computing research .the processing of the dposs data was supported by a generous gift from the norris foundation , and by other private donors .djorgovski , s. , de carvalho , r.r . , gal , r.r . ,pahre , m.a . ,scaramella , r. , and longo , g. 1998 , in b. mclean , editors , _ the proceedings of the iau on new horizons from multi - wavelength sky surveys _ ,iau symposium no .
a virtual observatory will not only enhance many current scientific investigations, but it will also enable entirely new scientific explorations, owing both to the federation of vast amounts of multiwavelength data and to the new archival services that will necessarily be developed. detailing specific science use cases is important in order to properly facilitate the development of the necessary infrastructure of a virtual observatory. the understanding of high velocity clouds is presented as an example science use case, demonstrating the future synergy between the data (either catalogs or images) and the desired analysis in the new paradigm of a virtual observatory.
modern financial markets have developed lives of their own .this fact makes it necessary that we not only monitor financial markets as an `` auxiliary system '' of the economy , but that we develop a methodology for evaluating them , their feedback on the real economy , and their effect on society as a whole .the events of the recent past have clearly demonstrated that the everyday life of the majority of the world s population is tied to the well - being of the financial system .individuals are invested in stock markets either directly or indirectly , and shocks to the system ( be they endogenous or exogenous ) have an immense and immediate impact .thus the need for a robust and efficient financial system is becoming stronger and stronger .these two critical concepts have been discussed and heatedly debated for the past century , with the efficient market hypothesis ( emh ) in the center of the debate .the emh stipulates that all available information ( or only past prices in the weak variant of the hypothesis ) is already reflected in the current price and it is therefore not possible to predict future values in any statistical method based on past records .the emh has been questioned by applying statistical tests to nyse returns in which the authors formulated the problem equivalent to the emh , and showed by contrast that an efficient compression algorithm they proposed was able to utilize structure in the data which would not be possible if the hypothesis were in fact true .the possibility for such compression suggests the data must be somehow structured .this encourages us to explore methods of modeling and exploring this structure in ways that can be applied to real - world markets .many efforts have thus been devoted to uncovering the true nature of the underlying structure of financial markets .much attention has been given to understanding correlations in financial markets and their dynamics , for both daily and intra - day time scales .more recently , other measures of similarity have been introduced , such as granger - causality analysis and partial correlation analysis , both of which aim to quantify how the behavior of one financial asset provides information about the behavior of a second asset . for these different measures of co -movement in financial markets , however , the main question that remains is how to uncover underlying meaningful information .an analysis of synchronous correlations of equity returns has shown that a financial market usually displays a nested structure in which all the stock returns are driven by a common factor , e.g. , a market index , and are then organized in groups of like economic activity such as technology , services , utilities , or energy that exhibit higher values of average pair correlation . within each group ,stocks belonging to the same sub - sector of economic activity , e.g. , `` insurance '' and `` regional banks '' within the financial sector , show an even higher correlation degree .such a structure has been recognized using very different methods of analysis , ranging from random matrix theory , to hierarchical clustering , to correlation based networks .the several methods devised to construct correlation based networks can be grouped into two main categories : threshold methods and topological / hierarchical methods .both approaches start from a sample correlation matrix or , more generally , a sample similarity measure . 
using the threshold method we set a correlation threshold and construct a network in which any two nodes are linked if their correlation is larger than the threshold .as we lower the threshold value we see the formation of groups of stocks ( economic sub - sectors ) that progressively merge to form larger groups ( economic sectors ) and finally merge into a single group ( the market ) .the advantage of this approach is that , due to the finite length of data series , threshold networks are very robust to correlation uncertainty .the disadvantage of threshold based networks is that it is difficult to find a single threshold value to display , in a single network , the nested structure of the correlation matrix of stock returns ( see ) .topological methods to construct correlation based networks , such as the minimal spanning tree ( mst ) or the planar maximally - filtered graph ( pmfg ) , are based solely on the ranking of empirical correlations .the advantage of this approach is that these methods are intrinsically hierarchical and are able to display the nested structure of stock - return correlations in a financial market .the disadvantage of this approach is that these methods are less stable than threshold methods with respect to the statistical uncertainty of data series , and it is difficult to include information about the statistical significance of correlations and their ranking .thus it is a challenge of modern network science to uncover the significant relationships ( links ) between the components ( nodes ) of the investigated system . although much attention has been devoted to the study of synchronous correlation networks of equity returns ( see for a review of the topic ) ,comparatively few results have been obtained for networks of lagged correlations .neither method of constructing correlation based networks is readily extendable to the study of directed lagged correlations in a financial market .the lagged correlations in stock returns are small , even at time horizons as short as five minutes , and are thus strongly influenced by the statistical uncertainty of the estimation process .the use of topological methods to construct a lagged - correlation based network of stock returns is difficult because they only take into consideration the ranking of correlations and not their actual values .the result could be a network in which many links are simply caused by statistical fluctuations .on the other hand , standard threshold methods are also difficult to apply because it is difficult to find an appropriate threshold level and , more importantly , the threshold selected in these methods is usually the same for all stock pairs .this is a problem if we want to study lagged correlations because the statistical significance of a lagged - correlation may depend on the return distribution of the corresponding pair of stocks , and such distributions might vary across stocks a consequence , for example , of the different liquidity of stocks .here we introduce a method for filtering a lagged correlation matrix into a network of statistically - validated directed links that takes into account the heterogeneity of stock return distributions .this is done by associating a -value with each observed lagged - correlation and then setting a threshold on -values , i.e. 
, setting a level of statistical significance corrected for multiple hypothesis testing .we apply our method to describe the structure of lagged relationships between intraday equity returns sampled at high frequencies in financial markets . in particular , we investigate how the structure of the network changes with increasing return sampling frequency , and compare the results using data from both the periods 20022003 and 20112012 .it should be noted that the two investigated time periods are quite different if we consider that the fraction of volume exchanged by algorithmic trading in the us equity markets has increased from approximately 20% in 2003 to more than 50% in 2011 . in both periodswe find a large growth in the connectedness of the networks as we increase the sampling frequency .the paper is organized as follows .section 2 introduces the method used to filter and validate statistically significant lagged correlations from transaction data .section 3 analyzes the structure of the resulting networks and investigates how this structure evolves with changing return sampling frequency . in sec .4 we discuss the application of our method to the construction of synchronous correlation networks . finally , in sec . 5we discuss the implications of our results for the efficiency and stability of financial markets .we begin the analysis by calculating the matrix of logarithmic returns over given intraday time - horizons .we denote by the most recent transaction price for stock occurring on or before time during the trading day .we define the opening price of the stock to be the price of its first transaction of the trading day .let be the time horizon .then for each stock we sample the logarithmic returns , every minutes throughout the trading day , and assemble these time series as columns in a matrix .we then filter into two matrices , and , in which we exclude returns during the last period of each trading day from and returns during the first period of each trading day from . from these data we construct anempirical lagged correlation matrix using the pearson correlation coefficient of columns of and , where and are the mean and sample standard deviation , respectively , of column of , and is the number of rows in ( and ) . herewe set the lag to be one time horizon .a schematic of this sum is diagrammed in fig .[ fig : lag_schem ] . minutes .the sum is generated using products of returns from stocks and that are linked by an arrow .we consider only time horizons that divide evenly into the 390 minute trading day . ]the matrix can be considered a weighted adjacency matrix for a fully connected , directed graph . to filter the links in this graph according to a threshold of statistical significance , we apply a shuffling technique .the rows of are shuffled repeatedly without replacement in order to create a large number of surrogated time series of returns .after each shuffling we re - calculate the lagged correlation matrix ( [ eqn : corr_matrix ] ) and compare this shuffled lagged correlation matrix to the empirical matrix . for each shufflingwe thus have an independent realization of .we then construct the matrices and , where is the number of realizations for which , and is the number of realizations for which . 
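before turning to the p-values, a minimal numpy sketch of the return-matrix split and of the lagged correlation matrix defined above may be useful; it assumes the intraday log-returns are already stacked in a matrix with one column per stock and a fixed number of returns per trading day, and all names and the synthetic data are illustrative only.

    import numpy as np

    def split_day_lagged(R, returns_per_day):
        """Split the stacked return matrix R into A (earlier) and B (one-horizon-later) blocks."""
        n_days = R.shape[0] // returns_per_day
        days = R.reshape(n_days, returns_per_day, R.shape[1])
        A = days[:, :-1, :].reshape(-1, R.shape[1])   # drop the last intraday return of each day
        B = days[:, 1:, :].reshape(-1, R.shape[1])    # drop the first intraday return of each day
        return A, B

    def pearson_cross(A, B):
        """C[i, j] = Pearson correlation between column i of A and column j of B."""
        Az = (A - A.mean(axis=0)) / A.std(axis=0)
        Bz = (B - B.mean(axis=0)) / B.std(axis=0)
        return Az.T @ Bz / A.shape[0]

    # synthetic example: 100 stocks, 20 days of 5-minute returns (390 / 5 = 78 per day)
    rng = np.random.default_rng(0)
    R = rng.normal(size=(20 * 78, 100))
    A, B = split_day_lagged(R, returns_per_day=78)
    C = pearson_cross(A, B)            # C[i, j] > 0: stock i tends to lead stock j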
from matrix associate a one - tailed -value with all positive correlations as the probability of observing a correlation that is equal to or higher than the empirically - measured correlation .similarly , from we associate a one - tailed -value with all negative correlations . in this analysis we set the threshold at .we must adjust our statistical threshold , however , to account for multiple comparisons .we use the conservative bonferroni correction for a given sample size of stocks .for example , for stocks the corrected threshold will be .we thus construct independently shuffled surrogate time series .if we can associate a statistically - validated positive link from stock to stock ( , bonferroni correction ) .likewise , if we can associate a statistically - validated negative link from stock to stock . in this waywe construct the bonferroni network . in appendixa we discuss the probability that using our approximated method we will wrongly indentify a link as statistically significant ( i.e. , have a false positive ) . for the sake of comparison , for each time horizon also construct the network using -values corrected according to the false discovery rate ( fdr ) protocol .this correction is less conservative than the bonferroni correction and is constructed as follows .the -values from each individual test are arranged in increasing order ( ) , and the threshold is defined as the largest such that . in the fdr network our threshold for the matrices or is thus not zero but the largest integer such that or has exactly entries fewer than or equal to . from this thresholdwe can filter the links in to construct the fdr network .we note that the bonferroni network is a subgraph of the fdr network . because we make no assumptions about the return distributions , this randomization approach is especially useful in high - dimensional systems in which it can be difficult to infer the joint probability distribution from the data .we also impose no topological constraints on the bonferroni or fdr networks .this method serves to identify the significant positive and negative lagged correlation coefficients in a way that accounts for heterogeneities in relationships between the returns of stocks .an alternative , but closely related approach would be to construct a theoretical distribution for correlation coefficients under the null hypothesis of uncorrelated returns sampled from a given joint distribution .for a desired confidence level , one could then construct a threshold correlation , beyond which empirical correlations are validated .such an approach typically assumes equal marginal distributions for returns , and must fix a uniform correlation threshold for all relationships . 
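the shuffling, the bonferroni condition and the fdr protocol described above might then be sketched as follows, building on the functions in the previous snippet. this is an illustrative reimplementation rather than the code used in the paper: the count of tests is schematic, the number of surrogates is a placeholder, and in the paper the number of independent shufflings is tied to the bonferroni threshold so that validation reduces to observing no exceedance in any surrogate.

    def shuffling_pvalues(A, B, n_shuffles, seed=0):
        """One-tailed p-value estimates for positive and negative lagged correlations.

        Each surrogate shuffles the rows of A without replacement, destroying genuine
        lead-lag dependence while preserving every stock's return distribution.
        """
        rng = np.random.default_rng(seed)
        C_emp = pearson_cross(A, B)
        ge = np.zeros_like(C_emp)                 # surrogate correlation >= empirical
        le = np.zeros_like(C_emp)                 # surrogate correlation <= empirical
        for _ in range(n_shuffles):
            C_sur = pearson_cross(A[rng.permutation(A.shape[0])], B)
            ge += (C_sur >= C_emp)
            le += (C_sur <= C_emp)
        return C_emp, ge / n_shuffles, le / n_shuffles

    def benjamini_hochberg_threshold(pvals, alpha):
        """Largest p_(k) satisfying p_(k) <= alpha * k / n, as in the FDR protocol described above."""
        p = np.sort(np.ravel(pvals))
        k = np.arange(1, p.size + 1)
        ok = p <= alpha * k / p.size
        return p[ok].max() if ok.any() else 0.0

    # toy usage (far fewer surrogates and a cruder accounting of tests than in the paper)
    C_emp, p_pos, p_neg = shuffling_pvalues(A, B, n_shuffles=200)
    n_tests = C_emp.size
    bonferroni_pos = (C_emp > 0) & (p_pos < 0.01 / n_tests)     # validated positive links
    bonferroni_neg = (C_emp < 0) & (p_neg < 0.01 / n_tests)     # validated negative links
    fdr_pos = (C_emp > 0) & (p_pos <= benjamini_hochberg_threshold(p_pos[C_emp > 0], 0.01))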
at the expense of computational time ,our method is flexible in that it permits heterogeneities in marginal distributions .we compare the results of the two approaches in appendix b.we study and compare two different datasets .the first dataset comprises returns of 100 companies with the largest market capitalization on the new york stock exchange ( nyse ) during the period 20022003 ( 501 trading days ) , which was investigated in .for the second dataset we consider returns during the period 20112012 ( 502 trading days ) of 100 companies with the largest market capitalization on the nyse as of december 31 , 2012 ( retrieved from the trades and quotes database , wharton research data services , http://wrds-web.wharton.upenn.edu/wrds/ ) .market capitalization figures were obtained from yahoo finance web service ( http://finance.yahoo.com ) . for each companywe obtain intraday transaction records .these records provide transaction price data at a time resolution of one second .the stocks under consideration are quite liquid , helping to control for the problem of asynchronous transactions and artificial lead - lag relationships due to different transaction frequencies .we sample returns at time horizons of 5 , 15 , 30 , 65 , and 130 minutes .we report summary statistics in table [ summary_table ] , including the lengths of time series from equation ( [ eqn : corr_matrix ] ) , as well as the mean and standard deviation of synchronous pearson correlation coefficients between distinct columns of the returns matrix for each time horizon .we also show the mean and standard deviation of entries in the lagged correlation matrix .[ summary_table ] figure [ fig : bounds ] displays bounds on the positive and negative coefficients selected by this method for both bonferroni and fdr networks at a time horizon of minutes. stocks at a time horizon minutes .the minimum positive coefficients and maximum negative coefficients selected using both bonferroni and fdr filtering procedures are shown .we note that these methods select coefficients from the tails of the distribution , without fixing a uniform threshold for all pairs of stocks . ] in fig .[ fig : networks ] we display plots of each statistically validated lagged correlation network obtained from the 20112012 data ( bonferroni correction ) . at time horizons of minutes and minuteswe validate one and two links , respectively .it is somewhat remarkable that we uncover any persistent relationships at such long time horizons .we see a striking increase in the number of validated links at small intraday time horizons , below minutes in particular .this is likely due to a confluence of two effects : ( i ) with decreasing we increase the length of our time series , gaining statistical power and therefore the ability to reject the null hypothesis ; ( ii ) at small we approach the timescales over which information and returns spill over across different equities . 
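the dependence of the series length on the horizon in effect (i) can be checked with a short calculation, assuming a 390-minute trading day and that exactly one return pair per day is lost when forming the lagged matrices; the day counts are those quoted above, and the exact lengths reported in table 1 may differ slightly.

    # rows available for the lagged correlation at each horizon
    for tau in (5, 15, 30, 65, 130):
        per_day = 390 // tau - 1
        print(tau, per_day, per_day * 501, per_day * 502)   # horizon, per day, 2002-2003, 2011-2012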
in appendix c we provide evidence that diminishing the time horizon reveals more information about the system than is obtained by increasing the time series length alone .it is clear visually that the validated links of positive correlation vastly outnumber the validated links of negative correlation .we plot the number of validated links in both the bonferroni and fdr networks for the 20022003 and 20112012 datasets in fig .[ fig : number_links ] , where the decrease in number of all validated links for increasing time horizon is apparent .note that for a given time horizon we usually validate more links in the 20022003 dataset than in the 20112012 dataset .this suggests that there has been an increase in market efficiency over the past decade .we revisit this idea in subsequent portions of this paper , where we study the properties of the network in- and out - degree distributions and the characterization of three - node motifs .we also explore how the number of validated links decreases for a fixed time horizon but a changing time lag .we build a lag into the lagged correlation matrix ( [ eqn : corr_matrix ] ) by excluding the last returns of each trading day from matrix and the first returns of each trading day from matrix .thus the present analysis uses . in appendix cwe plot the decreasing number of validated links with increasing for minutes .we must also measure the extent to which the number of validated lead - lag relationships can be disentangled from the strength of those relationships .figure [ fig : lagged_coeffs ] thus shows plots of the average magnitude of lagged correlation coefficients selected by the bonferroni and fdr networks .although we validate more links at small time horizons , we note that the average magnitude of the selected coefficients tends to decrease . at short time horizons we correlate time series of comparatively large length , narrowing the distribution of entries in the shuffled lagged correlation matrix and gaining statistical power .we are thus able to reject the null hypothesis even for lead - lag relationships with a modest correlation coefficient . finally , in fig .[ fig : degrees ] we characterize the topologies of the statistically - validated networks by studying the properties of their in - degree and out - degree distributions .we make two observations .first , we note that both the in - degree and out - degree distributions appear more homogeneous in the 20022003 period than the 20112012 period , i.e. , the 20112012 data exhibit large heterogeneities , particularly in the in - degree distributions , in which many nodes have small degrees but few nodes have very large degrees , as can be seen in the extended tails of the distributions .second , we observe that in both the 20022003 and 20112012 data there are more nodes with large in - degrees than out - degrees . although few individual stocks have a strong influence on the larger financial market , it appears that the larger financial market has a strong influence on many individual stocks , especially at short time horizons .we further investigate this point by studying the relative occurrence of three - node network motifs in the bonferroni networks .we find that , of all motifs featuring more than one link , the `` 021u '' motif ( two nodes influencing a common third node ) occurs frequently in the recent data , and in fact occurs in over 80% of node triplets having more than one link between them for time horizons greater than minutes . 
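such motif counts can be reproduced with a standard triadic census; the sketch below uses networkx's triad naming, in which '021U' denotes two nodes pointing at a common third, and simply drops the triad classes with at most one directed link. the paper's exact motif bookkeeping may differ, so this is only an illustration of the computation.

    import networkx as nx

    def motif_shares(edges):
        """Share of each three-node motif among triads containing more than one directed link."""
        census = nx.triadic_census(nx.DiGraph(edges))
        multi = {k: v for k, v in census.items() if k not in ('003', '012')}
        total = sum(multi.values())
        return {k: v / total for k, v in multi.items() if total and v}

    # toy check: two stocks both leading a third one form a single '021U' triad
    print(motif_shares([('a', 'c'), ('b', 'c')]))    # {'021U': 1.0}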
in the 20022003 datathis motif is also the most common at every time horizon except minutes .figure [ fig : motifs ] plots the occurrence frequencies of these motifs .these features can be related to the information efficiency of the market . in the 20112012 datasetwe find a dominant motif in which a large number of stocks influence only a few other stocks .predictive information regarding a given stock , therefore , tends to be encoded in the price movements of many other stocks and so is difficult to extract and exploit .in contrast , the distributions of degrees and motifs in the 20022003 data are more homogeneous .although there are more nodes with large in - degrees , there are also more nodes with large out - degrees . if a stock has a large out - degree , its price movements influence the price movements of many other stocks .these sources of exploitable information have all but disappeared over the past decade . min . ) , 1,296 ( min . ) , 17,545 ( min . ) , and 92,673 ( min . ) . in 2011 - 2012these counts are 1 ( min . ) , 9,171 ( min . ) , 13,303 ( min . ) , and 35,405 ( min . ) . ]to construct synchronous correlation networks using the methodology described in sec .[ sec : methods ] , we use the unfiltered columns of as our time series such that each entry of the empirical correlation matrix is the pearson correlation between columns and of .we then independently shuffle the columns of , without replacement , when constructing the surrogated time series .we find that with the same significance threshold of , in 2011 - 2012 both the bonferroni and fdr networks are almost fully connected , with well over 4500 of the possible links validated in all networks over all time horizons .our method is thus quite sensitive to the presence or absence of correlations between time series .figure [ fig : epps ] plots the empirical synchronous correlations against time horizon for all stocks considered in both datasets .we see a clear increase in the magnitude of these coefficients as the time horizon grows , a phenomenon known as the epps effect .it is known that lagged correlations may in part contribute to this effect .the extent of this contribution is an active area of investigation .the synchronous correlations are also significantly higher in the recent data , suggesting that , despite the increased efficiencies shown in fig .[ fig : number_links ] , there is also an increase in co - movements in financial markets since 2003 , heightening the risk of financial contagion ( see for example ) . 
figure [ fig : sync_corr_hist ] shows the distribution of correlation coefficients at minutes for both 20022003 and 20112012 datasets .we observe a slightly bi - modal distribution of synchronous correlation coefficients in the 20022003 data across all time horizons .most coefficients are positive , but there is also a small number of negative coefficients among these high market capitalization stocks .this quality disappears in the 20112012 data , and all correlation coefficients are positive .in this paper , we propose a method for the construction of statistically validated correlation networks .the method is applicable to the construction of both lagged ( directed ) and synchronous ( undirected ) networks , and imposes no topological constraints on the networks .the sensitivity of the method to small deviations from the null hypothesis of uncorrelated returns makes it less useful for studying the synchronous correlations of stocks , as these equities tend to display a considerable degree of correlation and we validate almost all possible links in the network .the method is apt , however , for the study of lagged correlation networks .we are able to adjust the sensitivity of the method with our choice of -value and protocol for multiple comparisons . herewe show that , with the conservative bonferroni correction and -value=0.01 , we are able to compare changes in network connectivity with increasing return sampling frequency between old and new datasets .the primary drawback to our method is its computational burden , which grows as for time series .we find that for timescales longer than one hour , significant lead - lag relationships that capture return and information spill - over virtually disappear .for timescales smaller than 30 minutes , however , we are able to validate hundreds of relationships . according to the efficient market hypothesis there can be no arbitrage opportunities in informationally - efficient financial marketshowever , lagged correlations may not be easily exploitable due to the presence of market frictions , including transaction costs , the costs of information processing , and borrowing constraints . between the time periods 20022003 and 20112012 ,the synchronous correlations among these high market capitalization stocks grow considerably , but the number of validated lagged - correlation relationships diminish .we relate these two behaviors to an increase in the risks of financial contagion and an increase in the informational efficiency of the market , respectively .we find that networks from both periods exhibit asymmetries between their in - degree and out - degree distributions . in both there are more nodes with large in - degrees than large out - degrees , but in the 20112012 data , nodes with large in - degrees are represented by the extended tails of the degree distribution and , in contrast , the 20022003 distribution exhibits a greater uniformity . a comparison between in - degree and out - degree distributions shows that nodes with high in - degree are much more likely than nodes with high out - degree , especially for the 20112012 data .this evidence is also interpreted in terms of informational efficiency of the market .indeed a large out - degree of a stock implies that knowledge of its return , at a given time , may provide information about the future return of a large number of other stocks . 
on the other hand ,a large in - degree of a stock indicates that information about its return at a given time can be accessed through the knowledge of past returns of many stocks .there are also many more nodes with large out - degrees in the 20022003 data than in the 20112012 data .we relate these observations to an increased information efficiency in the market .such an interpretation is also supported by the analysis of three - node motifs , which shows an apparent dominance of motif 021u with respect to all the others . in the future , we could extend this work by incorporating a prediction model to measure the degree to which the information contained in these validated networks is exploitable in the presence of market frictions .we could also investigate the characteristics of nodes belonging to different industries , as well as the presence of intraday seasonalities .such features are potentially relevant to prediction models .finally , although our analysis restricts itself to using the pearson product - moment correlation , other measures , such as a lagged hayashi - yoshida estimator , could be used to probe correlations at the smallest ( inter - trade ) timescales while minimizing the problem of asynchronous trades .we thank viktoria dalko for useful conversations and insights , and her help with the data .cc , dyk , and he s wish to thank onr ( grant n00014 - 09 - 1 - 0380 , grant n00014 - 12 - 1 - 0548 ) , dtra ( grant hdtra-1 - 10 - 1- 0014 , grant hdtra-1 - 09 - 1 - 0035 ) , and nsf ( grant cmmi 1125290 ) . m.t . and r.n.m .acknowledge support from the inet research project nethet `` new tools in credit network modeling with heterogenous agents '' .r. n. m. acknowledge support from the fp7 research project crisis `` complexity research initiative for systemic instabilities '' .all authors contributed equally to this manuscript .44 [ 1]#1 [ 1 ] [ 2]#1 [ 1]#1 [ 2]#2#1 allez , r. and bouchaud , j.p . ,individual and collective stock dynamics : intra - day seasonalities ._ new journal of physics _ , 2011 , * 13 * , 025010. aste , t. , shaw , w. and di matteo , t. , correlation structure and dynamics in volatile markets ._ new journal of physics _ , 2010 , * 12 * , 085009. benjamini , y. and hochberg , y. , controlling the false discovery rate : a practical and powerful approach to multiple testing . _ journal of the royal statistical society .series b ( methodological ) _ , 1995 , pp. 289300 .billio , m. , getmansky , m. , lo , a. and pelizzon , l. , econometric measures of connectedness and systemic risk in the finance and insurance sectors ._ journal of financial economics _ , 2012 , * 104 * , 535559 .biroli , g. , bouchaud , j.p . and potters , m. , the student ensemble of correlation matrices : eigenvalue spectrum and kullback - leibler entropy . _ arxiv preprint arxiv:0710.0802 _ , 2007 .bonanno , g. , caldarelli , g. , lillo , f. and mantegna , r. , topology of correlation - based minimal spanning trees in real and model markets ._ physical review e _ , 2003 , * 68* , 046130 .bonanno , g. , lillo , f. and mantegna , r.n ., high - frequency cross - correlation in a set of stocks . , 2001 .borghesi , c. , marsili , m. and miccich , s. , emergence of time - horizon invariant correlation structure in financial returns by subtraction of the market mode . _ physical review e _ , 2007 , * 76 * , 026104. campbell , r. , forbes , c. , koedijk , k. and kofman , p. , increasing correlations or just fat tails ? ._ journal of empirical finance _ , 2008 ,* 15 * , 287309 .carbone , a. 
, detrending moving average algorithm : a brief review . in _ proceedings of the _ _ science and technology for humanity ( tic - sth ) , 2009 ieee toronto international conference _ , pp . 691696 , 2009 .cecchetti , s. and kharroubi , e. , reassessing the impact of finance on growth ._ bis working paper _ , 2012 , * available at ssrn : http://ssrn.com / abstract=2117753*. cizeau , p. , potters , m. and bouchaud , j. , correlation structure of extreme stock returns . _ quantitative finance _ , 2001 , * 1 * , 217222 .de jong , f. , nijman , t. and rell , a. , price effects of trading and components of the bid - ask spread on the paris bourse ._ journal of empirical finance _ , 1996 , * 3 * , 193213 .efron , b. and tibshirani , r. , _ an introduction to the bootstrap _ , vol .57 , , 1993 , crc press .epps , t. , comovements in stock prices in the very short run ._ journal of the american statistical association _ , 1979 ,291298 .forbes , k. and rigobon , r. , no contagion , only interdependence : measuring stock market comovements . _ the journal of finance _, 2002 , * 57 * , 22232261 .gopikrishnan , p. , plerou , v. , liu , y. , amaral , l. , gabaix , x. and stanley , h. , scaling and correlation in financial time series ._ physica a : statistical mechanics and its applications _ , 2000 , * 287 * , 362373 .gopikrishnan , p. , rosenow , b. , plerou , v. and stanley , h. , quantifying and interpreting collective behavior in financial markets ._ physical review e _, 2001 , * 64 * , 035106 .hall , r.e . , why does the economy fall to pieces after a financial crisis ? ._ the journal of economic perspectives _ , 2010 , * 24 * , 320 . havlin , s. , kenett , d.y ., ben - jacob , e. , bunde , a. , cohen , r. , hermann , h. , kantelhardt , j. , kertsz , j. , kirkpatrick , s. , kurths , j. _ et al ._ , challenges in network science : applications to infrastructures , climate , social systems and economics ._ european physical journal - special topics _ , 2012 , * 214 * , 273 .hayashi , t. and yoshida , n. , on covariance estimation of non - synchronously observed diffusion processes . _bernoulli _ , 2005 , * 11 * , 359379. huth , n. and abergel , f. , high frequency lead / lag relationships - empirical facts ._ arxiv preprint arxiv:1111.7103 _ , 2011 .kenett , d.y . ,preis , t. , gur - gershgoren , g. and ben - jacob , e. , quantifying meta - correlations in financial markets ._ europhysics letters _ , 2012 , * 99 * , 38001. kenett , d.y . ,raddant , m. , lux , t. and ben - jacob , e. , evolvement of uniformity and volatility in the stressed global financial village ._ plos one _ , 2012 , * 7 * , e31144 .kenett , d.y . ,tumminello , m. , madi , a. , gur - gershgoren , g. , mantegna , r. and ben - jacob , e. , dominating clasp of the financial sector revealed by partial correlation analysis of the stock market ._ plos one _ , 2010 , * 5 * , e15032 .kenney , j.f . and keeping , e.s ., _ mathematics of statistics , part 2 _ , 2nd edition , 1962 , d. van nostrand company inc .laloux , l. , cizeau , p. , potters , m. and bouchaud , j. , random matrix theory and financial correlations . _ international journal of theoretical and applied finance _ , 2000 , * 3 * , 391398 .lo , a.w . andmackinlay , a.c ., stock market prices do not follow random walks : evidence from a simple specification test ._ review of financial studies _ , 1988 , * 1 * , 4166 .malkiel , b.g ., the efficient market hypothesis and its critics . _ the journal of economic perspectives _ , 2003 , * 17 * , 5982 .malkiel , b.g . 
andfama , e.f ., efficient capital markets : a review of theory and empirical work*. _ the journal of finance _ , 1970 , * 25 * , 383417 .mantegna , r. , hierarchical structure in financial markets ._ the european physical journal b - condensed matter and complex systems _ , 1999 , * 11 * , 193197 .milo , r. , shen - orr , s. , itzkovitz , s. , kashtan , n. , chklovskii , d. and alon , u. , network motifs : simple building blocks of complex networks ._ science _, 2002 , * 298 * , 824827 .munnix , m. , schafer , r. and guhr , t. , impact of the tick - size on financial returns and correlations ._ physica a : statistical mechanics and its applications _ , 2010 , * 389 * , 48284843 .onnela , j. , chakraborti , a. , kaski , k. and kertesz , j. , dynamic asset trees and black monday ._ physica a : statistical mechanics and its applications _ , 2003 , * 324 * , 247252 .podobnik , b. and stanley , h.e . , detrended cross - correlation analysis : a new method for analyzing two nonstationary time series ._ physical review letters _ , 2008 , * 100*. pollet , j. and wilson , m. , average correlation and stock market returns ._ journal of financial economics _ , 2010 , * 96 * , 364380 .shmilovici , a. , alon - brimer , y. and hauser , s. , using a stochastic complexity measure to check the efficient market hypothesis ._ computational economics _ , 2003 , * 22 * , 273284. song , d. , tumminello , m. , zhou , w. and mantegna , r. , evolution of worldwide stock markets , correlation structure , and correlation - based graphs . _ physical review e _ , 2011 , * 84 * , 026108 .tobin , j. , a general equilibrium approach to monetary theory ._ journal of money , credit and banking _ , 1969 , * 1 * , 1529 .toth , b. and kertesz , j. , the epps effect revisited ._ quantitative finance _ , 2009 , * 9 * , 793802 .tumminello , m. , aste , t. , di matteo , t. and mantegna , r. , a tool for filtering information in complex systems ._ proceedings of the national academy of sciences of the united states of america _, 2005 , * 102 * , 10421 .tumminello , m. , coronnello , c. , lillo , f. and micciche , s. , spanning trees and bootstrap reliability estimation in correlation based networks .j. bifurcat . chaos _ , 2007 , * 17 * , 23192329 .tumminello , m. , di matteo , t. , aste , t. and mantegna , r. , correlation based networks of equity returns sampled at different time horizons . _the european physical journal b - condensed matter and complex systems _, 2007 , * 55 * , 209217 .tumminello , m. , lillo , f. and mantegna , r. , correlation , hierarchies , and networks in financial markets ._ journal of economic behavior & organization _ , 2010 , * 75 * , 4058 .tumminello , m. , miccich , s. , lillo , f. , piilo , j. and mantegna , r. , statistically validated networks in bipartite complex systems . _plos one _ , 2011 , * 6 * , e17994 .tumminello , m. , curme , c. , mantegna , r.n . ,stanley , h.e . and kenett , d.y, how lead - lag correlations affect the intra - day pattern of collective stock dynamics . _manuscript in preparation_.the authors declare no competing financial interests .the one - tailed -value associated with positive correlations represents the probability of observing a correlation between two elements , and , that is larger than or equal to the one observed , , under the null hypothesis that and are uncorrelated , our objective in the paper is to select all the correlations with a -value smaller than a given univariate statistical threshold , e.g. 
, or , corrected for multiple hypothesis testing through the bonferroni correction , that is , divided by the total number of tests , in our case ( where is the number of stocks ) .the question is : _ what is the probability that a correlation with a p - value larger or equal to is ( wrongly ) indicated as a statistically significant one according to the shuffling method?_. operatively , _what is the probability that , over the independent replicates of the data , a correlation between and larger than the observed one has never been observed ? _if we set the -value , , of equal to ( where is a quantity that ranges between and ) the question is : what is the probability that , over independent draws ( bootstrap replicates with our method ) a value of correlation larger than is never obtained ?this probability is where null " indicates the event that a value of correlation larger than has never been obtained over random replicates of data .this probability can be used to calculate the probability that is larger than or equal to , conditioned to the event that a value of correlation larger than has never been obtained over draws .this is done using bayes rule , under the assumption that the marginal distribution of -value is uniform in $ ] , i.e. , the density function is and then , integrating over , where we used the fact that . in our method , , andthe sample size is .therefore it is interesting to note that , as soon as the level of statistical significance is corrected through the bonferroni correction ( ) , where is the univariate level of statistical significance , and the number , , of independent replicates is a multiple of the number of tests , , the probability is approximately independent of the sample size ( ) . with our approximated method to estimate correlation p - values , the probability that we select a positive correlation as a statistically significant one at the confidence level ,while it is actually not significant at that level of statistical confidence , is .however , the probability that a significant correlation according to our method has a -value larger then is already quite small : . in other words ,if we obtain a validated network with 1,000 links , i.e. , 1,000 validated positive correlations according to our approximated method , we expect that , on average , only 7 correlations will have a one - tailed -value larger than .here we compare ( for a sub - set of our data ) the number of significant correlations obtained according to the presented bootstrap approach and the number of significant correlations that we may have obtained relying upon the analytical distribution of sample pair correlations of normally distributed data .if and are uncorrelated variables that follow a normal distribution , then the probability density function of the sample correlation coefficient , , between and is where is the length of the sample and is the euler beta function of parameters and . 
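as an illustration of how such a threshold can be computed in practice , the short sketch below ( not taken from the paper ; scipy is assumed and the function name is ours ) uses the standard equivalence between this null density and a student - t distribution with ( series length minus two ) degrees of freedom , which amounts to solving numerically the threshold condition described next .

```python
# Sketch: one-tailed correlation threshold under the null hypothesis of
# uncorrelated normal returns, via the equivalent Student-t statistic
# t = r * sqrt((T - 2) / (1 - r**2)).  Names and parameter values are ours.
from scipy import stats

def correlation_threshold(T, p_th):
    """Smallest r such that P(sample correlation >= r) <= p_th for series of length T."""
    t_q = stats.t.ppf(1.0 - p_th, df=T - 2)      # upper quantile of the t distribution
    return t_q / (T - 2 + t_q ** 2) ** 0.5       # invert the t <-> r relation

# example: Bonferroni-corrected univariate level 0.01 over N*(N-1) ordered pairs
# (the exact number of tests used in the paper is given in the text above)
N, T = 100, 5000
print(correlation_threshold(T, 0.01 / (N * (N - 1))))
```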
given a level of statistical significance , ( already corrected for multiple hypothesis testing ) , can be used to set a threshold for the correlation value such that the probability is according to this analysis , for a data sample of time series , each one of length , we can say that an observed correlation , , is statistically significant if , where is obtained by ( numerically ) solving the previous non linear equation .table b1 shows the 20022003 dataset and reports the length of data series used to calculate lagged correlations ( column 1 ) at a given time horizon ( column 2 ) , the quantity such that ( column 3 ) , the number of validated positive correlations ( column 4 ) , and the number of validated negative correlations ( column 5 ) .table b2 shows the number of validated positive correlations ( i ) according to the shuffling method ( column 3 ) , ( ii ) according to the analytical method discussed above ( column 4 ) , and ( iii ) common to both methods ( column 5 ) .the results reported in the table show that the bootstrap method we used is more conservative than the analytical method based on the assumption that return time series follow a normal distribution .indeed the number of validated positive correlations according to the bootstrap method is always smaller than the one obtained using the theoretical approach .furthermore , most of the correlations validated according to the bootstrap method are also validated according to the theoretical method .a similar discussion can be held about the validation of negative correlations .we explore how the number of validated links decreases when the time horizon is fixed and the time lag variable increases .a lag is built into the lagged correlation matrix ( [ eqn : corr_matrix ] ) by excluding the last returns of each trading day from matrix and the first returns of each trading day from matrix .thus the results presented in the main text are restricted to .figure [ fig : variable_lag ] plots the number of positive links and negative links validated in the 20112012 data for minutes as increases .although for this the length of the time series in and decrease by only % for each additional lag ( as each 390 minute trading day includes returns ) , we observe a sharp decrease in the number of validated links as increases .the number of validated negative links is an order of magnitude smaller than the number of positive links , so the small peak in negative links at for the fdr network is likely an artifact of noise .we also investigate the effect of the time series length on the numbers of validated links . for minutes , we partition the entire 2011 - 2012 time series into segments of length , as this is the length of the time series for the longest time horizon considered ( minutes ) . for each segmentwe generate the lagged correlation network using surrogate time series , as before .we find that the union of all such bonferroni networks consists of 124 distinct links , 104 of which are positive and 20 of which are negative .although this number is 27% of the number of links validated in the minute network that was not partitioned ( ) , it stands in contrast to the single link that was validated in the minute bonferroni network using the entire time period .the number validated in each partition is shown in figure [ fig : fixed_t ] .we can thus safely conclude that decreasing the time horizon provides information independent of the increased time series length .
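to make the validation procedure of these appendices concrete , the following sketch ( a hypothetical implementation with illustrative parameter values , not the paper s code ) builds a lagged correlation matrix from intraday returns , ignoring day boundaries for simplicity , and keeps a link only if its observed correlation is never exceeded over independently shuffled surrogates ; the number of replicates should be chosen so that the implied p - value lies below the bonferroni threshold .

```python
# Sketch (hypothetical): surrogate-based validation of lagged correlations.
import numpy as np

def lagged_correlation(returns, lag=1):
    """returns: (T, N) array; C[i, j] = corr(r_i(t), r_j(t + lag)).
    Day boundaries are ignored here; the paper excludes overnight returns."""
    A, B = returns[:-lag], returns[lag:]
    A = (A - A.mean(0)) / A.std(0)
    B = (B - B.mean(0)) / B.std(0)
    return A.T @ B / A.shape[0]

def validated_links(returns, lag=1, n_rep=1000, seed=0):
    """Keep (i, j) if no shuffled surrogate ever reaches the observed correlation."""
    rng = np.random.default_rng(seed)
    N = returns.shape[1]
    observed = lagged_correlation(returns, lag)
    never_exceeded = np.ones((N, N), dtype=bool)
    for _ in range(n_rep):                         # n_rep ~ a multiple of the number of tests
        shuffled = rng.permuted(returns, axis=0)   # shuffle each stock's series independently
        never_exceeded &= lagged_correlation(shuffled, lag) < observed
    return never_exceeded & ~np.eye(N, dtype=bool)
```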
according to the leading models in modern finance , the presence of intraday lead - lag relationships between financial assets is negligible in efficient markets . with the advance of technology , however , markets have become more sophisticated . to determine whether this has resulted in an improved market efficiency , we investigate whether statistically significant lagged correlation relationships exist in financial markets . we introduce a numerical method to statistically validate links in correlation - based networks , and employ our method to study lagged correlation networks of equity returns in financial markets . crucially , our statistical validation of lead - lag relationships accounts for multiple hypothesis testing over all stock pairs . in an analysis of intraday transaction data from the periods 20022003 and 20112012 , we find a striking growth in the networks as we increase the frequency with which we sample returns . we compute how the number of validated links and the magnitude of correlations change with increasing sampling frequency , and compare the results between the two data sets . finally , we compare topological properties of the directed correlation - based networks from the two periods using the in - degree and out - degree distributions and an analysis of three - node motifs . our analysis suggests a growth in both the efficiency and instability of financial markets over the past decade .
when a physical phenomenon is measured with a set of instruments , what we register is a sequence of values of some variable which takes values in a space .we will call the _ state space _ and the space of sequences the _ path space_. statistical properties of the phenomenon may be described at three different levels : \(1 ) by the expectation values of the observables ; \(2 ) by the probability measures on the state space ; \(3 ) by the probability measures on path space .one obtains three different characterizations of the phenomenon which represent successively finer levels of description of the statistical properties . borrowing a terminology used in large deviation theory , we will call these three types of description , respectively , _ level 1 , 2 and 3- statistical indicators . _ to obtain expectation values and probability measures we would require infinite samples and a law of large numbers .for any finite sample we obtain finite versions of the expectation values , the probability on state space and the probability on path space which are called the _ mean partial sums , _ the _ empirical measures _ ( or empirical probability distribution functions - pdf s ) and the measures on the _ empirical process_. level-1 and level-2 analysis are the most common ones and their statistical indicators the most commonly quoted when a stochastic process is analyzed .however to the same expectation values for the observables or to the same pdf s , different processes may be associated .therefore full understanding of the process requires the determination of the level-3 indicators .recent advances have been obtained on the identification of processes , especially in connection with the analysis of hydrodynamic turbulence data .in particular it has been clarified that analysis and reconstruction of the process involves two different but related steps .one is the identification of the _ grammar _ of the process , that is , the allowed transitions in the state space or the subspace in path space that corresponds to actual orbits of the system .the second step is the identification of the _ measure _ , which concerns the occurrence frequency of each orbit in typical samples .although largely independent from each other , this two features have a related effect on the constraints they impose on the statistical indicators .identification of grammars and measures ( in particular gibbs measures ) has been dealt with recently , in particular in the context of hydrodynamic turbulence and other dynamical systems .market fluctuations is an interesting stochastic process .some analogies have been found between this process and some of the features of turbulence data . however , when statistical indicators are computed , it turns out that the two processes are different .nevertheless the statistical tools that have been developed for turbulence are mathematical devices which are not process - dependent and they may be applied to any stochastic process process .of course , underlying this approach is the working hypothesis that statistical methods , by themselves , are an appropriate tool to describe and reconstruct the market fluctuation process .this hypothesis underlies the modern view of the _ efficient market _ , namely the idea that the market appears to overreact in some circumstances and underreact in others is pure chance . 
in other words ,the expected value of abnormal returns is zero .contrariwise , if a well defined deterministic pattern of over- and underreaction is ever found then , in addition to chance , a behavioral component must always be included in any description of the market .behavioral trends , however , may turn out not inconsistent with a pure statistical description if the different reaction times of the diverse market components are taken into account , as well as the secondary reactions of the components to each other moves . the emphasis on this paper will be on level-3 analysis and on the reconstruction of the processes .nevertheless we have also dedicated some time to the computation , for market fluctuations , of the level-1 and level-2 statistical indicators used in the past for turbulence data .in particular the behavior of some of these indicators already provides information on the nature of the grammars .this analysis is carried out in sect .3 . sect . 4is dedicated to the search for a gibbs measure and , once the long - memory features of the market processes are exhibited , sect .5 attempts to describe the processes in the framework of chains with complete connections .however , the first step in the analysis of any stochastic process is to inquire about the stationarity of the process and whether typical samples are available .this is the subject of the next section .large samples of high - frequency finance data are now available. however high - frequency data may not be the more appropriate data to begin understanding the stochastic process that underlies the market mechanism .this is because , when comparing minute to monthly variations for example , one is comparing systems with very different compositions , trading agents operating on the minute scale being in general different from those operating in longer time scales .this is evidenced , for example , by the different scaling laws for low and high - frequency data . in market dataone faces a complexity versus statistics trade - off .the high frequency data certainly provides better statistics but it also involves the interplay of many more reaction time scales and market compositions in the trading process . for this reason , to `` purify '' as much as possible our samples , we have decided to concentrate on daily data .the price to be paid for this choice is the fact that , as compared for example with a large scale hydrodynamics experiment , the available amount of one - day market fluctuation data is relatively small .if , in addition , the data is non - stationary , the chances to obtain a reliable statistical analysis would be rather slim .reliable application of statistical mechanics tools to any kind of signal , presupposes that two conditions are fulfilled .first , that the process that generates the data has some kind of underlying stationarity or asymptotic stationarity .second , that the time sequence that is presented to the analysis is a typical sample of the process .the second condition , of course , we can only hope that it is realized and to improve our belief in this condition several different signals of a similar nature should be analyzed ( several different stocks , or currencies or markets ) . 
as to the first condition it requires some preprocessing of the data .we will concentrate in this paper in the daily fluctuation data of industrial stocks and indexes and the objective is to try to extract the features of the market process that acts on them .we look at each stock as an experimental probe that , while reacting to the market pressures , may reveal some of the mechanisms of the market process .market prices are by nature non - stationary entities .they fluctuate , they have general trends that depend on the general state of the economy , on the total amount of capital flowing to the market , on the general acceleration of the economy , on long and medium term political decisions and expectations , etc .nevertheless , our hypothesis is that , if all these global factors are extracted from the data , there are still some invariant features that characterize this peculiar human phenomenon .the type of data that will be analyzed is displayed in fig.1 that shows daily price data for three stocks and the nyse composite index .its non - stationary nature is very apparent .the first step is to extract the general trend .this is done , in a smooth way by a polynomial fit ( fig.2 shows an example , where a 7-degree polynomial is used ) .fig.3 shows the difference .clearly the data is still very far from stationary , because due to the market volume acceleration recent fluctuations carry a much larger weight . therefore the last step is a rescaling of the data , by the average , that is are the signals to be analyzed .. they are shown in fig.4 . toanyone used to examine turbulence data , it looks as if the market signals are now somewhat stable .that does not mean , of course , that they are stationary in the strict sense .however it suggests that in spite of currency adjustments , increased number of players , trade volumes and other macroeconomic indicators , there is something more or less permanent in this human game .detrending and rescaling of the data is important because we will be analyzing price differences over large time intervals . for one - day differences of log - price, the results would be identical to those obtained from the raw data .detrending and rescaling the data , the overall amplitude of price fluctuations becomes reasonably uniform over the time span of the data .however the process is not ( locally ) stationary , as seen in figs.5 and 6 that show the strong variation in time of the volatility ( here defined as the standard deviation of the price fluctuations ) .the two figures on the left show the standard deviation computed on a sliding time window of 10 days .on the right one compares the cumulative standard deviation for the rescaled ( full line ) and the non - rescaled data ( dashed line ) .it is quite apparent that only the rescaled data has the chance to belong to an asymptotically stationary process .once the data is detrended and rescaled there is in fact no evidence for an abnormal increase , in recent times , of the volatility in the underlying process .a direct test of stationarity of the detrended and rescaled data was obtained by coding with a 5-symbols alphabet ( as explained in sect .4 ) . 
then , computing the entropies of multi - symbol words , in the first and the second half of the samples , no significant difference is found .here we concentrate on level-1 and level-2 analysis of the regularized samples discussed in sect.2 , that is , we compute quantities related to averages values and to probability distribution functions ( pdf s ) .the level-3 analysis of the processes will be done in the latter sections .the main variables that are used to construct the statistical indicators are the differences of log - prices sometimes called the return . for each experimental sample ,three main statistical indicators are computed : \(i ) the maximum ( over ) of \(ii ) the moments of the distribution of with meaning the sample average \(iii ) if inside a certain range , the moments satisfy then the scaling exponent is another important statistical indicator . the results obtained from our detrended and rescaled samples are displayed in figs.7 to 9 .fig.7 refers to and fig.8 shows as a function of for different values of ( from top to bottom to ) .the large fluctuations in for large values of and in for large are quite natural given the size of the data samples . in the range to the moments follow an approximate power law of the type of eq.([3.4 ] ) and from the behavior in this region we have extracted the scaling exponent shown in fig.9 .the main conclusions from this analysis of the statistical indicators are : \(a ) is log - concave , that is , is concave as a function of , increasing and probably ( with better statistics ) asymptotically constant for large ; \(b ) is also an increasing log - concave function of , allowing a power law approximation in a limited range ; \(c ) the scaling law is an increasing concave function of ; \(d ) for all samples , computed in the scaling region ( to ) is very close to 0.5 ; \(e ) the scaling properties of the nyse index seem somewhat different from those of the other stocks .however this is only apparent for , where poor statistics effects may already be felt. from this analysis one also obtains precise statements concerning the similarities and differences between hydrodynamic turbulence and the market fluctuation process .properties ( a ) to ( c ) are shared by the turbulence data , although the numerical values of the statistical indicators are quite different . 
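a minimal sketch of the preprocessing and of these level-1 and level-2 indicators is given below ; the polynomial degree , the rescaling window , the use of absolute increments and the fitting range are assumptions made for the illustration , not necessarily the exact choices of the paper .

```python
# Sketch: polynomial detrending, rescaling, and scaling exponents of the moments.
import numpy as np

def detrend_and_rescale(price, degree=7, window=250):
    """Remove a smooth global trend and normalize the local fluctuation amplitude."""
    t = np.arange(len(price), dtype=float)
    trend = np.polynomial.Polynomial.fit(t, price, degree)(t)
    x = price - trend
    amp = np.convolve(np.abs(x), np.ones(window) / window, mode="same") + 1e-12
    return x / amp

def scaling_exponent(signal, p, deltas):
    """Fit sigma_p(Delta) ~ Delta**zeta_p over the chosen range of lags."""
    moments = [np.mean(np.abs(signal[d:] - signal[:-d]) ** p) for d in deltas]
    slope, _ = np.polyfit(np.log(deltas), np.log(moments), 1)
    return slope

# for p = 2, a fitted exponent close to 1 (a Hurst-like exponent of about 0.5)
# signals essentially uncorrelated increments:
# x = detrend_and_rescale(daily_prices); print(scaling_exponent(x, 2, np.arange(1, 33)))
```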
for example , for turbulence data whereas here , showing the essentially uncorrelated nature of the signal for . the correlation function of one - day returns and its absolute value and are shown in fig.10 . one sees that for the returns are uncorrelated , their correlation function remaining at the noise level . in contrast the correlation for the absolute value remains non - negligible for a longer time ( at least up to ) . this means that although the returns are linearly uncorrelated , non - linear functions of the returns remain correlated for longer periods . the behavior of the statistical indicators , and already has some strong implications on the level-3 features of the process , namely on the structure of its grammar . in fact , without restrictions on the allowed transitions and would be independent of and for all . in particular , property ( a ) implies that if the process is a topological markov chain the transitions allowed by the transition matrix must lie inside a strictly convex domain around the diagonal of . fig.11 illustrates the dynamics of one - day returns . it shows that the bulk of the data consists of a central core of small fluctuations with a few large flights away from this core . this structure of the data will have a strong influence on the results obtained in the next section . let us assume a coding of the dynamical system by a finite alphabet . then the space of orbits of the system is the set of infinite sequences , , with the dynamical law being a shift on these symbol sequences . depending on the dynamical law of the coded system , not all sequences will be allowed . the set of allowed sequences in defines the _ grammar _ of the shift . the set of all sequences which coincide on the first symbols is called a _ cylinder _ ( or block ) and is denoted ] obtained from the experimental sample . the problem is that eq.([4.4a ] ) requires the use of blocks of length as large as possible but , for a finite sample , the statistics of such blocks suffers from large uncertainties . for practical purposes the most important class of gibbs measures is the one associated to finite - range potentials , that is , functions on that depend only on the first symbols of a sequence . the importance of finite - range potentials lies in the fact that they may be used to uniformly approximate any hölder continuous potential and , on the other hand , given a limited amount of experimental data , only finite - range potentials may be reliably inferred from experiment . an important property of a potential of finite range $r$ is that , for all $n \geq r$ , \[ \mu([i_{1}\cdots i_{n}]) = \frac{\mu([i_{1}\cdots i_{r}])\,\mu([i_{2}\cdots i_{r+1}])\cdots\mu([i_{n-r+1}\cdots i_{n}])}{\mu([i_{2}\cdots i_{r}])\,\mu([i_{3}\cdots i_{r+1}])\cdots\mu([i_{n-r+1}\cdots i_{n-1}])} \qquad (4.5) \] we will make use of this important relation in our attempt to look for a gibbs measure for the market fluctuation data . on the one hand the relation ( 4.5 ) allows to express the entropy in terms of measures of cylinders of finite length only , namely \[ h = -\sum_{i_{1}\cdots i_{k}} \mu([i_{1}\cdots i_{k}]) \log \frac{\mu([i_{1}\cdots i_{k}])}{\mu([i_{1}\cdots i_{k-1}])} = h_{k}-h_{k-1} \qquad (4.6) \] for all $k \geq r$ , where $h_{k}$ is the entropy associated to cylinders of length $k$ , \[ h_{k} = -\sum_{i_{1}\cdots i_{k}} \mu([i_{1}\cdots i_{k}]) \log \mu([i_{1}\cdots i_{k}]) \qquad (4.6a) \] this provides a criterion to find the range of the potential .
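a short sketch of this block - entropy criterion , assuming the signal has already been coded into a finite alphabet ( function names are ours ) : empirical cylinder probabilities are estimated by counting k - blocks and the increments h_k - h_{k-1} are monitored as k grows .

```python
# Sketch: empirical cylinder probabilities and block-entropy increments.
from collections import Counter
import math

def block_probabilities(symbols, k):
    counts = Counter(tuple(symbols[i:i + k]) for i in range(len(symbols) - k + 1))
    total = sum(counts.values())
    return {block: c / total for block, c in counts.items()}

def block_entropy(symbols, k):
    return -sum(p * math.log(p) for p in block_probabilities(symbols, k).values())

def entropy_increments(symbols, k_max):
    h = [0.0] + [block_entropy(symbols, k) for k in range(1, k_max + 1)]
    return [h[k] - h[k - 1] for k in range(1, k_max + 1)]

# a plateau of the increments at small k would suggest a short-range potential,
# but, as discussed below, poor statistics for long blocks limits the usable k.
```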
using the empirical cylinder probabilities one computes for successively larger .then , the range of the potential is found when tends to a constant value .once the range is found , the potential may be constructed directly from the empirical weights \right ) ] for blocks of successively larger order are found . of course not be arbitrarily large because of statistics .results will not be reliable whenever is larger than the size of the data sample .the statistical reliability may be directly tested either by comparing the number of different occurring blocks and or by observing the fall - off of the empirically computed .first we try to estimate a possible range for the potential using the criterium discussed above .the results are shown in figs.12 and 13 for the analyzed stocks and the nyse index .the plots on the left show the quantity and the plots on the right compare the number of occurring blocks of size in the data with the maximum possible number , .already for the difference seems to stabilize , staying nearly constant until .after it falls off , reflecting the lack of statistics also apparent in the comparison of with in the right hand side plots .these results seem to suggest that the data is described by a very short range potential .notice that for a similar analysis performed on hydrodynamic turbulence data the results are quite different with rising smoothly up to a certain saturation level and then decreasing when one reaches the lack of statistics level . to check whether the short - range potential suggested by this criterium is reliable or whether it simply results from some misleading feature of the data, we have performed the test following from eq.([4.7 ] ) . for successively higher we estimate \right ) = \frac{\widetilde{\mu } \left ( [ i_{1}\cdots i_{k}]\right ) \widetilde{\mu } \left ( [ i_{2}\cdots i_{k+1}]\right ) } { \widetilde{\mu } \left ( [ i_{2}\cdots i_{k}]\right ) } ] and \right ) ] .the standard deviation of the relative positive errors \right ) -\mu _ { e}\left ( [ i_{1}\cdots i_{k+1}]\right ) } { \frac{1}{2}% \left ( \widetilde{\mu } \left ( [ i_{1}\cdots i_{k+1}]\right ) + \mu _ { e}\left ( [ i_{1}\cdots i_{k+1}]\right ) \right ) } \right ) \label{4.11}\ ] ] is computed and the number of blocks for which this error is one and two standard deviations above the mean is computed .the result is plotted in fig.14 where the number of underestimation errors that are one ( o ) and two ( * ) standard deviations away from the mean error are compared with the total number of different observed blocks of each length .one sees that the number of large deviation errors is very large and , identifying the blocks for which these errors occur , one finds out that they all correspond to blocks involving large positive or negative s ( and ) .the conclusion is that a short - range potential would describe the small fluctuations in the data , the large fluctuations being badly described by it .the reason why the empirically found difference seems to saturate for a small is because , as is apparent from fig.11 , the bulk of the data consists mostly of small fluctuations plus a few large flights .the saturation of for small is a reflection of the largely uncorrelated nature of the small fluctuations , whereas other features like the large deviations , persistence of non - linear correlations ( volatility ) , etc .are not captured by a short - range potential .large deviations being misrepresented by an empirically constructed measure is typical of situations where the 
actual measure is non - gibbsian . in our case , however , it may also occur that the measure is gibbsian but with a long - range potential .this would correspond to a sharp rise of at followed by a very slow increase above . in the empirical resultsa small increase may be hidden by the fact that , as the block length increases , the statistics becomes poorer . a large deviation analysis applied to the calculation of , using a standard technique to construct the free energy andthe deviation function from the data , is consistent with this hypothesis . in any case , whether a gibbs measure exists or not , the finite - range potential framework does not seem to be the more convenient way to describe the market fluctuation process . in the next sectionwe will explore another approach specially suited to deal with long - memory processes .processes with long memory have been studied in the past . under certain conditions , that is , when the dependence on the past does not decay too slowly , existence and uniqueness of a well defined process may be proved .a particularly well established framework is the one of chains with complete connections and summable decays ( and references therein ) . a stochastic process with alphabet is said to be a _ chain with complete connections _ ( ccc ) if the following conditions are satisfied 1 . 2 .the limit exists 3 .there is a sequence with , such that for all with for the process is said to be a _ chain with complete connections and summable decay _ ( cccsd ) if conditions 1 . and 2 .are implicitly assumed when we considered the processes ( and pre - processed the data ) to be asymptotically stationary . as for the decays they may be estimated from a typical sample of the process . from the empirical probabilities for where is a block of arbitrary length ,ones computes for each fixed set the maximum and the minimum over , obtaining for however if the statistics for very long blocks is poor , which is in general the case for finite samples , the computation of the maximum from empirical data is not reliable .a better estimate of the decay behavior of the decay rates is obtained from the following quantity , which smooths out the large fluctuations due to poor statistics the average being taken over all sets of size .the results obtained for the data of the detrended fluctuations ( of bmw data ) using blocks of length 5 to 8 ( ) is plotted in fig.15 .similar results are obtained for the other data .the result is compatible with exponential decay , which would probably imply the existence of a gibbs measure ( albeit with a long range potential ) .the data for the maxima of displays large fluctuations and slower decay . however , with the amount of available data it is not reliable for long blocks . in any case , in the present context of ccc - processes ,what the result suggests is the summability of the s .( ) . 
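the exact estimator of the decay rates involves formulas elided above , so the following is only an assumption - laden illustration : for each past of length m , the conditional next - symbol distributions obtained under different longer extensions of that past are compared , and the spread is averaged over the observed pasts ; a fast decrease of the result with m supports summable decays .

```python
# Sketch (illustrative estimator, not the paper's exact formula) of decay rates.
from collections import defaultdict, Counter

def decay_rate(symbols, m, ext=2, alphabet=range(5)):
    """Average, over m-pasts, of the maximal variation of P(next | past, extension)."""
    table = defaultdict(lambda: defaultdict(Counter))
    for i in range(m + ext, len(symbols)):
        past = tuple(symbols[i - m:i])                 # the m symbols conditioned on
        extension = tuple(symbols[i - m - ext:i - m])  # how that past extends further back
        table[past][extension][symbols[i]] += 1
    spreads = []
    for by_ext in table.values():
        dists = []
        for counts in by_ext.values():
            n = sum(counts.values())
            dists.append([counts[a] / n for a in alphabet])
        if len(dists) > 1:
            spreads.append(max(max(col) - min(col) for col in zip(*dists)))
    return sum(spreads) / len(spreads) if spreads else 0.0

# gamma_m for a range of m, to be compared with the behaviour shown in fig.15:
# gamma = [decay_rate(coded_series, m) for m in range(1, 9)]
```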
for practical purposes the most important consequence of this fact is that a ccc - process with summable decays is the limit of its markov approximations of order . the nature of this approximation should however be clearly understood . the between two processes refers not to the processes themselves but to the process that implements the coupling of the two processes . a _ coupling _ between two processes and over the alphabet is another process defined over such that the marginal probabilities of and coincide with those of and . then the between and is . for some types of coupling the two processes and are known to coincide after a certain random time . however , for the original processes and , if the tends to zero it does not mean that the processes will coincide after a certain time . it only means that it will occur for some other processes with the same marginal probabilities . this fact has an important bearing on the correct interpretation of the `` perfect simulation '' schemes proposed for ccc s . perfect simulation is always understood in the sense and it does not mean perfect prediction . it means simply that a process is constructed with the same conditional probabilities of the original process , whenever the conditional probabilities of the original process are known . in practice not all conditional probabilities involving infinite pasts are needed , because going back to a regeneration time , only a finite number of back steps are required . several simulation schemes have been proposed for ccc s with summable decays . the most important one for the applications , when the conditional probabilities are inferred from experiment , is the sequence of canonical markov approximations of finite order ( ) . a of a process is a markov chain of order with conditional probabilities such that for a ccc with summable decays , being a constant . actually the property of the markov approximation that is essential for the approximation result ( [ 5.10 ] ) is , meaning that for markov approximation schemes other than the canonical one , eq.([5.10 ] ) holds provided ( [ 5.11 ] ) is satisfied . in fact , when the conditional probabilities are inferred from limited experimental data a different markov approximation is more convenient . the following approximation scheme is proposed for the market fluctuation data , which we call the approximation : ( i ) empirical transition probabilities are inferred from the occurrence probability of blocks of order , up to a certain order . of course , only probabilities that correspond to blocks that appear in the data will be available and , especially for large , many will be missing . ( ii ) for the simulation , with an approximation of order , one looks at the current block of order and uses the probability to infer the next state . if that block has not appeared in the data that was used to construct the empirical probabilities , then one looks at the sized block and uses the order empirical probabilities . if necessary the process is repeated until an available empirical probability is found . this is the reason why this is called the approximation . this approximation scheme has been applied to the market fluctuation data and , for each , the successor of each block is compared with a prediction obtained by throwing a random number with the probabilities . fig.16 shows some of the results .
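a compact sketch of this fallback scheme ( hypothetical function names ; illustrative orders ) : empirical block statistics are collected up to a maximal order , and the predictor backs off to shorter blocks whenever the current context was never observed .

```python
# Sketch: fallback (variable-order) Markov prediction from empirical block counts.
from collections import defaultdict, Counter
import random

def fit_block_tables(symbols, k_max):
    tables = {k: defaultdict(Counter) for k in range(1, k_max + 1)}
    for k in range(1, k_max + 1):
        for i in range(k, len(symbols)):
            tables[k][tuple(symbols[i - k:i])][symbols[i]] += 1
    return tables

def predict_next(tables, history, k_max, rng=random):
    for k in range(k_max, 0, -1):                  # back off to shorter contexts if needed
        counts = tables[k].get(tuple(history[-k:]))
        if counts:
            options, weights = zip(*counts.items())
            return rng.choices(options, weights=weights)[0]   # draw with empirical probabilities
    return None

# usage: fit on the first half of a coded series, predict the second half symbol by
# symbol, and compare the averaged squared error with that of a uniform random choice.
```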
in all casesthe quantity that is plotted is the averaged squared error the average being taken over the samples and 100 different runs .the two upper plots and the left lower plot show the results obtained ( for each approximation order ) when half of the data for each company is used to predict the other half .the points labelled ( ) correspond to the past used to predict the future and those labelled ( ) to the future used to predict the past .finally the right lower plot shows the results obtained when is chosen at random ( for the 3 companies , ibm , bayer and bmw ) .the main conclusions that may be extracted from these results are : * the average prediction obtained from using the empirical probabilities is better than a random choice .* however , the main improvement is a result of a correct accounting of the two - symbol probabilities ( ) . * after the improvement due to the use of the lowest order blocks a small ( but consistent ) improvement is found by using the past information up or .no significant improvement is obtained by using higher order approximations .this is consistent with the poorer statistics of large blocks .actually for each individual simulation the result of using leads to much larger fluctuations .the main conclusion is that although the bulk of the data is represented by a short - memory process , there is nevertheless evidence for a small long - memory component that is captured by the higher - order markov approximations .depending on the amount of data that is available to infer the empirical conditional probabilities there is a maximum that should be used for the simulation process .this value may be estimated from the quantity plotted in figs 12 and 13 . finally ,although a mild gain is obtained from using probabilities rather than one - symbol probabilities , it should be remembered that perfect simulation in the sense is not perfect prediction for the actual process .this is a point to keep in mind when attempting to develop any trading strategies based on the empirical block probabilities .we have also explored the use of the empirical probabilities of one company to predict the behavior of the others . in all casesthe improvement coming from the one - symbol probabilities ( as compared to random choice ) is obtained .this means that the one - symbol probabilities are similar in all companies .however for the long - memory component the behavior is very much company - dependent .for example there seems to be no correlation of this component between ibm and the other two companies , with the prediction being actually worse when the empirical probabilities for longer blocks are used .the same happens also when the empirical probabilities of bmw and bayer are used to predict ibm .however there is some statistical correlation between the long - memory components ( and some mild prediction improvement ) between bmw and bayer .this suggests that the statistical short - memory component of the market process might be similar for many different stocks , whether the long - memory component might be different from market to market and to divide the stocks into classes .a similar conclusion follows from the stocks taxonomy obtained by mantegna , although that work does not distinguish between the short- and long - memory components of the process .the bulk of the market fluctuation process seems to be a short - memory process . 
in addition it has a small long - memory component , which however is very important for practical purposes because it is associated with the large fluctuations of the returns . 2 . the existence of the long - memory component suggests the _ chains with complete connections and summable decays _ as the appropriate framework to describe these processes . although the decays may be exponentially converging , the lack of accurate data concerning long blocks prevents an accurate description by a finite - range gibbs potential . 3 . the sequence of empirically based approximations discussed in sect.5 seems the most unbiased simulation of the process . eventual convergence in the sense is expected to hold because the market fluctuation process seems to fit in the framework of chains with complete connections and summable decays . 4 . except for cases where one is sure of the existence of a finite - range potential , markov approximations must always be used if only finite data is available . this is true whether a gibbs measure exists or not . what the chains with complete connections framework provides , though , is a rationale for the convergence of the markov approximations and a criterion to estimate , through the decays , how good this approximation is . notice , however , the trade - off between higher - order approximations and lack of statistics , which leads to an optimal block length for the empirical probabilities to be used in the simulations . 5 . as work for the future we point out that it would be interesting to analyze high - frequency market data in this framework . here , however , attention should be paid to the possibly multi - scale and multi - component nature of the processes .
the statistical properties of a stochastic process may be described ( 1)by the expectation values of the observables , ( 2)by the probability distribution functions or ( 3)by probability measures on path space . here an analysis of level ( 3 ) is carried out for market fluctuation processes . gibbs measures and chains with complete connections are considered . some other topics are also discussed , in particular the asymptotic stationarity of the processes and the behavior of statistical indicators of level ( 1 ) and ( 2 ) . we end up with some remarks concerning the nature of the market fluctuation process . * keywords * : market fluctuations , gibbs measures , chains with complete connections
this work was inspired mainly by the recent papers on the computational complexity of video games by foriek and cormode , along with the excellent surveys on related topics by kendall et al . and demaine et al . , and may be regarded as their continuation on the same line of research .our purpose is to single out certain recurring features or mechanics in a video game that enable general reduction schemes from known hard problems to the games we are considering . to this end , in section [ s2 ]we produce several _ metatheorems _ that will be applied in section [ s3 ] to a wealth of famous commercial video games , in order to automatically establish their hardness with respect to certain computational complexity classes ( with a couple of exceptions ) .because most recent commercial games incorporate turing - equivalent scripting languages that easily allow the design of undecidable puzzles as part of the gameplay , we will focus primarily on older , `` scriptless '' games .our selection includes games published between 1980 and 1998 , presented in alphabetical order for better reference .not every game will be rigorously explained in all its aspects and details , but at least the game elements that are relevant to our proofs will be introduced , so that any casual player will promptly recognize them and readily understand our constructions . it is clear that , in order to meaningfully apply the standard computational complexity tools , a suitable _ generalization _ of each game must be considered .since classic video games typically include only a finite set of levels , whose complexity is merely a constant , a way must be devised to automatically generate a class of infinitely many new levels of increasing size .deciding which game elements are `` scalable '' and which are not is ultimately a matter of taste and common sense : when designing a generalization of a well - known game , one should remain as faithful as possible to the feeling and mechanics of the original version .for example , in a typical platform game , the number of platforms and the number of hazards in a level may increase as the level size grows . 
in contrast , the maximum height of a jump and the enemy ai should remain unchanged , as they are more inherent aspects of the game .it is generally acknowledged that single - player games that are humanly `` interesting '' to play are complete either for * np*or for * pspace*(for an introduction to general computational complexity theoretic concepts and classes , refer to ) .* np*-complete games feature levels whose solution demands some degree of ingenuity , but such levels are usually solved within a polynomial number of `` manipulations '' , and the challenge is merely to find them .in contrast , the additional complexity of a * pspace*-complete game seems to reside in the presence of levels whose solution requires an exponential number of manipulations , and this may be perceived as a nuisance by the player , as it makes for tediously long playing sessions .several open problems remain for further research : whenever only the hardness of a game is proved with respect to some complexity class , the obviously implied question is whether the game is also complete for that class .moreover , different variations of each game may be studied , obtained for instance by further restricting the set of game elements used in our hardness proofs .indeed , the computational complexity of a game is expected to dramatically drop if some `` critical '' elements are removed from its levels .it is interesting to study the `` complexity spectrum '' of a game , as a function of the game parameters that we set .this has been done to some extent for the game of lemmings , by different authors , as partly documented in section [ s3 ] .a conference version of this paper has appeared at fun 2012 .more often than not , games allow the player to control an _ avatar _ , either directly or indirectly . in some circumstances, an avatar may be identified within the game only through some sort of artifice or abstraction on the game mechanics . throughout section [ s2 ], we will stipulate that the player s actions involve controlling an avatar , and that the elements of the game may be freely arranged in a plane lattice , or a higher dimensional space . at the very least ,the set of game elements includes _ walls _ that can not be traversed by the avatar , and can be arranged to form rooms , paths , etc . 
in general, a problem instance will be a `` level '' of a given game .the description of a level includes the position of every relevant game element , such as walls , items , the avatar s starting location , etc .the question is always whether or not a given level can be `` solved '' under certain conditions , such as losing no lives , etc .the exact definition of `` solvability '' is highly game - dependent , and can range from reaching an exit location , to collecting some items , to killing some enemies , to surviving for a certain time , etc .all the _ metatheorems _ that follow yield hardness results under the assumption that certain game elements are present in a given game .these are not to be intended as `` black boxes '' , as regular _ theorems _ would be , but rather as `` frameworks '' .indeed , we will not always be able to apply the statement of a metatheorem to a particular game without keeping in mind the actual proof of the metatheorem , and the underlying construction enabling the reduction .as it turns out , in order to apply a metatheorem in a non - trivial way , we may need to use certain game elements having very complex behaviors , which serve our purposes only when arranged in some special ways . in order to make sure that our constructions workas intended , we may have to access the full proof of the metatheorem , and exploit some of its features at a `` lower level '' . to avoid all this, we would have to strengthen the metatheorems statements by adding so many details about the actual reduction constructions that most of their appeal would be lost . herewe opt for shorter metatheorem statements , but as a drawback we will have to refer to their proofs from time to time , when invoking them .a game is said to exhibit the _ location traversal _feature if the level designer can somehow force the player s avatar to visit several specific game locations , arbitrarily connected together , in order to beat the level .although every location must be visited at least once , the avatar may visit them multiple times and in any order .however , the first location is usually fixed ( starting location ) , and sometimes also the last one is ( exit location ) .an example of location traversal is the _ collecting items _ feature discussed in : a certain number of items are scattered across different locations , and the avatar s task is to collect them all .the _ single - use paths _ feature is the existence of configurations of game elements that act as paths connecting two locations , which can be traversed by the avatar at most once .a typical example are _ breakable tiles _ , which disappear as soon as the avatar walks on them .[ m1 ] any game exhibiting both location traversal ( with or without a starting location or an exit location ) and single - use paths is * np*-hard .we give a straightforward reduction from hamiltonian cycle , which is * np*-complete even for undirected 3-regular planar graphs .construct a plane embedding of a given 3-regular graph ( perhaps an orthogonal embedding , if needed ) with an additional vertex dangling from a distinguished vertex .then we convert such embedding into a valid level , by implementing each vertex as a location that must be visited by the avatar , and each edge as a single - use path .the starting location is placed in and , if an exit location is required , it is placed in . 
clearly , the last vertex the avatar must visit is , because it has only one incident edge .moreover , each vertex except can be visited at most once : recall that is 3-regular , hence reaching a given vertex for the first time implies the consumption of one of its incident edges .then , leaving consumes another incident edge , and reaching it a second time consumes the third incident edge . at this point, there is no way for the avatar to leave , and therefore no way to reach the last vertex . as for the startingvertex , the incident edges are initially four , and one is immediately consumed . the second time the avatar reaches , it must necessarily proceed to , for otherwise would become forever unreachable .it follows that the level is solvable if and only if the player can find a walk starting from , touching every vertex ( except ) exactly once , reaching again , and then terminating in .this is possible if and only if contains a hamiltonian cycle .it is easy to see that * np*-hardness is the best we can achieve given the hypotheses of metatheorem [ m1 ] .there exists an * np*-complete game exhibiting location traversal and single - use paths .consider the game played on an undirected graph , in which some distinguished edges implement single - use paths , and some distinguished vertices must be visited by the avatar in order to win .then , by metatheorem [ m1 ] is * np*-hard , while a certificate for is an injective sequence of distinguished vertices and distinguished edges .also notice that both assumptions of metatheorem [ m1 ] are required : removing either of them from the above game reduces it to determining if two vertices in a graph are connected , which is solvable in logarithmic time ( see ) . as section[ s3 ] testifies , metatheorem [ m1 ] has a wide range of applications , and it tends to yield game levels that are more `` playable '' than those resulting from the somewhat analogous ( * ? ? ?* metatheorem 2 ) , which rely on a tight time limit to traverse a grid graph .additionally , ( * ? ? ?* metatheorem 2 ) is prone to design complications in `` anisotropic '' games , in which the avatar moves at different speeds in different directions , for instance due to gravity effects .we consider now another type of game mechanics : _ tokens _ and _ toll roads_. tokens are items that can be carried by the avatar , and _ toll roads _are special paths connecting two locations .whenever the avatar traverses a toll road , it must `` spend '' a token that it is carrying .if the avatar is carrying no token , then it can not traverse a toll road .we distinguish between two types of tokens : _ collectible _ tokens , which may be placed by the game designer at specific locations and can be picked up by the avatar , and _ cumulative _ tokens , any number of which can be carried around by the avatar at the same time .section [ s3 ] will offer some examples of different types of tokens : for instance , pac - man features _ power pills _ , which may be regarded as collectible tokens that are not cumulative .[ m1b ] a game is * np*-hard if either of the following holds :1 . the game features _collectible _ tokens , toll roads , and location traversal .the game features _ cumulative _ tokens , toll roads , and location traversal .3 . the game features _collectible cumulative _tokens , toll roads , and the avatar has to reach an exit location .once again , we give a reduction from hamiltonian cyclefor all three parts of the metatheorem , varying it slightly depending on our hypotheses . 
for part ( a ) ,given an undirected 3-regular planar graph , we construct an embedding as described in the proof of metatheorem [ m1 ] , and we implement each vertex as a location that has to be traversed by the avatar . each edgeis then implemented as a toll road , and one collectible token is placed in each vertex , except for the final vertex , where we place no token , and the starting vertex , where we place two tokens .notice that , if has vertices , there are exactly locations that the avatar must visit , and tokens in the level .therefore , any feasible traversal of the level starts from and has length at most .if the traversal must reach all locations , then at most one location may be visited twice . moreover , must be visited at least a second time , because it is the only neighbor of . as a consequence , a valid traversal of the level must start from ,visit every other location except exactly once , return in , and end in .it follows that , if has no hamiltonian cycle , then the level is unsolvable .conversely , let us assume that has a hamiltonian cycle , and let us show that the level is solvable .the avatar can traverse , starting from and ending in again , along a hamiltonian cycle , and finally it can reach and solve the level .this traversal is valid even if tokens are not cumulative : upon reaching a new location , the avatar collects one new token and immediately spends it in a toll road . likewise ,when is reached for the second time , the second token is collected , and it is immediately spent to reach .the construction for part ( b ) is the same , but instead of scattering tokens throughout the level ( where is the number of vertices of ) , we assume that the avatar already carries tokens as the game starts. then a similar reasoning applies : exactly one location may be visited twice , which must be because it is the starting location and the only neighbor of .therefore , must be the last location to be visited , and the level is solvable if and only if has a hamiltonian cycle .for part ( c ) , we further modify the previous proof as follows : we construct the same embedding of , and we place two tokens in every location , except in , where we place no token. then we implement each edge as a toll road , except the edge between and , which is implemented as a sequence of toll roads .the starting location is again , and the exit location is . the avatar carries no token as the game starts .there are tokens in the level , and of them must be used to travel from to , so at most more tokens may be spent in other toll roads . every time a toll road is traversed ,one token is gained if a new location is reached ( one token is spent and two are found ) , and one token is lost if an already visited location is reached .it follows that the player must find a walk in that starts and ends in , traverses at most edges and visits different vertices .this is equivalent to finding a hamiltonian cycle in .( observe that the location traversal feature has been obtained here as a by - product of our construction , without being an explicit requirement . 
) * np*-hardness is the best complexity achievable under the hypotheses of metatheorem [ m1b ] , in each of the three cases .there exists an * np*-complete game featuring collectible cumulative tokens and toll roads , in which the avatar has to reach an exit location .consider the game played on a graph in which some distinguished edges implement toll roads , each vertex may contain some collectible cumulative tokens , and one distinguished vertex is the exit location . indeed , a certificate for this game is simply an injective sequence of toll roads , because we may assume that the avatar always collects all the tokens it can reach without traversing toll roads , and therefore no toll read ever has to be traversed twice . a _ door _is a game element that can be open or closed , and may be traversed by the avatar if and only if it is open .key _ is a type of token that can be used by the avatar to open a closed door , upon contact .any key can open any door , but a key is `` consumed '' as soon as it is used .hence , the key - door paradigm is somewhat similar to the token - toll road one , with the difference that a door opened by a key remains open and can be traversed several times afterwards without consuming new keys .we distinguish again between _collectible _ keys , which can be found by the avatar and picked up , and _ cumulative _ keys , any number of which can be carried at the same time .many examples of keys are found in platform games and adventure games . in section [ s3 ], we will show how the lemmings game features cumulative keys that are not collectible , although this will be established through non - trivial abstractions on the game mechanics . to state the next result , which is an analogous of metatheorem [ m1b ] for the key - door paradigm , we further need to introduce the concept of _ one - way path _ , which is a path that can be traversed by the avatar in one specific direction only .[ m1c ] a game is * np*-hard if it contains doors and one - way paths , and either of the following holds : 1 .the game features _collectible _ keys and location traversal .the game features _ cumulative _ keys and location traversal .3 . the game features _collectible cumulative _ keys and the avatar has to reach an exit location .we reduce from hamiltonian cycle , which is * np*-complete even for directed planar graphs whose vertices have one incoming edge and two outgoing edges , or two incoming edges and one outgoing edge .all three parts of our proof are based on the same construction : given one such directed graph on vertices , we pick a vertex with indegree two and outdegree one ( which exists ) , and we attach to it a new outgoing edge , ending in a new vertex . then we construct a plane embedding of this graph ( maybe an orthogonal embedding ) , substituting each vertex with a game location , and each directed edge with a one - way path . will be the avatar s starting location and , if an exit location is required , it is placed in ( must be the final location anyway , because it has no outgoing edges ) .moreover , we place a closed door in each one - way path , except in the path between and . to prove part ( a ) , place one key in each location , except , and assume that the avatar must traverse every location (the last of which would be ) .after collecting the first key in , every time a new location ( except ) is reached , one key is used to open a door , and one new key is found . 
on the other hand ,as soon as an already visited location is reached , the only key in the avatar s possession is lost and no key is found , so afterwards the avatar is bound to traverse only paths with no door ( hence ) or with an already opened door . as a consequence , the level is solved if and only if and every location except has been visited , which is possible if and only if has a hamiltonian cycle . for part( b ) , we put no keys in the level , but we assume that the avatar already carries keys as the game starts .we assumed that each location must be visited , and therefore at least two of its incident paths doors must be opened ( unless the location is ) . hence , doors must be opened in total , and all the keys must be used . on the other hand ,if all the three incident paths doors of a location are opened , and all the locations are visited , a straightforward double counting argument shows that at least keys have been used , which is unfeasible .therefore , the avatar must follow a hamiltonian cycle of starting and ending in , and then visit . finally , for part ( c ) , we place two keys in each location , except in , and we place doors in the path between and .no key is carried by the avatar as the game starts .hence , the avatar must visit some locations to collect keys , return to with at least keys , and reach the exit in .the same double counting argument used for part ( b ) reveals that , if distinct locations are visited before reaching , then at at least doors must be opened .in particular , exactly doors are opened if and only if a cycle of is followed . because visiting distinct locations allows to collect exactly keys , the only way to return in with keys is to follow a cycle of length , i.e. , a hamiltonian cycle .there exists an * np*-complete game featuring doors , collectible cumulative keys , one - way paths , location traversal , in which the avatar has to reach an exit location .consider the game played on a graph in which some edges may implement doors or one - way paths , some distinguished vertices must be traversed by the avatar , some vertices contain collectible cumulative keys , and one vertex is the exit location .a certificate for this game is an injective sequence of distinguished vertices , vertices containing keys , and edges containg doors .indeed , all the keys contained in a vertex can be taken as soon as the vertex is reached , and any open door becomes a regular path and does not have to be opened a second time .hence the game is in * np * , and by metatheorem [ m1c ] it is * np*-complete .there are other ways to modify a door s status , such as pushing a _ pressure plate_. a pressure plate is a floor button that is operated whenever the avatar steps on it , and its effect may be either the opening or the closure of a specific door .each pressure plate is connected to just one door , and each door may be controlled by at most two pressure plates ( one opens it , one closes it ) .of course , all our hardness results will hold in the more general scenario in which any number of doors is controlled by the same pressure plate , or any number of pressure plates control the same door . in ( * ? ? ?* metatheorem 3 ) , foriek shows ( with a different terminology ) that a game is * np*-hard if the avatar has to reach an exit location , and the game elements include one - way paths , doors and pressure plates ( or 1-buttons , see the next subsection ) that can open doors . 
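To make the construction behind metatheorem [ m1c ] a little more tangible, the following sketch builds the level of part (b) from a directed graph and checks whether a proposed walk wins it. All names here (`build_level`, `v0`, `w`) are illustrative choices of ours, not notation from the proof, and the key budget equal to the number of original vertices is only what the counting argument suggests, since the exact constant is not spelled out above.

```python
# A schematic sketch of the level built in the proof of metatheorem [m1c](b).
# Names and the exact key budget are illustrative assumptions, not taken from the paper.

def build_level(graph, v0):
    """graph: dict mapping each vertex to the list of its out-neighbours
    (a directed planar graph with the in/out-degree constraints of the reduction).
    v0: a vertex with indegree two and outdegree one; a fresh exit vertex 'w'
    is attached to it.  Every one-way path except (v0, 'w') carries a closed door."""
    level = {u: list(vs) for u, vs in graph.items()}
    level[v0] = level[v0] + ['w']
    level['w'] = []
    doors = {(u, v) for u, vs in level.items() for v in vs if (u, v) != (v0, 'w')}
    n = len(graph)    # number of original vertices
    keys = n          # part (b): assumed key budget carried at the start
    return level, doors, keys

def is_winning_walk(level, doors, keys, walk, start, exit_location):
    """Check that 'walk' (a list of vertices) obeys one-way paths and doors,
    visits every location (location traversal) and ends in the exit."""
    opened = set()
    if walk[0] != start:
        return False
    for u, v in zip(walk, walk[1:]):
        if v not in level[u]:                       # must follow a one-way path
            return False
        if (u, v) in doors and (u, v) not in opened:
            if keys == 0:                           # a closed door needs a key
                return False
            keys -= 1
            opened.add((u, v))                      # an opened door stays open
    return set(walk) == set(level) and walk[-1] == exit_location

# Example use (hypothetical graph with the required degrees):
#   level, doors, keys = build_level(graph, v0)
#   is_winning_walk(level, doors, keys, walk, start=v0, exit_location='w')
```

With that budget, every visited location needs two of its incident doors opened, so the keys run out exactly when the walk is a cycle through all original vertices followed by the doorless step into the exit; this is the equivalence with hamiltonian cycle that the proof exploits.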
in the following metatheorm, we further explore the capabilities of pressure plates .we say that a game allows _ crossovers _ if there is a way to prevent the avatar from switching between two crossing paths . some 2-dimensional games natively implement crossovers through bridges or tunnels .in some other games , crossovers can be simulated through more complicated gadgets .[ m2 ] if a game features doors and pressure plates , and the avatar has to reach an exit location in order to win , then : 1 . even if no door can be closed by a pressure plate , and if crossovers are allowed , then the game is -hard .even if no two pressure plates control the same door , the game is * np*-hard .if each door may be controlled by two pressure plates , then the game is * pspace*-hard . to prove part ( a ), we give a - reduction from monotone circuit value . or and andgates are implemented as in figures [ f1a ] and [ f1b ] , the starting location is connected to all true input literals , and the exit is located on the output .it is easy to check that the output of an or gate can be reached by the avatar if and only if at least one of its two input branches is .similarly , both input branches of an and gate must be reached by the avatar in order for doors and to be opened and allow access to the output .the doors and in the and gate prevent the avatar from walking from one input branch to the other through the center of the gate , in case only one input branch is reachable . clearly , the exit is eventually reachable if and only if the output of the circuit is true . for part( b ) , observe that we can implement single - use paths as shown in figure [ f1c ] : in order to traverse the gadget , the avatar must walk on both pressure plates , thus permanently closing both doors .since we can also enforce location traversal by blocking the exit with several closed doors , which may be opened via as many pressure plates positioned in every location , we may indeed invoke metatheorem [ m1 ] . finally , to prove ( c ) , we implement a reduction framework from true quantified boolean formula , sketched in figure [ f2 ] .a given fully quantified boolean formula , where is in 3-cnf , is translated into a row of _ quantifier gadgets _ , followed by a row of _ clause gadgets _ , connected by several paths . traversing a quantifier gadget at any time sets the truth value of the corresponding boolean variable . on the other hand, each clause gadget can be traversed if and only if the corresponding clause of is satisfied by the current variable assignments .whenever traversing an existential quantifier gadget , the player can choose the truth value of the corresponding variable .on the other hand , the first time a universal quantifier gadget is traversed , the corresponding variable is set to true . 
when all variables are set , the player attempts to traverse the clause gadgets .if the player succeeds , he proceeds to the `` lower parts '' of the quantifier gadgets , where he is rerouted to the last universal quantifier gadget in the sequence .the corresponding variable is then set to false , and is `` evaluated '' again by making the player walk through all the clause gadgets .the process continues , forcing the player to `` backtrack '' several times , setting all possible combinations of truth values for the universally quantified variables , and choosing appropriate values for the existentially quantified variables in the attempt to satisfy .finally , when all the necessary variable assignments have been tested and keeps being satisfied , i.e. , if the overall quantified boolean formula is true , the exit becomes accessible , and the player may finish the level .conversely , if the quantified boolean formula is false , there is no way for the player to operate doors in order to reach the exit .next we show how to implement all the components of our framework using just doors and pressure plates .clause gadgets are straightforwardly implemented , as shown in figure [ f3 ] .there is a door for each literal in the clause , and the avatar may traverse the clause if and only if at least one of the doors is open .the existential quantifier gadget for variable is illustrated in figure [ f4 ] . , , etc .( respectively , , , etc . )denote the positive ( respectively , negative ) occurrences of in the clauses of .( respectively , ) denotes the -th occurrence of literal ( respectively , ) in . ]when traversing the upper part of the gadget from left to right , the player must choose one of the two paths , thus setting the truth value of to either true or false .this is done by appropriately opening or closing all the doors corresponding to occurrences of in .the doors labeled and prevent leakage between the two different paths of the existential quantifier gadget , enforcing mutual exclusion . finally , the lower part of the gadget is traversed from right to left when the player backtracks , and it is simply a straight path . a universal quantifier gadget for variable is shown in figure [ f5 ] .when the avatar enters the gadget from the top left , door gets closed and variable is set to true .then the avatar must exit to the top right , because door can not be traversed from right to left .when backtracking the first time , the avatar enters from the bottom right and , because door is still closed , it must take the upper path , thus setting variable to false .incidentally , door gets opened and door gets closed , thus preventing leakage to the top left entrance , and forcing the avatar to exit to the top right again . when backtracking the second time ( i.e. , when both truth values of have been tested ) , door is open and the avatar may finally exit to the bottom left . when done backtracking , the avatar will eventually enter this gadget again from the top left , setting to true again , etc .we note that , as a result of our constructions , each door is operated by exactly two pressure plates .for instance , the door labeled , located in some clause gadget , is opened and closed by exactly two pressure plates , both located in the quantifier gadget corresponding to variable .observe that our metatheorem [ m2].c is an improvement on ( * ? ? 
?* metatheorem 4 ) , in that the _ long fall _ feature ( and thus the concept of gravity ) is not used , and it works with a more restrictive model of doors : in , arbitrarily many pressure plates can act on the same door , while we allow just two . as with previous metatheorems, we can prove that metatheorem [ m2 ] s statement is the best possible given its hypotheses .[ cor2 ] there exist games , , , featuring doors and pressure plates , in which the avatar has to reach an exit location , such that : 1 . in ,pressure plates can only open doors , crossovers are allowed , and is -complete .2 . in ,no two pressure plates control the same door , and is * np*-complete .3 . in , each door may be controlled by two pressure plates , and is * pspace*-complete .we consider games played on graphs whose vertices may contain pressure plates , and whose edges may contain doors. then , belongs to because the set of vertices accessible to the avatar can only increase whenever a pressure plate is activated .as new pressure plates become accessible , the avatar immediately activates them , opening new doors , until either the exit becomes accessible , or no new pressure plates are discovered . is in * np*because no pressure plate can undo the effects of another pressure plate .therefore , we may pretend that a pressure plate disappears as soon as it is activated .it follows that a certificate for this game is simply an injective sequence of pressure plates . finally , to see that is in * pspace* * npspace*(cf .savitch s theorem ) , it is sufficient to observe that the a level s _ state _ can be stored in linear space , allocating one bit for each door and storing the position of the avatar in the graph. then , a certificate is just a walk in the graph .metatheorem [ m2].c has a wide range of straightforward applications : most first - person shooters ( with the notable exception of wolfenstein 3d ) , adventure games , and dungeon crawls are all * pspace*-hard .this includes rpgs such as dungeon master , the eye of the beholder , and lands of lore , which natively implement doors operated by pressure plates .similar mechanisms can be implemented also in the first - person shooter doom and its sequels , via walkover lines and sector tags . in simple terms ,whenever the player - controlled avatar crosses a certain line on the ground , a `` block '' somewhere in the level is moved to a predefined location , thus simulating the opening or closure of a door .all the point - and - click adventure games based on lucasarts scumm engine , such as maniac mansion and the secret of monkey island , as well as most sierra s adventure games , easily fall in this category , too . a _ button _is similar to a pressure plate , except that the player may choose whether to push it or not , whenever his avatar encounters one .games with buttons are in general not harder than games with pressure plates , because a pressure plate can trivially simulate a button , as figure [ f6a ] shows . however , since the converse statement is not as clear , we will allow a single button to act on several doors , in contrast with pressure plates . a button acting on doorssimultaneously is called a _-button_. we obtain an analogous of metatheorem [ m2 ] for buttons . [ m3 ] if a game features doors and -buttons , and the avatar has to reach an exit location in order to win , then : 1 . if and crossovers are allowed , then the game is -hard .if , then the game is * np*-hard .if , then the game is * pspace*-hard . 
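Referring back to the membership argument for the first game in the corollary above, where pressure plates can only open doors, the set of locations accessible to the avatar grows monotonically, so solvability reduces to a fixpoint computation. A minimal sketch follows; the graph encoding and the assumption that each plate opens exactly one door are illustrative simplifications of ours.

```python
from collections import deque

def solvable(adj, door_on_edge, plate_opens, start, exit_location):
    """Monotone fixpoint computation for a game whose pressure plates only open doors.
    adj:          dict vertex -> list of neighbours (an undirected corridor is
                  represented by both directed arcs)
    door_on_edge: dict (u, v) -> door id, for arcs blocked by a door
    plate_opens:  dict vertex -> door id opened when the avatar steps there
    """
    open_doors = set()
    while True:
        # locations reachable through doorless or already opened arcs
        reached, queue = {start}, deque([start])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                door = door_on_edge.get((u, v))
                if (door is None or door in open_doors) and v not in reached:
                    reached.add(v)
                    queue.append(v)
        if exit_location in reached:
            return True
        # activate every reachable pressure plate; the reachable set can only grow
        newly_opened = {plate_opens[v] for v in reached if v in plate_opens} - open_doors
        if not newly_opened:
            return False
        open_doors |= newly_opened
```

Each round either opens at least one new door or terminates, so the loop runs at most a linear number of times; this is precisely why the absence of closing plates keeps the game polynomial-time solvable.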
for part ( a ) , we mirror the proof of metatheorem [ m2].a , by using -buttons as opposed to pressure plates .indeed , pressing a button to open a door is never a `` wrong '' move , if the goal is to reach the exit location .for part ( b ) , we implement single - use paths as in figure [ f6b ] : in order to open door , one of the two buttons has to be pressed , thus permanently closing door or door . then we proceed as in the proof of metatheorem [ m2].b .finally , for part ( c ) , we use the gadget in figure [ f7a ] to simulate a generic pressure plate for : the only way to traverse the gadget from left to right ( the other direction is symmetric ) is to press the buttons as indicated in figures [ f7b ] , [ f7c ] , and [ f7d ] , incidentally activating also door .moreover , observe that , no matter how the six buttons are operated , there is no way to `` break '' the gadget by leaving its four doors in an open / closed state that is not the original one . now , by simulating general pressure plates , we can apply the * pspace*-hardness framework used for metatheorem [ m2].c , concluding the proof .( observe that our gadget can be further simplified : by inspecting the proof of metatheorem [ m2].c , it is apparent that each pressure plate is traversed by the avatar in only in one direction .hence , only three buttons are sufficient in our gadget , e.g. , those that are used to traverse it from left to right . )+ we can show that metatheorem [ m3].a and metatheorem [ m3].c are tight .there exist games and , featuring doors , in which the avatar has to reach an exit location , such that : 1 . features -buttons and is -complete . features -buttons , with , and is * pspace*-complete .once again , we consider games played on a graph , in which some vertices contain buttons , and some edges implement doors . to prove that belongs to , observe that each 1-button either opens or closes a door .1-buttons opening doors can be presses as soon as they become accessible , and 1-buttons closing doors may be ignored .therefore the game is equivalent to that of corollary [ cor2].a .similarly to the game of corollary [ cor2].c , belongs to * pspace* * npspace*because the level s state can be stored in linear space , and a certificate is the sequence of the avatar s positions and the -switches pressed .it remains an open problem to establish if also metatheorem [ m3].b is tight .there exists an * np*-complete game featuring doors and 2-buttons in which the avatar has to reach an exit location .it is easy to construct levels in which some 2-buttons have to be pressed more than once in order to reach the exit , therefore it is not trivially true that the sequence of buttons pressed is a polynomial certificate .in this section we apply the previous metatheorems to some well - known games . in some cases we will merely have to `` simulate '' all the required elements and mechanics with appropriate gadgets , and the reductionswill immediately follow from metatheorems statements .in other cases , the desired effects will be obtained by taking into account specific features of a metatheorem s proof , and noticing that the patterns we wish to construct have particular properties that , for the sake of brevity , are not mentioned in the actual statement of the metatheorem ( cf . section [ s2 ] s introduction ) . 
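Before turning to concrete games, it may help to spell out the evaluation order that the quantifier gadgets of metatheorem [ m2].c force on the player: universally quantified variables are tried with both truth values through backtracking, while existentially quantified ones are chosen freely. The following evaluator for fully quantified CNF formulas, the source problem of that reduction, is a standard textbook routine shown only to mirror that order; it is not part of the reductions themselves, and the encoding is our own.

```python
def evaluate_qbf(quantifiers, clauses, assignment=None, index=0):
    """quantifiers: list of ('exists'|'forall', variable) pairs, outermost first.
    clauses: list of clauses, each a list of (variable, is_positive) literals.
    Returns True iff the fully quantified CNF formula is true."""
    assignment = dict(assignment or {})
    if index == len(quantifiers):
        return all(any(assignment[v] == pos for v, pos in clause) for clause in clauses)
    kind, var = quantifiers[index]
    outcomes = []
    for value in (True, False):        # the gadgets try 'true' first, then backtrack
        assignment[var] = value
        outcomes.append(evaluate_qbf(quantifiers, clauses, assignment, index + 1))
    return any(outcomes) if kind == 'exists' else all(outcomes)

# Small example: forall x exists y : (x or y) and (not x or not y), which is true.
quants = [('forall', 'x'), ('exists', 'y')]
clauses = [[('x', True), ('y', True)], [('x', False), ('y', False)]]
assert evaluate_qbf(quants, clauses)
```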
most of the results we prove are new .for the results that were already known , we either provide simplified reductions , or reductions that use a different set of game elements .the following table relates all the games that we consider in this section with their complexities , indicating whether or not each result is new .some of our reductions produce quite contrived levels or configurations , which are very unlikely to occur in the real games : for instance , a draw in starcraft is somewhat rare , and a tron configuration such as the one presented is definitely unnatural . for these games , designing reductions that preserve most of the relevant aspects of the gameplay remains a challenging open problem .the game is similar to sokoban , but with added gravity .the player - controlled avatar may push ( but not pull ) single boulders horizontally , excavate some special tiles , and must collect diamonds while avoiding monsters .when a certain amount of diamonds has been collected , an exit door appears , and the avatar has to reach it to beat the level .gravity affects boulders and diamonds , but not the avatar or the monsters . a proof that `` pushing blocks in gravity '' is * np*-hard has been given by friedman in 2002 , based on a rather involved reduction scheme and several gadgets that may be adapted to work with the slightly different `` physics '' of boulder dash ( namely , if a boulder is on the `` edge '' of a pit , it falls down even if it is not pushed ) .we give a much simpler proof that relies on metatheorem [ m1 ] .location traversal is trivially implemented , due to the presence of diamonds that have to be collected : we place one diamond in each relevant location , forcing the player to visit them all .a single - use path gadget is illustrated in figure [ fbd ] : when traversing the gadget in either direction for the first time , three boulders are pushed in the pits .on the second traversal attempt , the fourth boulder blocks the path .this is a psychedelic puzzle game in which a laser ray has to be reflected around in the level by rotating several mirrors .the laser ray is emanated by a laser beam and must be reflected to a predefined location , after hitting several items that must be collected , while avoiding static mines , and without reflecting the laser back to the source for too long ( which overheats the beam ) .all the relevant game elements are static , or can rotate in place .there are 16 possible orientations for the laser ray , and making it reach some location basically boils down to finding the correct way to orient the player - controlled mirrors .some tiles act as reflecting walls , some are opaque and absorb the ray , some special tiles act as teleporters , others as self - rotating mirrors , or self - rotating _ polarizators _ that may be traversed by the ray only in one direction at a time .all polarizators have eight possible orientations , and rotate at the same speed ( or they are static ) .there are also some prisms that randomly refract the ray , and some gremlins that attach to player - controlled mirrors and randomly reorient them .this is a remarkable example of an `` easy '' commercial game : deflektor is solvable in logarithmic space , which is a quite uncommon feature for a puzzle game , and possibly contributed to its modest success .the key observation is that the ray never needs to be reflected twice by the same mirror in order to reach some location , because it can be re - oriented to any direction already on its first reflection . 
for our purposes , prisms count as a special type of player - orientable mirror , as they effectively refract the ray in any desired direction and at the right moment , after waiting a long - enough time .similarly , self - rotating mirrors count as regular mirrors , as the player can indeed slow them down or accelerate them . on the other hand ,gremlins may be disregarded in our analysis , as their presence is merely a nuisance and never really prevents a level from being solved .next we show how to reduce deflektor to the problem undirected connectivity .recall that there are eight possible combined orientations of the polarizators , as they are either static or keep rotating at the same speed .each combined orientation yields a _ reachability graph_ on the game elements , which tells if a ray can be redirected by an object onto another .a reachability graph may be computed in by shooting the 16 possible rays from each mirror ( or prism ) and extending them until each ray is absorbed or reaches a relevant game element , such as a mirror or a collectible item .this necessarily happens after a finite number of reflections , because the possible ray slopes are rational , and a ray that is never absorbed must have a periodic trajectory .let be the disjoint union of all the s , in which the eight copies of the laser beam are connected to a common _beam vertex_. finding a path in from the beam vertex to one of the eight copies of an object means that the laser ray can be redirected to that object after suitably rotating the mirrors , and after waiting for the polarizers to be properly oriented .the final graph is obtained as the disjoint union of several copies of , one for each item to collect .let us arbitrarily order the items , and let be the copy of associated to the -th item ( here , the exit location counts as the last item in the list ) .then , the eight copies of the -th item in are linked to the beam vertex of .the beam vertex of will be called the _ starting vertex _ , and the eight copies of the last item in the last copy of are connected to a common vertex , which will be the _ending vertex_. thus , the final graph has a path from the starting vertex to the ending vertex if and only if the mirrors can be oriented in such a way that the first item in the list can be collected , then reoriented to collect the second item , etc . , and finally the exit location can be reached . because items can be collected in any order , this is equivalent to solving the level .the player has to guide a tribe of lemming creatures to safety through a hazardous landscape , by assigning them specific skills that modify their behavior in different ways .there is a limit to the number of times each skill can be assigned to a lemming , and skills range from building a short stair , to excavating a tunnel , to climbing vertical walls , etc . if no skill is assigned to a lemming , it keeps walking forward , turning around at walls , and falling into pits .hazards include deadly pools of water or lava , several kinds of traps , and long falls . 
to beat a level, at least a given percentage of lemmings has to reach one of several exit portals within a time limit .the complete set of rules , especially the ways lemmings behave in different landscapes , is quite complex , and has been described by the author in .the * np*-hardness of lemmings was already proved by cormode in using only digger skills .more recently , in , the author showed that lemmings is * pspace*-complete , even if there is only one lemming in the level , and only basher and builder skills are available ( the technique used is based on a variant of metatheorem [ m2].c ) . herewe propose a simple alternative proof of * np*-hardness , which uses only basher skills ( that allow lemmings to dig horizontally ) and relies on metatheorem [ m1c].b .our construction can be easily modified to work with miner skills , too ( used to dig diagonally ) .we model each location as in figure [ flm1 ] .as the game starts , exactly one lemming joins the level from the trapdoor , and is bound to stay in the enclosed area , walking back and forth .if either the indegree or the outdegree of the location is less than two , we suitably remove some of the passages marked by arrows . in the starting location , we fit a second trapdoor that releases another lemming in the upper corridor : this lemming is not a prisoner , and will be the avatar . in the final location , we replace the outgoing passages with an exit portal , which is intended for the avatar .we appropriately connect locations together as the arrows suggest , and we make sure that the right outgoing passage of the starting location leads to the exit location ( rather than the left passage ) .it is quite simple to implement paths that go in any direction and can be effectively traversed by lemmings : simple ways to build them have been described in and .all the lemmings in the level must reach an exit portal , and the initially available skills are bashers , where is the amount of locations in the level .the tiles with the steel texture in figure [ flm ] can not be excavated , hence any lemming that is not the avatar is trapped inside a cage , waiting for the avatar to rescue it by bashing the ground below , as figure [ flm2 ] illustrates . as a by - product of the lemmings moving rules , when a prisoner hits the leftmost wall of its cage , it turns around and climbs on the upper platform .then it falls down to the right , turns around at the wall , and walks on the lower ground from right to left .hence , the prisoner is bound to come out of its cage facing left as soon as the avatar bashes its ground , and it will inevitably reach the nearby exit portal .this implies that the avatar must visit every location in the level ( location traversal ) and that at least basher skills have to be used to rescue all the prisoners . at any time, the number of available basher skills minus the number of trapped lemmings will be understood as the number of keys carried by the avatar .note that a small amount of ground ( i.e. 
, a door ) must be bashed in order to exit a location from an unvisited outgoing edge , and the initial amount of keys is .this agrees with the key - door paradigm and the requirements of metatheorem [ m1c].b , except for the presence of a door in the path between and , which is matched by the extra available key ( this must be the last door to be opened anyway , hence the discrepancy can be safely ignored ) .the passages connecting locations are indeed one - way paths by construction , since lemmings can not change direction unless they encounter a wall .observe that the avatar may choose which outgoing path to take on its first traversal of a location .however , after the right path has been taken , there is no way to take the left path on the second traversal ( but not vice versa ) .fortunately , by inspecting the proof of metatheorem [ m1c].b , this does not appear to be an issue , because it is not restrictive to assume that each location will be visited only once , except for the starting location , in which both outgoing paths must be taken .but we made sure that the right path leads to the exit location , and therefore it must be the last path to be traversed , which is indeed feasible .the player - controlled avatar must collect gold pieces while avoiding monsters , and is able to dig holes into certain floor tiles ( those that look like bricks ) , which regenerate after a few seconds .both the avatar and the monsters may fall into such holes , and the avatar can not jump .the avatar is killed when it is caught by a monster , but it can safely stand in the tile directly above a monster s head .monsters behave deterministically , according to the player s moves , although their behavior is often quite counterintuitive , as they do not always take the shortest path toward the player s avatar .when every gold piece has been collected , a ladder appears , leading to the exit .we apply metatheorem [ m1 ] : location traversal is implied by the collecting items feature , and a single - use path is illustrated in figure [ flr ] .on the first traversal , the avatar can safely land on top of the monster and dig a hole to the left .the ai will make the monster fall in the hole , so the avatar may follow it , land on its top again , and proceed through a ladder , while the brick tile regenerates and the monster remains trapped in the hole below .the avatar can not attempt to traverse the gadget a second time without getting stuck in the hole where the monster previously was ( recall that the avatar can neither jump , nor dig holes horizontally ) .this is the fantasy - themed sequel of deflektor ( see above ) , with a wizard shooting a ray of light in some direction , static gnomes holding orientable mirrors , kettles that must be collected by hitting them with the ray of light , and several new game elements .these include collectible keys that open locks , wandering monsters that eat kettles , movable blocks , etc .all the elements of deflektor are recreated in mindbender , with substantially identical mechanics , with one crucial exception : polarizators in mindbender are manually orientable by the player , whereas in deflektor they rotate on their own .the full game is easily * np*-hard and arguably * pspace*-complete , but the interesting fact is that even the subgame that is supposed to be `` isomorphic '' to deflektor is in fact * nl*-complete , thus harder than deflektor .we give a straightforward reduction from the * nl*-complete problem directed connectivity : first of all , we may 
assume that each vertex of the given graph has indegree and outdegree at most two .then , each such vertex is modeled with the gadget in figure [ fmb ] .the left and bottom teleporters correspond to incoming edges , while the upper and right teleporters correspond to outgoing edges .teleporters in different gadgets are connected together according to the topology of the given graph ( which may be non - planar ) .the central object is a polarizator , which can be oriented by the player and lets light rays pass in one direction only .the two gnomes can reflect the light either upward or rightward , as figure [ fmb2 ] exemplifies .no light ray can be redirected from the left teleporter to the bottom one , or vice versa , due to the polarizator .the starting vertex contains the wizard instead of the left ( or bottom ) teleporter , and the ending vertex contains the exit door instead of the top ( or right ) teleporter .the level is solvable if and only if a path exists from the starting vertex to the ending vertex .remarkably , no kettle has been used in the reduction .the player controls a yellow ball whose task is to collect all the _ pills _ in a maze , while avoiding _ghosts_. collecting some special _ power pills _ makes the avatar invulnerable for a short time , giving it the ability to temporarily disable ghosts upon contact . despite this seeming simplicity , the full set of rules is quite complicated , although just a few are relevant to our purposes .ghosts come in four different colors , and a ghost s color determines its behavior . however , all ghosts alternate between chase mode and scatter mode . in chase mode, they follow the avatar with different heuristics , and in scatter mode they head toward a preset location .there is also a frightened mode , which is entered when the avatar collects a power pill , and makes all ghosts move randomly .after a few seconds , the effects of the power pill expire and all ghosts are back to chase and scatter modes . if a ghost in frightened mode is touched by the avatar , it goes back to its starting location , a _ ghost house _ , and comes out again shortly . as a general rule ,every time there is a mode switch , all ghosts immediately reverse their direction .other than that , ghosts may never reverse direction , not even upon reaching a maze intersection , and not even when in frightened mode .( this is also a practical way for the player to tell when a mode switch occurs . )depending on the game level , all timings and speeds are subject to variations : ghosts may be faster or slower in different modes , and the durations of the three modes may vary . 
usually , during frightened mode , the avatar speeds up and the ghosts slow down .the decision problem is whether a level can be completed without losing lives .we assume full configurability of the amount of ghosts and ghost houses , speeds , and the durations of chase , scatter , and frightened modes .we do not alter the basic game mechanics or the ai , though .we prove * np*-hardness by applying metatheorem [ m1b].a .a location with an adjacent toll road is sketched in figure [ fpa ] .power pills are used to model tokens , so the starting location contains two power pills , and the final location contains none .hence , to properly enforce location traversal , we further place a normal pill in the final location .each toll road is implemented as a pair of parallel maze corridors , each of which contains a ghost house somewhere , spawning one red ghost .the two corridors are intended to be traversed in opposite directions by the avatar ( i.e. , they are one - way paths ) .chase and scatter modes have the same duration , and all ghosts have the same speed in both modes .let be the number of tiles each ghost covers between two mode switches .frightened mode lasts longer , but ghosts slow down , covering exactly tiles while in that mode .we make sure that each ghost house is found exactly tiles away from its corridor s entrance , and that each corridor is more than tiles long .as the game starts , the ghosts spawn in front of their respective ghost houses , and start in chase mode , following the corridor in some direction . whenever a mode switch occurs , all ghosts reverse direction , and they can not change it again until the next mode switch , because they never reach a maze intersection . as a result ,each ghost `` patrols '' a portion of length of its own corridor . by construction ,since frightened mode can be entered at most times , no ghost may ever leave its corridor . upon collecting a power pill , the avatar sspeed increases in such a way that it can cover tiles ( or slightly more ) into any adjacent corridor . by doing so, the avatar consumes a token and , if the corridor is traversed in the proper direction , the ghost is necessarily encountered and sent back to the ghost house . by the time the avatar has reached the end of the corridor ,the power pill s effects expire and the ghost comes out of the ghost house , making the toll road functional again .not to be confused with kplumber , with a similar theme but much different mechanics , in this puzzle game a long - enough pipe has to be constructed out of several pieces , randomly presented in a queue , starting from a given _ source location_. after a timer expires , a stream of water starts flowing from the source into the pipes , and the game is won if and only if the stream traverses a given number of tiles before spilling out . since the player can keep constructing pipes on the same tile , `` overwriting '' the previous pieces until he gets the piece that he wants , he may indeed shape the pipe as he pleases , if the initial timer lasts long enough .some obstacles are also present in each level , such as fire hydrants , on which pipes can not be built .membership in * np*is obvious . 
for * np*-hardness , we apply metatheorem [ m1 ] .we use obstacles to model the boundaries of locations and paths , as figures [ fpm1 ] and [ fpm3 ] illustrate .the resulting paths are necessarily single - use , as only one pipe can fit in them .we still need to establish location traversal .suppose we implemented our planar graph with orthogonal lines as edges and squares as vertices .let be the total length of the paths plus the area of the starting vertex , and be the side length of a generic vertex .if the number of vertices is , the number of paths is , because the graph is 3-regular ( refer to the proof of metatheorem [ m1 ] ) .imagine scaling our construction by an integer factor , in such a way that all paths preserve their unit width , but just increase their length .all the vertices are also scaled in size , except the starting vertex , which remains constant .then , the total length of all paths plus the starting vertex area becomes , and the area of a non - starting vertex becomes .when the pipe reaches a generic vertex , it can cover most of its tiles twice before taking the path toward another vertex ( cross - shaped pieces must be used , see figure [ fpm2 ] ) .the length of such a pipe is at least .let us set to a suitable , so that becomes negligible compared to .now , it is sufficient to set the required length of the pipe to to ensure that all the vertices will be covered by it .the player has to guide an avatar through several dungeon levels , opening gates , fighting guards , and avoiding traps .most gates are operated by pressure plates .the avatar can walk , run , jump , climb , duck , fight , etc .the game s * pspace*-hardness was first proved in , but the rather involved construction may be replaced by a somewhat simpler one based on metatheorem [ m2].c , which in addition does not rely on gravity , long falls , or on doors that can be opened by more than one pressure plate . to prevent the avatar from avoiding a pressure plate by jumping past it , we simply put it on an elevated tile , which has to be climbed in order to be traversed , as figure [ fpp ] shows .we can even do without vertical walls ( as in ) , because they can be substituted with unopenable gates .membership in * pspace* * npspace*(cf .savitch s theorem ) is quite obvious , as the whole level s configuration can be stored in linear space , and enemy guards have a very simple pseudo - random fighting pattern . 
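To make the linear-space claim concrete, one possible encoding of a configuration, together with a nondeterministic successor step, is sketched below. The helper methods on `level` and the omission of guards are our own simplifying assumptions; the point is only that a configuration amounts to the avatar's position plus one bit per gate.

```python
from dataclasses import dataclass
from typing import FrozenSet, Tuple

@dataclass(frozen=True)
class Config:
    """A level configuration: avatar position plus one bit per gate.
    Its size is linear in the level description, which is all the NPSPACE argument needs."""
    avatar: Tuple[int, int]          # tile coordinates
    open_gates: FrozenSet[int]       # indices of the gates currently open

def successors(config, level):
    """Nondeterministic one-step relation (a sketch): the avatar moves to an
    adjacent tile unless a closed gate blocks it, and stepping on a pressure
    plate opens or closes the corresponding gate.  The 'level' helpers below
    (moves_from, gate_blocking, plate_at) are assumed, not real game APIs."""
    for target in level.moves_from(config.avatar):
        gate = level.gate_blocking(config.avatar, target)
        if gate is not None and gate not in config.open_gates:
            continue
        gates = set(config.open_gates)
        plate = level.plate_at(target)
        if plate is not None:
            gate_id, action = plate          # action is 'open' or 'close'
            if action == 'open':
                gates.add(gate_id)
            else:
                gates.discard(gate_id)
        yield Config(target, frozenset(gates))

# A nondeterministic machine guesses a sequence of configurations from the initial
# one to any configuration whose avatar stands on the exit tile; since each
# configuration fits in linear space, the game lies in NPSPACE = PSPACE.
```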
in this tetris - like puzzle game ,levels are made of several colored bubbles , stacked in a hexagonal distribution .the player controls a cannon at the bottom of the screen , which can shoot new bubbles of random colors in any direction .bubbles attach to each other and , whenever at least three monochromatic bubbles form a connected set as a result of a shot , they pop .monochromatic triplets may indeed be present in the initial level configuration , and they pop only when hit by a new bubble of the same color .apart from colored bubbles , there are _ stone blocks _ that can not be popped ( but may fall if not held up by an anchor ) , and _ rainbow bubbles _ that turn the same color of any bubble that pops next to them , and can later be popped like normal bubbles .notably , if a set of at least three adjacent monochromatic bubbles is formed as a result of some rainbow bubbles turning that color , they immediately pop .this may even induce a `` chain reaction '' of exploding rainbow bubbles , during which the player is not allowed to shoot a new bubble , and must wait for the explosions to finish .we prove * np*-hardness by a reduction from planar 3-sat .several variable gadgets ( figure [ fpba ] ) are stacked on top of each other , slightly staggered , on the far left of the construction .the clause gadgets ( figure [ fpbb ] ) are on the right , far above the variable gadgets . to separate _variable layers _ from each other and from the clause gadgets , we put long _ shields _ of stone blocks , extending from each variable gadget to the far right of the construction . the last shield (i.e. , the one in the top layer ) also extends all around the whole construction , on the right , top and left sides , preventing bubbles shot by the player from bouncing on the sides of the screen .variables and clauses are connected via carefully shaped _fuses _ made of rainbow bubbles , forking and bending as in figure [ fpb1 ] . initially , only the bottom variable gadget is exposed , and the player may choose whether to pop the black or the white bubbles , which correspond to opposite truth values . popping one of the two sets say ,the black one causes three rainbow bubbles to turn black and pop immediately after .this triggers a chain reaction , in which at least three new rainbow bubbles turn black and pop at each step , consuming the fuse and eventually reaching the clause gadgets . at this point, a thin colored _ wire _ is reached in every clause gadget ( see figure [ fpbb ] ) , which pops if and only if it is black ( its color tells whether the corresponding literal in the clause is positive or negative ) .if it pops , the explosion propagates inside the clause gadget , eliminating the anchor .notice that the explosion can never `` backfire '' from the clause gadget and consume fuses corresponding to different variables , because each wire is connected to only two rainbow bubbles of its attached fuse . 
when the fuse of the first variable has been consumed , the remaining part of the variable layer falls , including the shield ( see figure [ fpba ] ) .the second variable layer is then exposed , and the process continues until all fuses have been consumed , and all shields have fallen .what eventually remains are the `` unsatisfied '' clause gadgets , whose wires are now impossible to reach , due to the surrounding _ sheaths _ made of stone blocks .notice that each variable layer has its own anchor , so its shield does not fall until the variable has been set by the player , even if all the clauses connected to that variable have already been satisfied .this proves * np*-hardness .completeness holds under the assumption that the player can always choose the color of his next bubble , which is not far from true in most cases , since bubbles can be either discarded by making them bounce back to the bottom of the screen , or can be stacked somewhere ( if done properly , not more than two bubbles per color need to be stacked at once ) .the player controls a furry ball that has to walk on blue tiles in order to paint them pink , while avoiding monsters .some tiles are made of ice and do not have to be painted , the avatar slides on them and is unable to change direction until it reaches a different type of tile , or its slide is blocked by a wall .some blue tiles fall apart when the avatar steps on them , opening a hole in the ground that becomes a deadly area .all the blue tiles have to be painted pink within a time limit in order to finish the level .several power - ups randomly appear , including an exit door and teddy bears of several colors , which let the player immediately skip the level when collected .the decision problem is whether a given level can be completed without losing lives , regardless of the power - ups that may randomly appear .the presence of breakable tiles yields an immediate application of metatheorem [ m1 ] .figure [ fsk ] shows how a location is constructed : location traversal is implied by the blue tiles , all of which have to be covered by the avatar . on the other hand , after traversing a path connecting two locations , the cracked tiles break and can not be accessed again , making it a single - use path .the reason why we use ice tiles is merely that they need not be painted , so the player s purpose is in fact to visit all locations , rather than all paths .for this reason , even though there exists a power - up that prevents the avatar from sliding on ice , our construction still works as intended .proving membership in * np*would be almost straightforward , were it not for a certain type of monster that turns tiles from pink to blue .on top of this , monster behavior is pseudo - random and partly depends on the player s moves . for these reasons , beating a level may conceivably take an exponentially long time , and any proof that the game lies in * np*would have to be carefully crafted .starcraft is a real - time strategy game in which two or more players have to train an army in order to destroy each other s bases .two types of resources can be gathered from the environment by special units called _workers_. 
resources allow to make new buildings in order to train more units , thus forming an army that can be sent to war .there are three possible _ races _ to choose , each of which has its unique unit types , each one with different parameters , such as hit points , range , damage , speed , etc .some units have special abilities , such as becoming invisible , casting offensive or defensive `` spells '' , etc .a player loses if and only if all his buildings are destroyed , regardless of the amount of units that he still has , or the amount of resources he gathered .most rts games are expected to be * exp*-hard , since they involve at least two players , and a match may last an arbitrarily long time .however , a simple * np*-hardness proof can be given via metatheorem [ m1b].a . the same reasoning applies , with minor changes , to several rts games other than starcraft , such as warcraft and age of empires . in our setting , the avatar will be a protoss probe ( i.e. , the protoss race s worker unit ) .when several probes will be found in the same area , we will identify one as the avatar , and all the other probes will represent tokens carried by the avatar . consistently with our avatar - token abstraction , we present a toll road in figure [ fsc1 ] . if a lone probe attempts to walk in the canyon , it is destroyed by a single shot of the enemy siege tank .but if two probes walk together , only one can be targeted and destroyed , whereas the second probe can make it past the siege tank while it reloads ( the two probes should stay a couple of tiles away from each other , to avoid splash damage ) . paraphrasing, the avatar can traverse the toll road if and only if it is carrying a token .it does not matter which probe is destroyed , because they are all equivalent . in general , if several probes attempt to traverse the canyon together , at least one is destroyed , and at least one survives .figure [ fsc2 ] shows how to implement a token lying in some location .the building is a protoss nexus , and there is a probe trapped behind a mineral field , worth exactly minerals . the probe can gather up to eight minerals at once , but then it must bring them to a nexus before it can gather more minerals .it follows that the trapped probe can not free itself , but it must wait for another probe to set it free by bringing all the minerals to the nexus . in our analogy , gathering all the minerals in a location to set a probe free corresponds to picking up a token .if now we add a free probe to the designated starting location , we are effectively placing an avatar there , which must set a new probe free every time it needs to traverse a toll road .we still have to enforce location traversal and ensure that no new probes are trained .recall from the proof of metatheorem [ m1b].a that there are locations with one token , plus a starting location with two tokens , and a final location with no tokens .therefore , in our generated map , the starting location has two mineral fields , and the final location has none .we place a nexus and a mineral field in the final location as well ( but no probe , i.e. , no token ) , so that there are minerals in total , and at least minerals in each location . in a different part of the map , we place copies of the `` crater '' shown in figure [ fsc3 ] . 
in each crater, there is a protoss gateway supported by a protoss pylon , and a terran supply depot .this completes our construction .it is clear that the terran player has no way to gather resources or train new units , in that he has no workers , and his only buildings are supply depots .moreover , each siege tank is bound to stay on a small platform , and switching from siege mode to tank mode has no use , because the shortened attack range would not allow it to hit any probe .on the other hand , let us assume that the protoss player starts with no resources , either .because there are no vespene geysers on the map , and only minerals are available , there are only two kinds of units that the protoss player may ever train : probes from nexuses and zealots from gateways .because none of them can fly and both are melee units , it follows that a zealot must be trained from each gateway in order to destroy the nearby supply depot . since training a zealot costs 100 minerals , all the minerals in every locationmust be gathered and spent just for training zealots .therefore , no additional buildings may be built , and no additional probes may be trained at the nexuses . in other terms , just the initial avatar can be used , and all the locations must be reached , which implies the location traversal feature .there are four subgames , one of which is a `` light cycle '' race between the player and several opponents .the race takes place in a rectangular grid whose external boundary is a deadly obstacle .the trail of each light cycle becomes a deadly obstacle as well , hence the safe areas become narrower and narrower as the race progresses .as soon as a light cycle hits an obstacle , it is eliminated , and also its trail is removed from the grid .the goal is to remain the sole survivor in the arena .we start from a grid - aligned embedding in which each vertex is a square , and paths have unit width .then we scale the construction up by some large - enough factor , while preserving the size of the vertices , and keeping all the paths of unit width .we do so to make the total area of all vertices negligible compared to the area of a face of the underlying plane graph .next we perform the same operation that we did for pipe mania ( see above ) : with the same notation , we scale the construction by a factor , so that the resulting combined path length is negligible with respect to the area of a single vertex .( in contrast with pipe mania , though , the starting vertex counts as a regular vertex and its area does not contribute to or . )then we proceed by implementing paths and locations , as sketched in figure [ ftr ] .each opponent light cycle is responsible for drawing the border of a face of the plane graph underlying our construction ( including the outer face ) , which is a grid - aligned polygon .when a light cycle is done drawing and meets its own trail again , it turns around and `` traps '' itself in a rectangle of area ( or slightly smaller ) inside the polygon it just outlined .this rectangle necessarily fits somewhere in the polygonal face , by the first step of the above construction . in figure [ ftr ]we see a path , traversed by the player s light cycle , which is bordered by two faces , the upper face arguably having a smaller perimeter than the lower one ( because the upper rectangle is bigger , and a larger part of it has been covered ) . 
while paths and vertices are constructed , we assume that the player `` waits '' by covering a small square in the starting vertex .this is feasible , because the perimeter of any face is much smaller than the area of a vertex , by construction .then the actual race starts , and the player has to cover enough locations to survive longer than his opponents .paths are obviously single - use , because they have unit width , and the player s trail is an obstacle even for the player himself .location traversal is implied by the fact that the player s light cycle must cover at least a length of slightly less than , so it must visit all vertices .e. d. demaine and r. a. hearn .playing games with algorithms : algorithmic combinatorial game theory . in _ games of no chance 3 _ , edited by m. h. albert and r. j. nowakowski , msri publications , 56:356 , 2009 .
we establish some general schemes relating the computational complexity of a video game to the presence of certain common elements or mechanics, such as destroyable paths, collectible items, doors opened by keys or activated by buttons or pressure plates, etc. then we apply such ``metatheorems'' to several video games published between 1980 and 1998, including pac-man, tron, lode runner, boulder dash, deflektor, mindbender, pipe mania, skweek, prince of persia, lemmings, doom, puzzle bobble 3, and starcraft. we obtain both new results, and improvements or alternative proofs of previously known results.
improved experimental techniques have made it possible to measure molecular fluctuations at a small scale , creating a need for a stochastic description of molecular data .typically , biochemical reaction networks are modelled as deterministic systems of ordinary differential equations ( odes ) , but these models assume the individual species are in high concentrations and do not allow for stochastic fluctuation .an alternative is stochastic models based on continuous - time markov chains . as an example of a stochastic reaction system ,consider [\kappa_2 ] } 2c,\ ] ] where are positive reaction constants .the network consists of three chemical species , and and two reactions .each occurrence of a reaction modifies the species counts , for example , when the reaction takes places , the amount of and molecules are each decreased by one , while two molecules of are created .the species counts are modelled as a continuous - time markov chain , where the transitions are single occurrences of reactions with transition rates and are the species counts . when modelled deterministically , the concentrations ( rather than the counts ) of the species change according to an ode system . in a classical paper ,kurtz explored the relationship between deterministic and stochastic reaction systems , using a scaling argument large volume limit to link the dynamical behaviour of the two types of systems to each other .other , mainly recent work , also points to close connections between the two types of systems . in this paperwe explore this relationship further .a fundamental link between structural network properties and dynamical features of deterministic reaction networks has been known since the 1970s and 1980s with the work of horn , jackson and feinberg .specifically , their theory concerns the existence and uniqueness of equilibria in _ complex balanced _systems , with the ` deficiency zero theorem ' playing a central role in this context .complex balanced systems were called cyclic balanced systems by boltzmann .they have attractive analytical and physical properties ; for example a ( pseudo-)entropy might be defined which increases along all trajectories ( boltzmann s h - theorem ) .a parallel theory for the stochastic regime is not available , and the very concept of `` complex balanced '' does not currently have a stochastic counterpart . in this paperwe develop a theory to fill this gap .we define _ stochastically complex balanced _systems through properties of the stationary distribution , and we prove results for stochastic reaction networks that are in direct correspondence with the results for deterministic models . in particular , we prove a parallel statement of the deficiency zero theorem and show that all deficiency zero reaction networks have product - form poisson - like stationary distributions , irrespectively whether they are complexed balanced or not .in fact , in the non - complexed balanced case , the network is complex balanced on the boundary of the state space . a second target of our study concerns product - form stationary distributions .such distributions are computationally and analytically tractable and appear in many areas of applied probability , such as , queueing theory , petri net theory , and stochastic reaction network theory .specifically , a complex balanced mass - action network has a product - form poisson - like stationary distribution on every irreducible component . 
as an example , the stationary distribution of the complex balanced reaction system is where is an irreducible component of the state space and is a normalising constant .we expand the above result on mass - action systems and give general conditions under which the converse statement is true .in particular , we are interested in providing a structural characterisation of the networks with product - form poisson - like stationary distributions . however , this class of networks is strictly larger than that of complex balanced networks , and a full characterisation seems hard to achieve .we illustrate this with examples .we first introduce the necessary notation and background material ; see for general references .we assume standard knowledge about continuous - time markov chains . we let , and be the real , the non - negative real and the positive real numbers , respectively .also let be the natural numbers including 0 . for any real number , denotes the absolute value of .moreover , for any vector , we let be the component of , the euclidean norm , and the infinity norm , that is , . for two vectors , we write ( resp. ) and ( resp. ) , if the inequality holds component - wise .further , we define to be one if , and zero otherwise , and similarly for the other inequalities . if then is said to be positive .finally , denotes the index set of the non - zero components . for example , if then . if and , we define with the conventions that and .a reaction network is a triple , where is a set of species , is a set of complexes , and is a set of reactions , such that for all .the complexes are linear combinations of species on , identified as vectors in .a reaction is denoted by .we require that every species is part of at least one complex , and that every complex is part of at least one reaction . in this way, there are no `` superfluous '' species or complexes and is completely determined by the set of reactions , which we allow to be empty. in , there are species ( ) , complexes ( ) , and reactions . given a reaction network ,the _ reaction graph _ of is the directed graph with node set and edge set .we let be the number of linkage classes ( connected components ) of the reaction graph .a reaction is _ terminal _ if any directed path that starts with is contained in a closed directed path .we let be the set of terminal reactions .a reaction network is _ weakly reversible _ , if every reaction is terminal .the network in is weakly reversible , since both reactions are terminal .the _ stoichiometric subspace _ of is the linear subspace of given by for , the sets are called the _ stoichiometric compatibility classes _ of ( fig . ) . for the network in , , which is 2-dimensional .we will consider a reaction network either as a deterministic dynamical system on the continuous space , or as a stochastic dynamical system on the discrete space .in the deterministic case , the evolution of the species concentrations at time is modelled as the solution to the ode for some functions and an initial condition .we require that the functions are continuously differentiable , and that if and only if . such functions are called _ rate functions _ , they constitute a _ deterministic kinetics _ for , and the pair is called a _deterministic reaction system_. if for all reactions , then the constants are referred to as _ rate constants _ and the modelling regime is referred to as _ deterministic mass - action kinetics_. 
in this case , the pair is called a _ deterministic mass - action system _ , where is the vector of rate constants . in the stochasticsetting , the evolution of the species counts at time is modelled as a continuous - time markov chain with state space . at any state ,the states that can be reached in one step are for , with transition rates .the functions are called _ rate functions _ , and we require that if and only if .a choice of these functions constitute a _stochastic kinetics _ for and the pair is called a _stochastic reaction system_. if the reaction occurs at time , then the new state is where denotes the previous state .if for any reaction then the constants are known as _ rate constants _ , as in the deterministic case , and the modelling regime is referred to as _stochastic mass - action kinetics_. the pair is , in this case , called a _stochastic mass - action system_. the evolution of the stochastic as well as the deterministic reaction system is confined to the stoichiometric compatibility classes , in fact , , as takes values in .let be a reaction network .a. a reaction network is a _ subnetwork _ of if . in this case, it follows that and .b. a system , deterministic or stochastic , is a _ subsystem _ of a system if is a subnetwork of and the rate functions agree on the reactions in . c. the subnetwork given by the set of terminal reactions is the _ terminal network _ of .we denote .furthermore , the subsystem of is called the _ terminal system _ of .the connected components of the reaction graph of the terminal network of are called _ terminal strongly connected component _ of . for any complex in ,we denote by the subsystem of whose reaction graph is the terminal strongly connected component containing as node . as an example , consider the mass - action system [\kappa_2]}2b\cee{<-[\kappa_3]}a\cee{->[\kappa_4]}0\cee{<=>[\kappa_5][\kappa_6]}c.\ ] ] here , there are two terminal strongly connected components , which are and . in particular , is equal to and is given by [\kappa_2]}2b.\ ] ] finally , if is a mass - action system , any subsystems is a mass - action systems as well and can be denoted by .in this section we will recapitulate the known characterisation of existence and uniqueness of positive equilibria in complex balanced systems and the connection between complex balanced systems and deficiency zero reaction networks .as we will show in the subsequent section , this characterisation can be fully translated into a similar characterisation for stochastic reaction networks .we start with a definition .[ def : cb ] a deterministic reaction system is said to be _ complex balanced _ if there exists a positive _ complex balanced equilibrium _ , that is , a positive equilibrium point for the system , such that the name ` complex balanced ' refers to the fact that the flow , at equilibrium , entering into the complex equals the flow exiting from the complex . as an example, the mass - action system in is complex balanced for any choice of and is a complex balanced equilibrium .the class of complex balanced systems is an extension of the class of detailed balanced mass - action systems . for mass - action systems , becomes with the convention that if . in the case of mass - action kinetics , we extend definition [ def : cb ] to the stochastic case , by saying that a stochastic mass - action system is complex balanced if the deterministic mass - action system is complex balanced. 
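The complex balance condition just defined is easy to check numerically. The sketch below evaluates, for each complex of the running example a + b <=> 2c with illustrative rate constants, the mass-action flow leaving and entering that complex at a candidate equilibrium; both differences vanish, so the point is a complex balanced equilibrium, in line with the example given above.

```python
import numpy as np

kappa1, kappa2 = 0.3, 0.2                          # assumed rate constants
complexes = {"a+b": np.array([1, 1, 0]), "2c": np.array([0, 0, 2])}
reactions = [("a+b", "2c", kappa1), ("2c", "a+b", kappa2)]

def monomial(c, y):
    """Deterministic mass-action monomial c^y."""
    return float(np.prod(np.asarray(c, dtype=float) ** y))

def complex_balance_residuals(c):
    """For every complex eta: (flow leaving eta) minus (flow entering eta) at c."""
    res = {}
    for eta in complexes:
        out_flow = sum(k * monomial(c, complexes[s]) for s, p, k in reactions if s == eta)
        in_flow = sum(k * monomial(c, complexes[s]) for s, p, k in reactions if p == eta)
        res[eta] = out_flow - in_flow
    return res

# kappa1 * c_a * c_b = kappa2 * c_c**2 has, for instance, the positive solution below.
c_eq = [np.sqrt(kappa2 / kappa1), np.sqrt(kappa2 / kappa1), 1.0]
print(complex_balance_residuals(c_eq))             # both residuals are (numerically) zero
```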
we might therefore refer to complex balanced mass - action systems without specifying whether they are stochastically or deterministically modelled .the next theorem is a slight generalization of a classical result , which provides the backbone for the further characterisation .the generalization includes a property of non - negative equilibria [ thm : complex_balanced ] if a deterministic reaction system is complex balanced , then is weakly reversible .moreover , if is mass - action kinetics , then all equilibria are complex balanced , that is , they fulfil .moreover , there exists exactly one positive equilibrium in each stoichiometric compatibility class , which is locally asymptotically stable . as we are not aware of a proof of this more general formulation , we provide one in appendix [ sec : proofs ] .the _ deficiency _ plays an important role in the study of complex balanced systems .the deficiency of is defined as where is the cardinality of , is the number of linkage classes of the reaction graph of and is the dimension of the stoichiometric subspace .the definition hides the geometrical interpretation of the deficiency , which we now will explore .let be a basis of .further , define for .let . then .the space is linearly isomorphic to the stoichiometric subspace if and only if . specifically , consider the homomorphism for , we have and is thus a surjective homomorphism .therefore , which implies that is an isomorphism if and only if .it further follows that the deficiency is a non - negative number .we state here a useful lemma on the deficiency of subnetworks .[ lem : subnet ] let be a reaction network with deficiency . then , the deficiency of any subnetwork of is smaller than or equal to .let and let be the corresponding subnetwork with deficiency .further , let and be the equivalent of and for , respectively . by andsince is a subspace of , we have which concludes the proof .we next state two classical results which elucidate the connection between complex balanced systems and deficiency zero systems .a proof of the first and of the second result can be found in and in , respectively .the results draw a connection between graphical and dynamical properties of a network .theorem [ thm : equilibrium_boundary ] is given here in a wider formulation than in ( see appendix [ sec : proofs ] for a proof ) .[ thm : deficiency_zero_iff ] the mass - action system is complex balanced for any choice of if and only if is weakly reversible and its deficiency is zero . [thm : equilibrium_boundary ] consider a deterministic reaction system , and assume that the deficiency of is zero . if is an equilibrium point and , then only if is terminalmoreover , if is mass - action kinetics with rate constants and , then the projection of onto the species space of is a complex balanced equilibrium of .it follows from theorem [ thm : equilibrium_boundary ] that an equilibrium point satisfies for the terminal system , though it is not necessarily a positive equilibrium of .the deficiency zero theorem , in the following formulation , is a consequence of the three previous theorems : [ thm : deficiency_zero ] consider a deterministic reaction system for which the deficiency is zero .then the following statements hold : a. if is not weakly reversible , then there exists no positive equilibria ; b. if is weakly reversible and is mass - action kinetics , then there exists within each stoichiometric compatibility class a unique positive equilibrium , which is asymptotically stable . 
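The deficiency described above is straightforward to compute from its three ingredients: the number of complexes n, the number of linkage classes ell, and the dimension s of the stoichiometric subspace. The sketch below does so for a small hypothetical network, chosen only so that ell is larger than one; it is not a network taken from the text.

```python
import numpy as np

# Hypothetical network, species order (a, b, c):   a + b <-> 2c   and   a <-> 0
complexes = {
    "a+b": np.array([1, 1, 0]),
    "2c":  np.array([0, 0, 2]),
    "a":   np.array([1, 0, 0]),
    "0":   np.array([0, 0, 0]),
}
reactions = [("a+b", "2c"), ("2c", "a+b"), ("a", "0"), ("0", "a")]

def n_linkage_classes():
    """Connected components of the reaction graph, ignoring edge directions."""
    parent = {c: c for c in complexes}
    def find(c):
        while parent[c] != c:
            parent[c] = parent[parent[c]]
            c = parent[c]
        return c
    for s, p in reactions:
        parent[find(s)] = find(p)
    return len({find(c) for c in complexes})

n = len(complexes)
ell = n_linkage_classes()
s_dim = np.linalg.matrix_rank(np.array([complexes[p] - complexes[s] for s, p in reactions]))
print("n =", n, " ell =", ell, " s =", s_dim, " deficiency =", n - ell - s_dim)   # 4, 2, 2, 0
```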
the original formulation is richer than the one presented here , and .

[ figure caption : ( a ) the stoichiometric compatibility classes are of the form . ( b ) the two irreducible components on are shown ( black circles and square ) , together with the possible transitions between the states . all states within a component are accessible from each other . the `` square '' component has no active reactions , both reactions are active on the `` black circles '' component . the grey states are transient states which are not in any irreducible component . ]

to characterise the stochastic dynamics we introduce the following terminology . let be a reaction network . a. a reaction is _ active _ on if . b. a state is _ accessible _ from a state if there is a sequence of reactions such that 1 . , 2 . is active on for all . let be a reaction network . a non - empty set is an _ irreducible component _ of if for all and all , is accessible from if and only if . a reaction network is _ essential _ if the state space is a union of irreducible components . a reaction network is _ almost essential _ if the state space is a union of irreducible components except for a finite number of states . an essential network is also almost essential . a weakly reversible reaction network is essential . conditions for being essential can be found in . any irreducible component is contained in some stoichiometric compatibility class , and a stoichiometric compatibility class may contain several irreducible components ( fig . ) . the stationary distribution on an irreducible component is unique , if it exists . it is characterised by the _ master equation _ : for all . let denote the stochastic process associated with the system . if follows the law of at time , then the distribution of is for all future times . in this sense , the stationary distribution describes a state of equilibrium of the system . moreover , if exists , then provided that with probability one . as discussed in section [ sec : introduction ] , a connection between mass - action complex balanced systems and their stationary distribution has been made in : [ thm : anderson ] let be a complex balanced mass - action system . then , there exists a unique stationary distribution on every irreducible component , and it is of the form where is a positive complex balanced equilibrium of and is a normalising constant . in this section we derive stochastic statements corresponding to theorem [ thm : complex_balanced]-[thm : deficiency_zero ] . some of the proofs are deferred to appendix [ sec : proofs ] . we begin with a definition .
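Before that definition, the product-form statement of theorem [thm:anderson] can be checked directly on the running example. The sketch below enumerates the irreducible component of a + b <=> 2c containing (2, 2, 0), solves the master equation on it, and compares the result with the Poisson-like product form built from a complex balanced equilibrium; the rate constants are illustrative assumptions.

```python
import numpy as np
from math import factorial

kappa1, kappa2 = 0.3, 0.2                                   # assumed rate constants
c_eq = np.array([np.sqrt(kappa2 / kappa1)] * 2 + [1.0])     # complex balanced equilibrium

# Irreducible component of a + b <-> 2c containing (2, 2, 0); species order (a, b, c).
states = [(2, 2, 0), (1, 1, 2), (0, 0, 4)]
index = {s: i for i, s in enumerate(states)}

def rate(x, forward):
    return kappa1 * x[0] * x[1] if forward else kappa2 * x[2] * (x[2] - 1)

Q = np.zeros((len(states), len(states)))                     # generator on the component
for x in states:
    for forward, delta in [(True, (-1, -1, 2)), (False, (1, 1, -2))]:
        y = tuple(xi + di for xi, di in zip(x, delta))
        if y in index:
            Q[index[x], index[y]] = rate(x, forward)
    Q[index[x], index[x]] = -Q[index[x]].sum()

# Stationary distribution: normalised left null vector of Q.
eigval, eigvec = np.linalg.eig(Q.T)
pi = np.real(eigvec[:, np.argmin(np.abs(eigval))])
pi = pi / pi.sum()

# Product-form Poisson-like distribution restricted to the component.
prod_form = np.array([np.prod(c_eq ** np.array(s)) / np.prod([factorial(k) for k in s])
                      for s in states])
prod_form = prod_form / prod_form.sum()

print(np.round(pi, 6))
print(np.round(prod_form, 6))                                # the two should coincide
```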
for an irreducible component , the set of _ active reactions _ on consists of the reactions that are active on some .the subnetwork is called the _-network _ of and the subsystem of is called the _-system _ of .the reactions that are active on determine the dynamics of the stochastic system on .to study the stationary distributions , it is therefore convenient to analyse the -systems .note that is empty if and only if consists of a single state .as an example , consider the deficiency zero network , all molecules of and are irreversibly consumed through and , thus the only active reactions on an irreducible component are .the -network is therefore , which differs from the terminal system , .the next proposition states that for a deficiency zero reaction network for any irreducible component .note that proposition [ prop : terminal_strong_linkage_classes ] does not hold in general , for example , has for any , while .[ prop : terminal_strong_linkage_classes ] let be a reaction network and an irreducible component such that has deficiency zero .then , is a subnetwork of .in particular , this is true if the deficiency of is zero .see appendix [ sec : proofs ] for a proof .proposition [ prop : terminal_strong_linkage_classes ] can be useful because might be difficult to find , especially if there are many complexes . on the other hand , terminal reactions are easily identified by means of the reaction graph .the next definitions are inspired by definition [ def : cb ] .[ defi : complex_balanced_stationary_distribution ] let be a stochastic reaction system . a stationary distribution on an irreducible component is said to be _ complex balanced _ if for a mass - action system , becomes for any and , with the convention that if . in developing the theory for complex balanced equilibria in the deterministic setting ,an important role is played by requiring positivity of the complex balanced equilibrium .our aim is to introduce a similar concept for the stochastic systems . in the deterministic setting , if a state is positive then every rate function calculated on is positive .we find inspiration from this to give the next definition : an irreducible component is _ positive _ if .equivalently , an irreducible component is _ positive _ if all reactions are active on .the next definition follows naturally by analogy with the deterministic setting .[ def : stoch_cb ] a stochastic reaction system is said to be _ stochastically complex balanced _ if there exists a complex balanced stationary distribution on a positive irreducible component . if is positive , then and a complex balanced stationary distribution on satisfies with replaced by .note the similarity between definition [ def : stoch_cb ] and the definition of a complex balance equilibrium ( definition [ def : cb ] ) : the positivity of plays the role of the positivity in definition [ def : cb ] . also note the close similarity between and .[ thm : stochastic_complex_balanced ] let be a stochastic reaction system , and let be an irreducible component . if there exists a complex balanced stationary distribution on then is weakly reversible . moreover , if is mass - action kinetics with rate constants , there exists a complex balanced stationary distribution on if and only if the -system of is complex balanced . 
if this is the case , then has the form where is a positive complex balanced equilibrium of and is a normalising constant .the proof is in appendix [ sec : proofs ] .it is shown in that the stationary distribution is independent of the choice of complex balanced equilibrium of the -system , provided that it is positive .we are now ready to derive stochastic versions of theorem [ thm : complex_balanced]-[thm : deficiency_zero ] .in addition , we will show that a stochastically complexed balanced mass - action system is complex balanced and _vice versa_. hence , we will show that the deterministic and stochastic systems are intimately connected .the next corollary is an analogue of theorem [ thm : complex_balanced ] .[ cor : stochastic_complex_balanced_positive ] if a stochastic reaction system is stochastically complex balanced then is weakly reversible. moreover , a mass - action system is stochastically complex balanced if and only if it is complex balanced . if this is case , then on every irreducible component there exists a unique stationary distribution .such is a complex balanced stationary distribution and it has the form , where is a positive complex balanced equilibrium of . if is positive , then , by theorem [ thm : stochastic_complex_balanced ] if is stochastically complex balanced then is weakly reversible .moreover , if is mass - action kinetics with rate constants , it follows from theorem [ thm : stochastic_complex_balanced ] that there exists a complex balanced stationary distribution on if and only if is complex balanced . in this case , by theorem [ thm : anderson ] , a stationary distribution exists on every irreducible component and it is of the form . by theorem [ thm : stochastic_complex_balanced ] ,it is a complex balanced stationary distribution .corollary [ cor : stochastic_complex_balanced_positive ] might be considered a stochastic version of theorem [ thm : complex_balanced ] , especially if is taken to be equivalent to `` asymptotic stability '' for a deterministic equilibrium .part of the corollary is known ( see also theorem [ thm : anderson ] ) , and the whole corollary might therefore be considered as an extension of the result in on mass - action systems . in this sense ,theorem [ thm : stochastic_complex_balanced ] provides an even more general version , which deals with complex balanced subsystems of .we now state the parallel versions of theorem [ thm : deficiency_zero_iff]-[thm : deficiency_zero ] for the stochastic setting .[ cor : stoch_deficiency_zero_iff ] the mass - action system is stochastically complex balanced for any choice of if and only if is weakly reversible and its deficiency is zero .the result is an immediate consequence of corollary [ cor : stochastic_complex_balanced_positive ] and theorem [ thm : deficiency_zero_iff ] .[ thm : stationary_boundary ] consider a stochastic reaction system , and assume the deficiency of is zero .let be a state in an irreducible component and let in .then , only if is terminal . moreover , if is mass - action kinetics , then on the stationary distribution has the form where is a positive complex balanced equilibrium for the terminal system , and is a normalising constant .the proof is in appendix [ sec : proofs ] .[ thm : stoch_deficiency_zero ] consider a stochastic reaction system , and assume that the deficiency of is zero .then the following statements hold : a. if is not weakly reversible , then there exist no positive irreducible components ; b. 
if is weakly reversible , then is essential , and if is mass - action kinetics then there exists a unique stationary distribution on every irreducible component . the proof of the theorem is in appendix [ sec : proofs ] . in case ( i ) , theorem [ thm : stationary_boundary ] provides the form of the stationary distribution .hence we have characterised the stationary distribution for any deficiency zero reaction system , irrespectively whether it is complex balanced or not .consider the two stochastic mass - action systems [\kappa_2 ] } b,\quad 10a \cee{<=>[\kappa_3][\kappa_4 ] } 10b\quad\text{and}\quad a \cee{<=>[\kappa_1][\kappa_2 ] } b,\quad 10a \cee{->[\kappa_3 ] } 0.\ ] ] the behaviours of the two corresponding deterministic systems differ substantially , while the behaviours of the stochastic systems are equivalent on the irreducible components with an integer . indeed , in both cases the -system is [\kappa_2 ] } b,\ ] ] which is complex balanced ( theorem [ thm : deficiency_zero_iff ] ) .it follows from theorem [ thm : stochastic_complex_balanced ] that the stationary distribution on is for a suitable normalizing constant .the stationary distributions are complex balanced , but since is not positive in either of the two networks , we can not conclude that the systems are stochastically complex balanced . indeed , they are not for some choice of rate constants ( corollary [ cor : stoch_deficiency_zero_iff ] ) .incidentally , note that the second network is not almost essential .the above results draw parallels between stochastic and deterministic reaction networks . if a mass - action system is ( stochastically )complex balanced , then the stationary distribution on every irreducible component is a product - form poisson - like distribution .does the reverse statement hold true too ?if the stationary distribution is a product - form poisson - like distribution on some , or all irreducible components , does it follow that the system is complex balanced ? in the spirit of the first part of the paper we would like to achieve a full characterisation of stochastic systems with product - form poisson - like stationary distributions .however , even though the hypothesis of theorem [ thm : main_thm ] below is rather general , a full characterisation seems hard to achieve .[ thm : main_thm ] let be an almost essential reaction network , a vector of rate constants and a vector with positive entries . the probability distribution $ ] , defined by is a stationary distribution for the stochastic mass - action system for all irreducible components of if and only if is a complex balanced equilibrium for . by theorem [ thm : anderson ] ,if is a complex balanced equilibrium for , then the stationary distribution on all irreducible components is of the form .oppositely , assume that is the stationary distribution on for the stochastic mass - action system , for all irreducible components .since is almost essential , there exists a constant such that any states with belongs to an irreducible component . 
for any , such that we have that and for all .then , since is a stationary distribution and since and are in the same irreducible component for all , we have from for all satisfying .further , using , equation becomes which , by rearranging terms , leads to the equality holds for all satisfying , therefore the polynomials on the two sides of are equal .for any , let be the polynomial the monomial with maximal degree in is , and these differ for all complexes .this implies that , , are linearly independent on , and thus , the polynomials on the two sides of are equal if and only if hence , is a complex balanced equilibrium for and the proof is completed . to infer the existence of positive complex balanced equilibria in theorem [ thm : main_thm ] , the assumptions of the theorem could be weakened .specifically , it is only required that holds for a set of states whose geometry and cardinality allow us to conclude that the polynomials on the two sides of are the same . for to hold , we need to be in a irreducible component and we require and for all reactions , as well as the stationary distribution evaluated in and to be of the form . if a state satisfies this , we call it a _ good state_. a more general condition than being almost essential could be chosen case by case and depends on the monomials appearing in .for example , if the set of complexes coincides with the set of species , then the polynomials in are linear and the existence of _ good states_ in general position implies the existence of a positive complex balanced equilibrium . in general ,let be the total degree of the polynomials in .then it is sufficient to have lines in general position with more than good states on each of them .therefore , to conclude that a system is complex balanced it is sufficient to check the behaviour of a finite number of states , lying on a finite number of irreducible components .however , it follows from examples [ ex : not_complex_balanced ] and [ ex : not_complex_balanced_2 ] that the existence of arbitrarily many good states on a few irreducible components does not imply the existence of a positive complex balanced equilibrium in general . finally , in order to postulate that the mass - action system is complex balanced, it is necessary that the vector appearing in theorem [ thm : main_thm ] is the same for every irreducible component , as shown in example [ ex : not_complex_balanced_different_c ] .the following examples are also meant to give an idea of why it is hard to obtain a full characterization of stochastic mass - action systems with a product - form poisson - like stationary distribution on some irreducible component .[ ex : not_complex_balanced ] let and let be an integer . consider the stochastic mass - action system {\rho } 2a,\ ] ] where and are the rate constants .the reaction network is almost essential .it is shown in appendix [ sec : calculations ] that the stationary distribution on the irreducible component has the form with , namely where is a normalising constant .however , the mass - action system is not complex balanced as the reaction network is not weakly reversible ( theorem [ thm : complex_balanced ] ) .in particular , by theorem [ thm : main_thm ] , not all irreducible components can have a stationary distribution of the form with .trivially , the absorbing states and have it . 
additionally , we should point out that there is not an _ equivalent _ system on ( that is , a stochastic mass - action system with the same transition rate matrix on the states of as ) which is complex balanced . consider the case . since the transition from to is possible according to ,any equivalent mass - action system must contain the reaction , with rate constant .it can be further shown that any equivalent weakly reversible mass - action system must contain the connected component ( a+b ) at ( 0,0 ) ; ( 2b ) at ( 2,0.6 ) ; ( 2a ) at ( 4,0 ) ; ( 2b ) edge node ( 2a ) ( 2a ) edge node ( a+b ) ( a+b ) edge node ( 2b ) ; this prevents the system from being complex balanced , since there is not a fulfilling for the three complexes , and .let and let be an integer . consider the modification of example [ ex : not_complex_balanced ] given by [\rho_2 ] } b\qquad 2b \cee{<=>[\phantom{\theta-}\rho_1+\rho_3\phantom{()1}][\rho_3 ] } 2a,\ ] ] which is weakly reversible .if we let and , then the system reduces to that of example [ ex : not_complex_balanced ] by removing the two reversible reactions .it can be shown that for any parameter choice , is still a stationary distribution on the irreducible component . however , for some choice of parameters the mass - action system is not complex balanced .this can be seen either by direct computation on the system of complex balance equations or by noting that the deficiency of the network is 1 , so there must be a choice of parameters which prevents positive complex balanced equilibria by theorem [ thm : deficiency_zero_iff ] .it can be further shown that irreducible components different from do not possess a product - form poisson - like stationary distribution .[ ex : not_complex_balanced_2 ] consider the stochastic mass - action system with and two positive integers , {\rho\theta_1\theta_2 } b & 2b & \xrightarrow{\rho(\theta_1+\theta_2 - 1 ) } 2a\\[-0.3 cm ] 3a & \xrightarrow[\phantom{\rho(\theta_1+\theta_2 - 1)}]{\rho } a+2b\qquad & 2a+b & \xrightarrow[\phantom{\rho(\theta_1+\theta_2 - 1)}]{\rho } 3b .\end{aligned}\ ] ] the reaction network is almost essential . for any ,consider the irreducible component .then and , defined as in , are the ( unique ) stationary distributions on the irreducible components and , respectively . for the relevant calculations see appendix [ sec : calculations ] . however , the mass - action system is not complex balanced , since the reaction network is not weakly reversible ( theorem [ thm : complex_balanced ] ) .[ ex : not_complex_balanced_different_c ] theorem [ thm : main_thm ] can be also used to compute the stationary distribution of a stochastic mass - action system which behaves as a complex balanced system on the irreducible components . 
consider the weakly reversible ( and therefore essential ) stochastic mass - action system [\kappa_2 ] } 2a\qquad a+b \cee{<=>[\kappa_3][\kappa_4 ] } 2a+b.\ ] ] on every irreducible component , , the associated continuous time markov chain , which describes the evolution of the counts of , has the same distribution as the process associated with [\kappa_2+\kappa_4\theta]}2a,\ ] ] because the transition rates coincide .the latter system is complex balanced for any choice of rate constants .the stationary distribution has the form ( theorem [ thm : main_thm ] ) for some positive constant .the latter gives the stationary distribution of the original system as well .however , the rate of the poisson distribution does depend on , in which case the original system can not be complex balanced ( corollary [ cor : stochastic_complex_balanced_positive ] ) .for the same reason the example does not contradict theorem [ thm : main_thm ] .there are not many means to explicitly calculate the stationary distribution of a stochastic mass - action system . as an example , theorem [ thm : stochastic_complex_balanced ] can be used to determine the stationary distributions of mass - action systems like [\kappa_2]}d,\quad 2a\cee{<=>[\kappa_3][\kappa_4]}2b,\quad a\cee{->[\kappa_5]}0.\ ] ] indeed , for any irreducible component different from , the -system is given by [\kappa_2]}d,\ ] ] which is weakly reversible and has deficiency zero , therefore it is complex balanced . hence , the stationary distribution on has the form where and denote the entries relative to and , respectively . alternatively , since the terminal system is given by [\kappa_2]}d,\quad 2a\cee{<=>[\kappa_3][\kappa_4]}2b,\ ] ] theorem [ thm : stationary_boundary ] can be used to compute the stationary distribution . on every irreducible component , it is given by which is equivalent to the previous formula since and are constantly on all irreducible components .if the system does not fulfil the conditions of theorem [ thm : stochastic_complex_balanced ] and neither can be cast as a birth - death process , theorem [ thm : main_thm ] might be useful .the following mass - action system is considered in : by theorem [ thm : main_thm ] , the stationary distribution can not be poisson . 
indeed , it is given by the distribution of , where and are two independent poisson random variables with rates and , respectively .hence , in , the following system is also considered : [\kappa_2]}a\qquad 2a\cee{<=>[\kappa_3][\kappa_4]}3a.\ ] ] it has the stationary distribution }{i(i-1)(i-2)+\theta_3i}\quad\text{for } x\in{\mathbb{n}},\ ] ] where , , and is a normalising constant .it is interesting that is a poisson distribution if and only if .in fact , and in accordance with our results , the mass - action system is complex balanced if and only if .corollary [ cor : stoch_deficiency_zero_iff ] provides a characterisation of reaction networks that are stochastically complex balanced for any choice of rate constants .it is natural to wonder whether a stationary distribution of the form on some irreducible component for all choices of rate constants implies something specific about the -system .if for specific form we intend deficiency zero and weakly reversible , this is not the case , as this is violated in example [ ex : not_complex_balanced_different_c ] .however , in example [ ex : not_complex_balanced_different_c ] the system might be described _equivalently _ by means of a weakly reversible deficiency zero system for any irreducible component .the question of whether this is always true remains open .we provide here two more examples .[ ex : equivalent_to_complex_balanced_finite_case ] consider the stochastic mass - action system }2b\qquad a+3b\cee{->[\kappa_2]}3a+b.\ ] ] the underlying reaction network is considered in figure [ figure ] . on the irreducible component , the markov chain associated with the system has the same distribution as the markov chain associated with [3\kappa_2]}2b,\ ] ] since the transition rates coincide .it is interesting to note that the dynamics of the two systems are different when they are deterministically modelled . due to theorem [ thm : deficiency_zero_iff ] ,the latter system is complex balanced for any choice of rate constants .therefore , by theorem [ thm : main_thm ] , the stationary distribution on has the form on both systems for any choice of rate constants .the same argument does not hold , in this case , for the other irreducible components .the same phenomenon as in example [ ex : equivalent_to_complex_balanced_finite_case ] is observed in the stochastic mass - action system }3a+b\qquad a+3b\cee{->[\kappa_2]}2b.\ ] ] on the irreducible component , the markov chain associated with the system has the same distribution as the markov chain associated with [\kappa_2]}3a+b,\ ] ] since the transition rates coincide , and the latter network is weakly reversible and has deficiency zero .here we state some preliminary results that will be needed in appendix [ sec : proofs ] . [lem : path ] let be a reaction network . if is a directed path in the reaction graph of , and , then is accessible from .first , note that it is sufficient to note that if , then for any , we have this concludes the proof .[ lem : weakly_reversible ] let be an irreducible component such that has deficiency zero . then , is weakly reversible . 
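Before the proofs of these lemmas, the birth-death example above (0 <=> a together with 2a <=> 3a) is easy to probe numerically. In the sketch below the rate constants and the truncation level are assumptions; the first parameter set satisfies kappa1*kappa4 = kappa2*kappa3, which is, reading off the complex balance equations for this network, the complex balanced case, and the computed stationary distribution then matches a Poisson distribution, while the second set does not.

```python
import numpy as np
from math import factorial

def stationary(k1, k2, k3, k4, n_max=60):
    """Stationary distribution of 0 <=> a (k1, k2) and 2a <=> 3a (k3, k4),
    obtained from detailed balance of the birth-death chain, truncated at n_max."""
    pi = np.zeros(n_max + 1)
    pi[0] = 1.0
    for i in range(n_max):
        birth = k1 + k3 * i * (i - 1)                        # 0 -> a  and  2a -> 3a
        death = k2 * (i + 1) + k4 * (i + 1) * i * (i - 1)    # a -> 0  and  3a -> 2a
        pi[i + 1] = pi[i] * birth / death
    return pi / pi.sum()

def poisson(rate, n_max=60):
    p = np.array([rate ** i / factorial(i) for i in range(n_max + 1)])
    return p / p.sum()

# kappa1/kappa2 == kappa3/kappa4 (both equal 2): complex balanced, Poisson output.
pi_cb = stationary(4.0, 2.0, 1.0, 0.5)
print("max deviation from Poisson(2):", np.max(np.abs(pi_cb - poisson(2.0))))

# Rate constants violating the condition: a proper stationary distribution, not Poisson.
pi_nb = stationary(4.0, 2.0, 3.0, 0.5)
print("max deviation from Poisson(mean):",
      np.max(np.abs(pi_nb - poisson(pi_nb @ np.arange(61)))))
```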
in particular ,if has deficiency zero , has deficiency zero and is weakly reversible for every irreducible component .if is empty then is weakly reversible and there is nothing to prove .otherwise , if is non - empty , let .by hypothesis , there exists a state in with .this means that is accessible from .moreover , since belongs to an irreducible component , we have that is accessible from as well , which implies that for a certain choice of .in particular , . by the hypothesis of deficiency zero, it follows that , because , defined in , is an isomorphism between the spaces and associated with . therefore , for some integers . since the vectors are linearly independent , for all .hence , each that appears in the sum , must appear at least twice , once with coefficient , once with .consequently , by iteratively reordering the terms , the reactions form a union of directed closed paths in the reaction graph of .in particular , the reaction is contained in a closed directed path of the reaction graph of , and since this is true for every reaction in , is weakly reversible .we conclude the proof by lemma [ lem : subnet ] , since if has deficiency zero , so does every subnetwork of .[ lem : reactions_gamma ] let be a weakly reversible reaction network , and let be an irreducible component . then , for any complex we have one inclusion is trivial , since .for the other inclusion , fix .suppose that there exists with .it follows that any reaction is active on , and therefore is contained in .moreover , since is weakly reversible , for any reaction in of the form , there exists a directed path in the reaction graph of from to .hence , by lemma [ lem : path ] , is accessible from , which implies that is in and that is in , since . therefore , to conclude the proof it suffices to prove that there exists with .if it were no with , then no reaction of the form would be in .since , there exists a reaction of the form .this means that there is , such that .hence , is in with , which concludes the proof .it is proven in that if a deterministic reaction system is complex balanced , then is weakly reversible .by , we also know that if is mass - action kinetics , then all positive equilibria are complex balanced , and there exists exactly one positive equilibrium in each stoichiometric compatibility class , which is locally asymptotically stable . therefore , to conclude the proof we only need to prove that in a complex balanced mass - action system , the eventual equilibria on the boundary of are also complex balanced .first of all note that any subsystem of corresponding to a linkage classes of is complex balanced . indeed , the projection of a positive complex balanced equilibrium of onto the space of the species of satisfies for any complex of ,hence it is a positive complex balanced equilibrium of .let be an equilibrium point on the boundary .consider a linkage class of , and assume that for any species appearing in the linkage class .then , the projection of onto the species of is a positive equilibrium of , and therefore complex balanced .it follows that satisfies for any complex of .oppositely , assume that there exists a species appearing in the linkage class , such that ( this can only happen on a boundary state ) .remember that by mass - action kinetics , all the rates of reactions whose source complex contains are zero . in particular , all the rates of reactions degrading are zero .consider a complex in that contains . 
by weakly reversibility , there exists a reaction in .if contains , then .if does not contain , then the reaction produces .since the rate of all reactions degrading is zero at and is an equilibrium , then must be zero as well . by mass - action kinetics , this means that there exists a species such that appears in and . by iteratively applying the same argument with the new species and by weakly reversibility , we obtain that for any reaction in .it follows that satisfies for any complex in , since the equation reduces to .equation is therefore satisfied for any complex of and is a complex balanced equilibrium .this concludes the proof . by (* theorem 6.1.2 ) , if is an equilibrium point and , then only if is terminal . moreover , if , then for every complex of . now, suppose that is mass - action kinetics with rate constants , and that with ( and therefore ) .consider by the first part of the statement , the reaction graph of the subnetwork is a union of terminal strongly connected components of , and therefore is weakly reversible .moreover , by lemma [ lem : subnet ] , the deficiency of is 0 .it is not hard to see that the canonical projection of onto the space of the species is a positive equilibrium point of , and therefore complex balanced by theorem [ thm : deficiency_zero_iff ] . the proof is concluded by and by noting that , for any complex , if is empty there is nothing to prove .suppose that this is not the case . since has deficiency zero , by lemma [ lem : weakly_reversible ] , it is weakly reversible . for any , by definition thereexists such that , which in turn implies .therefore , for any directed path in the reaction graph of that starts with , all the reactions in the path belong to , by definition of .since is weakly reversible , this can only happen if , and this proves the first part of the statement . to conclude the proof ,note that if the deficiency of is zero , then by lemma [ lem : subnet ] the deficiency of is zero as well .for the first part of the statement , consider a continuous - time markov chain with state space and transition rate from to given by if , and zero otherwise .the master equation for is with the convention that if . by definition [ defi : complex_balanced_stationary_distribution ] , a stationary distribution for existsand it is of the form , for a suitable normalising constant . since is positive for any ( because it is a stationary distribution on an irreducible component ) , then by standard markov chain theory , we have that for any two states , if is accessible from , then is accessible from .fix and with .then , a directed path from to exists in the graph associated with . the second components of the form of the states in the path , by construction , determine a directed path in the reaction graph of from to . hence , any reaction is contained in a closed directed path , which means that is weakly reversible .assume now that is mass - action kinetics with rate constants and that is a positive complex balanced equilibrium of .then , by theorem [ thm : anderson ] , there exists a ( unique ) stationary distribution on of the form . if a species is not in , then the value of is constant for any , and can be obtained from by modifying the normalising constant . by theorem[ thm : complex_balanced ] and lemma [ lem : reactions_gamma ] , we have that with if .therefore , for any and , which leads to , since is of the form . 
to prove the converse we first introduce a new stochastic mass - action system , which is given by the reactions of the form where are fictitious species in one to one correspondence with the complexes .the rate constant of the reaction is given by .it is not difficult to see that the sum of the fictitious species is conserved for any possible trajectory .moreover , since any directed path in the reaction graph of corresponds to a directed path in the reaction graph of , we have that is weakly reversible by the first part of the proof .consider the set every state in is of the form , where and is considered as the vector in with entry 1 in the position corresponding to the species and 0 otherwise . since is an irreducible component of and the sum of the fictitious species is conserved , no state outside is accessible from any state in , according to .moreover , the master equation on can be written as if we choose for some positive constant , then the master equation is satisfied due to definition [ defi : complex_balanced_stationary_distribution ] .therefore , if is chosen as a suitable normalising constant , is a stationary distribution on .consider the linear homomorphism as defined in , for the reaction network .let denote the cardinality of a set , and note that .for any vector of the basis of , we have .since the vectors with are linear independent , is an isomorphism and the deficiency of is 0 .since is a deficiency zero weakly reversible reaction network , it follows from theorem [ thm : deficiency_zero_iff ] that the mass - action system is complex balanced . therefore , by theorem [ thm : anderson ] , we have that has the form for a positive complex balanced equilibrium , on any irreducible component contained in .since does not depend on , we have for any .fix a complex . since is weakly reversible ,there exists a reaction that is active on .fix such that .then for any we have .if we plug the formula for in for our choice of and , we obtain which leads to the proof is concluded by the fact that the above holds for any fixed , which means that is a positive complex balanced equilibrium of . by lemma [ lem : weakly_reversible ], is weakly reversible . moreover , for , if then .this implies that for any directed path in the reaction graph of that starts with , all the reactions in the path belong to , by definition of . since is weakly reversible , every directed path in the reaction graph of that starts with is contained in a closed directed path .this implies that , and proves the first part of the statement .now assume that is mass - action kinetics with rate constants .if the deficiency of is zero , then by lemma [ lem : subnet ] the deficiency of the terminal network is zero as well .moreover , is weakly reversible by definition , thus by theorem [ thm : deficiency_zero_iff ] is complex balanced for any choice of rate constants .let be the stochastic process associated with .by the first part of the statement , on only terminal reactions take place and these involve a subset of the species only . 
without loss of generality , we can assume that is constituted by the first species of .therefore , is of the form , with and .moreover , we have that on , the projection is distributed as the process associated with , for which is an irreducible component .let be a positive complex balanced equilibrium for .hence , by theorem [ thm : anderson ] or corollary [ cor : stochastic_complex_balanced_positive ] , the stationary distribution of the process on is of the form .for the first part , we prove that if an irreducible component is positive , then is weakly reversible .this simply follows from lemma [ lem : weakly_reversible ] : indeed , by the lemma , is weakly reversible and since is positive , . to prove the second part, we have to show that a weakly reversible reaction network is essential , and this is done in .moreover , a deficiency zero weakly reversible mass - action system is complex balanced , and the proof is concluded by theorem [ thm : anderson ] or corollary [ cor : stochastic_complex_balanced_positive ] .in example [ ex : not_complex_balanced ] , we claim that the stationary distribution on the irreducible component has the form to prove this , it is sufficient to show that satisfies the master equation for every point of .the master equation on is given by by plugging in the formula for and after dividing by and we obtain =\frac{1}{x_1!x_2!}[x_1(\theta-1)+x_2(x_2 - 1)].\ ] ] if we multiply by and substitute , it follows that that is which always holds true because the terms cancel each other .in example [ ex : not_complex_balanced_2 ] , we change the notation to .then we claim that the stationary distributions on the irreducible components and are and , respectively , where as before we prove that is the stationary distribution on . the case with is analogue .we prove the result by consider the master equation for on a point , which is as following : as we did for the previous calculations , we plug in the expression for , then divide by , and multiply by .we obtain finally , by substituting with and by performing the calculations , we obtain , which means that the above equation is satisfied . , _ continuous time markov chain models for chemical reaction networks _, in design and analysis of biomolecular circuits : engineering approaches to systems and synthetic biology , heinz koeppl , douglas densmore , gianluca setti , and mario di bernardo , eds . ,springer , 2011 , pp . 342 .
stochastic reaction networks are dynamical models of biochemical reaction systems and form a particular class of continuous - time markov chains on . here we provide a fundamental characterisation that connects structural properties of a network to its dynamical features . specifically , we define the notion of ` stochastically complex balanced systems ' in terms of the network 's stationary distribution and provide a characterisation of stochastically complex balanced systems , parallel to that established in the 1970s and 1980s for deterministic reaction networks . additionally , we establish that a network is stochastically complex balanced if and only if an associated deterministic network is complex balanced ( in the deterministic sense ) , thereby proving a strong link between the theory of stochastic and deterministic networks . further , we prove a stochastic version of the ` deficiency zero theorem ' and show that any ( not only complex balanced ) deficiency zero reaction network has a product - form poisson - like stationary distribution on all irreducible components . finally , we provide sufficient conditions under which a product - form poisson - like distribution on a single ( or all ) irreducible component(s ) implies that the network is complex balanced , and we explore the possibility of characterising complex balanced systems in terms of product - form poisson - like stationary distributions .
in this paper , we develop som - based methods for the task of anomaly detection and visualization of aircraft engine anomalies . the paper is organized as follows : section [ sec : intro ] is an introduction to the subject , giving a small review of related articles . in section [ sec :overview ] , the different components of the system proposed are being described in detail .section [ sec : data ] presents the data that we used in this application , the experiments that we carried out and their results .section 4 presents a short conclusion .health monitoring consists in a set of algorithms which monitor in real time the operational parameters of the system .the goal is to detect early signs of failure , to schedule maintenance and to identify the causes of anomalies .here we consider a domain where health monitoring is especially important : aircraft engine safety and reliability .snecma , the french aircraft engine constructor , has developed well - established methodologies and innovative tools : to ensure the operational reliability of engines and the availability of aircraft , all flights are monitored . in this way , the availability of engines is improved : operational events , such as d&c ( delay and cancellation ) or ifsd ( in - flight shut down ) are avoided and maintenance operations planning and costs are optimized .this paper follows other related works .for example , have proposed the _ continuous empirical score _ ( ces ) , an algorithm for health monitoring for a test cell environment based on three components : a clustering algorithm based on em , a scoring component and a decision procedure . in , a similar methodology is applied to detect change - points in aircraft communication , addressing and reporting system ( acars ) data , which are basically messages transmitted from the aircraft to the ground containing on - flight measurements of various quantities relative to the engine and the aircraft . in , a novel _ star _architecture for kohonen maps is proposed .the idea here is that the center of the star will capture the normal state of an engine with some rays regrouping normal behaviors which have drifted away from the center state and other rays capturing possible engine defects . in this paper, we propose a new anomaly detection method , using statistical methods such as projections on kohonen maps and computation of confidence intervals .it is adapted to large sets of data samples , which are not necessarily issued from a single engine .note that typically , methods for health monitoring use an extensive amount of expert knowledge , whereas the proposed method is fully automatic and has not been designed for a specific dataset . finally , let us note that the reader can find a broad survey of methods for anomaly detection and their applications in and .flight data consist of a series of measures acquired by sensors positioned on the engine or the body of the aircraft .data may be issued from a single or multiple engines .we distinguish between _ exogenous _ or _ environmental _ measures related to the environment and _ endogenous _ or _ operational _ variables related to the engine itself .the reader can find the list of variables in table [ tab : variables ] .for the anomaly detection task , we are interested in operational measures. 
however , environmental influence on the operational measures needs to be removed to get reliable detection .+ exh & exhaustion gas temperature + n2 & core speed + temp1 & temperature at the entrance of the fan + pres & static pressure before combustion + temp2 & temperature before combustion + ff & fuel flow + + alt & altitude + temp3 & ambient temperature + sp & aircraft speed + n1 & fan speed + + eng & engine index + age & engine age + the entire procedure consists of two main phases . 1 .the first phase is the _ training _ or _ learning _ phase where we learn based on healthy data .* we cluster data into clusters of environmental conditions using only environmental variables .* we correct operational measures variables from the influence of the environment using a linear model , and we get the residuals ( corrected values ) . * next , a som is being learned based on the residuals .* we calibrate the anomaly detection component by computing the confidence intervals of the distances of the corrected data to the som .the learning phase is followed by the _ test _phase , where novel data are taken into account .* each novel data sample is being clustered in one of the environment clusters established in the training phase . * itis then being corrected of the environment influence using the linear model estimated earlier . *the test sample is projected to the kohonen map constructed in the training phase and finally , the calibrated anomaly detection component determines if the sample is normal or not .an important point is the choice of the clustering method .note that clustering is carried out on the _ environmental _ variables .the most popular clustering method is the hierarchical ascending classification algorithm , which allows us to choose the number of clusters based on the explained variance at different heights of the constructed tree .however in this work our goal is to develop a more general methodology that could process even high - dimensional data and it is well - known that hac is not adapted to this kind of data .consequently , we are particularly interested in methods based on subspaces such as hddc , since they can provide us with a parsimonious representation of high - dimensional data .thus , we will use hddc for the environment clustering , despite its less good performance for low - dimensional data . in order to test the capacity of the proposed system to detect anomalies , we need data with anomalies . however , it is very difficult to get them due to the extraordinary reliability of the aircraft engines and we can not fabricate them because deliberately damaging the engine or the test cell is clearly not an option .therefore , we create artificial anomalies by corrupting some of the data based on expert specifications that have been established following well - known possible malfunctions of aircraft engines . corrupting the data with anomaliesis carried out according to a _ signature _ describing the defect ( malfunction ) .a signature is a vector .following , a corruption term is added to the nominal value of the signal for a randomly chosen set of successive data samples .figure [ fig : anomaly_example - a ] gives an example of the corruption of the ff variable for one of the engines .figure [ fig : anomaly_example - b ] shows the corrupted variable of the corrected data , that is , after having removed the influence of the environmental variables . 
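The two-phase procedure above can be sketched end to end in a few dozen lines. In the sketch below, scikit-learn's GaussianMixture stands in for HDDC (which has no scikit-learn implementation), the MiniSom package stands in for the Kohonen map, and the data are synthetic; the number of clusters, the map size and the 99th-percentile calibration are assumptions consistent with the description rather than values fixed by the text.

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.linear_model import LinearRegression
from minisom import MiniSom

rng = np.random.default_rng(0)

# Synthetic stand-ins: 4 environmental and 6 operational variables per snapshot.
n = 2000
env = rng.normal(size=(n, 4))
oper = env @ rng.normal(size=(4, 6)) + 0.1 * rng.normal(size=(n, 6))

# ---- training phase ------------------------------------------------------
gmm = GaussianMixture(n_components=5, random_state=0).fit(env)   # stand-in for HDDC
cluster = gmm.predict(env)

models, residuals = {}, np.empty_like(oper)
for k in range(5):                                 # one environment correction per cluster
    mask = cluster == k
    models[k] = LinearRegression().fit(env[mask], oper[mask])
    residuals[mask] = oper[mask] - models[k].predict(env[mask])

som = MiniSom(10, 10, residuals.shape[1], sigma=1.0, learning_rate=0.5, random_seed=0)
som.random_weights_init(residuals)
som.train_random(residuals, 10000)
codebook = som.get_weights().reshape(-1, residuals.shape[1])

dist = np.min(np.linalg.norm(residuals[:, None, :] - codebook[None, :, :], axis=2), axis=1)
threshold = np.percentile(dist, 99)                # global "normal functioning" interval

# ---- test phase ----------------------------------------------------------
def is_anomalous(env_new, oper_new):
    k = int(gmm.predict(env_new.reshape(1, -1))[0])
    corrected = oper_new - models[k].predict(env_new.reshape(1, -1))[0]
    return np.min(np.linalg.norm(codebook - corrected, axis=1)) > threshold

print(is_anomalous(env[0], oper[0]))               # a training sample: expected False
print(is_anomalous(env[0], oper[0] + 3.0))         # strongly shifted sample: expected True
```

GaussianMixture is only a convenience here; any of the clustering models discussed above (HAC, HDDC) could be substituted in the first step without changing the rest of the pipeline.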
in order to build an anomaly detection component, we need a clustering method to define homogeneous subsets of corrected data .we choose to use the som algorithm for its well - known properties of clustering organized with respect to each variable of the data as well as its visualization ability .the output of the algorithm is a set of prototype vectors that define an `` organized '' map , that is , a map that respects the topology of the data in the input space .we can then color the map according to the distribution of the data for each variable . in this way, we can visually detect regions in the map where low or high values of a given variable are located .a smooth coloring shows that it is well organized . in the next section ,we show how to use these properties for the anomaly detection task . in this subsection , we present two anomaly detection methods that are based on confidence intervals .these intervals provide us with a `` normality '' interval of healthy data , which we can then use in the test phase to determine if a novel data sample is healthy or not .we have already seen that the som algorithm associates each data sample with the nearest prototype vector , given a selected distance measure . usually , the euclidean distance is selected .let be the number of the units of the map , the prototypes .for each data sample , we calculate , its distance to the map , namely the distance to its nearest prototype vector : where .note that this way of calculating distance will give us a far more useful measure than if we had just utilized the distance to the global mean , _i.e. _ .the confidence intervals that we use here are calculated using distances of training data to the map .the main idea is that the distance of a data sample to its prototype vector has to be `` small '' .so , a `` large '' distance could possibly indicate an anomaly .we propose a global and a local variant of this method . during the training phase, we calculate the distances , , according to equation ( 1 ) .we can thus construct a confidence interval by taking the -th percentile of the distances , , as the upper limit .the lower limit is equal to since a distance is strictly positive .we define thus the confidence interval \label{eq : global_confint}\end{aligned}\ ] ] for a novel data sample , we establish the following decision rule : the choice of the -th percentile is a compromise taking into account our double - sided objective of a high anomaly detection rate with the smallest possible false alarm rate . moreover , since the true anomaly rate is typically very small in civil aircraft engines , the choice of such a high percentile , which also serves as an upper bound of the normal functioning interval , is reasonable . in a similar manner , in the training phase, we can build a confidence interval for every cluster . in this way, we obtain confidence intervals , by taking the -th percentile of the _ per _ cluster distances as the upper limit \label{eq : local_confint}\end{aligned}\ ] ] for a novel data sample ( in the test phase ) , we establish the following decision rule : this section , we present the data that we used for our experiments as well as the processing that we carried out on them . data samples in this dataset are snapshots taken from the cruise phase of a flight .each data sample is a vector of endogenous and environmental variables , as well as categorical variables .data are issued from distinct engines of the same type . 
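Returning to the two detection rules defined above, the global and local variants differ only in which set of training distances the percentile is taken over. The sketch below makes this explicit; the percentile level q is set to 99 as an assumption, since the exact level did not survive extraction, and the codebook and data are synthetic.

```python
import numpy as np

def som_distances(x, codebook):
    """d(x) = min_k ||x - m_k|| together with the index of the best matching unit."""
    d = np.linalg.norm(codebook[None, :, :] - x[:, None, :], axis=2)
    return d.min(axis=1), d.argmin(axis=1)

def calibrate(train_dist, train_bmu, n_units, q=99.0):
    """Global threshold plus one threshold per SOM unit (local variant).
    Units with no training data fall back to the global threshold."""
    global_thr = np.percentile(train_dist, q)
    local_thr = np.full(n_units, global_thr)
    for k in range(n_units):
        d_k = train_dist[train_bmu == k]
        if d_k.size:
            local_thr[k] = np.percentile(d_k, q)
    return global_thr, local_thr

def decide(test_dist, test_bmu, global_thr, local_thr, local=False):
    """True means 'anomaly': the distance exceeds the calibrated interval."""
    return test_dist > (local_thr[test_bmu] if local else global_thr)

# Small synthetic check: a 10 x 10 codebook, tight healthy data, a few shifted samples.
rng = np.random.default_rng(3)
codebook = rng.normal(size=(100, 6))
train = codebook[rng.integers(0, 100, 5000)] + rng.normal(scale=0.1, size=(5000, 6))
g_thr, l_thr = calibrate(*som_distances(train, codebook), n_units=100)

test = np.vstack([train[:3], train[:3] + 2.0])
d_te, bmu_te = som_distances(test, codebook)
print(decide(d_te, bmu_te, g_thr, l_thr))              # expected: three False, three True
print(decide(d_te, bmu_te, g_thr, l_thr, local=True))  # local rule on the same samples
```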
for each time instant , there are two snapshots , one for the engine on the left and another one for the engine on the right .thus , engines appear always in pairs .snapshots are issued from different flights .typically , there is one pair of snapshots per flight .the reader can find the list of variables in table [ tab : variables ] .the dataset we used here contains data samples and variables .we have divided the dataset into a training set and a test set . for the training set , we randomly picked data samples among the that we dispose of in total .the test set is composed of the remaining data samples .we have verified that all engines are represented in both sets .we have sorted data based on the engine i d ( primary key of the sort ) and for a given engine , based on the timestamp of the snapshot .we normalize the data ( center and scale ) because the scales of the variables were very different .clustering is carried out on environmental variables to define clusters of contexts .due to the large variability of the different contexts ( extreme temperatures very high or very cold and so on ) , we have to do a compromise between a good variance explanation and a reasonable number of clusters ( to keep a sufficient number of data in each cluster ) .if we compare hddc to the hierarchical ascending classification ( hac ) algorithm in terms of explained variance , we observe that the explained variance is about 50 % for five clusters for both algorithms .and as mentioned before , we prefer to use hddc to present a methodology which can be easily adapted to high - dimensional data .let be the number of clusters .we correct the operational variables of environmental influence using the procedure we described in section 2 . after the partition into 5 clusters based on environmental variables , we compute the residuals of the operational variables as follows : if we set n1 , temp3 , sp , alt et age , we write where is one of the operational variables , is the engine index , is the cluster number , is the observation index .moreover , is the intercept , is the effect of the engine and the effect of the cluster . by analyzing the residuals , one can observe that the model succeeds in capturing the influence of the environment on the endogenous measures , since the magnitude of the residuals is rather small ( between -0.5 and + 0.5 ) .the residuals therefore capture behaviors of the engine which are not due to environmental conditions .the residuals are expected to be centered , _i.e. _ to have a mean equal to . however , they are not necessarily scaled , so we re - scale them . generally speaking ,since residuals are not smooth , we carry out smoothing using a moving average of width ( central element plus elements on the left plus elements on the right ) .we note that by smoothing , we lose data samples from the beginning and the end .therefore , we end up with a set of residual samples instead of the that we had initially .next , we construct a self - organizing map ( som ) based on the residuals ( figure [ fig : som_app ] ) . 
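The correction model described above can be fitted with an ordinary least-squares design matrix containing an intercept, engine dummies, cluster dummies and the listed covariates. The exact functional form is partly lost in the extraction, so the sketch below is one plausible encoding: the data are synthetic, a single operational variable (exh) and a single pooled series are used to keep it short, and the half-width h = 15 of the moving average is an assumption; only the column names come from the variable table.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(7)
n = 1000
df = pd.DataFrame({
    "eng":   rng.integers(0, 16, n),              # engine index
    "clust": rng.integers(0, 5, n),               # environment cluster label
    "n1":    rng.normal(size=n),
    "temp3": rng.normal(size=n),
    "sp":    rng.normal(size=n),
    "alt":   rng.normal(size=n),
    "age":   rng.normal(size=n),
    "exh":   rng.normal(size=n),                  # one operational variable to correct
})

# Design matrix: intercept + engine effect + cluster effect + environmental covariates.
X = np.hstack([
    np.ones((n, 1)),
    pd.get_dummies(df["eng"]).to_numpy(dtype=float),
    pd.get_dummies(df["clust"]).to_numpy(dtype=float),
    df[["n1", "temp3", "sp", "alt", "age"]].to_numpy(),
])
beta, *_ = np.linalg.lstsq(X, df["exh"].to_numpy(), rcond=None)
residual = df["exh"].to_numpy() - X @ beta

# Re-scale, then smooth with a centred moving average of width 2h + 1.
residual = residual / residual.std()
h = 15
smoothed = np.convolve(residual, np.ones(2 * h + 1) / (2 * h + 1), mode="valid")
print(residual.shape, smoothed.shape)             # 2h samples are lost by the smoothing
```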
we have opted here for a map of neurons ( ) because we need a minimum of observations per som cluster in order to calculate the normal functioning intervals with precision .endogenous variables .black cells contain high values of the variable while white ones contain low values .red dots refer to anomalies and green dots to healthy data for two different types of defects bearing on the variables n2 and exh .the proposed method clusters them in different regions of the map .the size of each dot is proportional to the number of points of the cluster.,scaledwidth=120.0% ] the last step is the calibration of the detection component by determining the global and local confidence intervals based on the distances of the data to the map . for the global case , according to equation [ eq : global_confint ] , we have : \end{aligned}\ ] ] in a similar manner , we derive the upper limits of the local confidence intervals , ranging from to . in the test phase, we assume that novel data samples are being made available .we first corrupt these data following the technique proposed in section [ sec : anomalies ] .snecma experts provided us with signatures of known defects ( anomalies ) , that we added to the data . for data confidentiality reasons ,we are obliged to anonymize the defects and we refer to them as `` defect '' , `` defect '' etc .we start by normalizing test data with the coefficients used to normalize training data earlier .we then cluster data into environment clusters using the model parameters we estimated on the training data earlier .next , we correct data from environmental influence using the model we built on the training data . in this way , we obtain the test residuals , that we re - scale with the same scaling coefficients used to re - scale training residuals .we apply a smoothing transformation using a moving average , exactly like we did for training residuals .we use the same window size , _i.e. _ .smoothing causes some of the data to be lost , so we end up with test residuals instead of the we had initially . finally , we project data onto the kohonen map that we built in the training phase and we compute the distances as in equation ( 1 ) .we apply the decision rule , either the global decision rule of ( [ eq : decision_global ] ) or the local one of ( [ eq : decision_local ] ) . in order to evaluate our system, we calculate the detection rate ( ) and the false alarms rate ( ) : .detection rate ( ) and false alarm rate ( ) for different types of defects and for both anomaly detection methods ( global and local ) for test data . [ cols= " < ,< , < , < , < " , ] in table [ tab : taux ] , we can see detection results for all defects and for both detection methods ( global and local ) .it is clear that both methods succeed in detecting the defects , almost without a single miss .the global method has a lower false alarm rate than the local one .this is because in our example , confidence intervals can not be calculated reliably in the local case since we have few data per som cluster .figure [ fig : gconfint ] shows the distance of each data sample ( samples on the horizontal axis ) to their nearest prototype vector ( equation [ eq : dist ] ) .the light blue band shows the global confidence interval that we calculated in the training phase .red crosses show the false alarms and green stars the correct detections . 
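for completeness , the two evaluation measures reported in table [ tab : taux ] can be computed as below . the definitions assumed here are the usual ones ( detected anomalies over true anomalies , false alarms over healthy samples ) , since the exact expressions are not reproduced in the text above .

```python
import numpy as np

def detection_scores(y_true, y_pred):
    # y_true: True for a corrupted (defect) sample, False for a healthy one;
    # y_pred: output of the global or local decision rule.
    y_true = np.asarray(y_true, dtype=bool)
    y_pred = np.asarray(y_pred, dtype=bool)
    detection_rate = (y_pred & y_true).sum() / max(y_true.sum(), 1)
    false_alarm_rate = (y_pred & ~y_true).sum() / max((~y_true).sum(), 1)
    return detection_rate, false_alarm_rate
```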
due to limited space in this contribution , the figures related to the local detection can be found in the following url : https://drive.google.com/folderview?id=0b0ejciu-platzzdqr25ovjnnatg&usp=sharingwe have developed an integrated methodology for the analysis , detection and visualization of anomalies of aircraft engines .we have developed a statistical technique that builds intervals of `` normal '' functioning of an engine based on distances of healthy data from the map with the aim of detecting anomalies .the system is first calibrated using healthy data .it is then fully operational and can process data that was not seen during training .the proposed method has shown satisfying performance in anomaly detection , given that it is a general method which does not incorporate any expert knowledge and that it is , thus , a general tool that can be used to detect anomalies in any kind of data .another advantage of the proposed method is that the use of the dimension allows to carry out multi - dimensional anomaly detection in a problem of dimension .moreover , the representation of the operational variables given by the use of the distance to the som is of a higher granularity than that of the distance from the global mean .last but not least , the use of som allows us to give interesting visualizations of healthy and abnormal data , as seen in figure [ fig : som_app ] .an extension of our work would be to carry out anomaly detection for datastreams using this method .a naive solution would be to re - calibrate the components of the system with each novel data sample , but it would be very time - consuming .instead , one can try to make each component of the system to operate on datastreams. 4 bouveyron , c. , girard , s. , and schmid , c. ( 2007a ) .high - dimensional data clustering . , 52(1):502519 .chandola , v. , banerjee , a. , and kumar , v. ( 2009 ) .outlier detection : a survey . , 41(3 ) .cme , e. , cottrell , m. , verleysen , m. , and lacaille , j. ( 2010a ) .aircraft engine health monitoring using self - organizing maps . in _ advances in data mining .applications and theoretical aspects _ , pages 405417 .cme , e. , cottrell , m. , verleysen , m. , lacaille , j. , et al .( 2010b ) .self organizing star ( sos ) for health monitoring . in _ proceedings of the european conference on artificial neural networks _ , pages 99104 .duda , r.o . ,hart , p.e .( 1973 ) _ pattern classification and scene analysis_.new york : john wiley & sons , inc .kohonen , t. ( 2001 ) ._ self - organizing maps _ , volume 30 .lacaille , j. , cme , e. , et al .sudden change detection in turbofan engine behavior . in _ proceedings of the the eighth international conference on condition monitoring and machinery failure prevention technologies _, pages 542548 .lacaille , j. and gerez , v. ( 2011 ) .online abnormality diagnosis for real - time implementation on turbofan engines and test cells .pages 579587 .lacaille , j. , gerez , v. , and zouari , r. ( 2010 ) . an adaptive anomaly detector used in turbofan test cells . in _ proceedings of the annual conference of the prognostics and health management society_. markou , m. ( 2003a ) . ., 83(12):24812497 .markou , m. ( 2003b ) . ., 83(12):24992521 .
we develop an application of som for the task of anomaly detection and visualization . to remove the effect of exogenous independent variables , we use a correction model which is more accurate than the usual one , since we apply different linear models in each cluster of context . we do not assume any particular probability distribution of the data and the detection method is based on the distance of new data to the kohonen map learned with corrected healthy data . we apply the proposed method to the detection of aircraft engine anomalies . _ keywords : _ monitoring , aircraft , som , clustering , anomaly detection , confidence intervals
one of the possible ways to model brownian motors / ratchets is to describe them as particles ( which model the protein molecules ) traveling along a designated track ( see ) . at a microscopic scalesuch a motion is conveniently described as a diffusion process with a deterministic drift . on the other hand , the designated track along which the molecule is traveling can be viewed as a tubular domain of some random shape . in particular, such a domain can have many random `` wings '' added to it .( see fig.1 .the shaded areas represent the `` wings '' . ) in this paper we are going to introduce a mathematically solvable model of the brownian motor and discuss some interesting relevant questions around this problem .our model is based on ideas similar to that of and ( * ? ? ?* ; * ? ? ?* ; * ? ? ?* chapter 7 ) .the model is as follows .let be a pair of piecewise smooth functions with .let be a tubular -d domain of infinite length , i.e. it goes along the whole -axis . at the discontinuities of , we connect the pieces of the boundary via straight vertical lines .the domain models the `` main '' channel in which the motor is traveling .let a sequence of `` wings '' ( ) be attached to .these wings are attached to at the discontinuities of the functions .consider the union .an example of such a domain is shown in fig.1 , in which one can see four `` wings '' .we assume that , after adding the `` wings '' , for the domain , the boundary has two smooth pieces : the upper boundary and the lower boundary .let be the inward unit normal vector to .we make some assumptions on the domain . * assumption 1 . * the set of points for which there are points at which the unit normal vector is parallel to the -axis : has no limit points in .each such point corresponds to only one point for which .* assumption 2 .* for every the cross - section of the region at level , i.e. , the set of all points belonging to with the first coordinate equal to , consists of either one or two intervals that are its connected components .that is to say , in the case of one interval this interval corresponds to the `` main channel '' ; and in the case of two intervals one of them corresponds to the `` main channel '' and the other one corresponds to the wing .the wing will not have additional branching structure .also , for some we have .let us take into account randomness of the domain . keeping the above assumptions in mind, we can assume that the functions and the shape of the wings ( ) are all random .thus we can view the shape of as random .we introduce a filtration , as the smallest -algebra corresponding to the shape of \} ] .let ( ) be the operator corresponding to the shift along -direction : consists of the same shapes as those in but correspond to the domain \} ] be the first time that the process , starting from a point , hits .the limit )}{a}\ ] ] exists in -probability and can be viewed as the inverse of the average effective speed of transportation of the particle inside . using the results in sections 2 and 3 we can calculate this limit .this is done in section 4 .( in particular , see theorem 7 . 
) in the last section 5 we mention briefly problems for multidimensional channels , for random channels changing in time , and some other generalizations .let us , for the present and for the next section , work with a fixed shape of .in the language of random motions in random environment the convergence results that we are going to state are in the so called `` quenched '' setting .we will allow this shape to be random in section 4 .we shall find the limiting slow motion of the diffusion process inside .first of all we need to construct from the domain a graph ( see fig.1 ) . for let be the cross - section of the domain with the line .the set may have several connected components .we identify all points in each connected component and the set thus obtained , equipped with the natural topology , is homeomorphic to a graph .we label the edges of this graph by ( there might be infinitely many such edges ) .we see that the structure of the graph consists of many edges ( such as , ... in fig.1 ) that form a long line corresponding to the domain and many other short edges ( such as , ... in fig.1 ) attached to the long line in a random way .a point can be characterized by two coordinates : the horizontal coordinate , and the discrete coordinate being the number of the edge in the graph to which the point belongs .let the identification mapping be .we note that the second coordinate is not chosen in a unique way : for being an interior vertex of the graph we can take to be the number of any of the several edges meeting at the vertex .the distance between two points and belonging to the same edge of the graph is defined as ; for belonging to different edges of the graph it is defined as the geodesic distance , where the minimum is taken over all chains of vertices connecting the points and . for an edge consider the `` tube '' in .the `` tube '' can be characterized by the interval ] ,we denote the set to be the connected component of that corresponds to the `` tube '' : ] to a certain markov process on .a sketch of the proof of this fact is in the next section .the process is a diffusion process on with a generator and the domain of definition .we are going now to define the operator and its domain of definition . for each edge we define an operator : here is the average of the velocity field on the connected component , with respect to lebesgue measure in -direction . at places where , the above expression for is understood as a limit as : for simplicity of presentationwe will assume throughout this paper the following .* assumption 6 . *the function is a constant .the case of non - constant can be treated in a similar way .the only difference is that the calculations are a little bit more bulky . to be more precise , in the ordinary differential equations we are going to solve in the proof of theorem 2 and lemma 1the constant will be replaced by , and these equations can be solved correspondingly .we also let the operator can be represented as a generalized second order differential operator ( see ) where , for an increasing function , the derivative is defined by , and the operator is acting on functions on the graph : for being an interior point of the edge we take .the domain of definition of the operator consists of such functions satisfying the following properties . 
the function must be a continuous function that is twice continuously differentiable in in the interior part of every edge ; there exist finite limits ( which are taken as the value of the function at the point ) ; there exist finite one - sided limits along every edge ending at and they satisfy the gluing conditions where the sign `` '' is taken if the values of for points are and `` '' otherwise .here ( when is an exterior vertex ) or ( when is an interior vertex ). for an exterior vertex with only one edge attached to it the condition ( 3 ) is just .such a boundary condition can also be expressed in terms of the usual derivatives instead of .it is .we remark that we are in dimension 2 so that these exterior vertices are accessible , and the boundary condition can be understood as a kind of ( not very standard ) instantaneous reflection . in dimension 3 or higherthese endpoints do not need a boundary condition , they are just inaccessible . for an interior vertex the gluing condition ( 3 ) can be written with the derivatives instead of . for being one of the we define ( for each edge the limit is a one - sided one ) .then the condition ( 3 ) can be written as it can be shown as in ( * ? ? ?* ; * ? ? ?* section 2 ) that the process exists as a continuous strong markov process on .we fix the shape of . for every , every and every us consider the distribution of the trajectory starting from a point in the space }({\gamma}) ] with values in : the probability measure defined for every borel subset }({\gamma}) ] .we shall now briefly give a sketch of the proof of theorem 1 announced in the previous section .the averaging within each edge is a routine adaptation of the arguments of ( * ? ? ?* ; * ? ? ?* section 3 ) . within one edge , the motion of the component is given by the integral form of the stochastic differential equation and the one for the limiting motion looks like from the above two formulas we see that in order to prove the convergence of to as in the interval we just need the estimates of and the estimate of ( `` averaging with respect to local time '' ) is exactly the same as that of ( * ? ? ?* ; * ? ? ?* section 3 ) . for the estimate of we can introduce an auxiliary function satisfying the problem the solvability of this equation is guaranteed by the fact that ( this is the key point in averaging ) .the solution is bounded with bounded derivatives . applying the generalized it s formula ( see ( * ? ? ?* ; * ? ? ?* section 3 , equation ( 3.1 ) ) ) to the function we see that multiplying both sides by and taking into account the problem that satisfies it is immediate to get an estimate of .these justify the averaging within one edge .the gluing conditions can be obtained using the results of and the girsanov formula . to this endone can introduce an auxiliary process in via the following stochastic differential equation here is the local time for the process at .this is exactly ( * ? ? ?* ; * ? ? ?* formula ( 1.4 ) ) .the limiting process within an edge is governed by . by applying theorem 1.2 in see that the gluing condition is just the gluing condition in ( 3 ) . on the other hand ,the measure corresponding to the process is related to the measure corresponding to the process in }(d) ] for some .let the process start from .let ) ] . the random variable is distributed according to our stationary and mixing assumptions .we have the following. * theorem 2 . 
* _ we have _ )}{a}=2\int_0^\infty k(t)\exp(-2{\beta}t)dt \ , \ ] ] _ where the function ._ * proof .* we see that ) ] ._ let ) ] .let ) ] .weak convergence of processes as to and finiteness of ) ] for fixed and imply that we have )={\mathbf e}_0^w\tau^{d_0}((-\infty , a]) ] is the solution of the same problem we used in the proof of theorem 2 with replaced by and replaced by .since as we see that we have )=\lim{\limits}_{{\delta}{\downarrow}0}{\mathbf e}_0^w\tau^{\delta}((-\infty , a ] ) \\{ \displaystyle}{=\lim{\limits}_{{\delta}{\downarrow}0}\left(2\int_0^a \dfrac{dy}{l_0^{\delta}(y)}\int_0^y l_0^{\delta}(t)\exp\left(-2{\beta}(y - t)\right)dt+ 2\int_{-\infty}^0 l_0^{\delta}(t)\exp\left(2{\beta}t\right)dt \int_0^a \dfrac{1}{l_0^{\delta}(y)}\exp\left(-2{\beta}y\right)dy\right ) } \\ { \displaystyle}{=2\int_0^a \dfrac{dy}{l_0(y)}\int_0^y l_0(t)\exp\left(-2{\beta}(y - t)\right)dt+2\int_{-\infty}^0 l_0(t)\exp\left(2{\beta}t\right)dt \int_0^a \dfrac{1}{l_0(y)}\exp\left(-2{\beta}y\right)dy } \ .\end{array}\ ] ] following our stationarity and mixing assumptions , after deleting the `` wings '' , the remaining channel still satisfies the stationarity and mixing assumptions .therefore by the same calculation as in the proof of theorem 2 , and using the above formula , we see that we have the following .* theorem 4 . * _ for the process defined as above we have _ )}{a}=2\int_0^\infty k(t)\exp(-2{\beta}t)dt \ , \ ] ] _ where and we allow jumps of the function . _ on the other hand , using the same argument of one can show that the process can be viewed as the limiting slow motion as of the _ part _ of the process within the domain . to be precise , consider the domain introduced in section 1 and the corresponding process in .let be an additive functional .( it is called the proper time of the domain , see . )we introduce the time inverse to and continuous on the right .let .then we can use the same arguments of to prove the weak convergence as of the processes to the process .consider the process moving in the domain as in section 1 .let be the cross - section width corresponding to the domain .as before we see that it could have jumps .let be the limiting slow motion as of , defined as in section 2 .let ) ] . since is the limiting slow motion of and is the limiting slow motion of the part of inside , we see from the above discussions that we have )=\int_0^{\tau((-\infty , a])}{\mathbf 1}({\mathbf{\mathfrak{y}}}^{-1}(y_t)\subset d_0)dt \ .\ ] ] by theorem 4 and the above relation we see that we have the following .* corollary 1 .* )}{\mathbf 1}({\mathbf{\mathfrak{y}}}^{-1}(y_t)\subset d_0)dt}}{a}=2\int_0^\infty k(t)\exp(-2{\beta}t)dt\ ] ] _ where _ .this fact will be used in the next subsection .has jumps.,width=377,height=188 ] in the general case when there is branching the domain consists of a domain that has a cross - section width which allows occasional jumps .the `` wings '' ( ) are then attached at the jumps of the domain . in this caseour graph consists of two types of edges .the first type of edges correspond to the domain as we discussed in the previous section .the second type of edges correspond to the `` wings '' attached to . in order to calculate the effective speed of transportation we shall first calculate the expected time that the process spends at one fixed `` wing '' . 
as a first step we do not consider the random shape but perform the calculation for a fixed shape .also , we shall first consider the simplest case that has only three edges : ] for some .let the process start from the point .let ) ] .we have the following. * lemma 1 .* )=2\text{sign}(r ) \int_0^rl_3(t)\exp(2{\beta}t)dt \int_0^a \dfrac{1}{l_2(y)}\exp(-2{\beta}y)dy \ .\ ] ] * proof . *let for example .the function ) ] .we assume that the graph still consists of edges ] be the time that the process spends at before it exits from ] is just equal to ) ] .let ) ]. then ) ] can also be calculated using lemma 1 and a shift . from the strong markov property of see that )={\mathbf e}_q^w\tau_{i_3}((-\infty , a])-{\mathbf e}^w_q\tau_{i_3}((-\infty,0]) ] be the time that the process spends in the edge before it exits from ] .we assume that we have the following .* assumption 7 .* the random variable is a bounded random variable : . for define the random variable we assume that , for some constant , we have the following .* assumption 8 .* and .thus we see that for some constant we have we also see that for some constant .all these random quantities are distributed according to our stationarity and mixing assumptions .let ) ] . by our assumption on stationarity and mixing , using corollary 2 , we see that we have the following .* lemma 2 . *_ we have _}{\mathbf 1}({\mathbf{\mathfrak{y}}}^{-1}(y_t)\not \subset d_0)dt}}{a}=2{\mathbf e}n{\mathbf e}\text{sign}(r)\int_0^rl_{\text{wing}}(t)\exp(2{\beta}t)dt \int_0^\infty \dfrac{1}{l_0(y)}\exp(-2{\beta}y)dy\ ] ] _ in probability ._ * proof .* let the `` wings '' located to the left of the point have -coordinate .let the `` wings '' located to the right of the point ( including possibly the point ) and not exceeding have -coordinate .we see that we have }{\mathbf 1}({\mathbf{\mathfrak{y}}}^{-1}(y_t)\not \subset d_0)dt=\sum{\limits}_{k=1}^\infty m(q_{-k},r_{-k},a)+\sum{\limits}_{k=1}^{n(a)}m(q_k , r_k , a ) \ .\ ] ] for we have thus taking into account assumption 7 we see that we have therefore }{\mathbf 1}({\mathbf{\mathfrak{y}}}^{-1}(y_t)\not \subset d_0)dt}}{a}=\lim{\limits}_{a{\rightarrow}\infty}\dfrac{\sum{\limits}_{k=1}^{n(a)}m(q_k , r_k , a)}{a}=\lim{\limits}_{a{\rightarrow}\infty } \dfrac{\sum{\limits}_{k=1}^{n(a)}m(q_k , r_k , a)}{n(a)}\dfrac{n(a)}{a } \ .\ ] ] on the other hand , we have here and thus we can write by the remark after assumption 8 we see that } \\ { \displaystyle}{=\dfrac{a}{{\beta}l_0 } \sum{\limits}_{k=1}^{n(a)}\exp(-2{\beta}(a - q_k))-\dfrac{a}{{\beta}l_0}n(a)\exp(-2{\beta}a ) \ . } \end{array}\ ] ] thus by our assumption 7 again we see that therefore using the weak law of large numbers for triangular arrays and taking into account our assumptions on mixing and stationarity we see that in probability . to be more precise, we write thus we have let .we see that it suffices to prove in probability .pick any . by chebyshev inequalitywe have the estimate here by the stationarity assumption we see that is a constant .also , by the exponentially mixing condition we see that , . by our assumption 7we see that for some . thus for any have on the other hand we have almost surely .thus we can conclude with the final result . adding the two equations in corollary 1 and lemma 2 we have the following .* theorem 5 . 
* _ we have _)}{a } \\ \\ { \displaystyle}{=2\int_0^\infty k(t)\exp(-2{\beta}t)dt + 2{\mathbf e}n{\mathbf e}\text{sign}(r)\int_0^rl_{\text{wing}}(t)\exp(2{\beta}t)dt \int_0^\infty \dfrac{1}{l_0(y)}\exp(-2{\beta}y)dy } \end{array}\ ] ] _ in probability . here_ let ) ] into consecutive pieces alternatively of -length and for some .the total time spent by the process before it exits from is the sum of those times spent in domains of -length and those times spent in domains of -length .as we are taking we can also let and the average time spent in domains of -length will not contribute . on the other hand ,since the process has a deterministic positive drift in the -direction , we see that as the motion inside different domains of -length will be asymptotically independent .( the motion against the flow is a large deviation effect . )thus ) ] .thus from theorem 6 we see that we have the following theorem .* theorem 7 . * _ we have _)}{a } \\ \\ { \displaystyle}{=2\int_0^\infty k(t)\exp(-2{\beta}t)dt + 2{\mathbf e}n{\mathbf e}\text{sign}(r)\int_0^rl_{\text{wing}}(t)\exp(2{\beta}t)dt \int_0^\infty \dfrac{1}{l_0(y)}\exp(-2{\beta}y)dy } \end{array}\ ] ] _ in probability . here . _\1 . the results of previous sections can be extended to the case of a multidimensional channel where , are -dimensional domains assumed to be bounded and consisting of a finite number of connected components ; each contains the origin .the boundary of is assumed to be smooth , except , maybe , a number of -dimensional manifolds ( like the discontinuity points in the -dimensional case ) .let be the graph homeomorphic ( in the natural topology ) to the set of connected components of the sets for various .let the functions be defined as -dimensional volumes of corresponding connected components .consider the process governed by the operator inside the domain .let .then , under mild additional conditions the limit exists in probability , and is given by theorem 7 .let be a continuous time markov chain with states and .let functions , , , be piecewise smooth , and , . put and .define the process in as the process governed by the operator inside with the normal reflection on the boundary at the times when is continuous ( then is constant ) .let jump to at times when has jumps .( actually , we need this condition to define the process in a unique way ; it is not important since we are interested in the limit as . )then one can prove that the slow component of the process converges as to the process described by the equation where , .it is known that for certain drift terms , the process demonstrates the so called ratchet effect : if is identically equal to or , the process tends to as , but if is the markov chain ( independent of ) the process tends to .we assumed that the `` wings '' have a simple structure each of them corresponds to just one edge of the graph ( fig.1 ) .one can consider the case of more complicated wings , like , for instance , at the vertex in fig.3 .one can also include in the consideration the case when the `` obstacles '' in the channel are such that the corresponding graph has loops like that in fig.3 .if the channel is not `` uniformly narrow '' but has points on axis such that in the -neighborhoods , of those points the channel has the `` diameter '' of order , the limiting process on the graph can have delays or even traps .this will lead to different behavior of .freidlin , m. , wentzell , a. , necessary and sufficient conditions for weak convergence of one - dimensional markov processes . 
_ the dynkin festschrift : markov processes and their applications _ , birkhauser , ( 1994 ) pp . 95 - 109 .
we consider in this paper a solvable model for the motion of molecular motors . based on the averaging principle , we reduce the problem to a diffusion process on a graph . we then calculate the effective speed of transportation of these motors . _ keywords : _ brownian motors / ratchets , averaging principle , diffusion processes on graphs , random environment . _ 2010 mathematics subject classification numbers : _ 60h30 , 60j60 , 92b05 , 60k37 .
a basic feature of classical information is that it can be copied and distributed to an unlimited number of users .however , as one considers quantum information the fundamental task of broadcasting for pure states is impossible , and this implies severe limitations to very useful purposes such as parallel computation , networked communications , and secret sharing .a perfect distribution of the information encoded into input systems equally prepared in a pure state to users would correspond to the so called _ quantum cloning _, which is forbidden by the laws of quantum mechanics . nevertheless , the case of mixed input states is different , since one needs only the local state of each final user to be equal to the input state , whereas the global output state is allowed to be correlated .this fact opens the possibility to generalize the idea of cloning to quantum maps that output correlated states such as their local reduced states are copies of the input .this generalized version of quantum cloning was named _ quantum broadcasting _ in ref . . in this workthe impossibility of perfect broadcasting was proved in the case of a single input copy whenever the set of states to be broadcast contains a pair of noncommuting density matrices .this proof was later often considered as the mixed states - scenario extension of the no - cloning theorem .however , it was recently shown that even noncommuting quantum states can be perfectly broadcast provided a suitable number of input copies is available . moreover ,a new phenomenon can occur , which was named _superbroadcasting _ : for two - level systems ( qubits ) , equally prepared in an unknown mixed input state , the information contained in the direction of the bloch vector can be distributed to users and the local state of each final user can be more pure than the initial copies .an intuitive explanation of the superbroadcasting effect is provided by the statement that superbroadcasting shifts the noise from local purities to global correlations .one of the issues of superbroadcasting is then a deeper understanding of the role of correlations of different nature . while there are correlations which improve the accessibility of information encoded in multiple systems , the case of superbroadcasting points out that other kind of correlations are in fact detrimental in this respect .this leads to an amount of information in the global output state that is lower than the sum of informations contained in the local reduced states , i. e. the total information in absence of correlations .natural questions then arise at this stage .are the correlations among the final users solely quantum , or is it possible to purify the local states by introducing just classical correlations ? 
moreover , in the optimal broadcasting protocol , the distribution of information is achieved by coherently manipulating input systems , and the true direction of the bloch vector remains unknown during the whole procedure .what happens if one first uses the input copies to estimate the direction of the bloch vector , and then distributes pure states pointing in the estimated direction ?is it still possible , on average , to increase the purity of local states ?a preliminary extensive analysis of bipartite correlations at the output of superbroadcasting maps suggests that no bipartite entanglement is present , whereas the analysis of multipartite entanglement is still an open problem .the fact that the practical protocol for achieving superbroadcasting involves pure state cloning suggests on the other hand that the output state contains quantum correlations coming from the structure of the tensor product hilbert space and its symmetric subspace . in this paper , we will consider a semiclassical procedure for broadcasting , which consists of measurement and subsequent repreparation of the quantum states , usually referred to as the _ measure - and - prepare _ scheme .we call this scheme semiclassical because broadcasting occurs via extraction and processing of classical information , though the information is retrieved by a collective measurement which might be strictly quantum , being generally a nonlocal measurement .we show that the phenomenon of superbroadcasting can still be observed in this case , even though the scaling factors obtained by this scheme are suboptimal .such a procedure introduces only classical correlations among the final copies , as the joint output state remains fully separable .the remarkable presence of superbroadcasting even in the semiclassical scenario can be explained as a change of encoding of the classical information about the direction of the bloch vector .in fact , the tensor product of identical qubit states provides an encoding of direction , in which the information is spread in a nonlocal way over the whole -qubits system . in order to extract such an information, one needs either a collective measurement or a statistical processing of single - qubit measurements .however , after the information has been extracted , it can be redistributed exploiting a new encoding , which is more favourable to single users .this result can be interpreted as a proof that on one hand optimal superbroadcasting involves quantum effects that can not be simulated by extracting and re - using classical information , and on the other hand the phenomenon of superbroadcasting itself is improved by entanglement but not necessarily due to it .moreover , we will show that the fidelity of the optimal estimation of direction coincides with the fidelity of the optimal superbroadcasting protocol in the limit .this provides the first example of generalization to arbitrary mixed states of the relation between cloning and state estimation , which was known in the literature for pure states . 
in this paperwe will also address the optimal approximation of a _ universal not broadcasting _, namely the impossible transformation which corresponds to a combination of ideal purification , quantum cloning , and spin flip ( universal not ) .we will derive the optimal physical map , observing how in this case the semiclassical procedure achieves the optimal fidelity .in other words , the optimal universal not broadcasting can be viewed as a purely classical processing of information , as it happens in the case of pure input states .the paper is organised as follows . in sect .[ sec : preliminary ] we introduce the main tools that will be employed to describe symmetric and covariant broadcasting maps . in sect .[ sec : estimation ] we derive the covariant superbroadcasting map achieved by optimal estimation of the direction of the bloch vector and conditional repreparation of the output states . in sect .[ sec : not ] we derive the optimal covariant not broadcasting map and show that it can be achieved by semiclassical means . in sect .[ sec : phase ] we study the phase covariant case , we derive the phase covariant semiclassical map and compare it with the universal case . finally , in sect .[ sec : conc ] we summarise the main results of this paper and discuss their perspectives .symmetry considerations play a fundamental role in the analysis of broadcasting maps , where the input states are identically prepared states , and the output states are required to be permutationally invariant , in order to equally distribute information among many users . a very convenient tool to deal with permutation invarianceis the so - called schur - weyl duality , which relates the irreducible representations of the permutation group to the irreducible representations of the group . for a system of qubits , it is possible to decompose the hilbert space as a clebsch - gordan direct sum where is 0 ( 1/2 ) for even ( odd ) , , and here is the quantum number associated to the total angular momentum , and the spaces carry the irreducible representations of . in other words , for any , we have where is the irreducible representation labeled by the quantum number , and is the identity in . according to the schur - weyl duality ,the action of a permutation of the hilbert spaces in the tensor product can be represented in the same way as in eq .( [ wedsu2 ] ) , with the only difference that the roles of and are exchanged , namely the action of permutation is irreducible in and is trivial in . in this decomposition ,a permutation invariant operator has the form in particular , the state of identically prepared qubits can be written as where is a positive operator on the hilbert space with =1 ] .once the estimation is performed the output state of the broadcasting procedure is where denotes the eigenvector of for the eigenvalue [ a not broadcasting can be obtained replacing with its orthogonal complement in the above formula ] .accordingly , the local state of each user is and it is independent of the number of users . in the followingwe will require the broadcasting map to be covariant under rotations .this corresponds to require the property where denotes a rotation in the three - dimensional space , and is a two by two matrix representing the rotation in the single - qubit hilbert space . in other words , we require that , if the bloch vector of the input copies is rotated by , then also the output state is rotated by the same rotation . 
in order to have a covariant broadcasting map the povm density must be itself covariant , namely it must satisfy the property for any rotation . in this way, the probability distribution has the property and , therefore , the output state ( [ outstate ] ) satisfies the covariance property ( [ covbroad ] ) . in this framework, we want the local state to be as close as possible to the pure state . for this purpose ,the estimation strategy will be optimised in order to maximize the single - site fidelity in the case of the universal not broadcasting , the single - user output state is and one considers its fidelity with the pure state .clearly , in the classical procedure both broadcasting and not broadcasting have the _same fidelity_. due to the invariance property ( [ invprob ] ) , the fidelity does not depend on the actual value of the direction , and it is enough to maximize it for a fixed direction , for example the positive direction of the -axis . for this reason , from now on we will denote the fidelity simply with .the estimation strategy that maximizes the fidelity can be found in a simple way by exploiting the decomposition ( [ cem ] ) of the input state .first , due to the special form of the states , without loss of generality we can restrict our attention to povms of the form where each is a povm in the representation space , namely and in fact , if is any povm , then the corresponding probability distribution is = \nonumber\\ & \sum_j\tr [ \widetilde m_j(\hat{\vett n})~ \rho_j ( \vett n , r)]~ , \end{split}\ ] ] where ] , the output state for final users is prepared as where denotes the eigenvector of for the eigenvalue + 1 . as in the previous sections , we focus on the single - site reduced output , namely and the fidelity of this procedure is given by following the same arguments presented in section [ sec : estimation ] , it can be proved that and have parallel bloch vectors , that is , and the fidelity can be again calculated as . by exploiting the results of ref . , the single - site output bloch vector length turns out to be ,\ ] ] where . in fig .[ fig2 ] we report the plot of the scaling factor for the phase covariant classical broadcasting procedure .notice that its performances are always better than in the universal case reported in fig .[ f : fig1 ] .as expected , the single - site output bloch vector length ( [ eq : rphase ] ) coincides with the corresponding quantity calculated for the optimal phase covariant superbroadcaster in ref . in the limit of infinite output copies .finally , notice that in the phase covariant case for states of the form [ eq : phase - fam ] the not gate can always be achieved unitarily by a -rotation around the axis .therefore the optimal phase covariant not broadcasting has the same fidelity as the optimal phase covariant superbroadcaster in ref . .recently , bae and acn gave an argument to prove that the asymptotic cloning of pure states is equivalent to state estimation .the argument consists in noticing that , when restricted to a single output hilbert space , a symmetric cloning from to copies is an entanglement breaking channel , and , therefore , it can be realized by the semiclassical measure - and - prepare scheme , namely the single user output states are given by ~\rho_i~,\ ] ] where the povm represents the quantum measurement performed on the input , and is the ( single user ) output state prepared conditionally to the outcome . 
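written out , the measure - and - prepare form referred to above reads ( restating in standard notation the expression whose left - hand side was lost in the extraction above )

\[ \rho^{1}_{\rm out } \, = \, \sum_i \mathrm{tr}\!\left [ \, p_i \, \rho^{\otimes n } \, \right ] \, \rho_i \ , \]

where \( \{ p_i \} \) is the povm measured on the \( n \) input copies and \( \rho_i \) is the single - user state reprepared when outcome \( i \) occurs . the single - site fidelity quoted just below follows by taking the expectation of this state on the pure input state \( |\psi\rangle \) .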
as a consequence ,if the input of the cloning machine is the pure state , then the single site cloning fidelity is = \<\psi|~\rho_{out}^1~ = \sum_i~\tr[p_i \rho^{\otimes n}]~ \<\psi| \rho_i |\psi\> ] . in the case of mixed states ,a similar argument can be exploited to give a general explanation to the fact that in the ideal case of infinite users the fidelity of the optimal superbroadcasting is achieved by a semiclassical scheme .in fact , analogously to ref . since the output states of superbroadcasting are invariant under permutations , for also the superbroadcasting transformation is an entanglement breaking channel , when restricted to a single user .therefore , it can be realized by measurement and subsequent repreparation , and the single user output states are written as in eq . , with suitable and .moreover , as for the case of cloning , also in the case of superbroadcasting the figure of merit is the fidelity of the output state with a _ pure _ state the eigenvector of the input density operator corresponding to the maximal eigenvalue ( see eq .( [ fid ] ) for the universal , and eq .( [ fidphase ] ) for the phase covariant case ) .it is then clear that asymptotically the fidelity of the optimal universal ( phase covariant ) superbroadcasting coincides with that of the optimal estimation of direction ( phase ) . in general, the above reasoning shows that superbroadcasting with infinite users is equivalent to the estimation of the eigenstate corresponding to the largest eigenvalue of the input density matrix .this result generalizes the well - known relation between cloning and state estimation to the case of mixed states .in this paper we considered the problem of quantum broadcasting , and in particular we analysed the possibility of broadcasting input qubit states to output qubits with the same bloch vector direction , just by estimating the direction by a collective measurement on the input qubits and then preparing outputs correspondingly .the main result is that this strategy allows to achieve superbroadcasting , namely to have output copies which are even more pure than the input ones , at the expense of classical correlations in the global output state .this superbroadcasting is suboptimal , but asymptotically converges to the optimal one , confirming also in the case of mixed states the fact that state estimation and cloning are asymptotically equivalent .we first considered the universal broadcasting , and then the broadcasting of the antipodal state , the so called universal not . for this purpose, we proved that the semiclassical strategy is optimal .finally , we considered the phase covariant version of the broadcasting problem , showing that superbroadcasting occurs with suboptimal purification rate , which is still better than the one for universal semiclassical superbroadcasting .the main interest of the summarized results is twofold . on one hand ,our results prove that superbroadcasting can be achieved by a semiclassical procedure , and then coherent manipulation of quantum information is not necessary , even though optimal superbroadcasting requires it . on the other hand, the practical interest of our results is that the semiclassical rates exhibit a good approximation of the optimal rates , and can be more easily achieved experimentally .50 w. k. wootters and w. h. zurek , _ nature _ * 299 * , 802 ( 1982 ) ; d. dieks , phys .a , * 92 * , 271 ( 1982 ) ; h. p. yuen , phys .lett . a * 113 * , 405 ( 1986 ) ;g. c. ghirardi , referee report of n. 
herbert , found .* 12 * , 1171 ( 1982 ) .h. barnum , c. m. caves , c. a. fuchs , r. jozsa , and b. schumacher , phys .lett . * 76 * , 2818 ( 1996 ) .g. m. dariano , c. macchiavello , and p. perinotti , phys .lett . * 95 * , 060503 ( 2005 ) .f. buscemi , g.m .dariano , c. macchiavello , and p. perinotti , quant - ph/0602125 r. demkowicz - dobrzanski , phys .a * 71 * , 062321 ( 2005 ) .j. i. cirac , a. k. ekert , and c. macchiavello , phys .lett . * 82 * , 4344 ( 1999 ) .f. buscemi , g.m .dariano , c. macchiavello , and p. perinotti , quant - ph/0510155 .d. bru , a. ekert , and c. macchiavello , phys .lett . * 81 * , 2598 ( 1998 ) .d. bru and c. macchiavello , phys .a * 253 * , 249 ( 1999 ) .j. bae and a. acn , phys .. lett . * 97 * , 030402 ( 2006 ) .v. buzek , m. hillery , and r. f. werner , phys .a * 60 * , r2626 ( 1999 ) .a. s. holevo , _ probabilistic and statistical aspects of quantum theory _ ( north holland , amsterdam 1982 ) .f. buscemi , g. m. dariano , p. perinotti , and m. f. sacchi , phys .a * 314 * , 374 ( 2003 ) .g. m. dariano , c. macchiavello , and p. perinotti , phys .a * 72 * , 042327 ( 2005 ) .f. buscemi , g. m. dariano , and c. macchiavello , phys .a * 72 * , 062311 ( 2005 ). m. horodecki , p. w. shor and m. b. ruskai , rev .phys * 15 * , 629 ( 2003 ) .
we address the problem of broadcasting copies of a generic qubit state to copies by estimating its direction and preparing a suitable output state according to the outcome of the estimate . this semiclassical broadcasting protocol is more restrictive than a general one , since it requires an intermediate step where classical information is extracted and processed . however , we prove that a suboptimal superbroadcasting , namely broadcasting with simultaneous purification of the local output states with respect to the input ones , is possible . we show that in the asymptotic limit of the purification rate converges to the optimal one , proving the conjecture that optimal broadcasting and state estimation are asymptotically equivalent . we also show that it is possible to achieve superbroadcasting with simultaneous inversion of the bloch vector direction ( universal not ) . we prove that in this case the semiclassical procedure of state estimation and preparation turns out to be optimal . we finally analyse semiclassical superbroadcasting in the phase - covariant case .
parameter estimation for dynamic systems of nonlinear differential equations from noisy measurements of some components of the solution at discrete times is a common problem in many applications . in the bayesian statistical framework ,the particle filter ( pf ) is a popular sequential monte carlo ( smc ) method for estimating the solution of the dynamical system and the parameters defining it in a sequential manner . among the different variants of pf proposed in the literature , the algorithm of estimates the state variable along with the model parameters by combining an auxiliary particle technique with approximation of the posterior density of the parameter vector by gaussian mixtures or an ensemble of particles drawn from the density .efficient time integration is crucial in the implementation of pf algorithms , particularly when the underlying dynamical system is stiff and can not be solved analytically , therefore requiring the use of specialized numerical solvers . in ,suitable linear multistep methods ( lmms ) for stiff problems are used within a - type pf , and the variance of the innovation term in the pf is assigned according to estimates of the local error introduced by the numerical solver . in the present work , we explain how to organize the calculations efficiently on multicore desktop computers , making it possible to follow a large number of particles .computed examples show that significant speedups can be obtained over a naive implementation .the inherently parallel nature of pf algorithms is well - known ( see , e.g. , ) , and software packages have been made available for implementing parallel pfs on various platforms ( e.g. , ) . in this paper , we show how to reformulate the pf with lmm time integrators proposed in to make it most amenable to parallel and vectorized environments .the general approach can be straightforwardly adapted to different computing languages .computational advantages of the new formulations are illustrated with two sets of computed examples in matlab .the derivation of the lmm pf - smc method that we are interested in , inspired by the algorithm proposed in , can be found in . for sake of completeness , the lmm pf - smc procedure is outlined in algorithm 1 , where denotes the lmm of choice .+ ' '' '' * algorithm 1 : lmm pf - smc sampler * ' '' '' given the initial probability density : 1 ._ initialization : _ draw the particle ensemble from : compute the parameter mean and covariance : set .propagation : _ shrink the parameters by a factor .compute the state predictor using lmm : 3 ._ survival of the fittest : _ for each : * compute the fitness weights * draw indices with replacement using probabilities ; * reshuffle 4 . _proliferation : _ for each : * proliferate the parameter by drawing * using lmm error control , estimate * draw ; * repropagate using lmm and add innovation : 5 . _ weight updating : _ for each , compute 6 . if , update increase and repeat from 2 . ' '' '' the main computational bottlenecks in the implementation of the algorithm come from the numerical time integrations in step 2 and step 4 , in particular when , due to the stiffness of the system , either extremely small time steps or the use of specially designed numerical schemes are needed to avoid the amplification of unstable modes . indeed , the need to use tiny time steps has been identified as a major bottleneck for pf algorithms ; see , e.g. , . 
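a schematic sketch of one iteration of algorithm 1 is given below . it is written in python as a neutral illustration , not the implementation used for the computed examples ; the shrinkage factor , the gaussian proliferation of the parameters and the generic propagate / likelihood call signatures are assumptions made for readability , and the innovation term whose variance comes from the lmm error control ( step 4 ) is omitted .

```python
import numpy as np

def pf_smc_step(states, thetas, weights, y_obs, propagate, likelihood, a=0.98, rng=None):
    # one step in the spirit of algorithm 1 (a sketch, not the reference implementation).
    # states: (n, d) particle states, thetas: (n, p) particle parameters, weights: (n,).
    # propagate(states, thetas) advances every particle over one observation interval
    # with a fixed-step lmm; likelihood(y_obs, states) returns p(y_obs | state) per particle.
    # a = 0.98 is an assumed shrinkage value; the lmm-error-driven innovation is left out.
    rng = np.random.default_rng() if rng is None else rng
    n = thetas.shape[0]
    # step 2: shrink parameters towards their weighted mean and compute the state predictor
    theta_bar = np.average(thetas, axis=0, weights=weights)
    cov = np.atleast_2d(np.cov(thetas.T, aweights=weights))
    thetas_shrunk = a * thetas + (1.0 - a) * theta_bar
    pred = propagate(states, thetas_shrunk)
    # step 3: fitness weights and resampling with replacement
    g = weights * likelihood(y_obs, pred)
    g = g / g.sum()
    idx = rng.choice(n, size=n, p=g)
    states, thetas_shrunk, pred = states[idx], thetas_shrunk[idx], pred[idx]
    # step 4: proliferate parameters around the shrunk values and repropagate
    thetas_new = thetas_shrunk + rng.multivariate_normal(
        np.zeros(thetas.shape[1]), (1.0 - a ** 2) * cov, size=n)
    states_new = propagate(states, thetas_new)
    # step 5: importance-weight update of the auxiliary particle filter
    w_new = likelihood(y_obs, states_new) / np.maximum(likelihood(y_obs, pred), 1e-300)
    return states_new, thetas_new, w_new / w_new.sum()
```

in this sketch the propagate call is where the fixed - step lmm integrators enter ; prescribing the same time step for all particles is what later makes the parallel and vectorized variants well balanced across workers .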
among the available ode solvers for stiff systems ,lmms have the advantage being well - understood when it comes to stability properties and local truncation error estimates . the latter ,which in turn defines the accuracy of the integrator , and for which classical estimation methods exist , provides a natural way to assign the variance of the innovation term in step 4 ; we refer to for the details .we illustrate how the organization of the computations affects the computing time of the proposed lmm pf - smc algorithm on a system of nonlinear odes , which could arise , e.g. , from multi - compartment cellular metabolism models : the parameters and are known , and is the input function , where is the non - negative part of , and , , , and are given . in applications arising from metabolic studies ,the components , and of the solution of the ode system , which will be referred to as the states of the system , are typically concentrations of substrates and intermediates . in our computed examples ,the data consist of noisy observations of all three state components , and at 50 time instances , and the goal is to estimate the states at all time instances as well as the unknown parameters and , , which are , respectively , the maximum reaction rates and affinity constants in the michaelis - menten expressions of the reaction fluxes . since the system of odes ( [ diff eq system ] ) is stiff for some values of the unknown parameters , we propagate and repropagate the particles using implicit lmms , e.g. , from the adams - moulton ( am ) or backward differentiation formula ( bdf ) families .implicit methods require the solution of a nonlinear system of equations at each time step , which is done with a newton - type scheme . by carefully organizing the calculationsso as to take maximal advantage of the multicore environment , it turns out that the time required by implicit lmm time integrators is comparable to that required by the explicit adams - bashforth ( ab ) integrators , which are not suitable for stiff problems .many desktop computers and programming languages provide vectorized and multicore environments which can significantly reduce the execution time of pf methods when they are formulated to take advantage of these features .all of the computed examples in this paper were produced using a dell alienware aurora r4 desktop computer with 16 gb ram and an intel core^^ i7 - 3820 processor ( cpu @ 3.60ghz ) with 8 virtual cores , i.e. 
, 4 cores and 8 threads with hyper - threading capability , using the matlab r2013a programming language .when testing the parallel performance , we set the local cluster to have a maximum of 8 matlab workers , and we took as baseline the execution time of the lmm pf - smc algorithm on a single processor .it is straightforward to see that the propagation and re - propagation steps of algorithm 1 are naturally suited to parallelization by subdividing the particles among the different processors .this can be done by reorganizing the ` for ` loops in the algorithm so that they are partitioned and distributed among the available processors ( or workers ) in the pool , which is achieved in matlab with the commands ` matlabpool ` ( or ` parpool ` ) and ` parfor ` .not surprisingly , the best parallel performance occurs when all workers take approximately the same time to complete the task , because the slowest execution time determines the speed of the parallel loop .this can be achieved by prescribing the same time step for all particles in the time integration procedure .we remark that most ode solvers , including the matlab built - in time integrators , guarantee a requested accuracy in the solution by adapting the time step , a practice which may cause the propagation of two different particles to take very different times , depending on the stiffness induced by different parameter values .the spread of the computing times needed for the numerical integration of a particle ensemble is rather wide for systems , like the one in this example , whose stiffness is highly sensitive to the values of the unknown parameters .this violates the principle of equal load on the workers which is essential for a good parallel performance .propagation of all particles by lmms with the same fixed time step , on the other hand , ensures that the time required for each particle is the same , eliminating idle time . to present the results of our computed examples , we introduce two key concepts in parallel computing : speedup and parallel efficiency .the _ speedup _ using processors is the ratio , where is the execution time of the sequential algorithm and is the execution time of the parallel algorithm on processors , while the _ efficiency _ using processors is defined as .efficiency is a performance measure used to estimate how well the processors are utilized in running a parallel code : , trivially , for algorithms run sequentially , i.e. , on a single processor . for further details ,see , e.g. 
, ..[par_table5]cpu times ( in seconds ) sequentially and in parallel with 8 workers , along with the corresponding speedup and efficiency , for applying the pf - smc algorithm to solve the parameter estimation problem for system ( [ diff eq system ] ) using the first three lmm time integrators of each family with fixed time step and two sample sizes , and particles , respectively .[ cols="^,^,^,^,^,^,^,^,^ " , ] time series estimates of parameters , , , and for problem ( [ eq : advdiff ] ) when .,title="fig:",width=115 ] time series estimates of parameters , , , and for problem ( [ eq : advdiff ] ) when .,title="fig:",width=115 ] time series estimates of parameters , , , and for problem ( [ eq : advdiff ] ) when .,title="fig:",width=115 ] time series estimates of parameters , , , and for problem ( [ eq : advdiff ] ) when .,title="fig:",width=115 ] time series estimates of parameters , , , and for problem ( [ eq : advdiff ] ) when .,title="fig:",width=115 ]the use of stable , fixed time step lmm solvers in pf - smc algorithms lends itself in a natural way to both parallelizing and vectorizing the computations , thus providing a competitive alternative to running independent parallel chains in monte carlo simulations . in this paper, we consider these two different implementation strategies for a recently proposed pf - smc algorithm , and we illustrate the advantages with computed examples using two stiff test problems with different features . the results in tables [ par_table5 ] show that in the case where the stiffness of the dynamical system is very sensitive to the parameters to be estimated , as for system ( [ diff eq system ] ) , vectorizing the lmm pf - smc algorithm results in significant speedup over the sequential and even parallel implementations . moreover , for the vectorized version , the cpu times when using implicit and explicit methods are closer , and increasing the order of the method has little effect . in the case of a large system where the stiffness is an intrinsic feature anddoes not depend much on the values of the unknown parameters , as for system ( [ eq : odesys ] ) , on the other hand , vectorization of the pf - smc using implicit lmms does not perform better than the sequential implementation , but parallelization of the algorithm speeds up the calculations .our results suggest that both the size and structure of the problem determine whether the parallelized or vectorized version of the algorithm is more efficient .this work was partly supported by grant number 246665 from the simons foundation ( daniela calvetti ) and by nsf dms project number 1312424 ( erkki somersalo ) .a. lee , c. yau , m. b. giles , a. doucet and c. c. holmes , on the utility of graphics cards to perform massively parallel simulation of advanced monte carlo methods , _ j. comput . graph . statist ._ , * 19 * ( 2010 ) , 769789 .j. liu and m. west , combined parameter and state estimation in simulation - based filtering , in _ sequential monte carlo methods in practice _ ( eds .a. doucet , j. f. g. de freitas and n. j. gordon ) , springer , new york ( 2001 ) , 197223 . m. west , mixture models , monte carlo , bayesian updating and dynamic models , in _ computing science and statistics : proceedings of the 24th symposium on the interface _ ( ed. j. h. newton ) , interface foundation of america , fairfax station , va ( 1993 ) , 325333 .
particle filter ( pf ) sequential monte carlo ( smc ) methods are very attractive for the estimation of parameters of time dependent systems where the data is either not all available at once , or the range of time constants is wide enough to create problems in the numerical time propagation of the states . the need to evolve a large number of particles makes pf - based methods computationally challenging , the main bottlenecks being the time propagation of each particle and the large number of particles . while parallelization is typically advocated to speed up the computing time , vectorization of the algorithm on a single processor may result in even larger speedups for certain problems . in this paper we present a formulation of the pf - smc class of algorithms proposed in , which is particularly amenable to a parallel or vectorized computing environment , and we illustrate the performance with a few computed examples in matlab . + * keywords * : parallel computing , vectorization , particle filters , sequential monte carlo , linear multistep methods . + * msc - class * : 65y05 , 65y10 ( primary ) ; 62m20 , 65l06 , 62m05 ( secondary ) .
in many machine learning problems , the distance metric used over the input data has critical impact on the success of a learning algorithm .for instance , -nearest neighbor ( -nn ) classification , and clustering algorithms such as -means rely on if an appropriate distance metric is used to faithfully model the underlying relationships between the input data points .a more concrete example is visual object recognition .many visual recognition tasks can be viewed as inferring a distance metric that is able to measure the ( dis)similarity of the input visual data , ideally being consistent with human perception .typical examples include object categorization and content - based image retrieval , in which a similarity metric is needed to discriminate different object classes or relevant and irrelevant images against a given query . as one of the most classic and simplest classifiers , -nn has been applied to a wide range of vision tasks and it is the classifier that directly depends on a predefined distance metric . an appropriate distance metric is usually needed for achieving a promising accuracy .previous work ( , ) has shown that compared to using the standard euclidean distance , applying an well - designed distance often can significantly boost the classification accuracy of a -nn classifier . in this work ,we propose a scalable and fast algorithm to learn a mahalanobis distance metric .mahalanobis metric removes the main limitation of the euclidean metric in that it corrects for correlation between the different features .recently , much research effort has been spent on learning a mahalanobis distance metric from labeled data .typically , a convex cost function is defined such that a global optimum can be achieved in polynomial time .it has been shown in the statistical learning theory that increasing the margin between different classes helps to reduce the generalization error .inspired by the work of , we directly learn the mahalanobis matrix from a set of _ distance comparisons _ , and optimize it via margin maximization .the intuition is that such a learned mahalanobis distance metric may achieve sufficient separation at the boundaries between different classes .more importantly , we address the scalability problem of learning the mahalanobis distance matrix in the presence of high - dimensional feature vectors , which is a critical issue of distance metric learning .as indicated in a theorem in , a positive semidefinite trace - one matrix can always be decomposed as a convex combination of a set of rank - one matrices .this theorem has inspired us to develop a fast optimization algorithm that works in the style of gradient descent . at each iteration, it only needs to find the principal eigenvector of a matrix of size ( is the dimensionality of the input data ) and a simple matrix update .this process incurs much less computational overhead than the metric learning algorithms in the literature .moreover , thanks to the above theorem , this process automatically preserves the property of the mahalanobis matrix . to verify its effectiveness and efficiency ,the proposed algorithm is tested on a few benchmark data sets and is compared with the state - of - the - art distance metric learning algorithms . 
as experimentally demonstrated , -nn with the mahalanobis distance learned by our algorithms attains comparable ( sometimes slightly better ) classification accuracy .meanwhile , in terms of the computation time , the proposed algorithm has much better scalability in terms of the dimensionality of input feature vectors .we briefly review some related work before we present our work . given a classification task, some previous work on learning a distance metric aims to find a metric that makes the data in the same class close and separates those in different classes from each other as far as possible .xing proposed an approach to learn a mahalanobis distance for supervised clustering .it minimizes the sum of the distances among data in the same class while maximizing the sum of the distances among data in different classes .their work shows that the learned metric could improve clustering performance significantly .however , to maintain the property , they have used projected gradient descent and their approach has to perform a _ full _ eigen - decomposition of the mahalanobis matrix at each iteration .its computational cost rises rapidly when the number of features increases , and this makes it less efficient in coping with high - dimensional data .goldberger developed an algorithm termed neighborhood component analysis ( nca ) , which learns a mahalanobis distance by minimizing the leave - one - out cross - validation error of the -nn classifier on the training set .nca needs to solve a non - convex optimization problem , which might have many local optima .thus it is critically important to start the search from a reasonable initial point .goldberger have used the result of linear discriminant analysis as the initial point . in nca , the variable to optimize is the projection matrix .the work closest to ours is large margin nearest neighbor ( lmnn ) in the sense that it also learns a mahalanobis distance in the large margin framework . in their approach ,the distances between each sample and its `` target neighbors '' are minimized while the distances among the data with different labels are maximized .a convex objective function is obtained and the resulting problem is a semidefinite program ( sdp ) . since conventional interior - point based sdp solvers can only solve problems of up to a few thousand variables , lmnn has adopted an alternating projection algorithm for solving the sdp problem . at each iteration ,similar to , also a full eigen - decomposition is needed .our approach is largely inspired by their work .our work differs lmnn in the following : ( 1 ) lmnn learns the metric from the pairwise distance information . in contrast , our algorithm uses examples of proximity comparisons among triples of objects ( , example is closer to example than example ) . 
in some applications like image retrieval, this type of information could be easier to obtain than to tag the actual class label of each training image .rosales and fung have used similar ideas on metric learning ; ( 2 ) more importantly , we design an optimization method that has a clear advantage on computational efficiency ( we only need to compute the leading eigenvector at each iteration ) .the optimization problems of and are both sdps , which are computationally heavy .linear programs ( lps ) are used in to approximate the sdp problem .it remains unclear how well this approximation is .the problem of learning a kernel from a set of labeled data shares similarities with metric learning because the optimization involved has similar formulations .lanckriet and kulis considered learning kernels subject to some pre - defined constraints .an appropriate kernel can often offer algorithmic improvements .it is possible to apply the proposed gradient descent optimization technique to solve the kernel learning problems .we leave this topic for future study .the rest of the paper is organized as follows .section [ sec : knnmm ] presents the convex formulation of learning a mahalanobis metric . in section [ sec : knnmm - sdp ] , we show how to efficiently solve the optimization problem by a specialized gradient descent procedure , which is the main contribution of this work .the performance of our approach is experimentally demonstrated in section [ sec : experiments ] .finally , we conclude this work in section [ sec : conclusion ] .in this section , we propose our distance metric learning approach as follows .the intuition is to find a particular distance metric for which the margin of separation between the classes is maximized . in particular, we are interested in learning a quadratic mahalanobis metric . let denote a training sample where is the number of training samples and is the number of features . to learn a mahalanobis distance, we create a set that contains a group of training triplets as , where and come from the same class and belongs to different classes .a mahalanobis distance is defined as follows .let denote a linear transformation and be the squared euclidean distance in the transformed space .the squared distance between the projections of and writes : according to the class memberships of , and , we wish to achieve and it can be obtained as it is not difficult to see that this inequality is generally not a convex constrain in because the difference of quadratic terms in is involved . in order to make this inequality constrain convex ,a new variable is introduced and used throughout the whole learning process . learning a mahalanobis distance is essentially learning the mahalanobis matrix .becomes linear in .this is a typical technique to _ convexify _ a problem in convex optimization . in our algorithm ,a _ margin _ is defined as the difference between and , that is , similar to the large margin principle that has been widely used in machine learning algorithms such as support vector machines and boosting , here we maximize this margin to obtain the optimal mahalanobis matrix .clearly , the larger is the margin , the better metric might be achieved . to enable some flexibility , , to allow some inequalities of not to be satisfied ,a soft - margin criterion is needed .considering these factors , we could define the objective function for learning as where constrains to be a matrix and denotes the trace of . indexes the training set and denotes the size of . 
is an algorithmic parameter that balances the violation of and the margin maximization . is the slack variable similar to that used in support vector machines and it corresponds to the soft - margin hinge loss . enforcing removes the scale ambiguity because the inequality constrains are scale invariant . to simplify exposition, we define therefore , the last constraint in can be written as note that this is a linear constrain on .problem is thus a typical sdp problem since it has a linear objective function and linear constraints plus a conic constraint .one may solve it using off - the - shelf sdp solvers like csdp .however , directly solving the problem using those standard interior - point sdp solvers would quickly become computationally intractable with the increasing dimensionality of feature vectors .we show how to efficiently solve in a fashion of first - order gradient descent .it is proved in that _ a matrix can always be decomposed as a linear convex combination of a set of rank - one matrices_. in the context of our problem , this means that , where is a rank - one matrix and .this important result inspires us to develop a gradient descent based optimization algorithm . in each iteration, can be updated as where is a rank - one and trace - one matrix . is the search direction .it is straightforward to verify that , and hold .this is the starting point of our gradient descent algorithm . with this update strategy , the trace - one and positive semidefinteness of always retained .we show how to calculate this search direction in algorithm [ alg:2 ] .although it is possible to use subgradient methods to optimize non - smooth objective functions , we use a differentiable objective function instead so that the optimization procedure is simplified ( standard gradient descent can be applied ) .so , we need to ensure that the objective function is differentiable with respect to the variables and .let denote the objective function and be a loss function .our objective function can be rewritten as the above problem adopts the hinge loss function that is defined as .however , the hinge loss is not differentiable at the point of , and standard gradient - based optimization cam be applied directly . in order to make standard gradient descent methods applicable, we propose to use differentiable loss functions , for example , the squared hinge loss or huber loss functions as discussed below .+ the squared hinge loss function can be represented as as shown in fig .[ fig : loss ] , this function connects the positive and zero segments smoothly and it is differentiable everywhere including the point .we also consider the huber loss function in this work : where is a parameter whose value is usually between and . 
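Because the symbols of the two surrogate losses above were lost in extraction, the sketch below implements standard forms of them: a squared hinge and a one-sided Huber-style smoothing of the hinge. The argument m is the amount by which a triplet constraint is violated, and the smoothing width h is an assumed parameter in (0, 1), not the paper's calibrated value.

```python
import numpy as np

def squared_hinge(m):
    """Squared hinge: 0 for m <= 0, m**2 for m > 0; differentiable everywhere, including at 0."""
    m = np.asarray(m, dtype=float)
    return np.where(m > 0.0, m ** 2, 0.0)

def huber_hinge(m, h=0.5):
    """One-sided Huber-style smoothing of the hinge with width h.

    Zero for m < -h, quadratic on |m| <= h, linear for m > h; the three pieces
    join with matching values and derivatives, and the curve approaches the
    plain hinge as h -> 0. Unlike the squared hinge it grows only linearly for
    large violations, which penalises outliers more mildly.
    """
    m = np.asarray(m, dtype=float)
    quad = (m + h) ** 2 / (4.0 * h)
    return np.where(m < -h, 0.0, np.where(m > h, m, quad))
```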
a huber loss function with plotted in fig .[ fig : loss ] .there are three different parts in the huber loss function , and they together form a continuous and differentiable function .this loss function approaches the hinge loss curve when .although the huber loss is more complicated than the squared hinge loss , its function value increases linearly with the value of .hence , when a training set contains outliers or samples heavily contaminated by noise , the huber loss might give a more reasonable ( milder ) penalty than the squared hinge loss does .we discuss both loss functions in our experimental study .again , we highlight that by using these two loss functions , the cost function that we are going to optimization becomes differentiable with respect to both and .* initialize * : such that [ alg:0 ] [ alg:2 ]the proposed algorithm maximizes the objective function iteratively , and in each iteration the two variables and are optimized alternatively . note that the optimization in this alternative strategy retains the global optimum because is a convex function in both variables and are not coupled together .we summarize the proposed algorithm in algorithm [ alg:0 ] .note that is a scalar and line 3 in algorithm [ alg:0 ] can be solved directly by a simple one - dimensional maximization process .however , is a matrix with size of . recall that is the dimensionality of feature vectors .the following section presents how is efficiently optimized in our algorithm .let be the domain in which a feasible lies .note that is a convex set of .as shown in line 4 in algorithm [ alg:0 ] , we need to solve the following maximization problem : where is the output of line 3 in algorithm [ alg:0 ] .our algorithm offers a simple and efficient way for solving this problem by explicitly maintaining the positive semidefiniteness property of the matrix .it needs only compute the largest eigenvalue and the corresponding eigenvector whereas most previous approaches such as the method of require a full eigen - decomposition of .their computational complexities are and , respectively .when is large , this computational complexity difference could be significant .let be the gradient matrix of with respect to and be the step size for updating . recall that we update in such a way that , where and . to find the that satisfies these constraints and in the meantimecan best approximate the gradient matrix , we need to solve the following optimization problem : the optimal is exactly where is the eigenvector of that corresponds to the largest eigenvalue .the constraints says that is a outer product of a unit vector : with .here is the euclidean norm .problem can then be written as : \bf v $ ] , subject to .it is clear now that an eigen - decomposition gives the solution to the above problem . hence , to solve the above optimization, we only need to compute the leading eigenvector of the matrix .note that still retains the properties of after applying this process . 
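Pulling together the pieces described above (triplet-based margins, a differentiable surrogate loss, and an update built from a single leading eigenvector), here is a deliberately simplified sketch of the update. The squared-hinge surrogate with a unit target margin, the fixed step size (instead of the paper's backtracking line search), and all data are assumptions made only for illustration; a full `eigh` call is used for brevity where the paper needs just the leading eigenvector.

```python
import numpy as np

def mahalanobis_sq(x, y, M):
    """Squared Mahalanobis distance (x - y)^T M (x - y) for a PSD matrix M."""
    d = x - y
    return float(d @ M @ d)

def squared_hinge_grad(triplets, M, target=1.0):
    """Gradient of a squared-hinge loss over triplet margins (an assumed surrogate).

    For each triplet (a, b, c) with a, b in the same class and c in another class,
    the margin is d_M(a, c)^2 - d_M(a, b)^2 and a penalty is paid when it falls
    below `target`. Returns the gradient of the total loss with respect to M.
    """
    d = len(triplets[0][0])
    G = np.zeros((d, d))
    for a, b, c in triplets:
        margin = mahalanobis_sq(a, c, M) - mahalanobis_sq(a, b, M)
        viol = target - margin
        if viol > 0.0:
            # d(margin)/dM = (a-c)(a-c)^T - (a-b)(a-b)^T, so the loss gradient is:
            G += 2.0 * viol * (np.outer(a - b, a - b) - np.outer(a - c, a - c))
    return G

def rank_one_step(M, alpha, triplets):
    """Update M <- (1 - alpha) M + alpha v v^T along the best rank-one direction.

    v is the leading eigenvector of the negative gradient, so v v^T is the
    rank-one, trace-one PSD matrix best aligned with the descent direction;
    the convex combination keeps M positive semidefinite with unit trace.
    A full eigendecomposition is used here only for brevity; a power iteration
    computing just the leading eigenvector would suffice.
    """
    direction = -squared_hinge_grad(triplets, M)
    _w, V = np.linalg.eigh((direction + direction.T) / 2.0)
    v = V[:, -1]
    return (1.0 - alpha) * M + alpha * np.outer(v, v)

# Hypothetical toy data: two well-separated classes in 5 dimensions.
rng = np.random.default_rng(0)
X0, X1 = rng.normal(0.0, 1.0, (20, 5)), rng.normal(3.0, 1.0, (20, 5))
triplets = [(X0[i], X0[i + 1], X1[i]) for i in range(10)]
M = np.eye(5) / 5.0                      # trace-one starting point
for _ in range(20):                      # fixed step size instead of line search
    M = rank_one_step(M, alpha=0.05, triplets=triplets)
print(np.trace(M).round(3), np.linalg.eigvalsh(M).min() >= -1e-12)
```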
clearly , a key parameter of this optimization process is which implicitly decides the total number of iterations .the computational overhead of our algorithm is proportional to the number of iterations .hence , to achieve a fast optimization process , we need to ensure that in each iteration the can lead to a sufficient reduction on the value of .this is discussed in the following part .we employ the backtracking line search algorithm in to identify a suitable .it reduces the value of until the wolfe conditions are satisfied .as shown in algorithm [ alg:2 ] , the search direction is .the wolfe conditions that we use are .the result of backtracking line search is an acceptable which can give rise to sufficient reduction on the function value of .we show in the experiments that with this setting our optimization algorithm can achieve higher computational efficiency than some of the existing solvers .the goal of these experiments is to verify the efficiency of our algorithm in achieving comparable ( or sometimes even better ) classification performances with a reduced computational cost .we perform experiments on data sets described in table [ table : dataset ] . for some data sets, pca is performed to remove noises and reduce the dimensionality .the metric learning algorithms are then run on the data sets pre - processed by pca .the wine , balance , vehicle , breast - cancer and diabetes data sets are obtained from uci machine learning repository , and usps , mnist and letter are from libsvm for mnist , we only use its test data in our experiment .the orlface data is from att research and twin - peaks is downloaded from l. van der maaten s website .the face and background classes ( 435 and 520 images respectively ) in the image retrieval experiment are obtained from the caltech-101 object database . in order to perform statistics analysis ,the orlface , twin - peaks , wine , balance , vehicle , diabetes and face - background data sets are randomly split as 10 pairs of train / validation / test subsets and experiments on those data set are repeated 10 times on each split . [ cols="<,^,^,^,^,^,^,^,^",options="header " , ]we have proposed a new algorithm to demonstrate how to efficiently learn a mahalanobis distance metric with the principle of margin maximization .enlightened by the important theorem on matrix decomposition in , we have designed a gradient descent method to update the mahalanobis matrix with cheap computational loads and at the same time , the property of the learned matrix is maintained during the whole optimization process .experiments on benchmark data sets and the retrieval problem verify the superior classification performance and computational efficiency of the proposed distance metric learning algorithm .chang and c .- j .lin , `` libsvm : a library for support vector machines , '' 2001 .[ online ] .available : http : // www .tw/ cjlin / libsvmtools / datasets/[http : // www . csie .tw/ cjlin / libsvmtools / datasets/ ] l. fei - fei , r. fergus , and p. perona , `` learning generative visual models from few training examples : an incremental bayesian approach tested on 101 object categories , '' in_ workshop on generative - model based vision , in conjunction with ieee conf ._ , washington , d.c . ,july 2004 .g. r. g. lanckriet , n. cristianini , p. bartlett , l. el ghaoui , and m. i. jordan , `` learning the kernel matrix with semidefinite programming , '' _ j. mach ._ , vol . 5 , no . 1 ,pp . 2772 , dec .2004 .a. w. m. s. , m. worring , s. santini , a. gupta , and r. 
jain , `` content - based image retrieval at the end of the early years , '' _ ieee trans .pattern anal ._ , vol . 22 , no . 12 , pp .13491380 , dec . 2000 . c. shen , a. welsh , and l. wang , `` psdboost : matrix - generation linear programming for positive semidefinite matrices learning , '' in _ _ proc .neural inf . process .syst.__1em plus 0.5em minus 0.4em vancouver , canada : mit press , dec .2008 , pp . 14731480 .k. q. weinberger , j. blitzer , and l. k. saul , `` distance metric learning for large margin nearest neighbor classification , '' in _ proc . adv . neural inf . process ._ , vancouver , canada , dec .2006 , pp . 14751482 .e. p. xing , a. y. ng , m. i. jordan , and s. russell , `` distance metric learning , with application to clustering with side - information , '' in _ _ proc . adv .neural inf .syst.__1em plus 0.5em minus 0.4emvancouver , canada : mit press , dec .2003 , pp . 505512 .l. yang , r. sukthankar , and s. c. h. hoi , `` a boosting framework for visuality - preserving distance metric learning and its application to medical image retrieval , '' _ ieee trans . pattern anal ._ , vol .32 , no . 1 , jan .
for many machine learning algorithms such as -nearest neighbor ( -nn ) classifiers and -means clustering , often their success heavily depends on the metric used to calculate distances between different data points . an effective solution for defining such a metric is to learn it from a set of labeled training samples . in this work , we propose a fast and scalable algorithm to learn a mahalanobis distance metric . the mahalanobis metric can be viewed as the euclidean distance metric on the input data that have been linearly transformed . by employing the principle of margin maximization to achieve better generalization performances , this algorithm formulates the metric learning as a convex optimization problem and a positive semidefinite ( ) matrix is the unknown variable . based on an important theorem that a trace - one matrix can always be represented as a convex combination of multiple rank - one matrices , our algorithm accommodates any differentiable loss function and solves the resulting optimization problem using a specialized gradient descent procedure . during the course of optimization , the proposed algorithm maintains the positive semidefiniteness of the matrix variable that is essential for a mahalanobis metric . compared with conventional methods like standard interior - point algorithms or the special solver used in large margin nearest neighbor ( lmnn ) , our algorithm is much more efficient and has a better performance in scalability . experiments on benchmark data sets suggest that , compared with state - of - the - art metric learning algorithms , our algorithm can achieve a comparable classification accuracy with reduced computational complexity . large - margin nearest neighbor , distance metric learning , mahalanobis distance , semidefinite optimization .
the problems posed by dune mobility have been solved in practice using different techniques .small dunes can be mechanically flattened so that sand moves as individual grains rather than as a single body . however , such methods are too expensive for large dunes .these can be immobilized covering them with oil or by the erection of fences .these solutions have the drawback of not providing a long term protection since the sand remains exposed . to overcome this shortcoming ,a suitable solution is to vegetate the sand covered areas in order to prevent sediment transport and erosion .this is particularly important for coastal management where a strong sand transport coexists with favorable conditions for vegetation growth .the stabilization of mobile sand using vegetation is an ancient technique .this method has been used with excellent results on coastal dunes in algeria , tunisia , north america , united kingdom , western europe , south africa , israel among others .vegetation tries to stabilize sand dunes , preventing sand motion and stimulating soil recovery .recently , we developed a mathematical description of the competition between vegetation growth and sand transport .this model was capable to reproduce the transformation of active barchan dunes into parabolic dunes under the action of vegetation growth as can be found in real conditions . since numerical simulations are orders of magnitude faster than the real evolution , we are able to study the entire inactivation process and to forecast thousands of years of real evolution .parabolic dunes are vegetated dunes that , when active , migrate along the prevailing wind direction .they arise under uni - directional wind and in places partially covered by plants and have a typical shape with the ` nose ' pointing downwind and the two arms pointing upwind , contrary to barchan dunes where the horns point downwind ( fig . 1 ) .vegetation covers most of the arms of parabolic dunes and a fraction of their nose depending on the activation degree of the dune , i.e. how fast the dune moves .an active parabolic dune has a sandy nose ( fig . 1 , left ) while an inactive one is almost totally covered by plants ( fig . 1, right ) .plants are typically placed along the lee size of the dune , which is protected from wind erosion . on the contrary ,the interior side exposed to the wind is devoid of vegetation .there , erosion is strong enough to prevent vegetation growth .= 0.5 mm the migrating velocity of parabolic dunes is several times smaller than that of barchan dunes , and in general , they have an intermediate shape between fully active crescent dunes , like barchans , and completely inactive parabolic dunes . 
the activation degree of the parabolic dune is characterized by the vegetation cover pattern over it , which gives information about the areas of sand erosion and deposition pattern responsible for the motion of the dune .in this work we study the degree of activation of parabolic dunes in north - eastern brazil by direct measurements of the vegetation cover on them .we further present a method to extend the local information about vegetation cover to the whole dune by comparing the measured vegetation density cover with the gray scale level of high resolution satellite images .the empirical vegetation cover is finally quantitatively compared to the numerical solution of an established model for sand transport coupled with vegetation growth .along all the coast of the province of cear in the north - east of brazil ( fig .2 ) , sand dunes are totally or partially stabilized by vegetation . on one hand , the humid climate of the region , with intense precipitation during the rain season from february to july ( fig .3a ) , amplifies the role of vegetation as an active agent in the sandy landscape evolution . on the other hand , the ubiquity of beaches plenty of sediments combined with a strong and highly uni - directional ese coastal wind ( fig .3b ) , create favorable conditions for the evolution of crescent sand dunes . furthermore , since the wind is stronger during the dry season ( fig .3 ) , both processes , the aeolian sediment transport and the biomass production , lead to a competing effect that completely reshapes the coastal landscape by the development of parabolic dunes through the inversion of barchans and their further deactivation by the vegetation growth . in order to obtain information about the distribution of vegetation over coastal dunes , we went to fortaleza during the rain season to measure the shape of some parabolic dunes and the vegetation that covers them ( fig . 4 ) .these dunes are located in iguape , on the east of fortaleza , and pecem , taiba and paracuru , on the west ( fig.2 ) .the geographical coordinates of all points are recorded with gps and inserted in the digital dune map . by using satellite landsat images and a topographic map we were able to select parabolic dunes with different degrees of inactivation and vegetation cover density .figures 4a , b , c , and d , show , in order of activation , the four measured parabolic dunes on the west coast of fortaleza , while figs .4e , f , and g , show the other three dunes from the iguape region , on the east coast . in general , the most active ones were located in taiba and iguape ( shown in figs .4d and g , respectively ) , while those in paracuru ( figs .4a and b ) were among the most inactive ones .since plants locally slow down the wind they can inhibit sand erosion as well as enhance sand accretion , as it is shown in fig . 5 . this dynamical effect exerted by plants on the wind , which is characterized by the drag force acting on them , is mainly determined by the frontal area density , where is the total plant frontal area , i.e. the area facing the wind , of vegetation placed over a given sampling area .furthermore , the vegetation cover over a dune is defined by the basal area density , where is the total plant basal area , i.e. the area covering the soil , on .= 0.5 mm the distinction between both densities is crucial for the modeling of the vegetation effect over wind strength and thus sand transport . 
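As a small illustration of the two densities just defined, the sketch below sums per-plant frontal and basal areas over a sampling area. The field names and numbers are hypothetical, not field data from the paper.

```python
def vegetation_densities(plants, sampling_area_m2):
    """Frontal density = (sum of plant frontal areas) / sampling area,
    basal (cover) density = (sum of plant basal areas) / sampling area.

    `plants` is a list of dicts with per-plant frontal and basal areas in m^2;
    the field names are illustrative only.
    """
    total_frontal = sum(p["frontal_area_m2"] for p in plants)
    total_basal = sum(p["basal_area_m2"] for p in plants)
    return total_frontal / sampling_area_m2, total_basal / sampling_area_m2

# Hypothetical 2 m x 2 m quadrat with three plants.
quadrat = [
    {"frontal_area_m2": 0.05, "basal_area_m2": 0.12},
    {"frontal_area_m2": 0.02, "basal_area_m2": 0.06},
    {"frontal_area_m2": 0.08, "basal_area_m2": 0.20},
]
lam, rho = vegetation_densities(quadrat, sampling_area_m2=4.0)
print(f"frontal density = {lam:.3f}, cover density = {rho:.3f}")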
from fig .5 it is clear that plants act as obstacles that absorb part of the momentum transferred to the soil by the wind . as a result ,the total surface shear stress can be divided into two components , a shear stress acting on the vegetation and a shear stress acting on the sand grains . when plants are randomly distributed and the effective shelter area for one plant ( see fig .5 ) is proportional to its frontal area , the absorbed shear stress is proportional to the vegetation frontal area density times the undisturbed shear . using thisit has been proposed that the fraction of the total stress acting on sand grains is given by where is the ratio of plant to surface drag coefficients and the constant is a model parameter that accounts for the non - uniformity of the surface shear stress .the term arises from the relation between the sandy and the total area . although and can be calculated from direct measurements of the plants , in order to estimate the parameters and in eq .[ taus ] we need a far more complex procedure since in this case we have to measure the drag forces acting on the plants and the shear stresses on the sand surface with and without plants .we visited the field location during the rain season when winds are exceptionally weak .the vegetation cover over any of the measured dunes includes at least six different species .we focused on measuring the static quantities like densities , rather than the dynamic ones like and . in the section regarding the numerical solution of the model we use reported values from the literature for these two parameters basal and frontal area density , and , can be indirectly estimated from the local number , basal area and frontal area of each species of plants over a characteristic area on the dune figure 6 shows an sketch illustrating the local basal and frontal area of a given plant .= 0.5 mm in order to measure the basal and frontal vegetation area density over the dunes shown in fig . 4 ,which are mainly covered by grass , we select five to ten points along the longitudinal and transversal main axes of the parabolic dune ( red dots in fig .4 ) . on every pointwe identify each plant the number of times it appears in a study area m and measure their characteristic length , height , number of leafs and leaf area ( fig . 7 , left ). some species of the typical vegetation we found are shown on the right side of fig .7 . 
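Returning to the shear-stress partition referenced above: because its symbols were lost from the text, the function below implements a generic Raupach-type partition of the kind used in vegetated-dune modelling, not the paper's exact equation. The drag-coefficient ratio beta and the non-uniformity parameter m are placeholder values, and both the functional form and the numbers should be read as assumptions.

```python
def grain_stress_fraction(rho, lam, beta=150.0, m=0.5):
    """Assumed Raupach-type estimate of the shear-stress fraction carried by the grains.

    tau_grains / tau = 1 / ((1 - m * rho) * (1 + m * beta * lam)), valid for m * rho < 1:
    the first factor accounts for the relation between sandy and total area (basal
    cover rho), the second for the momentum absorbed by the plants (frontal density
    lam, scaled by the plant-to-surface drag ratio beta). Not the paper's verbatim
    formula; beta and m are illustrative literature-style values.
    """
    if m * rho >= 1.0:
        raise ValueError("partition assumed valid only for m * rho < 1")
    return 1.0 / ((1.0 - m * rho) * (1.0 + m * beta * lam))

# Example: a moderately vegetated patch.
print(round(grain_stress_fraction(rho=0.3, lam=0.15), 3))
```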
using the morphological information of each species we measure the fraction of the total leaf area of each plant that covers the soil , and the fraction that faces the wind .in general , we found the same qualitative distribution of plants on all measured parabolic dunes ( fig .the area between the arms of the dune is totally covered by plants , while their density reduces on the windward side where sand erosion is very strong and increases once again on the lee side , where most of the sand deposition occurs .by using the collected vegetation data we were able to calculate , through eqs .( [ rho ] ) and ( [ lambda ] ) , the basal and frontal plant density at some particular points on the dunes .the first interesting result is that both densities are proportional to each other ( fig .9 ) with a proportionality constant with a reasonable dispersion in spite of the different dunes and types of vegetation .therefore , the plant basal density , also called cover density , , can be used to characterize the interaction between vegetation and the wind strength given by eq .this result agrees with measurements on creosote bush reported in wyatt et al .they also found the same value of besides the enormous differences in the vegetation type , from bush in their case to grass in ours . and the frontal area density .the proportionality constant of the fit ( solid line ) is .each point represents a different sampling area on the dunes situated in iguape , shown in fig . 4 bottom , ( ) and the others dunes , fig .4 top , ( ).,scaledwidth=70.0% ] in order to estimate the inactivation degree of the whole parabolic dune , based only on the plant cover density , we have to extend the few measured values of to the full dune body .the gray - scale of the satellite image ( with a resolution 0.6 m / pixel ) suggests that vegetation density determines the image darkness .therefore , a crude approximation for the cover density at the dune is obtained by relating the density cover to its normalized image gray - scale value .this value is defined as where is the gray - scale value of a given point and and are defined by the normalization conditions and respectively .these normalization conditions are obtained from those points in the image that we know are either bare sand or fully covered with plants . andthe normalized gray - scale value from the satellite image .each point represents another sampling area and the plots correspond to : ( a ) the three dunes from iguape ( fig . 4,e , f and g ) , ( b ) the two dunes from paracuru , ( ) ( fig .4a ) and ( ) ( fig .4b ) , ( c ) the dune from pecem ( fig .4d ) and ( d ) the dune from taiba ( fig .the fit parameter in eq .[ c ] has the values 0.8 , 0.22 , 0.08 and 0.18 , respectively.,scaledwidth=100.0% ] figure 10 shows a clear correlation between both and for each image . by assuming that the cover density decreases linearly with and that is symmetric with respect to the main diagonal , we propose the fitting curve where the fitting parameter changes for different images due to the alteration of the respective gray - scale . with equation ( [ c ] )we can estimate the density cover over the whole parabolic dune .figure 11 shows the resulting density cover calculated from the gray - scale of the images in fig .yellow ( light ) represents free sand , while dark green ( dark ) represents total cover , and thus , total inactivation . 
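The gray-scale normalization described above can be sketched as follows. The reference levels are gray values sampled at points known to be fully vegetated and bare sand; the final mapping from normalized gray value to cover density is only a linear placeholder, since the one-parameter fitting curve of the paper is not reproduced here.

```python
import numpy as np

def normalized_gray(gray, g_full_cover, g_bare_sand):
    """Normalized gray value: 0 at the fully vegetated (dark) reference level,
    1 at the bare-sand (bright) reference level, clipped to [0, 1].
    """
    g = (np.asarray(gray, dtype=float) - g_full_cover) / (g_bare_sand - g_full_cover)
    return np.clip(g, 0.0, 1.0)

def cover_density_from_gray(norm_gray):
    """Placeholder map from normalized gray value to an estimated cover density.

    A simple linear decrease is used here; the paper instead fits a one-parameter
    curve to the field measurements, whose exact form is not reproduced.
    """
    return 1.0 - np.asarray(norm_gray, dtype=float)

# Hypothetical 8-bit gray levels: 60 at a fully vegetated point, 220 on bare sand.
patch = np.array([[220, 180, 120], [100, 75, 60]])
rho_map = cover_density_from_gray(normalized_gray(patch, g_full_cover=60, g_bare_sand=220))
print(rho_map.round(2))
```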
with the help ofthe color - scale one identifies the zones where sand transport occurs .the windward side in the interior part is the most active part of the dune , as consequence of the erosion that prevents plants to grow . on the contrary, plants apparently can resist sand deposition since they accumulate clearly at the lee side and the crest of the dune .= 0.5 mm another important conclusion is that the degree of activation of a dune apparently depends on its distance from the place where it was born .the three dunes in iguape ( fig .11 ) are deactivated according to their distance from the sea shore , the most active being the one nearest to the sea ( fig .11 , bottom right ) .similarly , the same occurs with other studied parabolic dunes .the numerical study of parabolic dunes has recently experienced new developments based on cellular automaton models .furthermore , we have recently proposed a continuum approach for the study of the competition process between vegetation growth and sand erosion . as was stated in the previous section, plants can locally slow down the wind , reducing erosion and enhancing sand accretion . on the other hand ,sand is eroded by strong winds denuding the roots of the plants and increasing the evaporation from deep layers . as a result, there is a coupling between the evolution of the sand surface and the vegetation that grows over it , controlled by the competition between the reduction of sand transport rate due to plants and their capacity to survive sand erosion and accretion ( the full model is presented in the appendix ) .= 0.5 mm in order to compare with measurements we perform numerical calculations of parabolic dunes emerging from two different initial conditions .figure 12a , shows the parabolic dune resulting from the evolution of a barchan dune under active vegetation growth , while in fig .12b the parabolic dune emerges from the evolution of a blow - out , i.e. a spot of bare sand within a vegetated surface , which are common in coastal systems .from fig . 12 , it is clear that for both initial conditions a parabolic dune evolves with the characteristic vegetated arms pointing upwind and a sandy windward side .moreover , the convex and heavily eroded windward side finishes in a cut - edge vegetated crest , in sharp contrast with the barchan dune where the windward side is concave and the crest smoothly rounded . as can be seen in fig . 12 , it is in the morphology , and not the vegetation distribution , where the two parabolic dunes evolving from different initial conditions differ most .although the dunes are of similar sizes , the windward side of the blow - out parabolic dune is more than twice the size of the windward side of the barchan - born parabolic dune .furthermore , the blow - out dune is more elongated than the corresponding barchan - born one and concentrates a higher sand volume in its ` nose ' .this can be understood from their respective evolutions : on one hand , the former dune is growing from a spot of bare sand in the vegetated surface and its volume is increasing due to the sediment that is continuously added on the vegetated surface .on the other hand , the barchan - born parabolic dune has a total sand volume fixed from the beginning and since this volume is distributed over a growing surface , the parabolic dune is continuously shrinking , particularly its ` nose ' . 
after comparing the simulated parabolic dunes ( fig .12 ) with the real ones from brazil ( fig .the similarity between the dune emerging from a blow - out and the measured ones seems evident .figure 13 depicts one dune from iguape and a simulated blow - out in their color - scale ( fig .13a and c ) and in the gray - scale corresponding to the quantitative vegetation cover density ( fig .13b and d ) .although the real dune has twice the size of the measured one , their morphology is very similar .examples of their similarities are : their relative length and width ; the position of the slip - face , that is sharper along the windward side than in the ` nose ' , and the gentile slope in the windward side and the front of the ` nose ' compared to the step lee side .regarding the distribution and density of the vegetation , both dunes also share strong similarities ( fig .13b and d ) .the distribution of the vegetation is clearly divided into two regions , the windward side almost devoid of plants and the vegetated lee side ( fig .as was discussed in the previous sections , this is a direct consequence of the competition between sand transport and vegetation growth , while on the heavily eroded windward side plant roots are uncovered and dried , on the lee side they survive sand accretion ( see fig .= 0.5 mm = 0.5 mm however , although the distribution of vegetation is qualitatively similar on both dunes , there is an important quantitative difference regarding the vegetation cover density : on the lee side of the simulated dune ( fig .13d ) the vegetation density reaches a far higher value , in fact very close to one , than in the real dune ( fig .this could be consequence of the way the effect of the vegetation on the sand transport is modeled .we assume that the shear stress partition , i.e. the wind slow down , is the only way plants can affect the sand transport , but there are other effects .for instance , plants act as physical barriers for the saltating grains that are carried by the wind .plants can trap them by direct collisions as well as by reducing the wind stress . in this case, a given vegetation density can be more effective in avoiding soil erosion than when one only considers the wind slow down .this can explain why the lee side on both dunes can have the same protecting role while having different vegetation density cover .we presented measurements on real parabolic dunes along the coast of brazil , concerning their shape and the vegetation cover on them .the vegetation cover over a dune was estimated by the number and size of plants in a characteristic area of the dune .to do so we identify the species of plants present on the dune and count the number of times they appear in the study area and measure their characteristic length , height and total leaf area . by using the vegetation data we were able to calculate the plant cover density at particular points on the dunes and compare it with the gray scale of the satellite image .doing so we found a relation between both the vegetation cover density and the image gray - scale which leads us to estimate the density cover on the whole parabolic dune .then we compare the empirical vegetation cover data with the result of our simulations to validate quantitatively the distribution of vegetation over the dune .we conclude that the model indeed captures the essential aspects of the interaction between the different geomorphological agents , i.e. 
the wind , the surface and the vegetation .furthermore , it gives arguments in favor of the possible origin of parabolic dunes on the brazilian coast as coming from a blow - out , rather than from an early active coastal barchan dune system . through thiswe could estimate the degree of inactivation of the dune and reconstruct the previous dune history .we wish to thank labomar in fortaleza for the kind help during the field work .this study was supported by the volkswagenstiftung , the max plank prize and the dfg .the vegetated dune model consists of a system of continuum equations in two space dimensions that combines a description of the average turbulent wind shear force above the dune including the effect of vegetation , a continuum saltation model , which allows for saturation transients in the sand flux , and a continuum model for vegetation growth .the model can be sketched as follows : ( i ) first , the wind over the surface is calculated with the model of that describes the perturbation of the shear stress due to a smooth hill or dune .the fourier - transformed components of this perturbation are where x and y mean , respectively , parallel and perpendicular to the wind direction , , and are modified bessel functions , and and are the components of the wave vector , i.e. the coordinates in fourier space . is the fourier transform of the height profile , is the vertical velocity profile which is suitably non - dimensionalized , is the depth of the inner layer of the flow , and is the aerodynamic roughness . is a typical length scale of the hill or dune and is given by the mean wavelength of the fourier representation of the height profile .( ii ) next , the effect of the vegetation over the surface wind -shear stress partitioning- is calculated by eq .[ taus ] , which gives the fraction of total stress acting on the grains .( iii ) the sand flux is calculated using the shear velocity , where is the air density , with the equation where ] is called saturation length ; is the minimal threshold shear velocity for saltation and is gravity , while and are empirically determined model parameters and the mean grain velocity at saturation , , is calculated numerically from the balance between the forces on the saltating grains ; ( iv ) the change in surface height is computed from mass conservation : , where is the bulk density of the sand ; ( v )if sand deposition leads to slopes that locally exceed the angle of repose , , the unstable surface relaxes through avalanches in the direction of the steepest descent , and the separation streamlines are introduced at the dune lee .each streamline is fitted by a third order polynomial connecting the brink with the ground at the reattachment point , and defining the separation bubble " , in which the wind and the flux are set to zero .( vi ) finally , the vegetation growth rate is calculated from the surface change using the phenomenological equation where it is assumed that a plant of height can grow up to a maximum height with an initial rate . to close the model , the basal area density introduced in eq .[ taus ] is just , and the frontal area density as fig .the model is evaluated by performing steps i ) through vi ) computationally in an iterative manner .anthonsen , k.l . ,clemmensen , l.b . ,jensen , j.h .evolution of a dune from crescentic to parabolic from in response to a short - term climatic changes - rabjerg - mile , skagen - odde , denmark .geomorphology , 17:63 - 77 .
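Step (vi) of the appendix couples plant growth to the local change of the sand surface. The explicit-Euler sketch below uses a logistic-style growth law limited by a maximum plant height and reduced by the magnitude of local erosion or accretion; because the symbols of the phenomenological equation were lost in extraction, this form and the parameter values are reconstructions in the spirit of vegetated-dune models and should be read as assumptions rather than the paper's exact equation.

```python
import numpy as np

def grow_vegetation(h_veg, dh_sand_dt, v_init, h_max, dt):
    """One explicit-Euler step of an assumed vegetation growth law.

    dh_veg/dt = v_init * (1 - h_veg / h_max) - |dh_sand/dt|
    Plants grow towards a maximum height h_max at an initial rate v_init and are
    penalised by local surface change (erosion or burial). The exact equation and
    parameter values used in the paper are not reproduced here.
    """
    growth = v_init * (1.0 - h_veg / h_max) - np.abs(dh_sand_dt)
    return np.clip(h_veg + dt * growth, 0.0, h_max)

# Hypothetical values: 12 m/yr initial growth rate, 1 m maximum plant height,
# one patch eroding at 5 m/yr and one with a stable surface.
h_veg = np.array([0.2, 0.2])
dh_dt = np.array([5.0, 0.0])
print(grow_vegetation(h_veg, dh_dt, v_init=12.0, h_max=1.0, dt=0.01))
```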
in this work we present measurements of vegetation cover over parabolic dunes with different degrees of activation along the north - eastern brazilian coast . we are able to extend the local values of the vegetation cover density to the whole dune by correlating measurements with the gray - scale levels of a high resolution satellite image of the dune field . the empirical vegetation distribution is finally used to validate the results of a recent continuum model of dune motion coupling sand erosion and vegetation growth . coastal morphology , parabolic dunes , dune deactivation
finding image representations with a dimensionality reduction while maintaining relevant information for classification , remains a major issue .effective approaches have recently been developed based on locally orderless representations as proposed by koendering and van doom .they observed that high frequency structures are important for recognition but do not need to be precisely located .this idea has inspired a family of descriptors such as sift or hog , which delocalize the image information over large neighborhoods , by only recording histogram information .these histograms are usually computed over wavelet like coefficients , providing a multiscale image representation with several wavelets having different orientation tunings .this paper introduces a new geometric image representation obtained by grouping coefficients that have co - occurrence properties across an image class .it provides a locally orderless representation where sparse descriptors are delocalized over groups which optimize the coefficient co - occurrences , and can be interpreted as a form of parcellization .section [ geom - sec ] reviews wavelet image representations and the notion of sparse geometry through significant sets .section [ mixt - sec ] introduces our co - occurrence grouping model which is optimized with a maximum likelihood approach .groups are computed from a training sequence in section [ bandfosec ] , using a bernoulli mixture approximation .applications to face image compression are shown in section [ comp - sec ] and the application of this representation is illustrated for mnist image classifications in section [ mnist - sec ] .sparse signal representations are obtained by decomposing signals over bases or frames which take advantage of the signal regularity to produce many zero coefficients .a sparse representation is obtained by keeping the significant coefficients above a threshold , the original signal can be reconstructed with a dual family , and the resulting sparse approximation is .wavelet transforms compute signal inner products with several mother wavelets having a specific direction tuning , and which are dilated by and translated by : .separable wavelet bases are obtained with mother wavelets , in which case the total number of wavelets is equal to the image size .let be the cardinal of the set . in absence of prior information on ,the number of bits needed to code in is .one can also verify that the number of bits required to encode the values of coefficients in is proportional to and is smaller than so that the coding budget is indeed dominated by which carries most of the image information .in a supervised classification problem , a geometric model defines a prior model of the probability distribution . there is a huge number of subsets in . 
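To illustrate the sparse representation just described, the sketch below computes a 2-d wavelet decomposition and keeps the coefficients whose magnitude exceeds a threshold, which yields the binary significance map used in the rest of the section. It assumes the PyWavelets package; the wavelet family, decomposition depth and threshold are placeholders, since the text only requires some sparse wavelet-like transform.

```python
import numpy as np
import pywt  # assumption: PyWavelets is available

def significance_map(image, threshold, wavelet="db2", level=3):
    """Binary map of the wavelet coefficients whose magnitude exceeds `threshold`.

    The wavelet family, decomposition depth and threshold are illustrative choices.
    """
    coeffs = pywt.wavedec2(image, wavelet=wavelet, level=level)
    arr, _slices = pywt.coeffs_to_array(coeffs)
    return np.abs(arr) > threshold

# Hypothetical toy image: a smooth ramp plus a small bright square.
img = np.outer(np.linspace(0, 1, 64), np.linspace(0, 1, 64))
img[20:28, 20:28] += 1.0
s_map = significance_map(img, threshold=0.5)
print(s_map.sum(), "significant coefficients out of", s_map.size)
```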
estimating the probability from a limited training set thus requires using a simplified prior model .a signal class is represented by a random vector whose realizations are within the class and whose significance sets are included in .a mixture model is introduced with co - occurrence groups of constant size , which define a partition of the overall index set co - occurrence groups are optimized by enforcing that all coefficients have a similar behavior in a group and hence that is either almost empty or almost equal to with a high probability .the mixture model assumes that the distributions of the components are independent .the distribution is assumed to be uniform among all subsets of of cardinal .let be its distribution , this co - occurrence model is identified with a maximum log - likelihood approach which computes a training sequence of images that belong to a class , we optimize the group co - occurrence by approximating the maximum likelihood with a bernoulli mixture .let be the significant set of .the log likelihood is calculated with the maximization of this expression is obtained using the stirling formula which approximates the first term by the entropy of a bernoulli distribution .let us write and , the bernoulli probability distribution associated to .let us specify the groups by the inverse variables such that .it results that the distribution is generally unknown and must therefore be estimated .the estimation is regularized by approximating this distribution with a piecewise constant distribution over a fixed number of quantization bins , that is small relatively to the number of realizations . the likelihood ( [ conasdnwosn ] )is thus approximated by a likelihood over the bernoulli mixture , which is optimized over all parameters : the following algorithm , minimizes ( [ conasdnwosn3 ] ) by updating separately the bernoulli parameters , the distribution and the grouping variables .the minimization algorithm begins with a random initialization of groups of same size .the empirical histograms are initialized to uniform distributions .the algorithm iterates the following steps : * step 1 : given and compute which minimizes ( [ conasdnwosn3 ] ) by minimizing * step 2 : update to minimize ( [ conasdnwosn3 ] ) as the normalized histogram of the updated parameters over a predefined number of bins . 
*step 3 : update the group indexes to minimize ( [ conasdnwosn3 ] ) by minimizing for groups of constant size .this algorithm is guaranteed to converge to a local maxima because each step further increases the log - likelihood .in fact , it is the equivalent of the -means algorithm adapted to the mixture model considered here .to illustrate the efficiency of this grouping strategy , it is first applied to the compression of face images that have been approximately registered .a database of 170 face images were used for training and a different set of 30 face images were used for testing .figure [ cooc ] shows the optimal co - occurrence groups obtained over wavelet coefficients by applying the maximum log - likelihood algorithm on the training set .the encoding cost of the significance map using the optimized model is equal to minus the log - likelihood of this model .figure [ bitrate ] shows the evolution of the average bit budget needed to encode the significance maps with the bernoulli mixture over optimized co - occurrence groups , depending upon the groups size .the optimal group size which maximizes the log - likelihood and hence minimizes the encoding cost over all group sizes is . as a function of . _dashed _ : bit rate ( equal to minus the log likelihood , in bits per pixel ) using the optimal groups of size as a function of . ] when is equal to the image size , there is a single group and the encoding is thus equivalent to a standard image coding using no prior information on the class . the bit rateis also compared with a bernoulli mixture computed with a partition into square groups , as a function of .figure [ bitrate ] shows that the optimized co - occurrence grouping improves the bit rate by 20 % relatively to the case where there is a single group , and also with respect to the fixed square groups , which means that the optimal grouping provides a geometric information which is stable across the image class .the optimal group size also gives an estimation of the image deformations that are due to variations of scaling and eye positions and to intrinsic variations of faces in the database .this section shows the classification ability of our geometric representation despite the presence of strong variability in the images .the test is performed using the standard mnist database of digits .this database is relatively simple and without any modification of the image representation an svm classifier can reach of error with a training set of 60,000 images .this section shows that our geometric co - occurence model can learn with much less training elements and for more complex images . to take into account texture variation phenomena , which are a central difficulty for geometric models , a white noise texture is introduced .a digit image ] where $ ] is a normalized gaussian white noise .the significance maps of these digits are simply obtained with a thresholding as shown in figure [ mnist_figure ] .it yields a binary image with a low density binary texture on the digit background and high density texture on the digit support .visually , the digit is still perfectly recognizable despite the texture variability . with training images an svm with a polynomial kernel yields a very low recognition rate of * 21% * on a different set of test images .figure [ mnist_figure ] shows the optimal co - occurrence groups of size computed with the minimization algorithm of section [ bandfosec ] . 
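Since the optimization above is described as a k-means-style alternation, the sketch below uses an intentionally simplified proxy: coefficients are clustered by their binary activation profiles across the training images. The equal group sizes and the Bernoulli-mixture histogram of the actual algorithm are omitted, and scikit-learn is an assumed dependency.

```python
import numpy as np
from sklearn.cluster import KMeans  # assumption: scikit-learn is available

def cooccurrence_groups(sig_maps, n_groups, seed=0):
    """Group wavelet coefficients with similar on/off behaviour across a class.

    `sig_maps` is an (n_images, n_coefficients) binary array of significance maps.
    Each coefficient is described by its activation profile over the training
    images (one column), and the profiles are clustered with k-means. This is a
    simplified stand-in for the constrained maximum-likelihood grouping above.
    """
    profiles = np.asarray(sig_maps, dtype=float).T          # one row per coefficient
    km = KMeans(n_clusters=n_groups, n_init=10, random_state=seed).fit(profiles)
    return km.labels_                                        # group index per coefficient

# Hypothetical toy class: 40 "images" of 128 coefficients, where the first 32
# coefficients tend to switch on and off together.
rng = np.random.default_rng(0)
maps = rng.random((40, 128)) < 0.1
on = rng.random(40) < 0.5
maps[:, :32] = on[:, None]
print(np.bincount(cooccurrence_groups(maps, n_groups=4)))
```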
despite the geometric variability ,the algorithm is able to extract co - occurrence groups that do correspond to the digit structures and their deformations . to each digit ,corresponds an optimized co - occurrence grouping .let be the likelihood of the significance map of with the grouping model .an svm classifier is trained on the feature vector , of dimension with groups of size . with training imagesthis classifier yields a recognition rate of * 9% * on a different set of test images . a simple maximum likelihood classifier ( map ) associates to each test image the digit class with training examples , this simple classifier yields a recognition rates of 18% for random digits , which is already better than the svm applied on the original pixels .this paper introduces a new approach to define the geometry of a class of images computed over a sparse representation , using co - occurrence groups .these co - occurrence groups are computed with a maximum log likelihood estimation calculated over optimized bernoulli mixture model .an algorithm is introduced to optimize the group computation .the application to face image compression shows the efficiency of this encoding approach , and the ability to compute co - occurrence groups that provide stable information across the class .a classification test is performed over textured digits , which shows that the algorithm can take into account texture geometry and provide much better classification rates than a standard pixel based image representation .b. thirion , g. flandin , p. pinel , a. roche , p. ciuciu , and j .- b . dealing with the shortcomings of spatial normalization : multi - subject parcellation of fmri datasets " .brain mapp . , 27(8):678 - 693 , aug .s. mallat , a wavelet tour of signal processing , the sparse way " , academic press , 3rd edition , 2008 .
a geometric model of sparse signal representations is introduced for classes of signals . it is computed by optimizing co - occurrence groups with a maximum likelihood estimate calculated with a bernoulli mixture model . applications to face image compression and mnist digit classification illustrate the applicability of this model .
[ [ data ] ] * data * + + + + + + the state of oklahoma ( usa ) has recently experienced a dramatic surge in seismic activity that has been correlated with the intensification of waste water injection . here, we focus on the particularly active area near guthrie ( oklahoma ) . in this region , the oklahoma state geological survey ( ogs ) cataloged 2021 seismic events from 15 february 2014 to 16 november 2016 ( see figure [ fig : mapevents ] ) .their seismic moment magnitudes range from -0.2 to 5.8 .we use the continuous ground velocity records from two local stations gs.ok027 and gs.ok029 ( see figure [ fig : mapevents ] ) .gs.ok027 was active from 14 february 2014 to 3 march 2015 .gs.ok029 was deployed on 15 february 2014 and has remained active since .signals from both stations are recorded at 100 hz on 3 channels corresponding to the three spatial dimensions : hhz oriented vertically , hhn oriented north - south and hhe oriented west - east .[ [ generating - location - labels ] ] * generating location labels * + + + + + + + + + + + + + + + + + + + + + + + + + + + + we partition the 2021 earthquakes into 6 geographic _clusters_. for this we use the k - means algorithm , with the euclidean distance between epicenters as the metric .the centrods of the clusters we obtain define 6 areas on the map ( figure [ fig : mapevents ] ) .any point on the map is assigned to the cluster whose centrod is the closest ( i.e. , each point is assigned to its vorono cell ) .we find that 6 clusters allow for a reasonable partition of the major earthquake sequences .our classification thus contains 7 labels , or _ classes _ in the machine learning terminology : class 0 corresponds to seismic noise without any earthquake , classes 1 to 6 correspond to earthquakes originating from the corresponding geographic area . [ [ extracting - windows - for - classification ] ] * extracting windows for classification * + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + we divide the continuous waveform data into monthly _streams_. we normalize each stream individually by subtracting the mean over the month and dividing by the absolute peak amplitude ( independently for each of the 3 channels ) .we extract two types of 10 second long _ windows _ from these streams : windows containing events and windows free of events ( i.e. containing only seismic noise ) . to select the event windows and attribute their geographic cluster , we use the catalogs from the ogs .together , gs.ok027 and gs.ok029 yield 2918 windows of labeled earthquakes for the period between 15 february 2014 and 16 november 2016 .we look for windows of seismic noise in between the cataloged events .because some of the low magnitudes earthquakes we wish to detect are most likely buried in seismic noise , it is important that we reduce the chance of mislabeling these events as noise .this is why we use a more exhaustive catalog created by to select our noise examples .this catalog covers the same geographic area but for the period between 15 february and 31 august 2014 only and does not locate events .this yields 831,111 windows of seismic noise . 
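The geographic labelling described above can be sketched as follows, assuming scikit-learn is available; the epicenter coordinates below are synthetic stand-ins for the 2021 cataloged events.

```python
import numpy as np
from sklearn.cluster import KMeans  # assumption: scikit-learn is available

def make_location_labels(epicenters, n_areas=6, seed=0):
    """Partition cataloged epicenters into geographic areas with k-means.

    Returns the fitted model (whose cells define classes 1..n_areas; class 0 is
    reserved for noise windows) and the per-event class labels.
    """
    km = KMeans(n_clusters=n_areas, n_init=10, random_state=seed).fit(epicenters)
    return km, km.labels_ + 1          # shift so that 0 can mean "noise"

# Hypothetical (longitude, latitude) pairs near Guthrie, Oklahoma.
rng = np.random.default_rng(0)
epicenters = rng.normal(loc=[-97.4, 35.9], scale=0.05, size=(200, 2))
km, labels = make_location_labels(epicenters)
print(np.bincount(labels))             # number of events assigned to each area
```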
[[ trainingtesting - split ] ] * training / testing split * + + + + + + + + + + + + + + + + + + + + + + + + we split the windows dataset into two independent sets : a test set and a training set .the test set contains all the windows for july 2014 ( 209 events and 131,072 windows of noise ) while the training set contains the remaining windows .[ [ dataset - augmentation ] ] * dataset augmentation * + + + + + + + + + + + + + + + + + + + + + + deep classifiers like ours have many trainable parameters .they require a large amount of examples of each class to ovoid overfitting and generalize correctly to unseen examples . to build a large enough dataset of events , we use streams recorded at two stations ( gsok029 and gsok27 , see figure s3 ) .the input of our network is a single waveform recorded at either of these stations .furthermore , we generate additional event windows by perturbing existing ones with zero - mean gaussian noise .this balances the number of event and noise windows during training , a strategy to regularize the network and prevent overfitting .[ [ convnetquake ] ] * convnetquake * + + + + + + + + + + + + + + our model is a deep convolutional network ( figure [ fig : network ] ) .it takes as input a window of 3-channel waveform data and predicts its label ( noise or event , with its geographic cluster ) .the parameters of the network are optimized to minimize the discrepancy between the predicted labels and the true labels on the training set ( see the methods section for details ) . [[ detection - accuracy ] ] * detection accuracy * + + + + + + + + + + + + + + + + + + + + in a first experiment to assess the _ detection _ performance of our algorithm , we ignore the geographic label ( i.e. , labels 16 are considered as a single `` earthquake '' class ) . the detection accuracy is the percentage of windows correctly classified as earthquake or noise .our algorithm successfully detects all the events cataloged by the ogs , reaching 100 accuracy on event detection ( see table [ table : results ] ) . among the 131,972 noise windows of our test set ,convnetquake correctly classifies 129,954 noise windows .it classifies 2018 of the noise windows as events . among them ,1902 windows were confirmed as events by the autocorrelation method ( detailed in the supplementary materials ) .that is , our algorithm made 116 false detections , for an accuracy of 99.9 on noise windows .[ [ location - accuracy ] ] * location accuracy * + + + + + + + + + + + + + + + + + + + we then evaluate the _ location _ performance . for each of the detected events , we compare the predicted class ( 16 ) with the true geographic label . we obtain 74.5 location accuracy on the test set ( see table [ table : results ] ) . for comparison with a `` chance '' baseline , selecting a class at random would give accuracy. we also experimented with a larger number of clusters ( 50 , see figure s4 ) and obtained 22.5 in location accuracy , still 10 times better than chance at .this performance drop is not surprising since , on average , each class now only provides 40 training samples , which is insufficient for proper training .[ [ probabilistic - location - map ] ] * probabilistic location map * + + + + + + + + + + + + + + + + + + + + + + + + + + + + our network computes a probability distribution over the classes .this allows us to create a probabilistic map of earthquake location .we show in figure [ fig : mapproba ] the maps for a correctly located event and an erroneous classification . 
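A minimal sketch of the preprocessing and augmentation described above: per-channel normalization of a monthly stream, and perturbation of event windows with zero-mean Gaussian noise to balance the classes. The noise standard deviation `sigma`, the number of copies and the array shapes are assumptions made for illustration; they are not given in this extract.

```python
import numpy as np

def normalize_stream(stream):
    # stream: (3, n_samples) for one month of 3-channel data; subtract the monthly mean and
    # divide by the absolute peak amplitude, independently for each channel
    stream = stream - stream.mean(axis=1, keepdims=True)
    return stream / np.abs(stream).max(axis=1, keepdims=True)

def augment_events(event_windows, copies=1, sigma=0.05, rng=None):
    # event_windows: (n_events, 3, 1000) array of 10 s windows at 100 hz;
    # add copies perturbed with zero-mean gaussian noise (sigma is a placeholder value)
    rng = np.random.default_rng() if rng is None else rng
    extra = [w + rng.normal(0.0, sigma, size=w.shape)
             for w in event_windows for _ in range(copies)]
    return np.concatenate([event_windows, np.stack(extra)], axis=0)
```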
for the correctly classified event ,most of the probability mass is on the correct class .this event is classified with approximately 99 confidence . for the misclassified event ,the probability distribution is more diffuse and the location confidence drops to 40 . [ [ generalization - to - non - repeating - events ] ] * generalization to non - repeating events * + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + our algorithm generalizes well to waveforms very dissimilar from those in the training set .we quantify this using synthetic seismograms , comparing our method to template matching .we generate day - long synthetic waveforms by inserting multiple copies of a given template over a gaussian noise floor , varying the signal - to - noise - ratio ( snr ) from -1 to 8 db .an example of synthetic seismogram is shown in figure s2 .we choose two templates waveforms and ( shown in figure s1 ) . using the procedure described above, we generate a training set using and two testing sets using and respectively .we train both convnetquake and the template matching method ( see supplementary materials ) on the training set ( generated with ) . on the testingset , both methods successfully detect all the events . on the other testing set ( containing only copies of ), the template matching method fails to detect inserted events even at high snr .convnetquake however recognizes the new ( unknown ) events .the accuracy of our model remarkably increases with snr ( see figure [ fig : perf_synth ] ) . for snrshigher than 7 db , convnetquake detects all the inserted seismic events .many events in our dataset from oklahoma are non - repeating events ( we highlighted two in figure [ fig : mapevents ] ) .our experiment on synthetic data suggests that methods relying on template matching can not detect them while convnetquake can .[ [ earthquake - detection - on - continuous - records ] ] * earthquake detection on continuous records * + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + we run convnetquake on one month of continuous waveform data recorded with gs.ok029 in july 2014 .the 3-channel waveforms are cut into 10 second long , non overlapping windows , with a 1 second offset between consecutive windows to avoid possibly redundant detections .our algorithm detects 4225 events never cataloged before by the ogs .this is about 5 events per hour .autocorrelation confirms 3949 of these detections ( see supplementary for details ) .figure [ fig : july_waveforms ] shows the most repeated waveform ( 479 times ) among the 3949 detections .[ [ comparison - with - other - detection - methods ] ] * comparison with other detection methods * + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + we compare our _ detection _ performances to autocorrelation and fingerprint and similarity thresholding ( fast , reported from ) .both techniques can only find repeating events , and do not provide event location . 
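The continuous-detection pass described above can be sketched as a sliding-window loop; the step between consecutive windows is left as a parameter because the wording above ("non overlapping windows , with a 1 second offset") is ambiguous. `model` stands for any classifier returning a 7-class probability vector (for instance a Keras model such as the one sketched in the methods part below); the (channels, samples) array layout is an assumption.

```python
import numpy as np

def sliding_windows(stream, length=1000, step=100):
    # stream: (3, n_samples) continuous 3-channel record; 10 s windows (1000 samples at 100 hz)
    for start in range(0, stream.shape[1] - length + 1, step):
        yield start, stream[:, start:start + length]

def detect_events(model, stream, sampling_rate=100.0, step=100):
    detections = []
    for start, w in sliding_windows(stream, step=step):
        probs = model.predict(w.T[np.newaxis, ...], verbose=0)[0]   # (time, channels) layout
        label = int(np.argmax(probs))
        if label != 0:                                              # class 0 = seismic noise
            detections.append((start / sampling_rate, label, float(probs[label])))
    return detections
```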
used autocorrelation and fast to detect new events during one week of continuous waveform data recorded at a single station with the a single channel from 8 january 2011 to 15 january 2011 .the bank of templates used for fast consists in 21 earthquakes : a 4.1 that occurred on 8 january 2011 on the calaveras fault ( north california ) and 20 of its aftershocks ( 0.84 to 4.10 , a range similar to our dataset ) .table [ table : results ] reports the classification accuracy of all three methods .convnetquake has an acccuracy comparable to autocorrelation and outperforms fast .[ [ scalability - to - large - datasets ] ] * scalability to large datasets * + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + the runtimes of the autocorrelation method , fast , and convnetquake necessary to analyze one week of continuous waveform data are reported in table [ table : results ] .our runtime excludes the training phase which is performed once .similarly , fast s runtime excludes the time required to build the database of templates .we ran our algorithm on a dual core intel i5 2.9 ghz cpu .it is approximately 13,500 times faster than autocorrelation and 48 times faster than fast ( table [ table : results ] ) .convnetquake is highly scalable and can easily handle large datasets .it can process one month of continuous data in 4 minutes 51 seconds while fast is 120 times slower ( 4 hours 20 minutes , see figure [ fig : scaling_prop]a ) .like other template matching techniques , fast s database grows as it creates and store new templates during detection . for 2 days of continuous recording , fast s database is approximately 1 gb ( see figure [ fig : scaling_prop]b ) .processing years of continuous waveform data would increase dramatically the size of this database and adversely affect performance .our network only needs to store a compact set of parameters , which entails a constant memory usage ( 500 kb , see figure [ fig : scaling_prop]b ) .convnetquake achieves state - of - the - art performances in probabilistic event detection and location using a single signal . for this, it requires a pre - existing history of cataloged earthquakes at training time .this makes it ill - suited to areas of low seismicity or areas where instrumentation is recent . in this study we focused on local earthquakes , leaving larger scale for future workfinally , we partitioned events into discrete categories that were fixed beforehand .one might extend our algorithm to produce continuous probabilistic location maps .our approach is ideal to monitor geothermal systems , natural resource reservoirs , volcanoes , and seismically active and well instrumented plate boundaries such as the subduction zones in japan or the san andreas fault system in california .convnetquake takes as input a 3-channel window of waveform data and predicts a discrete probability over categories , or _classes _ in the machine learning terminology . classes to correspond to predefined geographic `` clusters '' and class 0 corresponds to event - free `` seismic noise '' .the clusters for our dataset are illustrated in figure [ fig : mapevents ] .our algorithm outputs a -d vector of probabilities that the input window belongs to each of the classes .figure [ fig : network ] illustrates our architecture . 
[ [ network - architecture ] ] * network architecture * + + + + + + + + + + + + + + + + + + + + + + the network s input is a 2-d tensor representing the waveform data of a fixed - length window .the rows of for correspond to the channels of the waveform and since we use 10 second - long windows sampled at 100 hz , the time index is .the core of our processing is carried out by a feed - forward stack of 8 convolutional layers ( to ) followed by 1 fully connected layer that outputs class scores .all the layers contain multiple channels and are thus represented by 2-d tensors .each channel of the 8 convolutional layers is obtained by convolving the channels of the previous layer with a bank of linear 1-d filters , summing , adding a bias term , and applying a point - wise non - linearity as follows : where is the non - linear relu activation function . the output and input channelsare indexed with and respectively and the time dimension with , . is the number of channels in layer .we use 32 channels for layers to while the input waveform ( layer ) has 3 channels .we store the filter weights for layer in a 3-d tensor with dimensions .that is , we use 3-tap filters .the biases are stored in a 1-d tensor .all convolutions use zero - padding as the boundary condition .equation shows that our formulation slightly differs from a standard convolution : we use _ strided _ convolutions with stride , i.e. the kernel slides horizontally in increments of 2 samples ( instead of 1 ) .this allows us to downsample the data by a factor 2 along the time axis after each layer .this is equivalent to performing a regular convolution followed by subsampling with a factor 2 , albeit more efficiently .because we use small filters ( the kernels have size 3 ) , the first few layers only have a local view of the input signal and can only extract high - frequency features . through progressive downsampling ,the deeper layers have an exponentially increasing receptive field over the input signal ( by indirect connections ) .this allow them to extract low - frequency features ( cf .figure [ fig : network ] ) .after the 8th layer , we vectorize the tensor with shape into a 1-d tensor with features . this feature vector is processed by a linear , fully connected layer to compute class scores with given by : thanks to this fully connected layer , the network learns to combine multiple parts of the signal ( e.g. , p - waves , s - waves , seismic coda ) to generate a class score and can detect events anywhere within the window .finally , we apply the softmax function to the class scores to obtain a properly normalized probability distribution which can be interpreted as a posterior distribution over the classes conditioned on the input and the network parameters and : is the set of all the weights , and is the set of all the biases .compared to a fully - connected architecture like in ( where each layer would be fully connected as in equation ) , convolutional architectures like ours are computationally more efficient .this efficiency gain is achieved by sharing a small set of weights across time indices .for instance , a connection between layers and , which have dimensions and respectively , requires parameters in the convolutional case with a kernel of size 3 .a fully - connected connection between the same layers would entail parameters , a 4 orders of magnitude increase .furthermore , models with many parameters require large datasets to avoid overfitting . 
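The original ConvNetQuake was implemented in TensorFlow; the following is a schematic re-expression of the architecture described above (8 strided 1-d convolutions with 32 channels, kernel size 3, stride 2, zero padding and ReLU, followed by flattening and one fully connected layer producing class scores) using the modern tf.keras API. It is not the authors' code, and the input layout (1000 samples x 3 channels) is inferred from the 10 s / 100 Hz windows.

```python
import tensorflow as tf

def build_convnetquake(n_classes=7):
    # 10 s of 3-channel data at 100 hz -> input shape (1000, 3) in (time, channels) layout
    inputs = tf.keras.Input(shape=(1000, 3))
    x = inputs
    for _ in range(8):
        # strided 1-d convolution: 32 output channels, kernel size 3, stride 2,
        # zero padding and relu; each layer halves the time axis
        x = tf.keras.layers.Conv1D(32, kernel_size=3, strides=2,
                                   padding="same", activation="relu")(x)
    x = tf.keras.layers.Flatten()(x)               # 1000 / 2**8 -> 4 steps x 32 channels = 128 features
    outputs = tf.keras.layers.Dense(n_classes)(x)  # class scores; softmax is applied afterwards
    return tf.keras.Model(inputs, outputs)
```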
since labeled datasets for our problem are scarce and costly to assemble , a parsimonious model such as ours is desirable . [[ training - the - network ] ] * training the network * + + + + + + + + + + + + + + + + + + + + + + we optimize the network parameters by minimizing a -regularized cross - entropy loss function on a dataset of windows indexed with : the cross - entropy loss measures the average discrepancy between our predicted distribution and the true class probability distribution for all the windows in the training set . for each window , the true probability distribution has all of its mass on the window s true class : to regularize the neural network , we add an penalty on the weights , balanced with the cross - entropy loss via the parameter .regularization favors network configurations with small weight magnitude .this reduces the potential for overfitting . since both the parameter set and the training data set are too large to fit in memory , we minimize equation using a batched stochastic gradient descent algorithm .we first randomly shuffle the windows from the dataset .we then form a sequence of batches containing 128 windows each . at each training stepwe feed a batch to the network , evaluate the expected loss on the batch , and update the network parameters accordingly using backpropagation .we repeatedly cycle through the sequence until the expected loss stops improving . since our dataset is unbalanced ( we have many more noise windows than events ) , each batchis composed of 64 windows of noise and 64 event windows .for optimization we use the adam algorithm , which k.pdf track of first and second order moments of the gradients , and is invariant to any diagonal rescaling of the gradients .we use a learning rate of and keep all other parameters to the default value recommended by the authors .we implemented convnetquake in tensorflow and performed all our trainings on a nvidia tesla k20xm graphics processing unit .we train for 32,000 iterations which takes approximately .[ [ evaluation - on - an - independent - testing - set ] ] * evaluation on an independent testing set * + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + after training , we test the accuracy of our network on windows from july 2014 ( 209 windows of events and 131,072 windows of noise ) .the class predicted by our algorithm is the one whose posterior probability is the highest .we evaluate our predictions using two metrics .the _ detection accuracy _ is the percentage of windows correctly classified as events or noise .the _ location accuracy _ is the percentage of windows already classified as events that have the correct cluster number .the convnetquake software is open - source .the waveform data used in this paper can be obtained from the incorporated research institutions for seismology ( iris ) data management center and the network gs is available at doi:10.7914/sn / gs . the earthquake catalog used is provided by the oklahoma geological survey .the computations in this paper were run on the odyssey cluster supported by the fas division of science , research computing group at harvard university .t. p.s research was supported by the national science foundation grant division for materials research 14 - 20570 to harvard university with supplemental support by the southern california earthquake center ( scec ) , funded by nsf cooperative agreement ear-1033462 and usgs cooperative agreement g12ac20038 .t.p . 
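A hedged sketch of the training loop described above: L2-regularized cross-entropy minimized with Adam on balanced batches. The learning rate and the regularization weight `lam` are placeholders (the exact values are not shown in this extract), `balanced_batches` is a hypothetical helper mixing 64 noise and 64 event windows per batch, and for brevity the penalty is applied to all trainable parameters rather than to the convolution weights only.

```python
import tensorflow as tf

model = build_convnetquake()                                # network from the previous sketch
optimizer = tf.keras.optimizers.Adam(learning_rate=1e-4)    # placeholder rate, not from the paper
xent = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
lam = 1e-3                                                  # hypothetical l2 regularization weight

@tf.function
def train_step(x, y):
    # x: (128, 1000, 3) batch of windows, y: (128,) integer labels in 0..6
    with tf.GradientTape() as tape:
        logits = model(x, training=True)
        loss = xent(y, logits)
        loss += lam * tf.add_n([tf.nn.l2_loss(w) for w in model.trainable_weights])
    grads = tape.gradient(loss, model.trainable_weights)
    optimizer.apply_gradients(zip(grads, model.trainable_weights))
    return loss

# training loop (sketch): each batch mixes 64 noise windows and 64 event windows
# for x, y in balanced_batches(train_set, batch_size=128):
#     train_step(x, y)
```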
thanks jim rice for his continuous support during his phd and loïc viens for insightful discussions about seismology .

table [ table : results ] :

                             autocorrelation    fast      convnetquake ( ours )
  noise detection accuracy   100                100       99.9
  event detection accuracy   100                87.5      100
  event location accuracy    n / a              n / a     74.6
  runtime                    9 days 13 hours    48 min    1 min 1 sec

in this section we show that convnetquake generalizes well to unseen examples of earthquake waveforms . we do this by comparing the number of detections missed by convnetquake and by the template matching method on synthetic data , which provides ground truth . our synthetic dataset is made of day - long seismic records constructed by inserting , at random times , a scaled version of a waveform template ( extracted from true data ) over a gaussian noise floor . an example of synthetic time series is shown in figure [ fig : synthetics ] . we generate day - long seismic records with signal to noise ratio ( snr ) ranging from -1 to 8 db . the snr of a time series is defined as the ratio of the power of the inserted event waveforms over the power of the computer - generated seismic noise , expressed in decibels : snr = 10 log10 ( a_event^2 / a_noise^2 ) , where a_event and a_noise are the signal amplitudes of the inserted event waveform and of the noise . for a 3-second long template of 3-channel waveform data sampled at 100 hz , we define the amplitude a_event as the l2 norm of the waveform over all time samples and channels ; a_noise is evaluated in the same way on the generated gaussian noise over the same 3-second duration . we choose two template waveforms ( figure [ fig : templates]a ) and ( figure [ fig : templates]b ) . using the procedure described above , we generate a training set of day - long records using the first template and two testing sets of day - long records using the first and the second template respectively . we partition the continuous synthetic waveform data used for training into windows labeled as either seismic noise or earthquake . we train convnetquake on these two categories using the procedure detailed in the methods section of the paper . this allows us to test the detection ability of convnetquake . the template matching method consists in cross - correlating the 3-channel day - long seismic records with a 3-channel template of earthquake waveform to detect seismic events . we tag a time window as an event when the cross - correlation coefficient is above a threshold defined as a user - specified multiple of the median absolute deviation ( mad ) of all the cross - correlation coefficients . using the training synthetic records we find the multiplier that provides the best detection accuracy . both convnetquake and the template matching method detect all the events inserted using the template of figure [ fig : templates]a seen during training ; the number of missed detections is 0 for all the records with snr between -1 db and 8 db .
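The synthetic-data construction and the SNR definition above can be sketched as follows; the template shape (3 channels x 300 samples for 3 s at 100 Hz), the noise standard deviation and the scaling factor are illustrative parameters, and the noise amplitude is approximated by its expected value rather than measured on a particular 3 s segment.

```python
import numpy as np

def make_synthetic_day(template, n_events, noise_std=1.0, scale=1.0, rng=None):
    # template: (3, 300) event waveform (3 s of 3-channel data at 100 hz); values are illustrative
    rng = np.random.default_rng() if rng is None else rng
    day_samples = 24 * 3600 * 100                      # one day at 100 hz
    record = rng.normal(0.0, noise_std, size=(3, day_samples))
    starts = rng.integers(0, day_samples - template.shape[1], size=n_events)
    for s in starts:
        record[:, s:s + template.shape[1]] += scale * template
    return record, starts

def snr_db(template, noise_std, scale=1.0):
    # snr = 10 log10 ( |scaled template|^2 / |3 s of noise|^2 ), amplitudes taken as l2 norms
    a_event = np.linalg.norm(scale * template)
    a_noise = noise_std * np.sqrt(template.size)       # expected l2 norm of the noise segment
    return 20.0 * np.log10(a_event / a_noise)
```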
for the time series created by inserting template ( figure [ fig : templates]b ) not seen during the training phase , the template matching method misses almost all of the inserted new templates while convnetquake detects most of them ( see figure 4 in main manuscript ) .this demonstrates convnetquake s ability to generalize to new , unseen events .the detection accuracy on windows of ( unknown ) events increases with snr for convnetquake whereas template matching s remains low ( see figure 4 ) .in this section , we expand on our autocorrelation analysis to discriminate true from false detections in the set of detections made by convnetquake. there are windows labeled as events by convnetquake .we cross - correlate all pairs of windows ( there are cross - correlations ) and take the peak absolute value of the correlation coefficient ( cc ) per pair .we do not distinguish between correlated and anti - correlated events because our goal is to detect new events regardless of their polarization , and therefore of their source mechanism .figure [ fig : ccs ] shows that the distribution of those correlation coefficients is skewed and does not peak at cc=0 .this is because most of the event windows detected by convnetquake possess some level of correlation : our algorithm has discarded the uncorrelated seismic noise .we choose a threshold based on visual inspection of waveform coherence among the three components .we show in figures [ fig : cluster3_1 ] , [ fig : cluster3_2 ] , and [ fig : cluster3_3 ] the waveforms of detected events that belong to cluster 3 for three different threshold , cc 0.1 ( 2271 event waveforms ) , cc 0.2 ( 2129 event waveforms ) , and cc 0.3 ( 845 event waveforms ) .we decide on a threshold that retains most event signals while allowing for a diversified set of waveform shapes .a threshold of 0.2 retains coherent waveforms visible on at least two out of the three components .because of the geometry between focal mechanisms , source depth , and receiver location , most of the events detected by and are strike slip , with a dominant p - wave on the vertical and s - wave on the horizontal components . instead , we find that dominant s - waves also appear on the vertical components , suggesting a variety of focal mechanisms for that area. waveform similarity selection would restrict the search to strike - slip events only and miss all other events if those were not considered in the bank of templates .harley m benz , nicole d mcmahon , richard c aster , daniel e mcnamara , and david b harris .hundreds of earthquakes per day : the 2014 guthrie , oklahoma , earthquake sequence . _ seismological research letters _ , 860 ( 5):0 13181325 , 2015. daniel e mcnamara , harley m benz , robert b herrmann , eric a bergman , paul earle , austin holland , randy baldwin , and a gassner .earthquake hypocenters and focal mechanisms in central oklahoma reveal a complex system of reactivated subsurface strike - slip faulting ._ geophysical research letters _ , 420 ( 8):0 27422749 , 2015 . 
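For completeness, here is a sketch of the template-matching baseline used in the comparison above: a normalized 3-channel cross-correlation against one template, thresholded at a user-chosen multiple of the MAD of the correlation coefficients. The multiplier `k` is a placeholder (the tuned value is not shown in this extract) and the brute-force loop is written for clarity, not speed.

```python
import numpy as np

def template_matching_detections(record, template, k=8.0):
    # record: (3, n) continuous data; template: (3, l); k is the user-chosen mad multiplier
    l = template.shape[1]
    t = template - template.mean(axis=1, keepdims=True)
    cc = np.zeros(record.shape[1] - l + 1)
    for i in range(cc.size):                        # brute-force sliding window, slow but explicit
        w = record[:, i:i + l]
        w = w - w.mean(axis=1, keepdims=True)
        denom = np.linalg.norm(w) * np.linalg.norm(t)
        cc[i] = (w * t).sum() / denom if denom > 0 else 0.0
    mad = np.median(np.abs(cc - np.median(cc)))
    detections = np.flatnonzero(np.abs(cc) >= k * mad)
    return detections, cc
```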
[ figure : ( a ) template used to build both the synthetic training set and the first synthetic test set ; this waveform corresponds to an earthquake from 3 august 2014 at 4:57:59 recorded by gs.ok029 . ( b ) template used to create the second synthetic test set ; this is the waveform of an earthquake from 1 april 2014 at 17:07:19 recorded by gs.ok029 . ]
[ figure : the 300 events that have the highest absolute correlation coefficient , suggesting a strike - slip mechanism ; however , s - waves dominate the vertical component for most of the events , suggesting a different focal mechanism . ]
the recent evolution of induced seismicity in central united states calls for exhaustive catalogs to improve seismic hazard assessment . over the last decades , the volume of seismic data has increased exponentially , creating a need for efficient algorithms to reliably detect and locate earthquakes . today s most elaborate methods scan through the plethora of continuous seismic records , searching for repeating seismic signals . in this work , we leverage the recent advances in artificial intelligence and present _ convnetquake _ , a highly scalable convolutional neural network for earthquake detection and location from a single waveform . we apply our technique to study the induced seismicity in oklahoma ( usa ) . we detect 20 times more earthquakes than previously cataloged by the oklahoma geological survey . our algorithm is orders of magnitude faster than established methods . the recent exploitation of natural resources and associated waste water injection in the subsurface have induced many small and moderate earthquakes in the tectonically quiet central united states . induced earthquakes contribute to seismic hazard . during the past 5 years only , six earthquakes of magnitude higher than 5.0 might have been triggered by nearby disposal wells . most earthquake detection methods are designed for large earthquakes . as a consequence , they tend to miss many of the low - magnitude induced earthquakes that are masked by seismic noise . detecting and cataloging these earthquakes is key to understanding their causes ( natural or human - induced ) ; and ultimately , to mitigate the seismic risk . traditional approaches to earthquake detection fail to detect events buried in even modest levels of seismic noise . waveform similarity can be used to detect earthquakes that originate from a single region , with the same source mechanism . waveform _ autocorrelation _ is the most effective method to identify these repeating earthquakes from seismograms . while particularly robust and reliable , the method is computationally intensive and does not scale to long time series . one approach to reduce computation is to select a small set of short representative waveforms as _ templates _ and correlate only these with the full - length continuous time series . the detection capability of _ template matching _ techniques strongly depends on the number of templates used . today s most elaborate methods seek to reduce the number of templates by principal component analysis , or locality sensitive hashing . these techniques still become computationally expensive as the database of templates grows . more fundamentally , they do not address the issue of representation power . these methods are restricted to the sole detection of _ repeating _ signals . finally , most of these methods do not locate earthquakes . we cast earthquake detection as a supervised classification problem and propose the first convolutional network for earthquake detection and location ( _ convnetquake _ ) . our algorithm builds on recent advances in deep learning . it is trained on a large dataset of labeled waveforms and learns a compact representation that can discriminate seismic noise from earthquake signals . the waveforms are no longer classified by their similarity to other waveforms , as in previous work . instead , we analyze the waveforms with a collection of nonlinear local filters . 
during the training phase , the filters are optimized to select features in the waveforms that are most relevant to classify them as either noise or an earthquake . this bypasses the need to store a perpetually growing library of template waveforms . thanks to this representation , our algorithm generalizes well to earthquake signals never seen during training . it is more accurate than state - of - the - art algorithms and runs orders of magnitude faster . additionally , convnetquake outputs a probabilistic location of an earthquake s source from a _ single _ waveform . we evaluate our algorithm and apply it to induced earthquakes in central oklahoma ( usa ) . we show that it uncovers earthquakes absent from standard catalogs .
since the early times of quantum computing ( qc ) and quantum information processing ( qip ) , several proposals of implementation schemes have come from the field of quantum optics .quite recently , see , it has been shown that , at least in principle , _ scalable - nondeterministic _ quantum computation can be achieved by linear - optical passive ( lop ) devices , exploiting also the nonlinearity introduced by conditional measurements .this has lead to several schemes and experimental demonstrations of quantum gates and circuits , which exploit different ways to encode a _ single qubit _ by a _single photon _ : the qubit basis states can be encoded by the vacuum and the one - photon fock states of a given mode of the quantized e.m .field , or by two orthogonal polarization states of a photon , or by the one - photon fock states of a two - mode optical system . on the other hand , the possibility of _ deterministic - nonscalable _ linear - optical quantum computing has been also pointed out ,see , and fundamental gates have been experimentally tested .these works rely on _ single - photon multi - qubit _ ( spmq ) encoding schemes , namely schemes that allow to encode , say , qubits by a single photon , by introducing vacuum optical modes .the aim of the present paper is twofold : first , in [ sec2 ] , we show that the spmq encoding stems in a natural way from simple general features of the fundamental algebraic objects associated with the description of the quantized e.m .field and of lop devices ; then , in [ sec3 ], we make use of this result to introduce a simple algorithmic procedure which , for any given quantum computation , allows to design the lop circuit that implements it deterministically. we also briefly discuss the issue of scalability .eventually , in [ sec4 ] , we present some basic examples of deterministic lop circuits .in this section we introduce the algebraic formalism necessary to describe a quantum system of optical modes , and the lop devices acting on it ; our aim is to highlight the algebraic structure underlying the implementation of qip and qc by linear optics .we start by briefly recalling the algebraic description of the physical system of a single quantum optical mode , i.e. a vibrational mode of fixed frequency of the quantized e.m .field , that is formally equivalent to a quantum harmonic oscillator .the fundamental object of this description is the _ heisenberg - weyl _algebra , namely the complex lie algebra with generators satisfying the relations : where denotes the lie bracket .notice that is a -algebra , since it is endowed with the involution , namely the antilinear map determined by : the algebra admits a remarkable realization realization which will be still denoted by in the following as an algebra of operators in an infinite - dimensional hilbert space ( with the lie bracket realized by the commutator ) : where is the hilbert space adjoint of and is the identity operator . in fact , given an orthonormal basis , one can define the _ annihilation operator _ by it follows that the _ creation operator _ satisfies : moreover , one can define the _ number operator _ , which is a positive self - adjoint operator .+ then , one can easily verify that hence , generate indeed a realization of the heisenberg - weyl algebra .the infinite - dimensional hilbert space endowed with the orthonormal basis and the associated operator algebra generated by is called _ one - mode _ ( bosonic ) _ fock space_. 
the operator realization of can also be introduced in a more abstract way . in fact , let be a linear operator in a ( complex separable ) hilbert space satisfying relation ( [ commutation1 ] ) .it is then possible to prove that , if , in addition , is an irreducible set of operators ( i.e. any non - trivial linear span which is invariant under the action of and must be dense in ) and a technical condition concerning the operator is verified , the hilbert space must be infinite - dimensional and the operators are unitarily equivalent to the standard annihilation and creation operators of the harmonic oscillator in ; moreover , the unitary operator that generates this unitary equivalence is uniquely defined up to an arbitrary phase factor .this is one of the formulations of the stone - von neumann theorem on canonical commutation relations . as a consequence, there is an orthonormal basis ( defined uniquely up to an overall phase factor ) in such that ( [ annihilation ] ) is satisfied ; hence , one recovers the previous definition of the operator realization of .+ the operators defined in such a abstract way play respectively the role of annihilation and creation operators of the quantized e.m . field , and is then the photon - number operator .+ finally , we notice that can be decomposed in a natural way as the direct sum of subspaces characterized by a given number of photons : this formalism can be extended to the general case of optical modes .the fundamental object becomes the algebra , which is realized as the subspace of the linear space generated by the basis elements , where : satisfying the canonical commutation relations : the hilbert space of this realization is the -mode _ _ ( bosonic ) _ fock space _ endowed with the orthonormal basis , with : notice in passing that , if , the set of operators , for any , generates now a _reducible _ realization of .+ notice also that , on the other hand , the operators indeed form an irreducible set , and , according to the stone - von neumann theorem , any other irreducible set of operators satisfying the canonical commutation relations ( [ commutan ] ) must be related to the set by a unitary equivalence ; precisely , there is a unitary operator in , _ uniquely defined up to an arbitrary phase factor _ , such that : + the hilbert space can be decomposed as : where is the subspace characterized by a given number of photons , i.e. these subspaces are the eigenspaces of the total number of photons operator , i.e. of the positive self - adjoint operator decomposition ( [ directsumn ] ) plays a central role in understanding linear - optical quantum computing , and we will characterize it in the following by means of the bosonic realization of the lie algebra of u( ) that is obtained via the jordan - schwinger map. in qip and qc one always deals with finite - dimensional hilbert spaces ; indeed , information is represented by _words _ over some finite _ alphabet _ , namely by finite strings of symbols , and in the quantum domain these symbols are realized by _ qubits _ or , more in general , by _qudits_. 
from a mathematical point of view , a _ logical _ qudit is a vector in a -dimensional abstract hilbert space , and strings of symbols are obtained through the tensor product structure .on the other hand , when dealing with quantum optical systems one works with the _ infinite - dimensional _ fock space .nevertheless , we will show that by means of the jordan - schwinger ( j - s ) map , one can single out in a natural way suitable _ finite - dimensional _ subspaces that allow to encode qudits and to represent the appropriate class of transformations that allow to move within these subspaces , namely to represent the action of quantum logic gates .the general formulation of the j - s map gives a simple procedure allowing to obtain the so called _ bosonic realization _ of a lie algebra .consider an operator realization of the algebra , with basis elements given by : now , consider a -dimensional matrix lie algebra , and a basis of of , say , matrices ( ) .due to the properties of the operator realization of , one can define the linear operators associated with the natural action of the matrices on the linear span of . the linear operators provide a basis of the -dimensional bosonic realization of since , as the reader may verify using relations ( [ dij ] ), the operators preserve the commutation rules of the basis matrices : the one - to - one correspondence _ extended by linearity _ is the j - s map : . as a first example , let us construct the bosonic realization of the lie algebra of the group u(2 ) . to this aim, we have to consider the composite system of two optical modes .its fock space can be decomposed as in ( [ directsumn ] ) , with : + making use of the j - s map , it can be shown that these are the spaces of the irreducible unitary representations of the group u(2 ) ( or , also , of the group su(2 ) ) .+ indeed , consider and its matrix realization generated by the identity matrix i d and by the pauli matrices ; the generators of the bosonic realization of are then obtained applying formula ( [ generatori ] ) : where one can verify that the standard angular momentum commutation rules are satisfied : with denoting the levi civita tensor , and .the action of the operator and of the casimir operator on the fock states , i.e. : gives rise to a relabelling of these states as eigenstates of the abstract angular momentum : where so that the subspaces in ( [ subspace ] ) are the spaces of the ( )-dimensional unitary irreducible representation of the u(2 ) group : with the index identifying these subspaces , and the index labelling the standard basis vectors in each subspace . the generalization to the multimode case , with ,can be outlined as follows . by the j - s map, the generators can be realized as linear superpositions of the operators .this makes it clear that the subspaces of the -mode fock space are invariant subspaces for the bosonic realization of the generators : indeed , the action of the operators _ preserves the total number of photons_. it follows that the -photon space can be decomposed as an orthogonal sum of spaces of irreducible unitary representations of u( ) . 
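The u(2) construction above can be checked numerically: build truncated annihilation operators for two modes, form the Jordan-Schwinger operators, and restrict them to the single-photon subspace spanned by |1,0> and |0,1>. The cutoff `d` is an arbitrary truncation used only for the check; on that subspace the three operators reduce to the Pauli matrices divided by 2, i.e. the fundamental (qubit) representation.

```python
import numpy as np

d = 6                                                # fock-space cutoff per mode (for the check only)
a = np.diag(np.sqrt(np.arange(1, d)), k=1)           # truncated annihilation operator, a|n> = sqrt(n)|n-1>
I = np.eye(d)
a1, a2 = np.kron(a, I), np.kron(I, a)                # the two optical modes

# bosonic (jordan-schwinger) realization of su(2)
Jx = (a1.conj().T @ a2 + a2.conj().T @ a1) / 2
Jy = (a1.conj().T @ a2 - a2.conj().T @ a1) / (2j)
Jz = (a1.conj().T @ a1 - a2.conj().T @ a2) / 2

def fock(n1, n2):
    v = np.zeros(d * d); v[n1 * d + n2] = 1.0; return v

# restrict to the single-photon subspace span{ |1,0>, |0,1> }
basis = np.column_stack([fock(1, 0), fock(0, 1)])
for J, name in [(Jx, "Jx"), (Jy, "Jy"), (Jz, "Jz")]:
    print(name, "=\n", np.round(basis.conj().T @ J @ basis, 10))
# output: the pauli matrices divided by 2, i.e. the fundamental (qubit) representation of su(2)
```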
actually , using the formalism of the highest weights ( see , for instance , ), one can prove that , as in the special case of u(2 ) , the -photon space is the space of an irreducible unitary representation of u( ) ( hence , the mentioned orthogonal sum contains only one term ) .one can show , moreover , that , for , not all the irreducible representations of u( ) can be realized in such a way ; for instance , for , only the representations of dimension , , can be realized , while it is well known that the dimension of the irreducible unitary representations of u(3 ) is given by the general formula : however , in what follows we will essentially deal with the _ definitory _ ( or _ fundamental _ ) representation of u( ) , whose hilbert space is the single - photon -mode space . the characterization of that we have given fits with the abstract definition of the hilbert space of a qu : a -dimensional hilbert space endowed with the fundamental representation of u( ) acting on it . in the following, we will be specifically interested in the values of given by .for these values of , the following hilbert space isomorphisms hold : with regarded as an abstract qu , or , equivalently , as a -qubit. however we stress the following points : * the hilbert spaces respectively on the l.h.s . andon the r.h.s . of ( [ kqubitspace ] ) though mathematically isomorphic have , for , a different physical meaning , since the former is a single - photon space while the latter is a -photon space ; * this physical content has its mathematical counterpart in the fact that , for , the fock space is endowed with an irreducible operator realization of the algebra , while the space is endowed just with reducible operator realizations of ; * accordingly , using the j - s map , one can endow with the fundamental representation of u(2 ) , while , by the same procedure , only the fundamental representation of : can be obtained ( namely , is the space of qubits which do not interact ) .the previous observations are the basis of the spmq encoding and of the use of lop transformation for the implementation of logic gates .we will now move to the description of the class of optical transformations which enable to implement the _ elaboration _ of quantum information , namely logic gates on qu .the linear - optical _ passive _ ( lop ) transformations are defined as the class of linear transformations that act on the system of optical modes i.e. 
of linear transformations in span leaving unchanged the total number of photons in the process ; a generic lop device is usually depicted as -port , namely a black box with inputs and outputs ( fig .[ fig1 ] ) , respectively corresponding to the field operators and .generic lop multiport .horizontal lines represent optical modes entering and leaving the device , from left to right ., scaledwidth=15.0% ] the property of photon - number conservation is expressed by the condition : a simple calculation shows that this condition is sufficient to guarantee that the canonical commutation relations ( [ commutan ] ) still hold for the operators , which thus form another basis for the realization of , and can indeed be interpreted as the field operators of the output modes .+ in fact , denoting by the matrix representing the lop transformation : from condition ( [ lop ] ) one can easily prove that where i d is the identity matrix .hence , with any lop transformation it can be naturally associated a _ unitary matrix _ ; conversely , any unitary matrix defines a lop transformation .thus , there is a one - to - one correspondence between lop -ports and the elements of the group u( ) . on the other hand , formula ( [ matricetrasf ] )implies that any invariant linear span for the operators must be invariant for the operators as well , and this in turn means that the output field operators form another irreducible set .thus , as we have already pointed out in [ sec2.1 ] , by the stone - von neumann theorem they must be unitarily equivalent to the operators , i.e. where is a unitary operator in uniquely defined up to an arbitrary phase factor .observe that , as a consequence of the photon - number conservation , the subspaces of are invariant subspaces for the operator .in fact , using the definition of and condition ( [ lop ] ) , one can easily check that the unitary operator commutes with : then , since , is an eigenspace of , it must be an invariant subspace for .+ in particular , as span , we have : for some .we will now show that one can give an explicit form of the operator in such a way that . to this aim , consider the following recipe : * write the matrix ( associated with any lop -port ) as the exponential of a matrix in the lie algebra : * next , via the j - s map , one can obtain a self - adjoint operator : * eventually one can define a unitary operator we now claim that 1 . the unitary operator verifies eq . ( [ matricetrasf ] ) ; 2 .the definition of does not depend on the choice of a particular element of the algebra such that exp ; 3 .the matrix representing the operator in the one - photon subspace of is , precisely : indeed , let be a matrix in and let us define the unitary operator by formula ( [ operatoreunitario ] ) .then , using the well known relation that holds for generic linear operators ( with ad ) , and applying the canonical commutation relations , one easily proves that hence , for any satisfying , the operator verifies eq .( [ matricetrasf ] ) .+ next , we prove that the association defined by formula ( [ operatoreunitario ] ) does not depend on the choice of the matrix such that . 
to this aim , observe that according to the stone - von neumann theorem the operator , which verifies eq .( [ matricetrasf ] ) , is uniquely identified by the phase factor appearing in eq .( [ fasediu ] ) .now , one can immediately check that , for any matrix , we have : hence , for defined by formula ( [ operatoreunitario ] ) independently on the choice of such that .this proves that the definition of itself does not depend on a particular choice of such a matrix , namely , our second claim . +our third claim can be checked by an elementary calculation , by explicitly evaluating the l.h.s of eq .( [ coincidenzamatrici ] ) : ( in the second line we used the fact that ) . summarizing , we have shown that with any lop -port one can associate in a unique way two mathematical objects : * a unitary matrix representing the lop transformation : ; * a unitary operator uniquely identified by the equations moreover , we have shown that one can give an explicit procedure for building the operator from the matrix ; conversely , if the operator is given , then the matrix can be obtained from relation ( [ coincidenzamatrici ] ) .we conclude observing that the j - s map js induces a map u( ) where is the group of unitary operators in , defined by : making use of the stone - von neumann theorem one can easily prove that for any u( ) , i.e. that is a unitary representation of u( ) .now we give an explicit form to the objects we have introduced so far , namely the matrix and the operator , for two simple special cases : the 2- and the 4-port .they are indeed special because they are the only lop directly implemented in the labs respectively by _ phase shifters _ ( ps ) and _ beam splitters _ ( bs ) ; then , any generic linear optical multiport can be implemented decomposing it as an array of 2- and 4-ports .the simplest example is the ps , the lop -port ( fig . [ fig2 ] ) whose action is just the phase multiplication : lop -port : phase - shifter.,scaledwidth=15.0% ] the group involved is u(1 ) , and its generator is simply ; in this case , the j - s map associates with the number operator , so that , and : notice that acts on the fock space as a photon - number - dependent phase factor. + a generic lop -port ( fig. 
[ fig3 ] ) is described by the unitary matrix : the -port is implemented by pss and bss , respectively corresponding to the u(1 ) factor and the su(2 ) matrix .lop -port : beam - splitter.,scaledwidth=15.0% ] to obtain the operator representation of the bs s su(2 ) matrix , we consider the well - known euler decomposition of a generic su(2 ) matrix as the product of three elementary rotations ; then , recalling the bosonic realization of the generators , and the induced map , we can write : where are the linear combinations of the operators given by equations ( [ j ] ) .in this section we apply the results of [ sec2.2 ] , [ sec2.3 ] and derive a constructive procedure for building lop circuits for deterministic quantum computation on an arbitrary number of qubits ; then we discuss the issue of scalability .the encoding of ( quantum ) information is generally made possible by a more or less strict correspondence between the mathematical properties of the symbols that are chosen to represent the information , and those of the description of the physical system that is chosen to encode symbols .the case of the spmq encoding is special from this point of view , since such a correspondence is indeed a complete equivalence in this case .+ as we have already said , symbols in quantum information are represented by qu ; universal quantum computation can be done on strings of qubits in the common case of the binary coding .it is useful to recall the following abstract definitions : * a _ qunit _ is a vector in a -dimensional abstract hilbert space , endowed with the fundamental representation of u( ) acting on it ; * a _ string of k qubits _ , or _k - qubit _ , is a vector in a -dimensional hilbert space specifically the tensor product of copies of the 2-dimensional single qubit hilbert space endowed with the fundamental representation of u(2 ) acting on it. it should be clear from the results of [ sec2.2 ] that the mathematical characterization of the space of the states of a single photon over optical modes perfectly fits with the abstract definition of the qu space ; furthermore , chosing , one can say that concides with the previous abstract definition of the hilbert space of a string of qubits .+ the spmq encoding is the one - to - one correspondence between _ logical states _ i.e. the states of a string of k abstract qubits and the _ physical states _ i.e. the states of the quantum system of a single photon over modes .the correspondence between logical and physical states can be formulated explicitly , using te well known computational basis notation : if we denote by the states of a given basis of the k logical qubits , we can rewrite them as column vectors with elements , that are all zero except for a 1 in the -th position , with .then , the spmq scheme consists in encoding the logical state by the state of one photon in the -th mode , with , and the corresponding notation is : we now claim that with this encoding scheme , lop devices are sufficient to implement deterministically any quantum circuit , without the need for ancillary resources and postselection schemes .as we have previously shown , with any lop -port one can associate a matrix in u( ) acting on the input modes , or equivalently a unitary operator acting by similarity : . from the physical point of view , this is nothing but the action of the time evolution operator associated with the -port ( regarded as a quantum device ) on the input field operators in the heisenberg picture. 
+ on the other hand , as far as applications to qip and qc are concerned , since the encoding resource is given by the state vectors of the fock space , it is convenient to switch to the schr'odinger picture and to represent the action of the lop devices ( regarded as quantum logic gates ) as the action of the associated unitary operators on the state vectors .the operator can be represented by an infinite unitary matrix , after choosing a labeling of the fock states ; now we are interested in the action of on the subspace , and , as shown in ( [ primoblocco ] ) , this is represented by a u( ) matrix wich coincides with . since we knowthat any u( ) matrix corresponds to a lop -port built from an array of bs and ps , this means that we can act on with any desired unitary transformation , and so we can do any quantum computation on a string of a fixed number of qubits .+ to make the last statement more precise , once again we refer to the computational basis notation : recall that any logical quantum circuit acting on input strings of logical qubits is represented by some unitary operator acting on the -qubit hilbert space .after chosing a basis for the logical states , the operator associated with the circuit can be represented by a unitary matrix on such a basis : this is the _ computational basis matrix _ of the quantum circuit .+ then , when using the spmq encoding , in order to design the lop circuit which implements a given -qubit quantum circuit , one just needs to follow three simple steps : 1 .write down the computational basis matrix of the logical circuit ; 2 .take the as the matrix elements of the matrix of a lop circuit ; 3 .apply the rzbb procedure to decompose the matrix in the corresponding array of ps s and bs s .this simple , constructive procedure for designing lop quantum circuits constitutes the demonstration of the claim we made at the beginning of this section . to conclude ,notice that the simplicity of the procedure we presented makes it suitable for translation into an algorithm that could be run by a classical computer ; furthermore , if bs and ps with variable parameters were available , being their maximum number fixed by the number of modes , lop components could be rearranged automathically in the appropriate configuration , thus making the design of lop quantum circuit a comletely authomatized process .lop circuits for deterministic quantum computation can be designed when encoding strings of qubits by single photon multimode states ; but there are two practical problems related to this scheme . 1. the first one is the fact that , in order to encode a -qubit state , we need optical modes , which means that an exponential amount of physical resource is required ; this limits the practical feasibility of circuits acting on an arbitrary number of qubits .2 . on the other hand , this scheme is deterministic only for computations executed on a fixed number of qubits : when coupling two registers , physical states of 2 photons will appear , which do not encode any logical state .consider two strings of and qubits ; these are encoded respectively on the spaces and , where .the resulting physical system after a generic lop is the system of two photons on modes , whose hilbert space is strictly larger than the encoding space . 
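A small sketch of steps 1 and 2 of the design procedure above, using the CNOT as the logical circuit: the computational-basis matrix is reinterpreted as the mode-transformation matrix of a LOP multiport, and its action on the single-photon (SPMQ-encoded) basis states is a plain matrix-vector product. Step 3, the RZBB decomposition into beam splitters and phase shifters, is only indicated in a comment; the function names are illustrative.

```python
import numpy as np

def encode(b1, b2):
    # spmq encoding: logical |b1 b2> -> one photon in mode index 2*b1 + b2
    v = np.zeros(4, dtype=complex)
    v[2 * b1 + b2] = 1.0
    return v

# step 1: computational-basis matrix of the logical circuit (cnot used as the example)
U_logical = np.array([[1, 0, 0, 0],
                      [0, 1, 0, 0],
                      [0, 0, 0, 1],
                      [0, 0, 1, 0]], dtype=complex)

# step 2: the same matrix, read as the mode-transformation matrix of a lop multiport;
# on the single-photon subspace its action is an ordinary matrix-vector product
for b1 in (0, 1):
    for b2 in (0, 1):
        print(f"|{b1}{b2}> ->", np.round(U_logical @ encode(b1, b2), 3))

# step 3 (not shown): decompose U_logical into beam splitters and phase shifters with a
# reck-style triangular mesh; for the cnot it reduces to exchanging modes 3 and 4
```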
at present stageit is not clear yet what the architecture of a quantum computer will be , but it seems reasonable that it will be a hybrid object made out of different components ; in this regard , scalability is only one of the requirements to be satisfied , and it should not be considered so stringent as to rule out a proposal for the implementation . with the scheme we presented here , one can build circuits acting on a small number , e.g. 2 or 3 , of qubits , which allow to test experimentally with present technology some interesting qip protocols , as we show in the next section . as a further remark, we just point out a possible way to reduce the number of optical modes necessary to encode -qubit logical states , in such a way that is polynomial in .in fact , one could encode -qubits in the subspace of , with , and suitably exploit the irreducible representations of u( ) acting in such subspaces for implementing logic gates .with respect to the second problem , a solution could be found by means of a suitable postselected circuit that allows to reduce this deterministic non - scalable scheme to the non - deterministic scalable scheme proposed in , and _ viceversa_.we first give some basic examples of 2-qubit gates , and then a composite circuit for the generation and measurement of bell states . to this aim, we write down explicitly the linear map that encodes logical 2-qubit states by single - photon 4-mode states , namely on : where , following the conventional notation for -qubit states , in the central column we have introduced the binary form of the logical state ; we will always use this notation in what follows .the first lop circuit we present is a very simple one , implementing a cnot gate .this is a 2-qubit universal gate , i.e. it can be shown that the cnot and arbitrary 1-qubit gates are sufficient to build any quantum logic network .the cnot gate acts on the logical computational basis states flipping the second ( _ target _ ) qubit when the first ( _ control _ ) qubit is in the state : and it is represented by the following matrix acting on the computational basis vectors : exploiting the following matrix identity : one can buid the lop circuit ( fig .[ fig4 ] ) corresponding to : decomposition is trivial in this case , since itself describes the transformation of a bs with transmission ( that is , a simple exchange ) coupling modes 3 and 4 .note that only one single photon source , three vacuum sources , and a classically controllable operation ( the interchange of modes 3 and 4 ) are required , thus eliminating a possible source of errors due to non - ideal lop components ; the simplicity of this circuit is remarkable if one thinks that the cnot is a basic gate that could be applied several times while running a quantum computation .lop cnot.,scaledwidth=30.0% ] this gate append a desired phase facor to the state , while leaving the other unchanged ; it can be shown by a simple calculation that a cnot transformation can be obtained by a cphase with ( also called csign ) preceeded and followed by a suitable transformation of the target qubit .the cphase is represented by the matrix : also in this case decomposition is trivial , and the corresponding lop circuit requires only one ps with acting on the 4-th mode , as depicted in fig .[ fig5 ] .lop cphase.,scaledwidth=30.0% ] this is the gate that interchanges the logical state of the two qubits , represented by the matrix : and it correspond to a sequence of three alternate cnot s . 
butwithin this scheme it is not necessary to implement this sequence : it suffices to note that the matrix interpreted as a lop matrix describes a bs with transmission coupling modes 2 and 3 , which corresponds to the simple circuit shown in fig .[ fig6 ] .lop swap.,scaledwidth=30.0% ] many qip protocols , e.g. quantum teleportation , rely on the use of entangled states of qubits as a resource , and on the ability to distinguish among such states , thus leading to many efforts towards the production and detection of entangled physical states .bell states are defined as the four maximally entangled states of a 2-qubit system ; within the scheme of qc they can be produced by means of a circuit composed by a cnot preceeded by a hadamard gate on the control qubit : this maps the logical computational basis states onto the bell states : by a simple calculation one finds that it is represented by the followng matrix : where denotes the hadamard gate , and index 1 refers to the fact that it acts on the first ( control ) qubit .implementation of a bell state analyzer in the framework of linear optics has been studied leading to the result that a complete measurement in the qubit polarization bell basis is not possible .nevertheless , in the spmq scheme we are proposing , a simple lop circuit for the simulation of bell state production and analysis can be found that works deterministically : as in the previous examples , one just takes the matrix ( [ bellmat ] ) as a lop circuit matrix , decompose it and obtain the circuit depicted in fig .[ fig7 ] . only two balanced ( transmission ) bs , with a sign change upon reflection off the lower side , and interchange of modes 3 and 4 are required .lop circuit for bell states generation ( running from left to right ) and measurement ( from rigtht to left ) .the two bs are balanced , and produce a sign change on reflection off the lower face.,scaledwidth=40.0% ]in this paper we have considered some algebraic structures of linear optics , and discussed how they provide a natural way to deal with linear - optical quantum computing ; on the other hand , we have stressed on the correspondence of such algebraic objects with the basic instruments used in the laboratories , to point out that practical implementations can , in principle , be designed and tested . as a result , we first derived a spmq scheme for encoding any number of qubits by only one photon ; then, we described an algorithmic procedure which , given any logical quantum circuit , allows to design the linear optical circuit which implements the logical circuit operation deterministically , and we presented some simple but interesting -qubit gates and circuits .we also discussed some practical problems related to the scalability of such scheme , and pointed out some possible solutions .however it seems reasonable that in the future a quantum computer will be a composite object , which will exploit either scalable and non - scalable , either deterministic and non - deterministic components ( most likely in association with classical components ) . 
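The Bell-state generator discussed above can be verified with a few lines of linear algebra: its computational-basis matrix is the CNOT composed with a Hadamard on the control qubit, and applying it to the encoded basis states reproduces the four Bell states. This is a numerical check of the stated matrix, not a model of the optical hardware.

```python
import numpy as np

H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
I2 = np.eye(2, dtype=complex)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

# bell generator: hadamard on the control qubit followed by a cnot
B = CNOT @ np.kron(H, I2)

for j, lab in enumerate(["|00>", "|01>", "|10>", "|11>"]):
    e = np.zeros(4, dtype=complex); e[j] = 1.0
    print(lab, "->", np.round(B @ e, 3))
# |00> -> (|00> + |11>)/sqrt(2), |01> -> (|01> + |10>)/sqrt(2), and so on;
# read as a lop matrix, B corresponds to two balanced beam splitters plus the exchange of
# modes 3 and 4, and run in the reverse direction the same multiport acts as a bell analyzer
```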
in this regard , it is interesting to test such components experimentally , and further work is needed to quantify the effects of non - ideal instruments on the theoretical schemes . it should be noticed that the scheme proposed here requires no ancillary systems , namely additional photon sources and counters , which are the main sources of inefficiency ; _ only one _ single - photon source is required , regardless of the number of qubits , and this should lead to an efficiency that is almost independent of the number of qubits . ( footnote : the condition in question appears in a remarkable paper by dixmier j 1958 _ comp . math . _ * 13 * 263 - 270 ; it allows one to rule out possible non - standard , i.e. inequivalent , realizations of the canonical commutation relations . as an example of such a non - standard but physically meaningful realization , see reeh h 1988 _ j. math . phys . _ * 29 * 1535 - 1536 . )
linear - optical passive ( lop ) devices and photon counters are sufficient to implement universal quantum computation with single photons , and particular schemes have already been proposed . in this paper we discuss the link between the algebraic structure of lop transformations and quantum computing . we first show how to decompose the fock space of optical modes in finite - dimensional subspaces that are suitable for encoding strings of qubits and invariant under lop transformations ( these subspaces are related to the spaces of irreducible unitary representations of u( ) ) . next we show how to design in algorithmic fashion lop circuits which implement any quantum circuit deterministically . we also present some simple examples , such as the circuits implementing a cnot gate and a bell - state generator / analyzer .
in the research of verification , very often two types of specification properties attract most interest from academia and industry .the first type specifies that _ `` bad things will never happen '' _ while the second type specifies that _`` good things will happen''_ . in linear temporal logics ,the former is captured by modal operator while the latter by .tremendous research effort has been devoted to the efficient analysis of these two types of properties in the framework of linear temporal logics . in the branching temporal logics of ( timed ) _ctl_ , these two types can be mapped to modal operators and respectively . properties are called _ safety _ properties while s are usually called _ inevitability _ properties . in the domain of dense - time system verification ,people have focused on the efficient analysis of safety properties .inevitability properties in _ timed ctl ( tctl)_ are comparatively more complex to analyze due to the following reason . in the framework of model - checking , to analyze an inevitability property , say , we actually compute the set of states that satisfy the negation of inevitability , in symbols ] and the initial states for the answer to the inevitability anlaysis . however , property in tctl semantics is only satisfied with non - zeno computations .( zeno computations are those counter - intuitive infinite computations whose execution times converge to a finite value . ) for example , a specification like `` along all computations , eventually a bus collision will happen in three time units '' can be violated by a zeno computation whose execution time converges to a finite timepoint , e.g. 2.9 . such requirement on non - zeno computationsmay add complexity to the evaluation of inevitability properties .in this work , we present our symbolic tctl model - checking algorithm which can handle the non - zeno requirement in the evaluation of _ greatest fixpoints_. the evaluation of inevitability properties in tctl involves nested reachability analysis and demands much higher complexity than simple safety analysis . to contain the complexity of tctl inevitability ,it is important to integrate new and old techniques for a performance solution . in this paper , we investigate three approaches . in the first approach ,we investigate how to adjust a parameter value in our greatest fixpoint evaluation algorithms for better performance .we have carried out experiments to get insight on this issue . in the second approach, we present a technique called _ early decision on the greatest fixpoint ( edgf)_. the idea is that , in the evaluation of the greatest fixpoints , we start with a state - space and iteratively pare states from it until we reach a fixpoint . throughout iterations of the greatest fixpoint evaluations ,the state - space is non - increasing .thus , if in a middle greatest fixpoint evaluation iteration , we find that target states have already been pared from the greatest fixpoint , we can conclude that it is not possible to include these target states in the fixpoint . through this technique, we can reduce time - complexity irrelevant to the answer of the model - checking . 
as reported in section [ sec.experiments ] , significant performance improvement has been observed in several benchmarks .our third approach is to use abstraction techniques .we shall focus on a special subclass , , of tctl in which every formula can be analyzed with safe abstraction if over - approximation is used in the evaluation of its negation .for example , we may write the following formula in the subclass . this formula says that if a request is responded by a service , then a request will follow the service .this subclass allows for nested modal formulas and we feel that it captures many tctl inevitability properties . one challenge in designing safe abstraction techniques in model - checking is making them accurate enough to discern many true properties , while still allowing us to enhance verification performance . in previous research , people have designed many abstraction techniques for reachability analysis , as have we .however , for model - checking formulas in , abstraction accuracy can be a bigger issue since the inaccuracy in abstraction can be potentially magnified when we use inaccurate evaluation results of nested modal subformulas to evaluate nesting modal subformulas with abstraction techniques .thus it is important to discern accuracy of previous abstraction techniques in discerning true formulas . in this paper, we also discuss another possibility for abstract evaluation of greatest fixpoints , which is to omit the requirement for non - zeno computations in tctl semantics . as reported in section [ sec.experiments ] , many benchmarks are true even without exclusion of zeno computations .finally , we have implemented these ideas in our model - checker / simulator red 4.1 .we report here our experiments to observe the effects of parameter values , edgf , various abstraction techniques , and non - zeno requirements on our inevitability analysis .we also compare our analysis with kronos 5.1 , which is another model - checker for full tctl .our presentation is ordered as follows .section [ sec.relwork ] discusses several related works .sections [ sec.system ] and [ sec.tctl ] give brief presentations of our model , _ timed automata ( ta ) _ , and tctl . section [ sec.algorithm ] presents our tctl model - checking algorithm with requirements for non - zeno computations .section [ sec.edgf ] improves our model - checking algorithm using an edgf technique .section [ sec.zeno ] gives another version of a greatest fixpoint evaluation algorithm by omitting the requirement of non - zeno computations .section [ sec.tctle ] identifies the subclass of tctl which supersedes many inevitability properties , while allowing for safe abstract model - checking by using over - approximation techniques .section [ sec.experiments ] illustrates our experiment results and helps clarify how various techniques can be used to improve analysis of inevitability properties .section [ sec.conc ] is the conclusion .the ta model with dense - time clocks was first presented in .notably , the data - structure of dbm is proposed in for the representation of convex state - spaces of ta .the theory and algorithm of tctl model - checking were first given in .the algorithm is based on region graphs and helps manifest the pspace - complexity of the tctl model - checking problem . 
in ,henzinger et al proposed an efficient symbolic model - checking algorithm for tctl .however , the algorithm does not distinguish between zeno and non - zeno computations .instead , the authors proposed to modify tas with zeno computations to ones without . in comparison ,our greatest fixpoint evaluation algorithm is innately able to quantify over non - zeno computations .several verification tools for ta have been devised and implemented so far .uppaal is one of the popular tool with dbm technology .it supports safety ( reachability ) analysis in forward reasoning techniques .various state - space abstraction techniques and compact representation techniques have been developed .recently , moller has used uppaal with abstraction techniques to analyze restricted inevitability properties with no modal - formula nesting .the idea is to make model augmentations to speed up the verification performance .moller also shows how to extend the idea to analyze tctl with only universal quantifications .however , no experiment has been reported on the verification of nested modal - formulas .kronos is a full tctl model - checker with dbm technology and both forward and backward reasoning capability .experiments to demonstrate how to use kronos to verify several tctl _ bounded inevitability _ properties is demonstrated in .( _ bouonded inevitabilities _ are those inevitabilities specified with a deadline . ) but no report has been made on how to enhance the performance of general inevitability analysis . in comparison , we have discussed techniques like edgf and abstractions which handle both bounded and unbounded inevitabilities .ddd is a reachability analyzer based on bdd - like data - structures for ta .sgm is a compositional safety ( reachability ) analyzer for ta , also based on dbm technology .a newer version also supports partial tctl model - checking .cmc is another compositional model - checker .its specification language is a restricted subclass of and is capable of specifying bounded inevitabilities .our tool red ( version 4.1 ) is a full tctl model - checker / simulator with a bdd - like data - structure , called crd ( clock - restriction diagram) .previous research with red has focused on enhancing the performance of safety analysis .abstraction techniques for analysis have been studied in great depth since the pioneering work of cousot et al . for ta, convex - hull over - approximation has been a popular choice for dbm technology due to its intuitiveness and effective performance .it is difficult to implement this over - approximation in red since variable - accessing has to observe variable - orderings of bdd - like data - structures .nevertheless , many over - approximation techniques for ta have been reported in for bdd - like data - structures and in specifically for crd .relations between abstraction techniques and subclasses of ctl with only universal ( or existential respectively ) path quantifiers has been studied in .as mentioned above , the corresponding framework in tctl is noted in .we use the widely accepted model of _ timed automata_ , which is a finite - state automata equipped with a finite set of clocks which can hold nonnegative real - values . at any moment , a timed automata can stay in only one _ mode _ ( or _ control location _ ) . in its operation ,one of the transitions can be triggered when the corresponding triggering condition is satisfied . upon being triggered ,the automata instantaneously transits from one mode to another and resets some clocks to zero . 
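before the formal definition , the operational behaviour just described ( modes , clocks growing at a uniform rate , guarded transitions with clock resets ) can be made concrete with a small explicit - state sketch . the code below is purely illustrative and ours : real tools such as red , kronos or uppaal never enumerate concrete clock valuations but work symbolically with zones / dbms or bdd - like structures .

```python
from dataclasses import dataclass, field

@dataclass
class Transition:
    source: str
    target: str
    guard: callable               # guard(clock_valuation) -> bool
    resets: set = field(default_factory=set)

@dataclass
class TimedAutomaton:
    modes: list
    clocks: list
    initial_mode: str
    invariant: dict               # mode -> predicate on clock valuations
    transitions: list

def step(ta, mode, clocks, delay, trans):
    """let `delay` time units pass (all clocks grow at rate 1),
    then fire `trans` if its guard and the mode invariant hold."""
    elapsed = {c: v + delay for c, v in clocks.items()}
    if not ta.invariant[mode](elapsed) or not trans.guard(elapsed):
        return None                                   # step not enabled
    after = {c: (0.0 if c in trans.resets else v) for c, v in elapsed.items()}
    return trans.target, after

# example: a one-clock automaton that must leave mode "busy" within 2 time units
ta = TimedAutomaton(
    modes=["idle", "busy"], clocks=["x"], initial_mode="idle",
    invariant={"idle": lambda v: True, "busy": lambda v: v["x"] <= 2},
    transitions=[Transition("idle", "busy", lambda v: True, {"x"}),
                 Transition("busy", "idle", lambda v: v["x"] >= 1)])

state = ("idle", {"x": 0.0})
state = step(ta, *state, delay=0.7, trans=ta.transitions[0])   # enter busy, reset x
state = step(ta, *state, delay=1.3, trans=ta.transitions[1])   # back to idle
print(state)   # ('idle', {'x': 1.3})
```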
between transitions , all clocks increase readings at a uniform rate . for convenience ,given a set of modes and a set of clocks , we use as the set of all boolean combinations of atoms of the forms and , where , , `` '' is one of , and is an integer constant .a ta is given as a tuple with the following restrictions . is the set of clocks . is the set of modes . is the initial condition . defines the invariance condition of each mode . is the set of transitions . and respectively define the triggering condition and the clock set to reset of each transition .a _ valuation _ of a set is a mapping from the set to another set .given an and a valuation of , we say _ satisfies _ , in symbols , iff it is the case that when the variables in are interpreted according to , will be evaluated as .a state of is a valuation of such that there is a unique such that and for all ; for each , ( the set of nonnegative reals ) and .given state and such that , we call the mode of , in symbols . for any , is a state identical to except that for every clock , . given , is a new state identical to except that for every , . given a ta , a _ run _is an ( infinite ) sequence of state - time pairs , , such that and is a monotonically increasing real - number ( time ) divergent sequence , and for all , _ invariance conditions are preserved in each interval : _ that is , + for all ] , s.t . , for all , if either ) ] , .in other words , satisfies iff there exists a run from such that is always true .a ta satisfies a tctl formula , in symbols , iff for every state , .our model - checking algorithm is backward reasoning . we need two basic procedures , one for the computation of the weakest precondition of transitions , and the other for backward time - progression .these two procedures are important in the symbolic construction of backward reachable state - space representations .various presentations of the two procedures can be found in . given a state - space representation and a transition , the first procedure , , computes the weakest preconditionin which , every state satisfies the invariance condition imposed by ; and from which we can transit to states in through .the second procedure , , computes the space representation of states from which we can go to states in simply by time - passage ; and every state in the time - passage also satisfies the invariance condition imposed by . with the two basic procedures, we can construct a symbolic backward reachability procedure as in .we call this procedure for convenience .intuitively , characterizes the backwardly reachable state - space from states in through runs along which all states satisfy .computationally , can be defined as the least fixpoint of the equation , i.e. , .our model - checking algorithm is modified from the classic model - checking algorithm for tctl .the design of the greatest fixpoint evaluation algorithm with consideration of non - zeno requirement is based on the following lemma .[ lemma.gfp ] given , iff there is a finite run from of duration such that along the run every state satisfies and the finite run ends at a state satisfying . + details are omitted due to page - limit .but note that we can construct an infinite and divergent run by concatenating an infinite sequence of finite runs with durations .the existence of infinitely many such concatenable finite runs is assured by the recursive construction of .then can be defined with the following greatest fixpoint . 
here clock zc is used specifically to measure the non - zeno requirement .the following procedure can construct the greatest fixpoint satisfying with a non - zeno requirement . ' '' '' / * d is a static parameter for measuring time - progress * / \ { + ; ; + repeat until , \{(1 ) + ; ; ( 2 ) + } + return ; + } ' '' '' here removes a clock from a state - predicate without losing information on relations among other clocks .details can be found in appendix [ app.clock.eliminate ] .note here that works as a parameter .we can choose the value of for better performance in the computation of the greatest fixpoint .procedure gfp ( ) can be used in the labeling algorithm in to replace the evaluation of -formulas . for completeness of the presentation ,please check appendix [ app.mck ] to see our complete model - checking algorithm with non - zeno requirement .the correctness follows from lemma [ lemma.gfp ] .in the evaluation of the greatest fixpoint for formulas like , we start from the description , say , for a subspace of and iteratively eliminate those subspaces which can not go to a state in through finite runs of time units .thus , the state - space represented by shrinks iteratively until it settles at a fixpoint . in practice, this greatest fixpoint usually happens in conjunction with other formulas .for example , we may want to specify meaning that a bus at the collision state , will enter the idle state in 26 time - units .after negation for model - checking , we get . in evaluating this negated formula ,we want to see if the greatest fixpoint for the -formula intersects with the state - space for .we do not actually have to compute the greatest fixpoint to know if the intersection is empty .since the value of iteratively shrinks , we can check if the intersection between and the state - space for becomes empty at each iteration of the greatest fixpoint construction ( i.e. , the repeat - loop at statement ( 1 ) in procedure gfp ) .if at an iteration , we find the intersection with is already empty , then there is no need to continue calculating the greatest fixpoint and we can immediately return the current value of ( or ) without affecting the result of the model - checking .based on this idea , we rewrite our model - checking algorithm with our _ early deision on the greatest fixpoint ( edgf)_. we introduce a new parameter to pass the information of the target states inherited from the scope . ' '' '' + / * is the set of clocks declared in the scope of * / + / * is constraints inhereted in the scope of for early decision of gfp * / \ { + switch ( ) \ { + * case * ( ) : return ; + * case * ( ) : return ; + * case * ( ) : return ; + * case * ( ) : return ; + * case * ( ) : + if does not contain modal operator , \ { + ; + return ;(3 ) + } + else \ { + ; + return ;(4 ) + } + * case * ( ) : return ; + * case * ( ) : return ; + * case * ( ) : + ; ; + return ; + * case * ( ) : return ; + } + } + / * d is a static parameter for measuring time - progress * / \ { + ; ; + repeat until or , \{(5 ) + ; ; ( 6 ) + } + return ; + } ' '' '' to model - check ta against tctl formula , we reply iff is false .as can be seen from statement ( 3 ) and ( 4 ) in the case of conjunction formulas , we strengthen the target - state information . 
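the nested structure of gfp ( ) / gfp_edgf ( ) — an outer loop that pares states from the candidate set and an inner least - fixpoint ( backward reachability ) evaluation — can be illustrated on a finite , untimed abstraction in which symbolic state predicates become plain sets . the sketch below is ours and deliberately omits the clock zc and the parameter d ( the requirement of at least d time units of progress only makes sense in dense time ) ; it only shows the shape of the computation and the early - decision test that stops as soon as the candidate set no longer meets the inherited target constraint .

```python
def pre(edges, Y):
    """one-step backward image: states with some successor in Y."""
    return {s for (s, t) in edges if t in Y}

def reachable_bck(edges, eta1, eta2):
    """least fixpoint of  Y = eta2 | (eta1 & pre(Y)):
    states that can reach eta2 along a path staying inside eta1."""
    Y = set(eta2)
    while True:
        new = Y | (eta1 & pre(edges, Y))
        if new == Y:
            return Y
        Y = new

def gfp_edgf(edges, eta1, target):
    """greatest fixpoint  Z = eta1 & pre(Z): states with an infinite path that
    stays in eta1 (untimed analogue of the gfp evaluation).  in the timed
    procedures of the text, pre(Z) becomes 'can reach Z within eta1 after at
    least d time units', which is what rules out zeno computations.  the
    early-decision (edgf) test aborts as soon as Z misses the target."""
    Z = set(eta1)
    while True:
        if not (Z & target):            # early decision on the greatest fixpoint
            return Z
        new = eta1 & pre(edges, Z)
        if new == Z:
            return Z
        Z = new

# toy example: 0 -> 1 -> 2 -> 0 is a cycle inside eta1, state 3 only leaves eta1
edges = {(0, 1), (1, 2), (2, 0), (3, 4)}
eta1, target = {0, 1, 2, 3}, {3}
print(reachable_bck(edges, eta1, {0}))   # {0, 1, 2}: can reach 0 while staying in eta1
print(gfp_edgf(edges, eta1, target))     # returns early once state 3 has been pared away
```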
in the evaluation of the greatest fixpoint ,we use condition - testing in statement ( 5 ) respectively to check for early decision .in practice , the greatest fixpoint computation procedures presented in the last two sections can be costly in computing resources since their characterizations have a least fixpoint nested in a greatest fixpoint .this is necessary to guarantee that only nonzeno computations are considered . in reality, it may happen that , due to well - designed behaviors , systems may still satisfy certain inevitability properties for both zeno and non - zeno computation . in this case , we can benefit from a less expensive procedure to compute the greatest fixpoint .for example , we have designed the following procedure which does not rule out zeno computations in the evaluation of -formulas . ' '' '' \ { + ; + repeat until or , \ { ( 7 ) + ; ; + } + return ; + } ' '' '' even if the procedure can be imprecise in over - estimation of the greatest fixpoint , it can be much less expensive in the verification of well - designed real - world projects .we have also experimented with abstraction techniques in the evaluation of greatest fixpoints .due to page - limit , we shall leave the explanation in appendix [ app.tctla ] .the corresponding xperiment report is in subsection [ subsec.exp.tctla ] .we have implemented the ideas in our model - checker / simulator , red version 4.1 , for ta .red uses the new bdd - like data - structure , _ crd _ ( clock - restriction diagram) , and supports both forward and backward analysis , full tctl model - checking with non - zeno computations , deadlock detection , and counter - example generation .users can also declare global and local ( to each process ) variables of type clock , integer , and pointer ( to identifier of processes ) .boolean conditions on variables can be tested and variable values can be assigned .the tctl formulas in red also allow quantification on process identifiers for succinct specification .interested readers can download red for free from .... http://cc.ee.ntu.edu.tw/~val/ ....we design our experiment in two ways .first , we run red 4.1 with various options and benchmarks to test if our ideas can indeed improve the verification performance of inevitability properties in .second , we compare red 4.1 with kronos 5.2 to check if our implementation remains competitive in regard to other tools .however , we remind the readers that comparison report with other tools should be read carefully since red uses different data - structures from kronos . moreover , it is difficult to know what fine - tuning techniques each tool has used .thus it is difficult to conclude if the techniques presented in this work really contribute to the performance difference between red and kronos .nevertheless , we believe it is still an objective measure to roughly estimate how our ideas perform . in the following section , we shall first discuss the design of our benchmarks , then report our experiments .data is collected on a pentium 4 1.7ghz with 256 mb memory running linux .execution times are collected for kronos while times and memory ( for data - structure ) are collected for red .`` s '' means seconds of cpu time , `` k '' means kilobytes for memory space for data - structures , `` o / m '' means `` out - of - memory . 
''we do not claim that the benchmarks selected here represent the complete spectrum of model - checking tasks .the evaluation of tctl formulas may incur various complex computations depending on the structures of the timed automata and the specification formulas .but we do carefully choose our benchmarks according to the broad spectrum of combination of models and specifications so that we can gain some insights about performance enhancement of tctl inevitability analysis .benchmarks include three different timed automatas and specifications for unbounded inevitability , bounded inevitability , and modal operators with nesting depth zero , one , and two respectively .we identify one important benchmark which can only be verified with non - zeno computations .the other benchmarks can be ( safely ) verified without requirement of non - zeno computations .due to page - limit , we leave the description of the benchmarks in appendix [ app.benchmarks ] . in statement ( 2 ) of procedure gfp ( ) and statement ( 6 ) of procedure gfp_edgf ( ) , we use inequality to check time - progress in non - zeno computations , where is a parameter .we can choose various values for the parameter in our implementations .in our experiment reported in this subsection , we have found that the value of parameter can greatly affect the verification performance . in this experiment, we shall use various values of parameter ranging from to beyond the biggest timing constants used in the models . for the leader - election benchmark ,the biggest timing constant used is . for the pathos benchmark ,the biggest timing constant used is equal to the number of processes . for the csma / cd benchmarks ( a ) , ( b ) , and ( c ) , the biggest timing constant used is equal to 808 .in fact , we can also use inequality , with , in statements ( 2 ) and ( 6 ) of procedures gfp ( ) and gfp_edgf ( ) respectively . due to page - limit, we shall leave the performance data table to appendix [ app.d.table ] .we have drawn charts to show time - complexity for the benchmarks w.r.t .-values in figure [ fig.charts.time ] .cc & + ( a ) leader - election & ( b ) pathos + & + ( c ) csma / cd(a ) & ( d ) csma / cd(b ) + + the y - axis is with `` time in sec '' while the x - axis is with `` '' used in `` . ''more charts for the space - complexity can be found in appendix [ app.d.mem ] . as can be seen from the charts, our algorithms may respond with different complexity curves to various model structures and specifications . for benchmarksleader - election and pathos , it seems that the bigger the -value , the better the performance .for the three csma / cd benchmarks , it seems that the best performance happens when is around 80 .but one thing common in these charts is that always gives the worst performance .we have to admit that we do not have a theory to analyze or predict the complexity curves w.r.t . various model structures and specifications .more experiments on more benchmarks may be needed in order to get more understanding of the curves . in general , we feel it can be difficult to analyze such complexity curves .after all , our models of ta are still `` programs '' in some sense .nevertheless , we have still tried hard to look into the execution of our algorithms for explanation of the complexity cuves . procedures gfp ( ) and gfp_edgf ( ) both are constructed with an inner loop ( for the least fixpoint evaluation of reachable - bck ( ) ) and an outer loop ( for the greatest fixpoint evaluation ) . 
with bigger -values , it seems that the outer loop converges faster while the inner loop converges slower .that is to say , with bigger -values , we may need less iterations of the outer - loop and , in the same time , more iterations of the inner loop to compute the greatest fixpoints . the complexity patterns in the charts are thus superpositions between the complexities of the outer loop and the inner loop .we have used the -values with the best performance for the experiments reported in the next few subsections . for benchmarks pathos and leader - election , is set to .( is the biggest timing constant used in model and tctl specification . ) for the three csma / cd benchmarks , is set to . in our first experiment, we observe the performance of our inevitability analysis algorithm w.r.t .the non - zeno requirement and the edgf policy .the performance data is in table [ tab.z ] ..performance w.r.t .non - zeno requirements and edgf techniques [ cols= " < , < , > , > , > , > " , ]the charts for memory complexity with various -parameter values is in figure [ fig.charts.mem ] .
inevitability properties in branching temporal logics are of the syntax , where is an arbitrary ( timed ) ctl formula . in the sense that `` good things will happen '' , they are parallel to the `` liveness '' properties in linear temporal logics . such inevitability properties in dense - time logics can be analyzed with greatest fixpoint calculation . we present algorithms to model - check inevitability properties both with and without the requirement of non - zeno computations . we discuss a technique for early decision on greatest fixpoints in these temporal logics . our algorithms come with a -parameter for the measurement of time progress . we have experimented with various issues that may affect the performance of tctl inevitability analysis . specifically , we report the performance of our implementation w.r.t . various -parameter values and with or without the non - zeno computation requirement in the evaluation of greatest fixpoints . we have also experimented with safe abstraction techniques for model - checking tctl inevitability properties . analysis of the experimental data helps clarify how various techniques can be used to improve the verification of inevitability properties . * keywords : * branching temporal logics , tctl , real - time systems , inevitability , model - checking , greatest fixpoint , abstraction
helioseismology has been extremely useful to probe the internal structure of the sun but two crucial regions need to be improved for a good understanding of the time evolution of the solar activity : ( 1 ) the solar core for a proper description of the transport of momentum implied by rotation , gravity waves and magnetic field along the evolution and ( 2 ) the subsurface layers which correspond to the transition region between large and small scale dynamics evolution .this transition zone is important to study because it couples the internal dynamics to the dynamics of the solar atmosphere and it plays a crucial role in the emergence of the space weather science ( see for example ) .figure [ f - lepto ] sketches a schematic view of these subsurface layers above 0.96 , which are called the _ leptocline _ region ( from the greek `` leptos''= thin , klino"= tilt ) .this term was proposed by , who computed the solar oblateness and showed a curvature change in this zone which was interpreted as the presence of a double layer . just below the surface, there is a change in radial and latitudinal rotation and the treatment of the superadiabiatic region supposes a proper description of the convection ( presently described by the mixing length parameter ) and of detailed molecular and atomic opacity calculations .one needs also to add the turbulent pressure and the emergence of local magnetic field to these processes , then below , hydrogen and helium pass from neutral to partially ionized and then totally ionized in a region where the magnetic pressure can not be ignored .the mean and varying magnetic field amplitudes have been tentatively extracted by from the analysis of the low degree acoustic modes.the proper interaction between these different physical processes is not yet included in stellar evolution models and could lead to some variation of the solar radius along the solar cycle .the observation of these subsurface layers is not so easy .the difficulty comes from the fact that the high - degree -modes have a small lifetime and are largely perturbed by the turbulent motions and the emerging magnetic field . but important progress has appeared recently thanks to the analyses of long series of seismic data . estimate the variations of the high - degree -modes frequencies over the solar cycle and using ring diagram analysis extract now latitudinal and temporal variations of the sound speed or the density along the hale solar cycle ( at least its last half cycle ) . in parallel , and reported changes with the solar cycle of the solar subsurface stratification , inferred from inversion of soho / mdi -mode frequencies .they notice a different behavior for the layers around 0.99 . between 0.97 and 0.99 , it seems that the position of the layers varies in phase with the solar cycle , whereas it appears opposite in the upper part , above 0.99 , where the variability is in antiphase .so , it seems that the most external layers of the sun shrink during the ascending phase of the solar cycle and relax after the maximum , but these changes are not uniform with depth . 
at the surface , this result is coherent with the observation of the photospheric radius variations .the observed series of ground - based measurements and the measurements aboard stratospheric balloons suggest a reduction of the solar photospheric radius at the maximum of the cycle .the balloon variation estimates are much smaller than the ground - based observations , polluted by the atmospheric turbulence .they are consistent with results from who reported no evidence of solar - cycle visible radius variations between 1996 and 2004 larger than 7 mas .but they disagree with computations of .these authors predict also non - homologous subsurface stratification changes with the 11 year solar cycle but with an amplification up to a factor 1000 from the depth at 5 mm to the surface .it leads to a variation of the solar radius up to 600 km along the cycle as shown in fig . 2 of .such a variation seems extremely large .so the study of the detected -modes is very useful in the present context even the interpretation of -modes has appeared puzzling due to some apparent lack of sensitivity of these modes at the real surface .we develop here a theoretical approach to validate the -modes inversion procedure used previously and applied here to some specific cases that we know perfectly .this paper is the first of a series where we estimate the impact of a change of radius and composition on the subsurface layers and on the -mode frequencies using classical solar models including a detailed microscopic description of these layers . in the present work ,we do not try to justify the origin of the variation of the radius .the next step will be to develop more complex models including magnetic field and differential rotation .our general aim is to properly qualify the variabilities of -modes and solar radius that we will study with the sdo and picard missions ( kosovichev et al .2007 and thuillier , dewitte & schmutz 2006 , respectively).we will also investigate the capability of the -modes to estimate the solar photospheric radius variability .this study will contribute also to clarify the notion of solar radius " to bridge two different communities and obtain without ambiguity a unique definition of this fundamental quantity .after a description of our study ( section [ s - context ] ) , we describe in section [ s - model ] , the models used and examine the subsurface layer changes on different variables produced by a change in the solar radius and composition and we calculate the corresponding -mode frequencies . section [ s - inversion ] is devoted to the use of these frequencies to infer the change in the position of the subsurface layers . the validation of the procedure , an estimate of the present solar cyclic radius variation and the perspectives of this work will conclude the paper in section [ s - discussions ] .the present theoretical work focus on the solar layers located above 0.96 r . the previous approach of tried to interpret the behavior of the seismic acoustic observations by introducing some magnetic effect in solar models to predict the radius variation over the cycle . in the present work ,we concentrate on the -mode frequencies .we use known models and perform inversions that we can verify and compare with the present observational -mode frequency variations . 
for this first paper , we stay in the classical approach and only focus on basic quantities which may vary along the solar cycle without introducing the physical process at the origin of the corresponding variations .the next objectives will be to introduce the different magnetohydrodynamical actors .turbulence , rotation and the local effect of magnetic field with a decent topology must be introduced together in a more sophisticated way than it is generally done .but we would like first to study the potential of the -modes in some well known cases and then show that it is possible to deduce an order of magnitude of the solar radius variation from the present variation of the observed -mode frequencies .the general idea is to progress step by step on the sensitivity of the -modes to the complex physics of the subsurface layers using the same formalism than the one used in . in the present paperwe do not yet impose any kind of magnetic field like in or but we estimate the local changes to decouple the origin of the different effects .we examine the theoretical -mode frequency changes coming from known solar models which mimic a simple expansion or a change in composition .we deduce , for the first time from -modes , an estimate on the quality of extraction of the photospheric radius change and an order of magnitude of its hale cycle change .we use the cesam evolution code to calculate several models of chosen solar radius , luminosity and surface abundance .the different computed cases are organized in 3 sets , and listed in table [ t - table_models ] .our reference model with , and is the seismic solar model ( sesm ) built to reproduce the observed sound speed profile .set 1 is composed of models and converged with a radius fixed respectively at and and the composition fixed at the value of the reference model , but where the mixing - length parameter is a free parameter ( the luminosity is not constrained in models and ) . the radius difference of corresponds to a difference of about 140 km .set 2 is composed of six models , for built with the two free parameters the initial composition in helium and ; in these models we impose a change in the radius and luminosity , and the same ratio of the final superficial chemical composition than that of model . finally set 3 consists of five models , for which differ from set 2 by the value of , andincludes a model where and ( ) .a change of composition is interesting to study because the presence of varying magnetic field in the subsurface layers may lead to an apparent change of hydrogen / helium thermodynamic conditions in solar models with some impact on the density and pressure of the subsurface layers .the goal was to mimic variations of the radius and luminosity which could be extrapolated to variations during the 11-year solar activity cycle .for instance , and have a luminosity varying in the same way than the radius , whereas and present luminosity and radius varying in an opposite way .the relative variations for the radius are chosen larger than the supposed observed ones in order to avoid numerical accuracy problems and to better see their influence on the subsurface layers . in the different cases , the models converge to get the calibrated radius within a precision of 10 that means within 7 km . 
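each entry of table [ t - table_models ] is therefore the result of a small calibration loop : the free parameters ( e.g. the mixing - length parameter and the initial helium content ) are adjusted until the evolved model reaches the prescribed radius and luminosity . the sketch below only illustrates that loop ; the function ` evolve ` is a stand - in for a real evolution code such as cesam , and its linearised sensitivities are invented for the sole purpose of making the example self - contained .

```python
import numpy as np

def evolve(alpha, y0):
    """mock evolution code: returns (R/Rref, L/Lref) of the evolved model.
    the coefficients are illustrative only; a real run would call cesam."""
    return (1.0 - 0.012 * (alpha - 2.04) + 0.05 * (y0 - 0.2747),
            1.0 + 0.020 * (alpha - 2.04) + 0.60 * (y0 - 0.2747))

def calibrate(r_target, l_target, alpha=2.04, y0=0.2747, tol=1e-5):
    """newton iteration on (alpha, Y0) until radius and luminosity match the
    targets to within tol (1e-5 in R/Rref is roughly 7 km)."""
    x = np.array([alpha, y0])
    for _ in range(50):
        r, l = evolve(*x)
        res = np.array([r - r_target, l - l_target])
        if np.max(np.abs(res)) < tol:
            return x
        jac = np.empty((2, 2))                        # numerical jacobian
        for j, h in enumerate((1e-4, 1e-5)):
            xp = x.copy()
            xp[j] += h
            jac[:, j] = (np.array(evolve(*xp)) - np.array([r, l])) / h
        x = x - np.linalg.solve(jac, res)             # newton update
    raise RuntimeError("calibration did not converge")

# e.g. a model with R raised by 2e-4 and L raised by 1e-3 (cf. model 4 of the table)
print(calibrate(1.0002, 1.0010))
```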
the relative change in the luminosity of is similar to the observed solar irradiance variation during the 11-year cycle .the way we introduce it in solar models leads generally to a variation of luminosity on the production of energy instead on the superficial layers .so we see only an indirect effect of the luminosity change on the subsurface layers through changes in temperature and radius , their effect is not negligible for models 10 and 13 .we estimate how the subsurface layers react to the induced changes .for that , we estimate the differences of the following variables between the different models and the seismic model is reported to the reference radius , i.e. to get all models refered to the reference model . ] : the mass , the temperature , the density , the pressure , the sound speed , the adiabatic exponent , the density scale height , the pressure scale height , the radiative temperature gradient , the real temperature gradient , the rosseland opacity coefficient and the gravitational energy . cccccccc model & & & & & & & $ ] + + 1 & 0.70642 & 0.27468 & 2.03990 & 1.0000 & 1.0000 & 0.02447 & 5777.52 + + 2 & 0.70642 & 0.27468 & 2.03751 & 1.0002 & 1.0000 & 0.02447 & 5776.97 + 3 & 0.70642 & 0.27468 & 2.04260 & 0.9998 & 1.0001 & 0.02447 & 5778.30 + + 4 & 0.70632 & 0.27478 & 2.04033 & 1.0002 & 1.0010 & 0.02447 & 5778.40 + 5 & 0.70642 & 0.27468 & 2.03757 & 1.0002 & 1.0000 & 0.02447 & 5776.98 + 6 & 0.70651 & 0.27458 & 2.03463 & 1.0002 & 0.9990 & 0.02447 & 5775.52 + 7 & 0.70634 & 0.27476 & 2.04508 & 0.9998 & 1.0010 & 0.02447 & 5779.56 + 8 & 0.70644 & 0.27466 & 2.04214 & 0.9998 & 1.0000 & 0.02447 & 5778.10 + 9 & 0.70653 & 0.27457 & 2.03937 & 0.9998 & 0.9990 & 0.02447 & 5776.67 + + 10&0.69772 & 0.28186 & 2.06148 & 1.0002 & 1.0010 & 0.02676 & 5778.42 + 11&0.69791 & 0.28166 & 2.05575 & 1.0002 & 0.9990 & 0.02676 & 5775.53 + 12&0.69774 & 0.28184 & 2.06624 & 0.9998 & 1.0010 & 0.02676 & 5779.57 + 13&0.69793 & 0.28164 & 2.06048 & 0.9998 & 0.9990 & 0.02676 & 5776.69 + 14&0.69783 & 0.28175 & 2.06098 & 1.0000 & 1.0000 & 0.02676 & 5777.00 + figures [ f - param ] and [ f - param2 ] illustrate the changes with depth of these variables in comparison with the seismic one .we focus our study to the zone located above 0.96 , that is the zone where changes in the subsurface stratification have been found by along the solar cycle .the upper limit is fixed at 0.998 , beyond which the superadiabatic zone extends , the turbulence acts strongly and the rotation changes quickly .we have chosen the zone where the -modes have a good sensitivity .at a first glance , we notice that most of these variables present non negligible variations in the studied region .we first comment on the common trends of both figures : * the variations shown on the different panels are dominated by the change in radius imposed in the calibration and at the second order by the change of composition for set 3 . *the way we introduce the change in luminosity acts on the nuclear burning layers and has a negligible effect on the subsurface layers .so for clarity , we have not plotted the models 2 and 3 of set 1 , nor 5 and 8 of set 2 , in fact their curves are aligned with the models having the same radius in figure [ f - param ] . 
in contrastthe observed luminosity variation of along the solar cycle is most likely coming from the very external layers .in fact , the variation of radius and of the photospheric temperature of our models produces a change of luminosity determined by the stefan s law .it is generally too small to exhibit any structural effect , except for models 10 and 13 where the variation of luminosity coming from the variation of radius and temperature is near from the imposed variation of luminosity . * the behaviors of , and are very similar i.e. a bump around 0.99 with opposite variations between models of different radius ; this bump exists also for the temperature .this position corresponds to the transition between neutral he and he as discussed in .* the differences of and have similar variations and present a double peak near 0.99 with almost equal amplitude .in addition to the bump at 0.99 , the second bump could be due to the transition h/h .it is reasonable that in this convective region , the gradient of the structure follows the adiabatic exponent . * the variations of and a sign change at 0.99 , connected to the variation of the opacities in the region where the light elements are partially ionized .the variation of the pressure and density modifies the corresponding opacity coefficients .figure [ f - param2 ] differs nevertheless from figure [ f - param ] .figure [ f - param ] shows two symmetrical groups of models depending on the value of the calibrated radius . in the set 3 another change comes from the modification of the heavy elements contribution ( about 10% ) which induces a small change in helium ( about 3% ) and hydrogen . in order to separate properly the effect of radius from the effect of composition , we have calculated model ( ) that includes only the change in composition .we show in figure [ f - param2 ] the changes induced by the composition effect . 
as an evident consequence , we loose the symmetry between the group of models with a greater radius compared to the models with a smaller one , this is mainly visible on the temperature and differences .these first behaviors indicate that the subsurface layers above 0.96 are significantly affected by a change in radius and composition .to go further , we shall estimate the differences in -mode frequencies issued from these models .we compute for each model the theoretical -mode frequencies using the oscillation code adipls to see the impact of the different changes on these quantities .figure [ f - fmodes ] shows the relative difference of frequencies between each model and the reference model versus the corresponding absolute frequencies .we note first that the -mode frequencies of a model with a larger radius are smaller than those of the reference model .the change of in radius leads to a relative change of about on -mode frequencies .the left panel of figure 4 corresponds to sets 1 and 2 .the difference associated to and is slightly bigger in absolute value from the difference associated to and due to the value of the reached real radius : + 135 km for and and -143 km for and ( this difference is within the precision imposed of the model radius ) .the frequency dependence has almost a flat behavior , whereas this is not the case for set 3 in the right panel where the relative difference has an extremum around due to the additional effect of the change in composition , which produces non - uniform changes as already discussed .in this section we do the inversion of the -mode frequencies for degrees between 100 and 300 , in a range slightly bigger than the one chosen previously with the corresponding solar observational quantities to see the reproductivity of the radial variation for known models , to qualify the procedure and to extrapolate a new estimate of the radius variation along the solar cycle .we use the same formalism as in to infer the changes in the position of subsurface layers from the -mode frequency variations . a relation between the relative frequency variations for -modes andthe associated lagrangian perturbation of the radius of the subsurface layers has been established by : where is the degree of the -modes , is the moment of inertia as classically defined in , is the angular frequency of the eigenpulsation ( ) and is the gravity acceleration . the validity of this equation is limited to the case where any magnetic field effect is explicitly introduced in the equations which is the case in this work .it also supposes that , if and are respectively the vertical and horizontal eigenfunctions , the property of -modes is satisfied .this equation allows us to obtain from . 
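the next section applies a standard regularized least - squares technique to this relation ; the sketch below ( ours ) illustrates the principle of such an inversion on the discretised problem , where each relative frequency shift is a kernel - weighted average of dr / r over the subsurface grid . the kernels used here are mock gaussian kernels peaking closer to the surface for higher degree , chosen only to keep the example self - contained ; real kernels follow from the eigenfunctions , mode inertia and gravity entering equation ( [ eq - eq_radius ] ) of the reference model , and the regularisation weight would be tuned in practice .

```python
import numpy as np

rng = np.random.default_rng(0)
nr = 60
r = np.linspace(0.96, 1.0, nr)                     # fractional radius grid
degrees = np.arange(100, 301, 10)

def mock_kernel(l):
    """illustrative sensitivity kernel: higher l probes shallower layers."""
    k = np.exp(-0.5 * ((r - (1.0 - 1.5 / l)) / 0.006) ** 2)
    return k / np.sum(k * np.gradient(r))          # unit integral

A = np.array([mock_kernel(l) * np.gradient(r) for l in degrees])   # forward matrix

# synthetic "truth": a uniform 2e-4 R (~140 km) shift plus a bump near 0.99 R
dr_true = 2.0e-4 + 4e-5 * np.exp(-0.5 * ((r - 0.99) / 0.004) ** 2)
data = A @ dr_true + 1e-7 * rng.standard_normal(len(degrees))      # noisy delta_nu/nu

# tikhonov (regularized least-squares) solution with a first-difference smoother
L1 = np.diff(np.eye(nr), axis=0)
lam = 1e-5
Areg = np.vstack([A, np.sqrt(lam) * L1])
dreg = np.concatenate([data, np.zeros(nr - 1)])
x = np.linalg.lstsq(Areg, dreg, rcond=None)[0]

i = np.argmin(np.abs(r - 0.98))
print("recovered shift at 0.98 R:", x[i] * 6.96e5, "km")   # of the order of the imposed ~140 km
```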
for these inversions, we used as the reference model and a standard regularized least - square technique , since equation [ eq - eq_radius ] defines an ill - posed inverse problem ) .the validity of this equation for the different cases analyzed here will be discussed in the last section .figures [ f - inversion1 ] and [ f - inversion2 ] show results from inverting equation [ eq - eq_radius ] for each computed model .these figures also show the frequencies computed by integrating the solutions using equation [ eq - eq_radius ] , and how well they match the model frequencies within the error bars .the uncertainties were set arbitrarily to and respectively for the need of the inversion process and the will to mimic real variations .this fact guarantees the good quality of the inversion .the main characteristics of our solutions are : * the inversion solutions plotted in figure [ f - inversion1 ] for set 2 lead to an almost uniform radial variation .the amplitude of the radial variation is similar to the one imposed at the surface , as expected from the relative variations of frequencies in the left panel of figure [ f - fmodes ] , i.e. about km with a sign respecting the nominal one .nevertheless , there is a small difference near the surface ( see also next section ) , which is due to the poor spatial resolution of the kernels at the surface .this problem has already been raised in where the shape of the kernels is shown . *the solutions for set 3 shown in figure [ f - inversion2 ] are slightly different .below , the variation is constant but closer to the surface there are non - monotonic changes in the stratification with a bump centered at .the uncertainty of the localization of this bump is governed by the characteristic width of our kernels that is about . *these variations are different in shape and in amplitude to those found by .this is not surprising because we do not yet introduce the dynamical processes which generate the solar cycle .* since the only difference between some models of set 2 and some others of set 3 is the different value of , this bump is clearly a consequence of the introduction of this change in composition affecting the subsurface layers through the pressure and mainly the equation of state .we note that the value of is not any longer about 140 km but about 110 km ( 135 - 20 km ) for models and and -160 km ( 142 + 20 km ) for and at , this is easily explained by looking at the between the two reference models of about 20 km at this depth .it is clear that a change of composition has an effect largely below the superficial layers .in fact the inversion process implies that the solution is not unique . 
by adjusting the regularisation parameters, we can find another profile with two bumps for with also a very good fit to frequency .that solution has not been retained because ( i ) the variations at the surface are not in agreement with the input radius variation as cited in table [ t - table_models ] , ( ii ) the error bars are bigger and ( iii ) the curves are more oscillating .moreover , we can not presently give a physical interpretation to this solution .on the contrary , for the first solution , we discuss in section [ s - discussions ] how we understand the present inversions .equation 8 of supposes that the radial position used in equation [ eq - eq_radius ] is the lagrangian radius .-modes are surface oscillation trapped waves , so each mode oscillates around an equilibrium radius .we relate in this section this radial position to that of the structure .the extraction of the solar subsurface structure and the photospheric radius variation along the solar cycle is not so easy to determine and several papers have been published with different conclusions . the most recent work of shows the difficulty to obtain a general expression for the inversion , so this work allows to test such a procedure .we verify in this section if the displacement of equation [ eq - eq_radius ] corresponds to the radial displacement at a given mass for the different cases studied . with this purpose , we calculate for each set ( sets 1 and 2 lead to the same result ) the radial displacement of a layer corresponding to a given value of the structural quantities , i.e. where represents a model structure quantity like , , or , and is the model number .figure [ f - radius_param ] shows the panels representing at constant , , and . for sets 1 or 2, we have the same behaviour for all the quantities and equation [ eq - eq_radius ] can be applied without any doubt . 
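the displacement of a layer at a fixed value of a structural quantity , used above , is simple to compute from two tabulated model profiles : invert the ( monotonic ) profile of the perturbed model and evaluate it at the values taken by the reference profile . the short sketch below ( ours ) does this by interpolation on mock pressure - like profiles ; with real cesam outputs one would feed the tabulated profiles of each model of table [ t - table_models ] .

```python
import numpy as np

def dr_at_constant_q(r_ref, q_ref, r_mod, q_mod):
    """radial displacement dr(r) such that the perturbed model takes the value
    q_ref(r) at radius r + dr(r); profiles are assumed monotonic in r."""
    order = np.argsort(q_mod)                       # invert the perturbed profile q -> r
    return np.interp(q_ref, q_mod[order], r_mod[order]) - r_ref

r = np.linspace(0.96, 0.998, 200)                   # fractional radius
q_ref = np.exp(-(r - 0.96) / 0.004)                 # mock pressure-like profile
q_mod = np.exp(-(r - 0.96 - 2e-4) / 0.004)          # same profile shifted outwards by 2e-4 R

dr = dr_at_constant_q(r, q_ref, r, q_mod)
print("mean displacement:", dr.mean() * 6.96e5, "km")   # about 140 km, as imposed
```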
for the set 3 ,the radial difference , computed by the procedure described above , differs from one quantity to another .the only curve similar in amplitude and in shape with the one computed by inversion of -mode frequencies in figure [ f - inversion2 ] is the curve computed at fixed pressure height .effectively figure [ f - superposition ] shows an excellent superposition of the corresponding curve and the radial displacement issued from the inversion .this agreement , in shape and amplitude , demonstrates that the validity of equation 1 in this case requires to associate the radial displacement to this quantity .it is in fact not surprising because the -mode frequencies are naturally sensitive to the pressure scale height .we note also that the inversion is not able to reproduce precisely the behavior very near the surface due to the lack of sensitivity of the -modes .but we can deduce from this study an estimate of the uncertainty on the determination of the photospheric radius variation : we estimate it of the order of 15% .[ [ a - new - prediction - on - the - solar - radius - variation - along - the - solar - cycle ] ] a new prediction on the solar radius variation along the solar cycle ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ we show in this paper that a pure variation in the solar radius produces changes in the subsurface layers that are in the same zone than that studied with the observed -modes by .such a change is characterized by changes in the subsurface stratification and more exactly variations in the computed position of the layers .we show that the changes detected by -modes are physically related to the variation of the pressure . in our modellingwe are able to produce non - uniform changes by slightly modifying the chemical composition below the solar surface .this is not excluded if one introduces dynamical processes in keeping the same constraints on luminosity and radius .the two studies show different stratifications of the outlayers but with similar effects on the -mode frequencies . considering larger effects on the -mode frequencies than in the solar case, we propose to deduce from this study an estimate of the variation of the solar radius along the cycle . from of about for a of about 140 km , we lead to a solar radius variation along the solar cycle of about km for the observed variations in frequency of about .this result is consistent with the radius extrapolated by ( see respectively figures 1 and 3 of this article ) .it is interesting to note that this radius variation estimate is also compatible ( in order of magnitude ) with the observed low degree acoustic mode variation of about 0.4 along the solar cycle of the low degree -modes .this study represents the first step on the way to explain the variations of the subsurface stratification along the solar cycle and reinforces the interest of the -modes. 
the next step will be the development of models including magnetic field and other dynamical processes , which will permit a more realistic study . nevertheless , it will also require an improved expression for the inversion of -modes , which has not yet been obtained . we believe that the introduction of a magnetic field will influence the stratification of the subsurface layers . the study by pointed out non - uniform radial changes , depending on the importance of the magnetic pressure . moreover , it will also be interesting to take into account the differential rotation , and consequently some asphericity , and to confront them with the subsurface latitudinal stratification over the 11-year cycle . we would like to emphasize that a better knowledge of these subsurface layers will contribute to our understanding ( i ) of the dynamics of the 11-year solar cycle and ( ii ) of the sun - earth relationship for space climate . this requires , in parallel , the simultaneous measurement of the changes in radius and frequencies . this study is in the framework of coming space missions : sdo ( see http://sdo.gsfc.nasa.gov/ ) and picard ( see http://smsc.cnes.fr/picard/ and ) . the dynamiccs / hirise perspective , proposed in the framework of the esa cosmic vision 2015 - 2025 , could bring even more data on all sources of internal variability by putting together instruments that will follow seismically all the layers down to the core with instruments measuring the variability of the atmosphere above .
several works have reported changes of the sun s subsurface stratification inferred from -mode or -mode observations . recently a non - homologous variation of the subsurface layers with depth and time has been deduced from -modes . progress on this important transition zone between the solar interior and the external part supposes a good understanding of the interplay between the different processes which contribute to this variation . this paper is the first of a series where we aim to study these layers from the theoretical point of view . for this first paper , we use solar models obtained with the cesam code , in its classical form , and analyze the properties of the computed theoretical -modes . we examine how a pure variation in the calibrated radius influences the subsurface structure and we show also the impact of an additional change of composition on the same layers . then we use an inversion procedure to quantify the corresponding -mode variation and their capacity to infer the radius variation . we deduce an estimate of the amplitude of the 11-year cyclic photospheric radius variation .
the development of a robust and reliable estimator of the pose ( i.e. position and attitude ) of a rigid body is a key requirement for robust and high performance control of robotic vehicles .pose estimation is a highly nonlinear problem in which the sensors normally utilized are prone to non - gaussian noise .classical approaches for state estimation are based on nonlinear filtering techniques such as extended kalman filters , unscented kalman filters or particle filters .however , nonlinear observers have become an alternative to these classical techniques , starting with the work of salcudean for attitude estimation and subsequent contributions over the last two decades .early nonlinear attitude observers have been developed on the basis of lyapunov analysis .recently , the attitude estimation problem has motivated the development of theories on invariant observers for systems endowed with symmetry properties .for instance , complementary nonlinear attitude observers exploiting the underlying lie group structure of the special orthogonal group are derived in with proofs of almost global stability of the error system . a symmetry - preserving nonlinear observer design based on the cartan moving - frame methodis proposed in , which is locally valid for arbitrary lie groups .a gradient - like observer design technique for invariant systems on lie groups is proposed in , leading to almost global convergence provided that a non - degenerate morse - bott cost function is used .more recently , an observer design method directly on the homogeneous output space for the kinematics of mechanical systems is proposed in , leading to autonomous error evolution and strong convergence properties .finally , extends the observer design methodology proposed in in order to deal with the case where the measurement of system input is corrupted by an unknown constant bias .full pose observer design , although less studied than attitude observer design , has recently attracted more attention .for instance , observers designed directly on have been proposed using both full state feedback or bearing measurements of known landmarks .an observer on is proposed in , using full range and bearing measurements of known landmarks and achieving almost global asymptotic stability . in a prior work by the authors , a nonlinear observer on is proposed using directly position measurements in the body - fixed frame of known inertial feature points or landmarks , with motivation strongly related to robotic vision applications using either stereo camera or kinect sensor .the observer derivation is based on the gradient - like observer design technique proposed in , and the almost global asymptotic stability of the error system is proved by means of lyapunov analysis . in this paper, we consider the question of deriving a nonlinear observer on for full pose estimation that takes the system outputs on the real projective space directly as inputs . a key advance on our prior work is the possibility of incorporating `` naturally '' in a sole observer both vectorial measurements ( provided e.g. by magnetometers or inclinometers ) and position measurements of known inertial feature points ( provided e.g. by stereo camera ) .in addition , sharing the same robustness property with the observer proposed in , the algorithm here proposed is also well - posed even when there is insufficient data for full pose reconstruction using algebraic techniques . 
in such situations , the proposed observer continues to operate , incorporating what information is available and relying on propagation of prior estimates where necessary . finally , as a complementary contribution , a modified version of the basic observeris proposed so as to deal with the case where bias is present in the velocity measurements .the remainder of this paper is organised as follows .section [ sec : preliminary ] formally introduces the problem of pose estimation on along with the notation used . in section iii ,based on a recent advanced theory for nonlinear observer design directly on the output space , a nonlinear observer on is proposed using direct body - fixed measurements of known inertial elements of the real projective space and the knowledge of the group velocity .stability analysis is also provided in this section .then , in section [ sec : observerdesignbias ] the proposed basic observer is extended using lyapunov theory in order to cope with the case where the measurement data of the group velocity are corrupted by an unknown constant bias . in section [ sec :simulation ] , the performance of the proposed observers are validated by means of simulation . finally , concluding remarks are given in section [ sec : conclusions ] .let and denote an inertial frame and a body - fixed frame attached to a vehicle moving in 3d - space , respectively .the vehicle s position , expressed in the frame , is denoted as .the attitude of the vehicle is represented by a rotation matrix of the frame relative to the frame .let and denote the vehicle s translational and angular velocities , both expressed in . in this paper , we consider the problem of estimating the vehicle s pose , which can be represented by an element of the special euclidean group given by the matrix this representation , known as homogeneous coordinates , preserves the group structure of with the operation of matrix multiplication , i.e. , .now let us recall some common definitions and notation . the lie - algebra of the group is defined as with denoting the skew - symmetric matrix associated with the cross product by , i.e. , .the adjoint operator is a mapping defined as , with . for any two matrices , the euclidean matrix inner product and frobenius norm are defined as let , , denote the anti - symmetric part of , i.e. .let denote the unique orthogonal projection of onto with respect to the inner product , i.e. , , one has it is verified that for all , for all , , the following equation defines a _ right - invariant riemannian metric _ : for any ( or ) , the notation denotes the vector of first three components of and the notation stands for the i - th component of .thus , it can be written as ^\top ] and ^\top \in \mathbb{rp}^3 ] , ^\top \in \mathbb{rp}^3 ] , ^\top ] .* case 2 : corresponds to case 2 of assumption [ hypo : observability ] , in which one vectorial measurement and the position measurements of two feature points are available , where , , , with ^\top ] and ^\top ] , ^\top ] .recall that remark [ remmeasurement ] explains how to transform a vector or a position of a feature point into a corresponding element of .the gains and parameters involved in the proposed observer are chosen as follows : for each simulation run , the proposed filter is initialized at the origin ( i.e. 
) while the true trajectories are initialized differently. combined sinusoidal inputs are considered for both the angular and translational velocity inputs of the system kinematics. the rotation angle associated with the axis-angle representation is used to represent the attitude trajectory. one can observe from figure [ fig1 ] that the observer trajectories converge to the true trajectories after a short transition period for all three considered cases. figure [ fig2 ] shows that the norms of the estimated velocity bias errors and converge to zero, which means that the group velocity bias is also correctly estimated. in this paper, we propose a nonlinear observer on for full pose estimation that takes the system outputs on the real projective space directly as inputs. the observer derivation is based on a recent observer design technique directly on the output space, proposed in . an advantage with respect to our prior work is that we can now incorporate in a single observer different types of measurements, such as vectorial measurements of known inertial vectors and position measurements of known feature points. the proposed observer is also extended on so as to compensate for unknown additive constant bias in the velocity measurements. rigorous stability analyses are also provided. the excellent performance of the proposed observers is demonstrated through simulations. this work was supported by the french _ agence nationale de la recherche _ through the anr astrid scar project `` sensory control of aerial robots '' ( anr-12-astr-0033 ) and the australian research council through the arc discovery project dp120100316 `` geometric observer theory for mechanical control systems ''. t. hamel, r. mahony, j. trumpf, p. morin, and m.-d. hua. homography estimation on the special linear group based on direct point correspondence. in _ ieee conf. on decision and control _, pages 7902-7908, 2011.
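as a side note on the measurement model used in the simulations, here is one possible reading, in python, of how a known inertial direction and a known feature-point position can both be embedded as elements of the real projective space; the trailing 0/1 convention and the unit-norm representative are assumptions of this sketch, not statements taken from the paper.

```python
import numpy as np

def direction_to_rp3(d):
    """embed an inertial direction (e.g. a magnetometer reference) as [d; 0], normalised."""
    y = np.append(np.asarray(d, dtype=float), 0.0)
    return y / np.linalg.norm(y)

def point_to_rp3(p):
    """embed a known feature-point position as [p; 1], normalised."""
    y = np.append(np.asarray(p, dtype=float), 1.0)
    return y / np.linalg.norm(y)

print(direction_to_rp3([0.0, 0.0, 1.0]))   # direction-type measurement
print(point_to_rp3([1.0, 2.0, 0.5]))       # landmark-type measurement
```

an element of projective space is an equivalence class up to scale, so the unit-norm vector above is only one convenient representative.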
a nonlinear observer on the special euclidean group for full pose estimation, which takes the system outputs on the real projective space directly as inputs, is proposed. the observer derivation is based on a recent advanced theory on nonlinear observer design. a key advantage with respect to existing pose observers on is that different types of measurements, such as vectorial measurements of known inertial vectors and position measurements of known feature points, can now be incorporated in a single observer. the proposed observer is extended to allow for the compensation of unknown constant bias present in the velocity measurements. rigorous stability analyses are also provided. the excellent performance of the proposed observers is shown by means of simulations.
many investigations in the realm of quantum physics are based on a single two level atom coupled to an approximately resonant light field .one major aspect of this work is to investigate the quantum mechanical motion of such an atom .the work of steck _ et al _ is a prime example .they investigate chaos assisted tunneling by studying the motion of cold cesium atoms in an amplitude modulated standing wave of light .et al _ study cesium atoms in an effort to observe quantum coherent dynamics . the work of hensinger _et al _ investigates quantum chaos and quantum tunneling .proposals in qed to utilize interactions between atoms and their cavity for physical realizations of a quantum computer and for communication of quantum states also require good descriptions of quantum mechanical motion .the full quantum system here consists of the light field and the atom .the evolution of such a system is governed by a schrdinger equation . in most caseshowever , the evolution of the light field is not of interest and so the schrdinger equation can be reduced to a master equation .the most general form of this master equation is given by +\mathcal{l } \rho,\label{lindblad}\ ] ] where is a lindbladian superoperator .the form we will use here , specific to a two level atom interacting with a light field , is [ majorme ] \label{hamiltonian},\]] \rho-\mathcal{a}\left[\sigma\right]\rho\right ) .\label{louivillian}\ ] ] here we are only interested in the motion of the atom in the one direction .thus is the component of the atom s momentum in the -direction .the detuning of the light field is .the atomic lowering operator is given by where and represent the ground and excited states of the atom respectively . is the complex , and possibly time - dependent rabi frequency operator for the light field , which , from here on , will be represented simply as . and are the wavenumber of the incident photons and the mass of the atom respectively .planck s constant ( ) will be set to one for the rest of the analysis .the superoperator describes random momentum kicks .it is defined for any arbitrary operator as where is the atomic dipole radiation distribution function produced by the electronic transition , reduced to one dimension . for motion parallel to the direction of propagation of the laser light ,it is given by the superoperators and are defined for arbitrary operators and as = cac^\dagger,\mathcal{a}\left[c\right ] a = \frac{1}{2}\left\{c^\dagger c , a\right\},\ ] ] and define the general form of a lindbladian superoperator .the entire expression of eq . ( [ louivillian ] ) describes the irreversible evolution of the system at rate .it is this part of the master equation that we wish to minimize by working in the regime ( where is the maximum modulus of the rabi frequency operator ) .the problem with this regime is that when making very large , the full master equation still has to be solved using a timestep smaller than .this makes solving the full master equation numerically very difficult .there are a number of ways to approximate the master equation , four of which are investigated here .these are 1 . the standard adiabatic approximation ; 2 . a more sophisticated adiabatic approximation ; 3 . 
a secular approximation ; and 4 .a dressed - state approximation .approximation ( 1 ) is a standard approach used by many researchers in the field , both as a semi - classical treatment and as a fully quantum approximation method .unfortunately , this treatment is not valid in the regime of the work of .these experiments were performed in the regime of but where is the same order as . in their work , approximation ( 1 ) was used but on closer examination , this approximation was seen only to be valid in the regime .we will concentrate on the difficult regime in this analysis .approximation ( 2 ) is one way to correct the standard adiabatic approximation for this regime .approximation ( 3 ) was proposed in ref . , and was used for the formation of quantum trajectory simulations in ref. .approximation ( 4 ) is a fully quantum dressed - state treatment including the effect of spontaneous emission .previously , only semi - classical treatments , both omitting , and including , spontaneous emission have been done .we look at all four approximations , examining the validity and complexity , as well as the numerical accuracy ( compared to the full simulation ) and computational resource requirements , of each .the method of adiabatic approximation applied to the full master equation serves to eliminate the internal state structure of the atom .one reason for wishing to have no internal states is so that the system has a clear classical analogue . in performing this treatment ,we also remove the hamiltonian term of order , removing the requirement that the master equation be solved on a timestep at least as small as .this adiabatic elimination technique is described in many different text books , see .the result we obtain was first derived by graham , schlautmann and zoller . to achieve this adiabatic eliminationhowever , we follow a procedure similar to that of hensinger _ et al _ . the density matrix can be written using the internal state basis as where the etc are still operators on the centre - of - mass hilbert space l . from eqs .( [ majorme ] ) , these obey [ rates ] here the kinetic energy term is represented in the superoperator .\ ] ] if the standard adiabatic elimination procedure is valid , we can represent the system by the density matrix for the centre of mass ( com ) alone , =\rho_{gg}+\rho_{ee},\ ] ] where is the trace over the internal states of the atom . in this standard adiabatic approximation we further simplify this by noting that large detuning leads to very small excited state populations such that and thus is approximately . thus denoting simply as ,we can replace in eq .( [ fullrhogg ] ) just with .now we require expressions for and in terms of .this is achieved by noting that from eq .( [ fullrhoge ] ) , comes to equilibrium on a timescale much shorter than , at a rate .thus we set .also , if the kinetic energy term is much smaller than then it can be ignored. this will be the case if .this is typically true , and so we get this can be substituted back into the equation for to give }{\gamma\left(\gamma^2 + 4\delta^2 \right ) } \right)\nonumber \\ & & + \frac{\gamma\mathcal{j}\left[\omega\right]\rho_{gg}}{\gamma^2 + 4\delta^2 } -\mathcal{k}\rho_{ee}.\label{laterhoee}\end{aligned}\ ] ] from eq .( [ laterhoee ] ) , we see that equilibrates on a timescale much faster than and so we also set .this time , the kinetic energy term must be ignored compared with rather than . 
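as a numerical aside, the algebra above repeatedly applies the jump and anticommutator superoperators introduced with the lindblad form; the short numpy sketch below (illustrative only, with the momentum-kick superoperator omitted) shows how they act on a density matrix for the bare two-level decay term.

```python
import numpy as np

def j_super(c, rho):
    """jump superoperator: c rho c^dagger."""
    return c @ rho @ c.conj().T

def a_super(c, rho):
    """anticommutator superoperator: (c^dagger c rho + rho c^dagger c) / 2."""
    cdc = c.conj().T @ c
    return 0.5 * (cdc @ rho + rho @ cdc)

# two-level example: sigma = |g><e|, atom initially excited
g = np.array([[1.0], [0.0]])
e = np.array([[0.0], [1.0]])
sigma = g @ e.conj().T
rho_e = e @ e.conj().T
gamma = 1.0
drho = gamma * (j_super(sigma, rho_e) - a_super(sigma, rho_e))  # decay term only
print(drho)
```

the printed matrix moves population from the excited to the ground state at rate gamma, which is the irreversible part the far-detuned regime is meant to suppress.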
allowing this approximationgives }{\gamma\left(\gamma^2 + 4\delta^2\right ) } \simeq\frac{\omega\rho_{gg}\omega^\dagger } { \gamma^2 + 4\delta^2}\label{badrhoee}.\end{aligned}\]]the first correction term on the left hand side scales as which in the regime chosen is negligible compared to the leading order term .the second correction term however scales as .had we been working in the regime , then this term could also be safely ignored compared to the leading term .this condition however , is not satisfied in the experiments of nor in our chosen regime , leaving this term the same order as the leading term .this term however was dropped in ref . on the basis that in a more sophisticated approach ( sec . [ better ] ) , this term does not appear and the correction to the final master equation is small .knowing that this is an unjustified approximation , but in the interest of comparison to currently used techniques , we will continue to follow this method as others have done .thus , dropping the second correction term we are left with now substituting first eq .( [ halfrhoge ] ) then eq .( [ finalrhoee ] ) into eq .( [ fullrhogg ] ) and replacing with gives the final adiabatically eliminated master equation:\rho -\mathcal{a}\left[\frac{\omega}{2\delta } \right]\rho\right)-i\left[\frac{p_x^2}{2m}-\frac{\omega\omega ^\dagger}{4\delta},\rho\right],\nonumber \\ \label{adiab1me}\end{aligned}\ ] ] where here we have used the fact that to eliminate some of the higher order terms .this master equation is of the lindblad form .as noted , the standard adiabatic elimination method described in section [ standard ] is not strictly valid in the regime of the experiments of hensinger _ et al _ , .there are a number of ways to try and develop a strictly valid version of the adiabatic approximation in this regime. one way would be to not drop any terms without justification and continue to plough through the mathematics .another method which we believe to be neater and just as accurate is proposed here by using a slightly more sophisticated method similar to that in the appendix of .the basis of the approach is to move into an interaction picture with respect to this approach may seem counter - intuitive to most . usually when moving to an interaction picture , it would be with respect to a that is already one of the terms in the hamiltonian . in our case , if we investigate eq .( [ adiab1me ] ) , we are moving to an interaction picture with respect to the opposite of a term in the effective hamiltonian and as such will actually be adding a term to the hamiltonian .the reason we choose to do this is that the problem term we encountered in sec .[ standard ] was in the hamiltonian for the excited state .the potential seen by the excited state of the atom is in fact inverted and so the chosen is designed to cancel the excited state potential . after moving into this interaction picture, we then perform the adiabatic elimination process , and finally transform back into the schrdinger picture .this method can give a different result because the approximations we make in the interaction picture may not have been valid in the schrdinger picture . 
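before carrying that transformation through, it is worth noting what the standard eliminated equation above already predicts. the sketch below evaluates the light-shift potential and the photon scattering rate implied by its hamiltonian and jump terms, assuming a standing-wave rabi frequency of the form omega0*sin(k x); the sign and normalisation conventions follow my reading of the equation above and are not guaranteed to match the intended ones.

```python
import numpy as np

def effective_potential(x, omega0, k, delta):
    """light-shift potential -|omega(x)|^2 / (4 delta) from the eliminated hamiltonian."""
    omega = omega0 * np.sin(k * x)
    return -np.abs(omega) ** 2 / (4.0 * delta)

def scattering_rate(x, omega0, k, delta, gamma):
    """scattering rate gamma |omega(x) / (2 delta)|^2 associated with the jump terms."""
    omega = omega0 * np.sin(k * x)
    return gamma * np.abs(omega / (2.0 * delta)) ** 2

x = np.linspace(0.0, 2.0 * np.pi, 201)   # one spatial period, k = 1 in scaled units
print(effective_potential(x, omega0=100.0, k=1.0, delta=1000.0).min())
print(scattering_rate(x, omega0=100.0, k=1.0, delta=1000.0, gamma=1.0).max())
```

the potential depth scales as omega0**2 / delta while the scattering rate scales as omega0**2 / delta**2, which is why the far-detuned regime keeps the conservative dynamics while suppressing spontaneous emission.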
with the unitary transformation operator , the interaction picture density matrix is using this in eqs .( [ majorme ] ) gives an interaction picture master equation equation still of the lindblad form +\gamma\mathcal{l}\tilde{\rho},\ ] ] but now with an extra hamiltonian term such that is given by is just the component of the momentum in the -direction , transformed into the interaction picture .the lindbladian superoperator is unaffected by the interaction picture because rabi frequency operator commutes with the position operator , , as well as with the state operators , and .following the same procedure as the standard adiabatic elimination process , the equations for the centre - of - mass operators can be extracted : [ ipres]\nonumber \\ & & -\tilde{\mathcal{k}}\tilde{\rho}_{gg } , \label{fullrhogg2}\\ \dot{\tilde{\rho } } _ { ge}&=&-\frac{\gamma}{2}\tilde{\rho}_{ge}-\frac{i}{2}\left ( \omega^\dagger\tilde{\rho}_{ee } -\tilde{\rho}_{gg}\omega^\dagger\right ) -\frac{i}{4\delta}\left[\omega\omega^\dagger,\tilde{\rho}_{ge } \right]\nonumber \\ & & + i\delta\tilde{\rho}_{ge}-\tilde{\mathcal{k } } \tilde{\rho}_{ge } , \label{fullrhoge2 } \\ \dot{\tilde{\rho}}_{ee}&= & -\gamma\tilde{\rho}_{ee}-\frac{i } { 2}\left(\omega\tilde{\rho}_{ge}-\tilde{\rho}_{eg } \omega^\dagger\right ) -\frac{i}{4\delta}\left[\omega\omega^\dagger , \tilde{\rho}_{ee}\right]\nonumber \\ & & -\tilde{\mathcal{k}}\tilde{\rho}_{ee } , \label{fullrhoee2}\end{aligned}\ ] ] where obviously uses the interaction picture momentum operator . in this more sophisticated approach ,we still take the trace over the internal states of the atom , letting , but now we do not simplify this further and simply let .this gives a master equation of the form -\frac{i}{2}\left[\omega,\tilde{\rho}_{ge}\right]\nonumber \\ & & - \frac{i}{4\delta}\left[\omega\omega^\dagger,\tilde { \rho } \right ] -\tilde{\mathcal{k}}\tilde{\rho}. \label{meees } \end{aligned}\ ] ] hence we again need to find expressions for and in terms of .we can achieve this by noting that , as in the earlier treatment , and equilibrate on a timescale much faster than . by setting , and again ignoring the kinetic energy term, we get }{8\delta^3 } + \frac { \left\{\omega^\dagger,\tilde{\rho}_{ee}\right\}}{2\delta } -\frac{\gamma\tilde{\rho}\omega ^\dagger}{4i\delta^2}\nonumber \\ & & + \frac{\gamma\left\{\omega^\dagger,\tilde{\rho } _ { ee } \right\}}{4i\delta^2 } + \frac{\left[\omega\omega ^\dagger,\left\{\omega^\dagger,\tilde { \rho}_{ee}\right\}\right]}{8\delta^3}.\label{iprhoge5}\end{aligned}\ ] ] setting and ignoring the kinetic energy term allows us to solve for . in the more sophisticated adiabatic treatment , instead of getting an equation of the form of eq .( [ badrhoee ] ) , we get in eq .( [ iphalfrhoee ] ) , the superoperators and are defined ,\label{f}\\\mathcal{r}&\equiv&\mathcal{f}+\text { higher order terms}.\label{r}\end{aligned}\ ] ] in the regime chosen ( ) , the second term of eq .( [ f ] ) is of order 1 and so can not be ignored compared to the leading term . 
in eq .( [ r ] ) , the higher order terms are of order much smaller than 1 and even smaller than , which is the smallest order kept by making taylor approximations to get the expression for .thus to simplify eq .( [ iphalfrhoee ] ) , we act on the left of both sides with the superoperator .then to leading order we have as expected , the interaction picture chosen produced a term in to counteract the unwanted term in .notice here that we come to essentially the same result as in the standard adiabatic treatment in eq .( [ finalrhoee ] ) but without making any unjustified approximations . now substituting eq .( [ iprhoge5 ] ) and eq .( [ iprhoee ] ) into eq .( [ meees ] ) , after simplification , leads to the final interaction picture master equation \tilde{\rho } -\mathcal{a}\left[\frac{\omega}{2\delta } \right]\tilde{\rho}\right ) -i\left[\frac{\omega^2{\omega^\dagger}^2}{16\delta ^3},\tilde{\rho}\right]\nonumber \\ & & - \tilde{\mathcal{k}}\tilde{\rho}.\end{aligned}\ ] ] finally , all that remains is to transform the interaction picture master equation back to the schrdinger picture by performing the opposite unitary transformation .this leaves the final master equation for the more sophisticated adiabatic elimination treatment as\rho -\mathcal{a}\left[\frac{\omega}{2\delta } \right]\rho\right)\nonumber \\ & & -i\left[\frac{p_x^2}{2m}-\frac{\omega\omega ^\dagger}{4\delta}+\frac{\omega^2 { \omega^\dagger}^2}{16\delta ^3},\rho\right].\label{adiab2me}\end{aligned}\ ] ] notice here that by explicitly accounting for the term in , the master equation derived is the same as that of the standard adiabatic approach with an extra potential term of order .this term is very small and as such the statements made to justify the standard approach were correct in that the adjustment to the final master equation is small .the more sophisticated treatment however , although more algebraically intensive to produce the initial master equation , requires very little extra effort than the standard adiabatic treatment to simulate .more importantly though , the master equation derived in the more sophisticated adiabatic approach is valid in the regime .a secular approximation to the full master equation is quite different from any adiabatic approximation .a secular approximation does not totally remove any dependence on the internal state of the atom .it only eliminates the coherences between the internal atomic states .the secular approximation was also used by hensinger _ et al _ , alongside the standard adiabatic approximation .there are a number of ways to derive a secular approximation to the full master equation .one such method has been performed by dyrting and milburn but this method is quite complicated .a much simpler method which produces the same approximate master equation is based on the technique that is applied for the standard adiabatic approximation .we start with the eqs .( [ rates ] ) for etc , and then adiabatically eliminate as in sec . 
[ standard ] .however , instead of also trying to solve for , we just substitute the expression for , eq .( [ halfrhoge ] ) , into the equations for , eq .( [ fullrhogg ] ) and , eq .( [ fullrhoee ] ) .this eliminates the coherences between the excited and ground states while still keeping much of the original master equation .one advantage of this is that it allows comparison to an analogous 2-state classical model .more importantly to us , this approximation eliminates the evolution at rate as is necessary to simplify the simulations .we are left with the following equations for the ground and excited state density matrices \rho_{ee } -\mathcal{a}\left[\frac{\omega}{2\delta}\right]\rho_{gg}\right ) \nonumber \\ & & + i\left[\frac{\omega\omega^\dagger}{4\delta } , \rho_{gg}\right]-\mathcal{k}\rho_{gg } , \label{secgg}\\ \dot{\rho}_{ee } & = & -\gamma\rho_{ee}+\gamma\left(\mathcal{j}\left[\frac{\omega}{2\delta}\right]\rho_{gg } -\mathcal{a}\left[\frac{\omega^\dagger}{2\delta}\right]\rho_{ee}\right ) \nonumber \\ & & -i\left[\frac { \omega\omega^\dagger}{4\delta},\rho_{ee}\right]-\mathcal{k}\rho_{ee}. \label{secee}\end{aligned}\ ] ] the final master equation is constructed by recombining the equations ( [ secgg ] ) and ( [ secee ] ) to produce an equation which reproduces these equations for and while also only giving rapidly decaying terms for and .this final master equation for the secular approximation is + \gamma\left(\mathcal{b}\mathcal{j } \left[\sigma\right]\rho-\mathcal{a}\left[\sigma\right]\rho\right ) \nonumber \\ & & + \gamma\left(\mathcal{j}\left[\frac{\omega^\dagger\sigma}{2\delta } \right]\rho -\mathcal{a}\left[\frac{\omega^\dagger\sigma}{2\delta } \right]\rho\right)\nonumber\\ & & + \gamma\left ( \mathcal{j}\left[\frac{\omega\sigma^\dagger}{2\delta}\right]\rho- \mathcal{a}\left[\frac{\omega\sigma^\dagger}{2\delta}\right]\rho \right ) , \label{secularme}\end{aligned}\ ] ] where is just the pauli spin operator . to compare this master equation with those we have already seen , the hamiltonian terms derived here are the same as those derived in the standard adiabatic approximation , eq .( [ adiab1me ] ) . the spontaneous emission term exactly as in the full master equation , eq .( [ louivillian ] ) , remains , while two extra jump terms involving a state change with no spontaneous emission have been derived .there are no apparent problems in this derivation , or that in .nevertheless , as we will discuss in sec [ discussion ] , eq .( [ secularme ] ) does not give accurate results in comparison with eq .( [ lindblad ] ) , in the regime .a semi - classical dressed - state treatment of atomic motion was put forward by dalibard and cohen - tannoudji .the dressed - state approximation used here is a fully quantum version of that treatment .the states we have been using for a basis so far , and , are called bare states .we can also work with another basis of position - dependent states which we call dressed - states .these dressed - states are derived by considering the hamiltonian of the full master equation eq .( [ hamiltonian ] ) and ignoring the kinetic energy component to get diagonalization of the hamiltonian yields the form where and are the eigenenergies of .the corresponding eigenstates are the position dependent dressed - states we will be considering here , labeled and . in terms of these dressed - states , is given by .the energies and states are where is defined by [ theta] and now , we use this basis to get equations for , , and . 
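as a numerical aside before writing those equations down, the position-dependent dressed states can be obtained by diagonalising the internal two-level hamiltonian at each position; the 2x2 matrix used below is an assumed standard form (detuning on the excited state, coupling omega/2 off-diagonal) rather than a transcription of the equations above, so the sign conventions should be checked.

```python
import numpy as np

def dressed_states(omega, delta):
    """diagonalise the assumed internal hamiltonian at one position.

    assumed form (not taken from the text): h = -delta |e><e| + (omega |e><g| + h.c.) / 2,
    written in the bare basis (|g>, |e>). returns eigenenergies and eigenvectors.
    """
    h = np.array([[0.0, np.conj(omega) / 2.0],
                  [omega / 2.0, -delta]], dtype=complex)
    energies, vectors = np.linalg.eigh(h)
    return energies, vectors

# position-dependent dressed energies for a standing wave omega(x) = omega0 sin(k x)
omega0, delta, k = 100.0, 1000.0, 1.0
for x in np.linspace(0.0, np.pi, 5):
    e, _ = dressed_states(omega0 * np.sin(k * x), delta)
    print(f"x = {x:5.2f}   e1 = {e[0]:9.3f}   e2 = {e[1]:9.3f}")
```

in the far-detuned limit the lower branch stays close to the bare excited state energy while the upper branch acquires only the small light shift, consistent with the leading-order approximations discussed next.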
without including the spontaneous emission or the kinetic energy ,these equations are simply .\label{rhojk}\]]we do however want to include the spontaneous emission in our treatment and so we must assess the effect of the raising and lowering operators for the bare states ( and ) acting on the new basis states ( and ) .this is easily done .so far there have been no approximations made .the essence of the dressed - state approximation is similar to the secular approximation in that we keep the internal dressed - states but allow the coherences to go to zero .in contrast to the secular approximation , we do not simply set .instead , we notice from eq .( [ rhojk ] ) that , if we ignore the operator nature of the eigenenergies , then the equations for would have a term of the form . to first order, is approximately .thus will rotate very quickly such that only terms rotating at this very rapid pace will be able to contribute to its evolution .thus only terms involving are kept in the equation for .\label{dress12 } \end{aligned}\ ] ] knowing that the last term in eq .( [ dress12 ] ) serves to force to oscillate very rapidly and the other two terms force it to decay quickly , will average to zero .thus we set and equal to zero in the population equations giving ,\label{dress111}\\ \dot{\rho}_{22 } & = & \gamma\mathcal{b}\left ( \sin\theta\cos\theta\rho_{22}\sin\theta\cos\theta+\cos^2\theta\rho _ { 11}\cos^2\theta \right)\nonumber \\ & & -\frac{\gamma}{2}\left\{\sin^2\theta,\rho_{22}\right\ } -i\left[e_2(x , t),\rho_{22}\right].\label{dress222}\end{aligned}\ ] ] if we could ignore the operator nature of the rates in eqs .( [ dress12]-[dress222 ] ) , we would obtain the same rates as given by dalibard and cohen - tannoudji in .all that remains now is to design the master equation that forces to zero giving these population equations .the master equation which fulfills these requirements is \nonumber \\ & & + \gamma\mathcal{b}\mathcal{j } \left[\cos\theta\sin\theta\left(a^\dagger a - aa^\dagger \right)\right]\rho \nonumber \\ & & -\gamma\mathcal{a}\left[\cos\theta\sin\theta\left(a^\dagger a - aa^\dagger \right)\right]\rho\nonumber \\ & & + \gamma\left(\mathcal{b}\mathcal{j}\left[\cos^2\theta a \right]\rho-\mathcal{a}\left[\cos^2\theta a \right]\rho\right)\nonumber \\ & & + \gamma\left(\mathcal{b}\mathcal{j}\left[\sin^2\theta a^\dagger \right]\rho-\mathcal{a}\left[\sin^2\theta a^\dagger \right]\rho\right),\end{aligned}\ ] ] where the kinetic energy term has been restored .this form of the master equation is still very complicated remembering the definitions of and from eqs .( [ theta ] ) .it is possible to simulate this master equation as it stands but the simulation would be very slow and inefficient .this master equation can , however , be approximated further with almost no loss in accuracy .the definitions in eqs .( [ theta ] ) can be approximated by remembering that we are working in the regime of .thus , to leading order , is simply 1 , and is . also any terms of order are extremely small and so are also ignored .if we propagate these approximations through our system , we find that the dressed - state is very close to the bare excited state . 
also , the dressed - state is very near the bare ground state .thus if we make the approximations then we can similarly approximate the last approximation is to expand the dressed - state eigenenergies in a taylor series to second order giving this leaves us with the final master equation for the dressed - state approximation as \nonumber \\ & & + \gamma\left(\mathcal{b}\mathcal{j } \left[\sigma\right]\rho-\mathcal{a}\left[\sigma\right]\rho\right ) \nonumber \\ & & + \gamma\left(\mathcal{b}\mathcal{j } \left[\frac{\omega}{2\delta}\sigma_z\right]\rho -\mathcal{a}\left[\frac{\omega}{2\delta}\sigma_z\right]\rho\right ) .\label{finaldressme}\end{aligned}\ ] ] here we have removed the hamiltonian term by moving into an interaction picture . again comparing this master equation to those already seen , the hamiltonian terms here are exactly analogous to those derived in the sophisticated adiabatic approximation eq .( [ adiab2me ] ) .again we have the same spontaneous emission term as in eq .( [ louivillian ] ) , but this time , the higher order correction term includes a spontaneous emission without changing the internal state of the atom , the opposite of the case in the secular approximation . this would be a useful approximate master equation , especially if the atom was initially in the excited state . in our case however , we can simplify this equation further by noting that the jump terms keep the atom in the ground state .once the excited state populations are reduced to zero , this master equation reduces to exactly the same master equation as derived for the more sophisticated adiabatic approximation eq .( [ adiab2me ] ) .the dressed state master equation results would thus lie exactly on top of those of the more sophisticated adiabatic approximation and as such are not included in the simulations .now that we have derived the forms of the master equations which we wish to compare , we need to set up a method of simulation for the different approximations and the full master equation . in this case , the numerical environment matlab turned out to be the most useful tool , combined with the quantum optics toolbox produced by s.m .tan .the simulation is designed by converting states and operators to vectors and matrices . making this conversionrequires a number of different adaptations of the theoretical master equations .firstly , we need to chose a form for the complex rabi frequency operator , .the form of the rabi frequency operator we use here is , such that there is no time dependence ( standing wave ) and the operator is hermitian . rewriting the sine function in terms of exponentials , , we can examine the action of these exponentials .this suggests using the momentum representation , because the exponentials , , simply represent single momentum kicks to the atom of , or atomic unit of momentum .having chosen to use the momentum representation to evaluate our master equation solutions , we need to address concerns with the conversion process .firstly , the momentum range must be truncated .the loss of probability due to this truncation should be kept below some small level , say . using this rule of thumb, it was found that a momentum hilbert space ranging from up to was sufficient for the parameters we chose ( discussed later ) .we still face a limitation problem in that the momentum hilbert space is continuous . 
to simulate this on a computer , we need to discretize this space .we are able to discretize this space , again as a consequence of the choice of .the exponential component nature of provides for momentum kicks of exactly one atomic unit of momentum in the direction of propagation of the laser light ( the -direction ) .thus the only momentum kicks that are not in a single unit of atomic momentum in the -direction are due to spontaneous emission of photons from the atom .this allows for a momentum kick of one atomic unit in any direction , which , when projected onto the -direction , allows for a random kick of anywhere between and atomic unit .however the relative infrequency of the spontaneous emission allows us to approximate this by a kick of , , or .this approximation requires us to convert the integral over the atomic dipole distribution to a discrete sum . following ref . , we let where the discrete function is obtained by making sure that it has the same normalization , mean momentum kick and mean squared momentum kick .this gives , and .we now need to define our matrix notation .the simulations here will set , and to 1 so that the kinetic energy operator can be represented as where denotes a momentum eigenstate .the operator in a momentum hilbert space just acts as a raising operator given by the operator is just the lowering operator . for simplicity , we take the initial density matrix to be the zero momentum state , given by a matrix of zeros with a 1 in the very centre .these are all the approximations required to allow the simulation of the adiabatic approximations .the full master equation and the secular and dressed - state approximations require the internal state information as well .this is achieved by constructing the tensor product of the momentum hilbert with a internal state hilbert space .this is easily done using the quantum optics toolbox .the only problem remaining is to adopt a method of simulation .there are a number of different simulation methods available as part of the quantum optics toolbox as well as any number of methods available using regular ordinary differential equation techniques .the method we use here is a built - in function of the quantum optics toolbox called odesolve .this allows options to be specified for use on both smooth and stiff ode problems .the simulations presented here using odesolve were checked using first , a hybrid euler and matrix exponential method , and second , a modified mid - point method combined with richardson extrapolation from press _ et al _ .the results all agreed well but the odesolve method was by far the fastest and easiest to use .we wish to simulate the experimental regime of hensinger _et al _ , where . however , the actual experimental parameters would be prohibitively time - consuming to simulate .this is both because of the separation in time scales between the fastest ( ) and slowest dynamics ( ) , and also because of the basis size required .the latter is determined by the fact that must be larger than the effective potential drop , .if we were to use the parameters of the experiment of then we would need a basis size of more than . however , we can scale the parameters down and still preserve the regime of the experiments . as well as working in the regime , we require for the validity of the adiabatic approximations that be much larger than the oscillation frequency near the potential minimum , . in our scaled units ( ) ,the latter is of order . 
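pulling the preceding pieces together, here is a minimal numpy sketch of the discretised momentum basis described above: the kinetic energy is diagonal, the exponentials exp(+-ikx) become unit shift matrices, and the initial state is the zero-momentum projector. the truncation value used here is illustrative and is not the basis size quoted in the text.

```python
import numpy as np

p_max = 20                                   # truncation in photon-momentum units (illustrative)
p = np.arange(-p_max, p_max + 1)             # discrete momentum grid
n = p.size

# kinetic energy p^2 / 2m, diagonal in the momentum basis (hbar = k = m = 1)
ke = np.diag(p ** 2 / 2.0)

# exp(+ikx) raises the momentum by one unit; exp(-ikx) lowers it
shift_up = np.eye(n, k=-1)                   # <p+1| exp(+ikx) |p> = 1
shift_down = np.eye(n, k=+1)                 # <p-1| exp(-ikx) |p> = 1

# standing-wave rabi operator omega0 sin(kx) = omega0 (exp(ikx) - exp(-ikx)) / 2i
omega0 = 100.0
rabi = omega0 * (shift_up - shift_down) / 2.0j

# initial density matrix: atom at rest, all population in the p = 0 state (centre of the grid)
rho0 = np.zeros((n, n))
rho0[p_max, p_max] = 1.0
print(np.trace(rho0), rabi.shape)
```

tensoring these matrices with a two-dimensional internal space reproduces the operator structure needed for the full and secular master equations, while the adiabatic approximations act on the momentum space alone.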
on this basis , we have chosen paramaters of , and , leaving , giving . for rb as in ref . , we have nm and kg , so in si units , the frequency unit is s .note that the we have chosen is _ not _ the true radiative decay rate for rb .the scaled time unit can be given meaning by examining the spontaneous emission rate .for each of the approximations , the fluorescence rate is ( to leading order ) . here is a time - averaged effective value , which would be somewhat less than .this means that after a time period of , we would expect there to have somewhat under one spontaneous emission .this time is 2 scaled time units .there is , however , a lot of evolution occurring in that time period . to compare the approximations , we look in detail at a period from 0 to 2 time units , andalso look at the long time results at 8 time units .in comparing the results of the simulations , we first compare the accuracy of the approximations , and then the resources required to perform the calculations . to compare the accuracy of the simulations , we look at the momentum distribution of the atom as it evolves through time . the interesting components of this evolution are the probability to have 0 momentum and the probability to have 1 atomic unit of momentum as the atom evolves through time .the other probabilities evolve similarly to one of these two .firstly , we will look at the probability to have zero momentum over a relatively short timescale .the approximations are so close to the full master equation that , at full size , they are almost impossible to distinguish from the full master equation . fig .[ full0]a shows an overall picture of how the probability to have zero momentum evolves through time .[ full0]b zooms in on a section of the full size figure to illustrate the differences between the approximations .as we can see , the secular and standard adiabatic approximation evolutions both significantly lead that of the full master equation in time .the dressed - state and sophisticated adiabatic approximations are very close to the full master equation solution even at this magnification .fig [ full0]c zooms in even closer to try to distinguish the sophisticated adiabatic approximation from the full master equation .this trend continues as we investigate the probability to have 1 atomic unit of momentum in fig .[ full1 ] .the inaccuracy of the secular and standard adiabatic approximations are again evident in fig .[ full1]b .the more sophisticated adiabatic approximation is very close to the full master equation evolution and is still indistinguishable at this magnification .[ full1]c provides a means of comparing the sophisticated adiabatic approximation to the full master equation evolution in detail .as we can see , the more sophisticated adiabatic approximation is again very close to the full master equation .the last visual comparison to make is to see how the evolution described by the approximations matches that of the full master equation after a very long time period .the time period that has elapsed in fig . [ long ]is 8 time units , after which , we would have expected there to be a number of spontaneous emissions . at this point in time, we compare the probability densities described by the approximations and the full master equation .it is evident from fig . 
[ long ]that the standard adiabatic and the secular approximations are quite poor methods for simulating the full master equation evolution over a long time period .the main reason for this is the leading behaviour such as we see in fig .[ full0]a .even at long times , the probability to have zero momentum is still oscillating and the leading behaviour means that the standard adiabatic and secular approximations are not oscillating in phase with the full master equation .thus , even though they follow roughly the same shape , they are not necessarily at the same point in the oscillation as the full master equation .what is even more striking though is that the secular approximation evolution seems to follow a slightly different trend for the probabilities to have larger momentum .the intriguing features of the secular approximation are discussed in sec [ discussion ] . to examine how close the sophisticated adiabatic and the dressed - state approximations are to the full master equation , fig .[ longzoom ] focuses on a smaller section to provide a comparison .it is evident from fig .[ longzoom ] that the dressed - state and sophisticated adiabatic approximations are very close to the full master equation , even at this long time .the main resource required to perform the simulations is time .although each approximation has different memory and processing power requirements , these needs are reasonably accurately reflected in the time each simulation takes to run . the times quoted in table [ table ]are the times required to calculate the set of results from 0 to 8 time units and are quoted in seconds ..[table]comparison of the times each simulation requires to run . [ cols="<,^",options="header " , ] as we can see from the results in table [ table ] , the adiabatic approximations are both clearly the fastest .the secular approximation is just over 4 times larger than the adiabatic approximations .this is not entirely unexpected .we would have expected at least a doubling in time by using the methods that included state information .the secular approximation is supposed to force the coherences to zero .unfortunately , using our method to simulate this allows zero to be anywhere up to .this , although small , still has to be processed explaining the four - fold increase in time .one reason it is just over 4 times the time required for the adiabatic approximations could be that after the evolution has been calculated , there is still a partial trace to be performed to obtain a solution of the same form as the adiabatic approximations .we could have limited this time by simulating two coupled equations instead of a full master equation and then would probably have only doubled the time taken .the standard and more sophisticated adiabatic approximations take similar times to simulate , and are much faster than the full master equation .the only difference between them is an extra potential term .it turns out , though , that this hamiltonian term is quite important in accurately describing the motion of the atom . 
while the standard approach evolves too quickly and leads that of the full master equation evolution , the more sophisticated approach with the modified potential does not suffer this problem .the evolution described by this more sophisticated approach is very close to the full master equation , even at long times .of course the dressed - state approximation offers the same accuracy as the sophisticated adiabatic approximation which may be useful if we wanted to simulate an initially excited atom .these successes contrast the results from the secular approximation .as one can see from figs .[ full0 ] and [ full1 ] , the secular approximation not only leads the full master equation solution but it also predicts a lower probability to have either zero or one atomic unit of momentum .this result is surprising because the secular approximation master equation is quite similar to the others . investigating this further , we find that the secular approximation master equation simulation shows that the probability to have 25 atomic units of momentum increases exponentially much faster than any of the other approximations .to analyse this in another manner , we find that for the secular approximation tr ] to tr $ ] for the sophisticated adiabatic approximation .all other simulations look similar . ] here we see that numerically , this ratio is around 0.1 which is of the same order as the other ratios ( such as ) which are required to be small for our approximations . finally , we discuss the possibility of simulating with the true experimental parameters .this is difficult because of the stiffness of the full master equation , and the basis size required for all methods .the latter problem can be avoided by using quantum trajectories . actually use quantum trajectory simulations based on the secular approximation master equation .it is however , possible to convert any master equation of the lindblad form to a quantum trajectory simulation .all of the approximate master equations we have developed here have been written in the lindblad form and as such all of these could be simulated using quantum trajectories .there are a number of theoretical models for the motion of an atom as it interacts with a light field .this paper has investigated the possibility of using four different approximations as opposed to using the full master equation to simulate an experimental system .two have been widely used in the past and two have not .we have given a detailed explanation of the mathematical principles to perform each of these approximations on a fairly general system .we have also compared them numerically to to the true dynamics from the full master equation . in a regime of particular experimental interest , we have found that the most accurate results are obtained from two approaches that we have introduced here , a sophisticated adiabatic approach and a dressed - state approach .these give identical equations in the regime of interest , and in terms of resources , they are almost as fast to simulate as the standard adiabatic approximation .this has been most used in the past , but deviates significantly from the true dynamics for long times .the other approximation that has been used in the past , the secular approximation , is even poorer . on top of the failings of the standard adiabatic approximation, it takes longer to simulate and appears to produce anomalous momentum diffusion .
in the field of atom optics, the basis of many experiments is a two-level atom coupled to a light field. the evolution of this system is governed by a master equation. the irreversible components of this master equation describe the spontaneous emission of photons from the atom. for many applications, it is necessary to minimize the effect of this irreversible evolution. this can be achieved by using a far-detuned light field. the drawback of this regime is that making the detuning very large makes the time step required to solve the master equation very small, much smaller than the time scale of any significant evolution. this makes the problem numerically very intensive. for this reason, approximations to the master equation that are more tractable to solve numerically are used. this paper analyses four approximations: the standard adiabatic approximation; a more sophisticated adiabatic approximation (not used before); a secular approximation; and a fully quantum dressed-state approximation. the advantages and disadvantages of each are investigated with respect to accuracy, complexity, and the computational resources required for simulation. in a parameter regime of particular experimental interest, only the sophisticated adiabatic and dressed-state approximations agree well with the exact evolution.
bacteriorhodopsin ( br ) is the best known protein in the family of opsins , proteins conjugated with a molecule of retinal and able to convert visible light into electrostatic energy .this protein is found in a primeval organism , the _ halobacterium salinarum _ , specifically in a part of its cell membrane called purple membrane ( pm ) , since its color .this membrane , thick , is a natural thin film , essentially constituted by few lipids and these proteins organized in an hexagonal lattice .a large number of studies has been carried out on br in the field of biophysics and physicochemistry , and many aspects have been unveiled . as relevant examples we cite : ( i ) the photoinduced isomerization of the retinal embedded in br , ( ii ) the conformational change of br associated with the retinal isomerization , ( iii ) the importance of environmental conditions in the photocycle development .patches of pm have been used for several purposes : to produce metal - protein - metal junctions , to perform c - afm investigations , to develop solar cells of new generation , etc . as a matter of fact, films of br resist to thermal , electrical and also mechanical stress and show a substantial photocurrent when irradiated by a visible ( green ) light .therefore , br can be used as an optoelectrical switch , to convert radiant energy into electrical energy , in pollutants remediation systems , to produce optical memories , to control neuronal and tissue activity , etc .the commonly accepted view concerning the protein activation is the following : a photon is absorbed by the retinal molecule contained in each protein , then causing the bending of this molecule . as a consequence ,the protein undergoes a change of its tertiary structure , following a cycle of transformations that arrives to release a proton outside the cell membrane .finally , reprotonization of the retinal molecule by asp96 restores the native configuration .some crystallographic investigations have been performed on this protein to determine its configuration in the different steps of the cycle .this is a particularly hard task , since the x - ray radiation could modify the protein structure , and only recently the puzzle of many contradictory results starts to be recomposed . at present , a rather complete description of the protein is given only for the native and the active l - state .measurements of the protein current - voltage ( i - v ) characteristics were reported in several papers . to this purpose , samples made of patches of pm were anchored on a conductive substrate and connected to an external circuit .the connection was made with : i ) an extended transparent conductive contact , ii ) a tip of a c - afm . in both cases ,the measured current was found to be quasi - ohmic at the lowest bias , and strongly super - linear at increasing bias .furthermore , when the sample was irradiated with green light , a significant photocurrent was observed .there was a clear proof that the charge transfer is mainly due to the protein .there was also a high resistance channel due to the lipid membrane , which is detected in experiments involving a membrane deformation . in the absence of membrane deformation, this channel can be neglected .the charge transfer through the protein was attributed to a tunneling mechanism . 
in particular , the presence of a current ( in dark ) well above possible leakage components , and of a photocurrent ( in light ) supports the hypothesis of a mechanism of charge transfer intrinsically dependent on the protein tertiary structure . since a long time , the interaction of electromagnetic fields with biological matter is the object of many investigations , mainly for the damages produced by ionizing radiations . as far as known , sunlight that reaches the earth is largely composed of non - ionizing radiations whose main effect on biological matter is heating . in particular , for proteins , this should lead to a global energy enhancement , regardless of the protein specific conformational state , as confirmed by recent experiments showing the critical role of temperature in current measurements .therefore , we conjecture that in a sample of proteins , like a patch of purple membrane , light gives rise to different effects . from one side , the retinal modification with the consequent conformational change from the native to the active state , from another side , a net transfer of energy to the whole protein with a consequent increase of its free - energy . as a general issue , both these effects should contribute to the protein activation .the present paper addresses this issue by accounting simultaneously for these two effects in a computational / theoretical model called inpa ( impedance network protein analogue ) .this approach describes the electrical characteristics of a protein by using a network of impedances . in previous investigations , the local interaction of a photon with the retinal has been investigated by considering the corresponding change of the network structure the novelty of the present paper consists in the further introduction of a global energy increase of the the whole protein due to the incident light .this is described by a change of the network connectivity both of the native and the active state .the methodological approach we follow points to the integration of different disciplines ( molecular biology , physics , electronics ) to develop a new generation of electronic devices within a nano - bio - technology .this interdisciplinary approach is leading to an entirely new discipline which we christen _ proteotronics _ .the paper is organized as follows .section ii summarizes the main steps of the inpa model and describes the improvements introduced on the basis of the dynamical evolution of the protein energy landscape .section iii reports and discusses the main results and suggests the opening of new perspectives .major conclusions are summarized in section iv .the inpa model is based on a percolative approach that describes the protein like a network of links and nodes .a node represents a single amino acid and its spatial position is the same of the corresponding atom .a link joins a couple of nodes , and represents the interaction between amino acids . the protein structure in its native or active stateis taken by public databases or homology modeling , thus the node configuration reproduces the protein backbone .then , couples of nodes are connected with the rule that they must be be closer than an assigned interaction radius , . in this way, the number of links , , depends on the value of and is in the range , with the number of amino acids pertaining to the given protein . 
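the link-assignment rule of the inpa network can be sketched in a few lines of python; the example below assumes one representative atom position per amino acid and uses random coordinates as a stand-in for a real structure such as the 2ntu entry, so both the data and the function names are illustrative.

```python
import numpy as np
from itertools import combinations

def build_links(coords, r_c):
    """return the list of (i, j) links between residues closer than r_c (angstrom).

    coords: (n, 3) array with one representative position per amino acid.
    """
    links = []
    for i, j in combinations(range(len(coords)), 2):
        if np.linalg.norm(coords[i] - coords[j]) < r_c:
            links.append((i, j))
    return links

# toy example with random positions; real input would come from a pdb structure
rng = np.random.default_rng(0)
coords = rng.uniform(0.0, 30.0, size=(50, 3))
print(len(build_links(coords, r_c=6.0)))
```

increasing the interaction radius adds links and so increases the connectivity of the network, which is the handle used later to describe the light-induced energy increase of the protein.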
in the present case ,the macroscopic quantity of interest is the static i - v characteristic .therefore , the network is drawn like an electrical circuit where an elementary resistance , , is associated with each link between nodes and .explicitly : where , is the cross - sectional area between two spheres of radius centered on the -th and -th node , respectively ; is the distance between the sphere centers , is the resistivity . by positioning the input andoutput electrical contacts , respectively , on the first and last node ( more structured contacts can be envisioned ) for a given applied bias ( current or voltage operation modes according to convenience ) the network is solved within a linear kirchhoff scheme and its global resistance , , is calculated .accordingly , this network produces a parameter - dependent static i - v characteristic for the single protein , based on the standard relation : to account for the super - linear behaviour of current at increasing voltages , a tunneling mechanism of charge transfer is included . in doing so , a stochastic approach within a monte carlo scheme is used . in particular , following the simmons model , a mechanism containing two possible tunneling processes , a direct tunneling ( dt ) at low bias , and a fowler - nordheim tunneling ( fn ) at high bias , is introduced .therefore , the resistivity value of each link is chosen between a low value , taken to fit the current at the highest voltages , and a high value , which depends on the voltage drop between network nodes as : where is the maximal resistivity value taken to fit the i - v characteristic at the lowest voltages ( ohmic response ) and is the height of the tunneling barrier between nodes .the transmission probability of each tunneling process is given by : \quad ( ev_{i , j } <\phi ) , \label{eq:4a } \\ { p}^{\rm fn}_{ij}&=&\exp \left[-\alpha\ \frac{\phi}{ev_{i , j}}\sqrt{\frac{\phi}{2 } } \right ] \qquad ( ev_{i , j } \ge \phi ) \label{eq:4b}\end{aligned}\ ] ] where is the potential drop between the couple of amino acids , , and is the electron effective mass , here taken the same of the bare value .the dt superscript refers to the low - bias , quasi - ohmic response and the fn subscript refers to the high - bias , super - ohmic response .by construction , both the current response at very low and very high bias exhibit an ohmic behaviour with values of the corresponding resistance differing for several orders of magnitude .this model was successfully used to reproduce the experiments of ref .the inputs parameters were =6 , =219 mev , , for the low field resistivity and for the high field resistivity .the protein tertiary structure was taken from the protein database , specifically the 2ntu entry , an x - ray crystallographic measurement for br native - state .the agreement between calculations and experiments was found to be satisfactory , also reproducing the current modifications due to the membrane indentation by the c - afm tip . on this basis, we found reasonable to take the same input to fit the response of the protein in light .at present , the only crystallographic entry describing the complete protein in an active state is 2ntw , which gives account of the l - state ( henceforth called the active state ) of br .this state is sensitive to the light and precedes the m state ( ) , which corresponds to a proton releasing .in the i - v measurements , the proton releasing was not monitored and the current measured was only attributed to electron transfer . 
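once the links carry elementary resistances, the global resistance follows from a standard kirchhoff (graph laplacian) solve between the two contact nodes. in the hedged sketch below, the sphere-overlap cross-section entering the link resistance is replaced by a simple placeholder because the exact geometric expression is not reproduced here, and the chain-like toy coordinates serve only to keep the network connected.

```python
import numpy as np

def network_resistance(coords, r_c, rho):
    """two-terminal resistance of the residue network between the first and last node."""
    n = len(coords)
    lap = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            l = np.linalg.norm(coords[i] - coords[j])
            if l < r_c:
                a = np.pi * r_c ** 2          # placeholder for the sphere-overlap cross-section
                g = a / (rho * l)             # conductance of the elementary link
                lap[i, i] += g
                lap[j, j] += g
                lap[i, j] -= g
                lap[j, i] -= g
    # unit current injected at the first node and extracted at the last (grounded) node
    current = np.zeros(n)
    current[0], current[-1] = 1.0, -1.0
    v = np.zeros(n)
    v[:-1] = np.linalg.solve(lap[:-1, :-1], current[:-1])
    return v[0] - v[-1]

# chain-like toy coordinates (fixed 3.8 angstrom steps) so the network stays connected
rng = np.random.default_rng(1)
steps = rng.normal(size=(49, 3))
steps = 3.8 * steps / np.linalg.norm(steps, axis=1, keepdims=True)
coords = np.vstack([np.zeros(3), np.cumsum(steps, axis=0)])
for r_c in (5.8, 6.0, 6.3):
    print(r_c, network_resistance(coords, r_c, rho=1.0))
```

sweeping the interaction radius in this way gives, qualitatively, the kind of radius dependence of the low-voltage network resistance that is examined for the native and active structures; the stochastic tunneling update of the link resistivities would be added on top of this ohmic skeleton.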
when the activated configuration was used as input to fit the current response in the presence of light , the agreement with experiments was less satisfactory than that in dark . a possible way to overcomethis drawback is to assume that the presence of light modifies not only the protein structure but also its connectivity properties . in the inpa modelthis modification is accounted for by changing the value of the interaction radius . to this purpose ,[ fig : rel_res ] reports the role of the interaction radius in the calculation of the resistances of the native and active states .numerical data are obtained at very low voltages , where the ohmic regime strictly holds , and reflect the protein topology . , for br in the native and active state .the ellipse indicates the region of values whose trend is in agreement with experiments.,scaledwidth=45.0% ] the main results of fig .[ fig : rel_res ] are : ( i ) the general low resolution between these states and , ( ii ) the presence of two regions in which the resolution is best appreciable , around and .the experiments are in agreement with . in the following the protein current responsesare numerically analyzed for several values of around 6 .reported in the figures .symbols refer to numerical calculations , lines are guides to the eyes . dashed lines and superimposed symbols refer to the active state ; continuous lines and superimposed symbols to the native state . for =5.8 the i - v characteristics are found to coincide for native and active states.,scaledwidth=45.0% ] induced by the absorption ( from the whole protein ) of other photons .lower panel depicts an alternative possibility when the absorption of photons induces a global energy increase from u to u of the native state and a successive absorption process induce a conformational change from u to the energy level w of the active state.,scaledwidth=45.0% ] .,scaledwidth=45.0% ] , scaledwidth=45.0% ] in particular , simulations for the single protein are performed for the three values , = 5.8 , 6.0 and 6.3 and for both native and active state .results are reported in fig .[ fig : puri ] with panel ( a ) reporting the experimental data carried out in a br macroscopic sample in dark and light . for a given protein state , fig .[ fig : puri ] shows a current enhancement by increasing from 5.8 to 6.3 .furthermore , at increasing , the differences between the current response in dark and light are more and more marked .the above results suggest that the activation mechanism of a _macroscopic sample _ of br can be described within the _ single _ protein model by using : i ) a conformational change ( from 2ntu to 2ntw structure ) , ii ) a connectivity change ( i.e. a variation of the network interaction radius ) .more specifically , we can envision a twofold mechanism of photon absorption : by the retinal , and by the whole protein .the former is responsible of the conformational change , the latter of a global energy increase of the protein .notice that , according with experiments , the global energy increase is coherently used by the protein sample in enhancing its photocurrent response . in other words , the electromagnetic radiation impinging on the protein may anyway produce the global effect of an energy gain , while the local interaction of a photon with the retinal triggers the conformational change when the protein is in its native state .these mechanisms associated with photon absorption are schematically depicted in fig .[ fig : schemaattivazione ] . 
from one hand ,when the native state , say , becomes an active state , say , a further irradiation should enhance the global energy of the active state . in this way, the active state is promoted to an upper energy value . from another hand ,when the state does not undergo a conformational change , anyway its energy level can be promoted to an upper value ; a further dose of light may drive this state to an active state . among the different ways used to describe the protein energy landscape at different stages of the folding ,one of the most accepted is the rugged funnel - diagram . in this diagram ,the protein folds from the molten state to the native ( stable ) state following many possible folding routes toward the minimum of a funnel - like energy surface .when the protein runs down in the energy funnel , it loses the spurious bonds and enforces those stabilizing the minimal - energy configuration . in doing so, it also reaches the minimum of the configurational entropy .furthermore , the phase transition from a stable state at low energy to a stable state at higher energy is depicted in terms of a tunneling between the minima of a multiwell energy landscape . as energy increases , the spurious connections do again appear and the protein can explore more microstates . in a very schematic way , this mechanismis pictured in fig .[ fig : funnels ] where a couple of funnels representing the native and an active state are superimposed .the conformational change corresponds to the transition from a funnel to another one .the minimal energy between the two funnel stable minima is of the order of the ev . otherwise , by rising the energy of the protein in the native state , it is possible to reach the overlapping region of the two funnels . herethe transition from the native to the active state can occur without energy supply . within the inpa model , the mechanism of energy increaseis described by an increase of the value . as a consequence ,the network becomes more connected which implies an increase of the pathways for tunneling and of the number of possible current channels .this , in turn , leads to an increase of the instantaneous current fluctuations , as reported in fig .[ fig : fluctuations ] . here, the current fluctuations observed from simulations are reported for the active state , with an applied bias of 0.75 v and for two values of 5.8 and 6.3 . following this scheme ,the current response of samples made by monolayers of br has been fitted by using a binary mixture of native and active states ; the percentages of each state being a function of the value .specifically , a good fit of the experimental data is obtained by using : ( i ) for the sample in light , = 6.3 and a binary mixture of 96% of 2ntw and a 4% of 2ntu ; ( ii ) for the sample in dark = 5.8 and 100% of 2ntu .the c - afm experiment , performed in the absence of direct light , was previously fitted within a very good accuracy on the full bias range by using = 6.0 and 100% of the 2ntw native state . 
since in these experimentsone can not exclude the presence of a certain amount of proteins in the active state , in agreement with a value of larger than the threshold value = 5.8 here the fit with experiments is tested by using binary mixtures with an increasing percentage of active states .the fit is found to be sufficiently accurate with a percentage of active state not larger than 40% .= 6.3 .open squares refer to a pure native state with =5.8 .continuous ( dashed ) line refers to experimental data in dark ( light ) in the bias range v. in the inset the continuous line refers to experimental data in dark , in the bias range v. circles refer to data calculated with the pure native state with =6.0 , squares refer to data calculated with the mixture of 60% of native states and 40% of active states , with =6.0 ., scaledwidth=45.0% ] figure [ fig : miscele ] reports : ( i ) the experimental data and , ( ii ) the single protein data rescaled by using the formula where indicates the sample current , is a numerical constant of the order of used to scale the single protein current to the macroscopic data , is the current of the single protein calculated with the native / active configuration , is the fraction of native / active protein expected in the sample .of course , for a pure state , this formula reduces to the simple proportional rescaling : , dashed line is the fitting obtained with eq .[ eq:5 ] , empty circles refer to values intermediate between the experiments reported in ref , , see text ., scaledwidth=45.0% ] figure [ fig : rcfit ] reports the concentration of active 2ntw states in the samples vs the corresponding values to be used in simulations .symbols refer to values used in simulations and the dashed curve is a fitting obtained from a sigmoidal hill - like function that is commonly used in biochemistry to describe the percentage of proteins activated by a ligand .its validity in fitting several different physicochemical reactions is well known , and writes : where is the percentage of proteins in the active state , and =5.8 . for , half of the proteins in the sample have changed their configuration . here , the best fitting parameters are and , i.e. . the full circles reproduce the experiments reported in fig . [fig : rcfit ] . for the case of the experiments in ref . , further binary mixtures with the percentages suggested by eq .[ eq:7 ] ( open circles ) have been tested to to be consistent with experiments but to a less quantitative resolution of the photocurrent . in the present context , the meaning of the function is the following : for an increasing number of photons impinging onto the sample , the free energy of the protein grows and , as a consequence , the value of also grows . with alsothe percentage of proteins moved to the active state grows because some of the photons hit the retinal .finally , for larger than about all the proteins in the sample are in the active state .further amount of photons may only improve the free energy of the protein in the active state and the internal degree of connections .the paper investigates the mechanisms responsible for the photocurrent exhibited by monolayer samples of bacteriorhodopsin in the presence of an impinging green light . 
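The two fitting relations used above can be summarized in a short sketch. Only the functional forms are taken from the description (a proportional rescaling of the mixed single-protein currents, and a sigmoidal Hill-like activation curve pinned at R_c = 5.8); the numerical constants `scale_a`, `k` and `h` stand in for values not reproduced here and are purely illustrative.

```python
import numpy as np

def sample_current(i_native, i_active, x_active, scale_a=1.0e7):
    """Binary-mixture rescaling of the single-protein currents to the sample current."""
    return scale_a * (x_active * i_active + (1.0 - x_active) * i_native)

def hill_activation(r_c, r_c0=5.8, k=0.3, h=2.0):
    """Fraction of proteins in the active state as a function of the interaction radius."""
    u = np.maximum(np.asarray(r_c, dtype=float) - r_c0, 0.0)
    return u**h / (u**h + k**h)    # equals 0.5 when (r_c - r_c0) == k
```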
to this purpose, use is made of the inpa model implemented to account for the change of connectivity of the single protein associated with the presence of the light .previous results provided a satisfactory interpretation of a set of accurate measurements , performed with the c - afm technique , in nanolayer samples and in the absence of direct light .accordingly , experiments were interpreted on the basis of the tertiary structure associated with the native state of the single protein . however , a less satisfactory agreement was obtained in the region of low voltages when the same approach was applied to the case of monolayer samples of br in the presence of light , and thus taking the tertiary structure of the protein in its active state .to overcome this drawback , here we consider also the change in the connectivity of the protein state consequent to the enhancement of the free - energy level of the single protein induced by the presence of the light .the increase in connectivity is accounted for by an increase of the value of the interaction radius , , already introduced to correlate the electrical properties with the tertiary structure of the protein .accordingly , the new model interprets the photocurrent using a binary mixture of results pertaining to the native and active structures with the proper values of .specifically , a satisfactory fit of experiments on nanolayers is obtained by using : ( i ) for the sample in light , = 6.3 and a binary mixture of 96% of 2ntw and of 4 % of 2ntu , ( ii ) for the sample in dark , = 5.8 and 100 % of 2ntu .the c - afm experiments , performed in the absence of direct light , are quite finely reproduced by using a binary mixture containing up to 40% of 2ntw and =6.0 (see fig .[ fig : miscele ] ) .therefore , the implemented model enables us to achieve a better agreement between theory and experiments in the region of low applied bias and does not modify previous findings at high values of applied bias .we notice that the process of protein activation , in particular for opsins , is still a very open topic and the present approach aims to provide a further step for a better understanding of the subject .environmental effects , different from the presence of light , like temperature , the value of the ph , etc , should be responsible for other activation mechanisms .accordingly , more experiments , and structural information are necessary , and present results should give a further motivation to stimulate new experiments and formulate new theories . finally , this research exploits the trend in which different emerging disciplines can converge in a new branch of science , we recently introduced as proteotronics .indeed , proteotronics aims to develop new devices based on the sensing properties of proteins . in doing so, protein responses to external stimuli have chances to be better understood and used to devise biodevices of relevant importance in applied sciences .99 r. h. lozier , r. a. bogomolni , and w. stoeckenius , .a.corcelli , m. colella , g. mascolo , f. p. fanizzi , and m. kates , . g. varo , l.s .brown , r. needleman , and j.k .m. etzkorn _et al_. , . m. yoshino _et al_. , .r.gonzlez-luque , _ et al _ . , .s. subramaniam and r. henderson , .h. luecke , .t. kouyama and a. nasuda - kouyama , .y. jin , n. friedman , m. sheves m , t. he , and d. cahen , i. ron _ et al _ .l. sepunaru , n. friedman , i. pecht , m. sheves , and d.cahen , .i. casuso _et al _ . , .s. mukhopadhyay _et al _ . , . v. renugopalakrishnan _et al_. , .a. v. patil , t. 
premaruban , o. berthoumieu , a. watts , and j. j. davis , .g. dai , l.m .chao , and t. iwasa , .n. hampp , .s. q. lima and g. miesenck , .k. deisseroth , . c. wickstrand , r. dods , a. royant , and r. neutze , biochimica et biophysica acta(bba)-general subjects(2014 ) . h. m. berman _et al_. , .j. k. lanyi and b. schobert , .e. alfinito , j. -f .millithaler , and l. reggiani , .e. alfinito and l. reggiani , .e. alfinito , c. pennetta , and l. reggiani , .e.alfinito and l. reggiani , .e. alfinito , j. f.millithaler , l. reggiani , n. zine , and n. jaffrezic - renault , .e. alfinito , j. pousset , l. reggiani , and k. lee , .e. alfinito and l. reggiani , .e.alfinito and l. reggiani , .e. alfinito , j. pousset , and l. reggiani , .e. alfinito , l. reggiani , and j. pousset , cond - mat 1405.3840 ; e. alfinito , j. pousset , and l. reggiani _ protein - based electronics : transport properties and application . towards the development of a proteotronics _( pan stanford publishing pte .penthouse level , suntec tower 3 8 temasek boulevard singapore 038988 , in press ) e. alfinito , v. akimov , c. pennetta , l. reggiani , and g. gomila , . v. akimov _ et al _ ., in _ nonequilibrium carrier dynamics in semiconductors _ , edited by m. saraniti and u. ravaioli ( springer proceed- ings in physics , 2006 ) vol .110 7 , pp .229 - 232 .e. alfinito , c. pennetta , and l. reggiani , e. alfinito , c. pennetta , and l. reggiani , l. .j. g. simmons , .j. n. onuchic , z. luthey - schulten , and p. g. wolynes , .b. k. kobilka , and x. deupi , .s. goutelle , m. maurin , f. rougier , x. barbaut , l. bourguignon , m. ducher , and p. maire , .
In recent years, growing interest has been devoted to the electrical properties of bacteriorhodopsin (bR), a protein belonging to the transmembrane protein family. Several experiments have pointed out the role of green light in enhancing the current flow in nanolayers of bR, thus confirming the potential applications of this protein in the field of optoelectronics. By contrast, the mechanisms underlying the charge transfer and the associated photocurrent are still far from being understood at a microscopic level. To take into account the structure-dependent nature of the current, in a previous set of papers we suggested a mechanism of sequential tunneling among neighbouring amino acids. Indeed, it is well accepted that, when irradiated with green light, bR undergoes a conformational change at the molecular level; thus, the role played by the protein tertiary structure in modeling the charge transfer cannot be neglected. The aim of this paper is to go beyond previous models within the framework of a new branch of electronics, which we call proteotronics, that exploits proteins as reliable, well-understood materials for the development of novel bioelectronic devices. In particular, the present approach assumes that the conformational change is not the only transformation the protein undergoes when irradiated by light. Instead, light can also promote a free-energy increase of the protein state that, in turn, should modify its internal degree of connectivity, here described by a change in the value of an interaction radius associated with the physical interactions among amino acids. The implemented model achieves a better agreement between theory and experiments in the region of low applied bias while preserving the level of agreement at high values of applied bias. Furthermore, the results provide new insights into the mechanisms responsible for the bR photoresponse.
with a growing demand for autonomous robots in a range of applications , such as search and rescue , and space and underwater exploration , it is essential for the robots to be able to navigate accurately for an extended period of time in order to accomplish the assigned tasks . to this end ,the ability to detect revisits ( i.e. , _ loop closure _ or place recognition ) becomes necessary , since it allows the robots to bound the errors and uncertainty in the estimates of their positions and orientations ( poses ) . in this work, we particularly focus on loop closure during visual navigation , i.e. , given a camera stream we aim to efficiently determine whether the robot has previously seen the current place or not . even though the problem of loop closure has been extensively studied in the visual - slam literature ( e.g. , see ) , a vast majority of existing algorithms typically require the _ offline _ training of visual words ( dictionary ) from _ a priori _ images that are acquired previously in visually similar environments . clearly , this is not always the case when a robot operates in an unknown , drastically different environment . in general , it is difficult to reliably find loops in ( visual ) appearance space .one particular challenge is the perceptual aliasing that is , while images may be similar in appearance , they might be coming from different places . to mitigate this issue , both temporal ( i.e., loops will only be considered closed if there are other loops closed nearby ) and geometric constraints ( i.e. , if a loop has to be considered closed , a valid transformation must exist between the matched images ) can be employed .it is important to point out that the approach of decides on the quality of a match _ locally _ if the match with the highest score ( in some distance measure ) is away from the second highest , it is considered a valid candidate .however , the local information may lead to incorrect loop - closure decisions because both temporal and geometric conditions can easily fail in highly self - similar environments such as corridors in a hotel . to address the aforementioned issues ,in this paper we introduce a _ general _ , _ online _ loop - closure approach for vision - based robot navigation .in particular , by realizing that loops typically occur intermittently in a navigation scenario , we , for the first time ever , formulate loop - closure detection as a sparse -minimization problem that is convex .this is opposed to the current methods that cast loop closure detection as an image retrieval problem . 
by leveraging the fast convex optimization techniques, we subsequently solve the problem efficiently and achieve real - time frame - rate generation of loop - closure hypotheses .furthermore , the proposed formulation enjoys _flexible _ representations and can produce loop - closure hypotheses regardless of what the extracted features represent that is , any discriminative information , such as descriptors , bag of words ( bow ) , or even whole images , can be used for detecting loops .lastly , we shall stress that our proposed approach declares a loop that is valid only when it is _ globally unique _ , which ensures that if perceptual aliasing is being caused by more than one previous image , _ no _ loop closure will be declared .although this is conservative in some cases , since a false loop closing can be catastrophic while missing a loop closure generally is not , ensuring such global uniqueness is necessary and important , in particular , in highly self - similar environments .the rest of the paper is organized as follows : after reviewing the related work , we formulate loop - closure detection as a sparse -minimization problem in section [ sec : sparseandredunant ] . in section[ sec : detectionloops ] we present in detail the application of this formulation to visual navigation , which is validated via the real - world experiments in section [ sec : expr ] . finally , section [ sec : conclusions ] concludes this work as well as outlines the possible directions for future research .the problem of loop - closure detection has been extensively studied in the slam literature and many different solutions have been proposed over the years ( e.g. , see and references therein ) . in what follows ,we briefly overview the work that closely relates to the proposed approach . in particular , the fab - map is a probabilistic appearance - based approach using visual bow for place recognition , and was shown to work robustly over trajectories up to km .similarly , the binary - bow ( bbow)-based method detects the fast keypoints and employs a variation of the brief descriptors to construct the bow . a verification step is further enforced to geometrically check the features extracted from the matched images .it should be pointed out that both methods are based on the similar ideas of text - retrieval : these methods learn the bow dictionaries beforehand , which are used later for detecting loop closures when the robots actually operates in the field .this restricts the expressive power of the dictionary in cases where it has to operate in environments drastically different from where the dictionary was constructed .in contrast , the proposed approach builds the dictionary _ online _ as the robot explores an unknown environment , while at the same time efficiently detecting loops ( if any ) .moreover , rather than solely relying on the descriptors - based bow , our method is flexible and can utilize _ all _ pixel information to discriminate places even in presence of dynamic objects ( encoded as sparse errors ) , any descriptor that can represent similar places , or any combination of such descriptors .some recent work has focused on loop closure under extreme changes in the environment such as different weather and/or lighting conditions at different times of the day .for example , proposed the seqslam that is able to localize with drastic lighting and weather changes by matching sequences of images with each other as opposed to single images . 
introduced the experience - based maps that learn the different appearances of the same place as it gradually changes in order to perform long - term localization .building upon , also discovered new images to attain better localization .in addition , have explored geometric features such as lines for the task of loop closure detection in both indoor and outdoor scenarios .note that if the information invariant to such changes can be extracted as in , the proposed formulation can also be used to obtain loop - closure hypotheses .essentially , in this work we focus on finding loop closures given some discriminative descriptions such as descriptors and whole images , assuming _ no _ specific type of image representations .more recently , with the rediscovery of efficient machine learning techniques , convolutional neural networks ( cnns) have been exploited to address loop closure detection .these networks are multi - layered architectures that are typically trained on millions of images for tasks such as object detection and scene classification .the internal representations at each layer are learned from the data itself and therefore can be used as features to replace hand - crafted features . based on this approach , features from different layers in the network and identify the layers that are useful for view - point and illumination invariant place recognition .moreover , in landmarks are treated as objects by finding object proposals in the images and features are extracted for them using deep networks .these features then allow for view - point invariant place categorization by matching different objects from varied viewpoints . in these cnn - based placecategorization techniques , the networks are used as feature extractors followed by some form of matching . in this paper , we show that these deep features can also be utilized in the proposed framework of loop - closure detection .it should be noted that in our previous conference publication , we have preliminarily shown that the proposed loop - closing framework is general and can employ most hand - crafted features .recently , extended this sparse - optimization based framework to an incremental formulation allowing for the use of the previous solution of the sparse optimization to jump start the next one , while further extended it to a multi - step delayed detection of loops ( instead of single - step detection as in our prior work ) in order to exploit the structured sparsity of the problem . 
in this paper , we present more detailed analysis and thorough performance evaluations , including new experiments using deep features and validations in challenging multiple - revisit scenarios , as well as new comparisons against the well - known nearest neighbour ( nn ) search .in this section , we formulate loop - closure detection as a sparse optimization problem based on a sparse and redundant representation .such representations have been widely used in computer vision for problems such as denoising , deblurring , and face recognition .similarly , formulated the back - end of graph slam as an -minimization problem .however , _ no _ prior work has yet investigated this powerful technique for loop closure detection in robot navigation .the key idea of this approach is to represent the problem _ redundantly _ , from which a _ sparse _ solution is sought for a given observation .suppose that we have the current image represented by a vector , which can be either the vectorized full raw image or descriptors extracted from the image .assume that we also have a dictionary denoted by \in \mathcal r^{n \times m} ] and ] .is hereafter used to denote the time index , thus revealing the online incremental process of the proposed approach . ]the solution at the -th time step contains the contribution of all previous bases in constructing the current image . to find a unique image to close a loop with , we are interested in which basis vector has the greatest relative contribution , which can be found by calculating the unit vector .any entry greater than a predefined threshold , , is considered a loop - closure candidate .in addition , due to the fact that in a visual navigation scenario , the neighbouring images are typically overlapped with the current image and thus have great `` spurious '' contributions , we explicitly ignore a time window , , around the current image , during which loop - closure decisions are not taken .this is a design parameter and can be chosen based on the camera frequency ( fps ) and robot motion .once the decision is made , the dictionary is updated by appending to it , i.e. , ] + it is important to note that the solution to , by construction , is guaranteed to be sparse .in the ideal case of no perceptual aliasing , the solution is expected to be -sparse , because _ ideally _ there exists only one image in the dictionary that matches the current image when a revisit occurs . in the case of exploration where there is no actual loop - closure match in the dictionary, the current image is best explained by the last observed and the solution hence is still -sparse , which however will not generate a valid loop - closure detection because of the temporal constraint enforced in our implementation .note that if there are significant differences such as illumination or dynamic objects , the solution may no longer be -sparse ( see section [ sec : norland ] ) . 
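A compact way to see the whole pipeline is the sketch below. It is an illustration under stated assumptions, not the authors' homotopy-based solver: the l1-regularized problem is solved here with a plain ISTA iteration, the explicit sparse-error term is omitted (it could be handled by augmenting the dictionary with an identity block), and the thresholds are placeholders.

```python
import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def ista_l1(D, b, lam=0.5, n_iter=300):
    """Solve min_x 0.5*||b - D x||_2^2 + lam*||x||_1 with ISTA."""
    L = np.linalg.norm(D, 2) ** 2 + 1e-12          # Lipschitz constant of the gradient
    x = np.zeros(D.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x + D.T @ (b - D @ x) / L, lam / L)
    return x

def query_loop(D, b, tau=0.7, ignore_last=30):
    """Return the index of a unique loop-closure candidate, or None."""
    x = ista_l1(D, b)
    norm = np.linalg.norm(x)
    if norm == 0.0:
        return None
    alpha = np.abs(x) / norm                       # relative contribution of each basis
    alpha[-ignore_last:] = 0.0                     # no decisions inside the temporal window
    j = int(np.argmax(alpha))
    return j if alpha[j] >= tau else None

# After each query the dictionary is grown online: D = np.hstack([D, b[:, None]]).
```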
in a general case where images that have been previously observed and that are visually similar to the current image , a naive thresholding based method which simply compares the current image to each of the previous ones based on some similarity measure would likely produce loop - closure hypotheses corresponding to the images in the dictionary .it is very important to note that such an approach independently calculates the contribution of each previous image , _ without _ taking into account the effects of other images or data noise , despite the fact that due to noise they may be correlated and thus is _ suboptimal_. in contrast , the proposed -minimization - based approach simultaneously computes the optimal contribution of all the previous images and noise by finding the global optimal solution of the convex problem , and guarantees the unique hypothesis by selecting the -th image with the greatest . in the case of multiple revisits to the same location , the proposed approach , as presented here , is _ conservative _ ( i.e , only one , but the best one , revisit would be selected ) .including the corresponding images from earlier visits in the dictionary would lead to a non - unique solution , when the same location is revisited again .however , the proposed method can be easily extended to detect loops on multiple revisits . instead of considering the contribution of all the previous basis separately ,if a loop exists between previous locations and , we consider their joint contribution ( ) when making the decision .this ensures that even though these places are not individually unique enough to explain the current image , together ( and since they are visually similar as we already have a loop closure between them ) , they best explain the current observation , allowing us to detect loop closures in case of multiple revisits .we stress that the dictionary representation used by the proposed approach is general and flexible .although we have focused on the simplest basis representation using the down - sampled whole images ( see section [ sec : expr ] ) , this does not restrict our method only to work with this representation .in fact , any discriminative feature that can be extracted from the image ( e.g. , gist , hog , etc . ) can be used as dictionary bases for finding loops , thus permitting the desired properties such as view and illumination invariance . to show that , particular experiments have been performed in section [ sec : expr ] , using different types of bases and their combinations . moreover , it is not limited to a single representation at a time . if we have descriptors , a multi - modal descriptor can be easily formed by stacking them up in a vector in ( ) .this idea has recently been exploited in .therefore , our proposed method can be considered as a _generalized _ loop - closing approach that can use any basis vectors as long as a metric exists to provide the distance between them .it is interesting to point out that sparse -minimization inherently is robust to _ data noise _, which is widely appreciated in computer vision ( e.g. , see ). 
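A multi-modal basis of the kind mentioned above can be formed as follows (a small illustrative helper; the descriptor names in the usage line are placeholders): each per-image descriptor is normalized, the results are stacked into one column, and the stacked vector is projected back onto the unit sphere before being appended to the dictionary.

```python
import numpy as np

def stack_descriptors(*descriptors):
    """Stack several per-image descriptors into a single unit-norm basis vector."""
    parts = [np.asarray(d, dtype=float) / (np.linalg.norm(d) + 1e-12) for d in descriptors]
    v = np.concatenate(parts)
    return v / (np.linalg.norm(v) + 1e-12)

# e.g. basis = stack_descriptors(gist_vec, conv_256_vec, conv_1024_vec)
```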
in particular , the sparse noise ( error ) term in can account for the presence of dynamic changes or motion blurs .for example , in fig .[ fig : newcollege - pics ] the dominant background basis explains most of the image , while the dynamic elements ( which have not been observed before ) can be represented by the sparse noise , and fig .[ fig : newcollege ] shows that the proposed approach robustly finds these loops .such robust performance becomes necessary particularly for long - term mapping where the environment often gradually changes over time and thus reliable loop closure in presence of such changes is essential . as a final remark , the proposed -minimization - based loop - closure algorithm is also robust to _ information loss _ , which is closely related to the question raised by : how much information is needed to successfully close loops ?in this work , we have empirically investigated this problem by down - sampling the raw images ( which are used as the bases of the dictionary ) without any undistortion and then evaluating the performance of the proposed approach under such an adverse circumstance . as shown in section [ sec : expr ] , truly small raw images , even with size as low as pixels , can be used to reliably identify loops , which agrees with the findings of .to validate the proposed -minimization - based loop - closure algorithm , we perform a set of real - world experiments on the publicly - available datasets .in particular , a qualitative test is conducted on the new college dataset , where we examine the different types of bases ( raw images and descriptors ) in order to show the flexibility of basis representation of our approach as well as the robustness to dynamics in the scene .subsequently , we evaluate the proposed method on the rawseeds dataset and focus on the effects of the design parameters used in the algorithm . finally , we perform experiments on the kitti visual odometry benchmark , by highlighting the ability of the proposed approach to use different types of deeply learned features as representations , as well as the superior performance against a nearest neighbour ( nn)-based approach . [fig : newcollege-8x6 ] [ fig : newcollege-8x6-gist ] the new college dataset provides stereo images at hz along a km trajectory , while in this test we only use every frame giving an effective frame rate of hz and in total images .each image originally has a resolution of , but here is down - sampled to either or pixels .we show below that even under such adverse circumstance , the proposed approach can reliably find the loops .the image is scaled so that its gray levels are between zero and one , and then is vectorized and normalized as a unit column vector . for the results presented in this test , we use the threshold and the weighting parameter . 
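For the raw-image experiments referred to above, each frame has to be turned into a unit-norm basis vector. The sketch below is one straightforward way to do this; block-averaging down-sampling and min-max gray-level scaling are our illustrative choices, not necessarily the exact preprocessing used in the experiments.

```python
import numpy as np

def image_to_basis(gray, out_rows=48, out_cols=64):
    """Down-sample a grayscale frame, scale it to [0, 1], vectorize and normalize it."""
    g = np.asarray(gray, dtype=float)
    r_step, c_step = g.shape[0] // out_rows, g.shape[1] // out_cols
    g = g[:r_step * out_rows, :c_step * out_cols]
    small = g.reshape(out_rows, r_step, out_cols, c_step).mean(axis=(1, 3))
    small = (small - small.min()) / (small.max() - small.min() + 1e-12)
    v = small.ravel()
    return v / (np.linalg.norm(v) + 1e-12)
```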
due to the fact that neighbouring images typically are similar to the current one and thus generate false loop closures, we ignore the hypotheses within a certain time window from the current image and set sec , which effectively excludes the spurious loops when reasoning about possible closures .note that can be chosen according to speed of the robot as well as the frame - rate at which the loop closing algorithm is working .we also eliminate random matches by enforcing a temporal consistency check , requiring at least one more loop closure within a time window from the current match .we ran all the experiments in matlab on a laptop with core - i5 cpu of 2.5ghz and 16 gb ram , and use the homotopy - based method for solving the optimization problem .the qualitative results are shown in fig .[ fig : newcollege ] where we have used _ three _ different bases , i.e , down - sampled and raw images , and gist descriptors . in these plots , the odometry - based trajectory provided by the datasetis superimposed by the loop closures detected by the proposed approach , which are shown as vertical lines connecting two locations where a loop is found .all the lines parallel to the -axis represent loop closures that connect the same places at different times .any false loops would appear as non - vertical lines , and clearly do not appear in fig .[ fig : newcollege ] , which validates the effectiveness of the proposed method in finding correct loops .these results clearly show the flexibility of bases representation of the proposed method .in particular , instead of using the different down - sampled raw images as bases , our approach can use the gist descriptors , , which are computed over the whole image , and is able to detect the same loop closures as with the raw images . an interesting way of visualizing the locations where loop closures occur is to examine the sparsity pattern of the solution matrix , which is obtained by stacking all the solutions , , for all the queried images in a matrix .[ fig : newcollege - sparsity ] shows such a matrix that contains non - zero values in each column corresponding to the elements greater than the threshold . in the case of no loop closure , each image can be best explained by its immediate neighbour in the past , which gives rise to non - zeros along the main diagonal .most importantly , the off - diagonal non - zeros indicate the locations where loops are closed . it is interesting to see that there are a few sequences of loop closures appearing as off - diagonal lines in fig . [fig : newcollege - sparsity ] .this is due to the fact that the first three runs in the circular area at the beginning of the dataset , correspond to the three off - diagonal lines in the top - left of the matrix ; while a sequence of loop closures detected in the lower part of new college , correspond to the longest line parallel to the main diagonal .-th column corresponds to the solution for the -th image , and the non - zeros are the values in each column that are greater than .note that the main diagonal occurs due to the current image being best explained by its neighboring image , while the off - diagonal non - zero elements indicate the loop closures . , scaledwidth=75.0% ] it is important to note that although both dynamic changes and motion blurs occur in the images , the proposed approach is able to reliably identify the loops ( e.g. , see fig .[ fig : newcollege - pics ] ) , which is attributed to the sparse error used in the -minimization [ see ] . 
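The temporal-consistency filter mentioned above can be expressed as a small helper; this is an illustrative reading of the rule, since the exact criterion is not spelled out here: a candidate match at query time t against dictionary index j is kept only if another recent query already matched a nearby index.

```python
def temporally_consistent(accepted, t, j, w=5):
    """accepted maps earlier query times to their matched dictionary indices."""
    for dt in range(1, w + 1):
        prev = accepted.get(t - dt)
        if prev is not None and abs(prev - j) <= w:
            return True
    return False
```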
to further validate this robustness to dynamics , fig .[ fig : robust_ng ] shows a typical scenario in the new college where we query a current image with no dynamics to the dictionary that uses down - sampled raw images as its bases , and the correct match is robustly found , which however contains moving people .interestingly , the dominant noise contributions ( blue ) as shown in fig .[ fig : robust_ng](c ) , mainly correspond to the locations where the people appear in the match image .this implies that the sparse error in correctly models the dynamic changes . to further test the proposed algorithm , we use the bicocca 25b dataset from the rawseeds project .the dataset provides the laser and stereo images for a trajectory of m. we use the left image from the stereo pair sampled at hz , resulting in a total of images .note that we do _ not _ perform any undistortion and work directly with the raw images coming from the camera . in this test, we focus on studying the effects of the most important parameters used in the proposed approach , and evaluate the performance based on precision and recall . specifically , precision is the ratio of correctly detected loop closures over all the detections .thus , ideally we would like our algorithm to work at full precision . on the other hand , recall is the percentage of correct loop closures that have been detected over all possible correct detections .a high recall implies that we are able to recover most of the loop closures . raw images ., title="fig:",scaledwidth=45.0% ] raw images ., title="fig:",scaledwidth=45.0% ] we first examine the acceptance threshold , whose valid values range from 0.5 to 1 .this parameter can be thought of as the similarity measure between the current image and the matched image in the dictionary . in order to study the effect of this parameter on the precision and recall, we vary the parameter for a fixed image of pixels .moreover , we are also interested in if and how the weighting parameter impacts the performance and thus vary this parameter as well . the results are shown in fig .[ fig : varying - tau ] .as expected , the general trend is that a stricter threshold ( closer to 1 ) leads to higher precision , and as a side effect , a lower recall . this is because as the threshold increases , we get fewer loop closing hypotheses but a larger proportion of them is correct . note that this dataset is challenging due to the perceptual aliasing in many parts of the trajectory ; the matched images are visually similar but considered as false positives since the robot is physically not in the same place .interestingly , fig .[ fig : varying - tau ] also shows that the smaller leads to the higher precision but the lower recall .this seems to counter the intuition that the sparser the solution of ( by using a larger ) , the higher fidelity of the loop - closure detection .however , it should be noted that this intuition motivates the proposed sparse formulation but does not guarantee that the optimal solution is sparsest ( 1-sparse ) as discussed in section [ sec : uniqueness ] , which heavily depends on the quality of the data at hand ( e.g. , signal - to - noise ratio , the similarity level of the revisiting images ) .it is important to note that there are two goals to reconcile in our sparse formulation ( [ equ : l1-unconstrained ] ) : ( i ) the reconstruction error represented by the term , and ( ii ) the sparsity level of the solution encoded in the term . 
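The two evaluation measures defined above are straightforward to compute once ground-truth loop pairs are available; the helper below uses the standard definitions (detections and ground truth are assumed to be sets of comparable query/match pairs, and the conventions for empty sets are arbitrary).

```python
def precision_recall(detections, ground_truth):
    """Precision: correct detections / all detections; recall: correct / all true loops."""
    detections, ground_truth = set(detections), set(ground_truth)
    true_positives = len(detections & ground_truth)
    precision = true_positives / len(detections) if detections else 1.0
    recall = true_positives / len(ground_truth) if ground_truth else 1.0
    return precision, recall
```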
a smaller value of this parameter results in a better data - fitting solution of smaller reconstruction error , hence requiring the images to be as visually similar as possible but at the same time , lowering the contribution of the greatest basis vector . inspired by the recent work , we also examine the performance difference by varying image sizes and see if we can obtain meaningful results using small - size images . the original image size from the bicocca datasetis , and the first image size we consider is which is a reduction of a quarter in each dimension . for each successive experiment ,we half the size in each dimension , which results in images of size , , , and finally .the weighting parameter is fixed to be 0.5 .precision and recall curves are generated by varying the acceptance threshold are shown in fig .[ fig : varying - imsize ] .it is clear from fig .[ fig : varying - imsize ] that the curves are tightly coupled and undergo the same behaviour for each image size .precision curves for the three largest image sizes overlap each other , showing that we can generate the same quality of loop closure hypotheses using any of the image sizes .these plots show a graceful degradation as the image size decreases . considering that the image of size is a factor of times smaller than the original image ,our method is able to distinguish places based on very little information , which agrees with the findings of . ., title="fig:",scaledwidth=45.0% ] ., title="fig:",scaledwidth=45.0% ] since the proposed method solves an optimization problem in a high - dimensional space , it is important to see how long the method takes to come up with the loop - closing hypotheses . despitethat each image is an vector for an image with rows and columns , and at the end of the experiment we have nearly images , the computation is very efficient thanks to the sparsity induced by the novel formulation .most of our solutions are expected to be -sparse ( i.e. , we expect only one non - zero if the current image matches perfectly one of the basis vectors in the dictionary ) , and thus the homotopy - based solver performs efficiently as shown in table [ tab : timing ] . for the largest image size , the mean time is ms with a maximum less than half a second .the proposed method works well on small images such as , which take on average ms .the runtime gradually grows as the number of basis vectors increases .the timing information given in table .[ tab : timing ] shows that the current method can run fast enough for real time operation at above 5hz for the largest image size considered .interestingly , we found is a good trade - off between precision / recall and computational cost. in general , a higher threshold would lead to fewer high - quality loop closures .this parameter can be designed based on the application in question .similarly , images of size larger than do not provide great improvement in terms of precision / recall .thus , the choice of image size should take into account the complexity of the environment being modelled .in an environment ( e.g. , outdoors ) where there is rich textural information , smaller images may be used .if the environment itself does not contain a lot of distinguishable features , larger images can be used in order to be able to differentiate between them ..execution time for different image sizes . note that at the end , the dictionary has a size of feature dimension + 8358 ( number of basis ) . 
[ cols="^,^,^,^,^,^,^",options="header " , ] [ tab : timing ] in this section , we compare the performance of the proposed method against the state - of - the - art dbow algorithm on bicocca 25b dataset .for the dbow , we operate on the full - sized images , using different temporal constraints ( ) along with geometric checks enabled . its performance is controlled by a so - called confidence parameter ] , followed by projection onto the unit sphere . in order to investigate if this leads to better performance , we use combinations of the gist and deep features described in section . [sec : deepexpr ] .for the three features : gist , 256 , and 1024 , we explore the possible four combinations : * 1 * ) s(gist , 256 ) , * 2 * ) s(gist , 1024 ) , * 3 * ) s(256 , 1024 ) , and * 4 * ) s(gist , 256 , 1024 ) .the results are presented in fig .[ fig : kitti - multi ] . comparing it to fig .[ fig : kitti ] , it can be seen that the performance is much better than the single features case , the precision is higher with a comparable recall .the multi - modal features are more discriminative and can be thought to match images over the intersection of both the descriptor spaces , leading to a better precision .this , however , is achieved at the cost of an increased size of the final stacked descriptor .the largest size considered here is that of s(gist,256,1024 ) which is .however , this is still feasible for runtime operation ( see table [ tab : timing ] ) .this expressive power of the stacked features places images far away from each other in the new combined descriptor space , allowing sparser solutions and thus leading to a better recall as well .+ + + as highlighted in section [ sec : uniqueness ] , the method declares loop closures that are globally unique .this may lead to missed loops in the worst case , that is , when either two images or the descriptors extracted from them are exactly the same .this is the worst case because in every other case , there would exist a single or a set of images that are able to reconstruct the image . only in the case of the exact same basis vectors ,multiple solution with the same value for ( [ equ : l1-unconstrained ] ) exist . in order to show how the proposed method behaves in the worst case, we take a batch of 100 images from the new college dataset and present the batch 60 times to the loop closing method .this simulates the situation of 60 repeated visit to the same place leading to the generation of the exact same images .each image is described with a gist descriptor of length 512 .we use the proposed framework to solve for loop closures and look at the sparse solution [ in ( [ equ : l1-unconstrained ] ) ] for each of the images presented to the method .we stack these as column vectors and the results are shown in fig .[ fig : repeatedvisits ] .the method is able to correctly associate each loop closure to one of the first 100 images in the dictionary ( initial 512 entries correspond to noise bases ) . 
at each revisit, we can see diagonal lines associating the current image to one of the corresponding first 100 images .[ fig : repeatedvisits](b ) shows a zoomed - in version , in which the initial noisy reconstruction during the first 100 images can also been seen , since at that time there are no valid loop closures present in the dictionary .it can be clearly seen from fig .[ fig : repeatedvisits ] that even in the case of 60 revisit , the proposed method is able to associate the current image to the first occurrence of the bases in the dictionary .this can be attributed to the greedy nature of the -optimizer incorporated in the framework .it chooses the first basis that it can find which has the least reconstruction error according to ( [ equ : l1-unconstrained ] ) , which in this case corresponds to the first occurrence of the basis in the dictionary .the existence of exact basis leads to minimization of the reconstruction error at the first step , hence returning the first occurrence of the corresponding basis .one of the challenges that makes place recognition difficult is illumination variation arising during long - term operation such as transitions from day to night or between different seasons . in order to test the performance of the proposed method under such severe illumination changes , we use the data from the visual place recognition in challenging environments ( vprice ) challenge .the dataset consists of 7778 images from a variety of outdoor environments and under various viewing conditions .the dataset provides both _ memory _ and _ live _ images , and the objective is to find a match for each live image in the memory images .the first part of the dataset contains images acquired from a camera on - board a train , recorded in spring and winter for the same trajectory of the train , an example of which is shown in fig .[ fig : vpriceexample ] . in this test ,we use only the images from the train sequences ( 2289 in memory and 2485 in live ) .the following experiments aim to investigate two aspects of the loop - closure problem : ( i ) the performance of the proposed method in challenging condition against a baseline of nearest - neighbor ( nn ) with exhaustive search , and ( ii ) the effect of the parameter on the sparsity of the solution .+ + + to that end , we use two representations as mentioned earlier : the down - sampling raw images and the gist descriptors . each frame in _ memory _is used to populate the dictionary and then a match is search for each _ live _ image . note that no temporal or geometric consistency is applied and all decisions are taken just on the current image .similarly , for each live image , we find the nearest neighbour match using an _ exhaustive _ search over _ all _ the memory images given the representation .the results are shown in fig .[ fig : norlandresults ] .it can be seen that the proposed method provides better precision and recall in almost all the cases , and especially , the recall degrades more gracefully compared to the nn method . as opposed to the previous results , the threshold in fig .[ fig : norlandresults ] varies from to so that the results from the nn can be shown as well .[ fig : norlandresults ] also shows that a smaller ( a less sparse solution ) leads to a higher precision but lower recall and vice versa , which agrees with the results in fig .[ fig : varying - tau ] . 
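The nearest-neighbour baseline used in this comparison is a plain exhaustive search; a minimal version is shown below (memory vectors are assumed to be stored as columns of a matrix, matching the dictionary layout used elsewhere in this paper).

```python
import numpy as np

def nn_match(memory, live_vec):
    """Exhaustive nearest-neighbour search over all memory columns."""
    d = np.linalg.norm(memory - live_vec[:, None], axis=0)
    j = int(np.argmin(d))
    return j, float(d[j])
```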
.black vertical bars represent max , mean and min for each value of , while the blue bar spans mean with two standard deviations .the dictionary contains over 2000 vectors for this dataset . , scaledwidth=70.0% ] another interesting aspect of the problem is the change in sparsity by varying .we report percentage of the number of nonzeros ( nnz ) for different values of in fig .[ fig : sparsitynnz ] for the experiments presented in fig .[ fig : norlandresults ] . as the value of increases ,the solution become more sparse but even for smaller values the sparsity is a very small percentage of the number of vectors in the dictionary .the nn - based method , on the other hands , has all non - zeros entries corresponding to the number of vectors in the dictionary .these results imply that a sparse solution can be constructed even for small values of and the solutions is still much sparser as compared to the nn solution .while the problem of loop closure has been well studied in visual navigation , motivated by the sparse nature of the problem ( i.e. , only a small subset of past images actually close the loop with the current image ) , in this work , we have for the first time ever posed it as a sparse convex -minimization problem .the _ globally optimal _ solution to the formulated convex problem , by construction , is _ sparse _, thus allowing efficient generation of loop - closing hypotheses .furthermore , the proposed formulation enjoys a _ flexible _ representation of the basis used in the dictionary , with _ no _ restriction on how the images should be represented ( e.g. , what descriptors to use ) .provided any type of image vectors that can be quantified with some metric to measure the similarity , the proposed formulation can be used for loop closing .extensive experimental results have validated the effectiveness and efficiency of the proposed algorithm , using either the whole raw images as the simplest possible representation or the high - dimensional descriptors extracted from the entire images including feature from deep neural networks .we have also shown empirically how the design parameters effect the performance of our method , and in general , the proposed approach is able to efficiently detect loop closing for real - time applications .the quality of loop closure depends on the type of descriptor employed for the task .raw images do not provide view - point or illumination invariance . for detecting loop closures in drastically different illumination conditions such as day and night, the problem is reduced to finding a suitable descriptor and then the proposed framework can be employed .we currently use a single threshold to control the loop - closure hypotheses , which guarantees a globally unique hypothesis .however , in the case of multiple revisits to the same location , this hard thresholding would prevent detecting any loop closures and the revisits would be simply considered as perceptual aliasing , which is conservative but loses information . 
in the future, we will investigate different ways to address this issue .for example , as mentioned earlier , we can sum up the contributions of basis vectors if a loop has already been detected between them and thus ensure that multiple visits lead to more robust detection of loop closures .nevertheless , this has not been a major issue in our tests ; as shown in fig .[ fig : newcollege ] and fig .[ fig : repeatedvisits ] , the proposed algorithm is capable of detecting loops at different revisits , even in the worst case scenario . as briefly mentioned before , the number of basis vectors in the dictionary grows continuously and can prohibit the real - time performance for large - scale problems . to mitigate this issue, one possible way would be to update the dictionary dynamically by checking a novelty factor in terms of how well the current image can be explained by the existing dictionary , which is akin to adding `` key frames '' in visual slam .this work was partially supported by the mineco - feder project dpi2015 - 68905-p , by the research grant bes-2010 - 033116 , by the travel grant eebb - i-13 - 07010 , by the onr grants n00014 - 10 - 1 - 0936 , n00014 - 11 - 1 - 0688 and n00014 - 13 - 1 - 0588 , by the nsf awards iis-1318392 and iis-15661293 , and by the dtra award hdtra 1 - 16 - 1 - 0039 . c. j. cannell and d. j. stilwell . a comparison of two approaches for adaptive sampling of environmental processes using autonomous underwater vehicles . in _mts / ieee oceans _ , pages 15141521 , washington , dc , dec . 1923 , 2005 .j. j. casafranca , l. m. paz , and p. pinies . factor graph slam : going beyond the norm . in _robust and multimodal inference in factor graphs workshop , ieee international conference on robots and automation , ( icra ) _ , karlsruhe , germany , 2013 .j. j. casafranca , l. m. paz , and p. pinies .a back - end norm based solution for factor graph slam . in _ieee / rsj international conference on intelligent robots and systems ( iros ) _ , pages 1723 , tokyo , japan , nov . 38 , 2013 .n. dalal and b. triggs .histograms of oriented gradients for human detection . in _ieee computer society conference on computer vision and pattern recognition ( cvpr ) _ , volume 1 , pages 886893 , san diego , ca , june 20 - 26 , 2005 . d. l. donoho . for most large underdetermined systems of linear equationsthe minimal 1-norm solution is also the sparsest solution ._ communications on pure and applied mathematics _ , 590 ( 6):0 797829 , 2006 .m. everingham , l. van gool , c. k. i. williams , j. winn , and a. zisserman .the pascal visual object classes challenge 2012 ( voc2012 ) results .http://www.pascal-network.org/challenges/voc/voc2012/workshop/index.html , 2012 .a. geiger , p. lenz , and r. urtasun .are we ready for autonomous driving ?the kitti vision benchmark suite . in _computer vision and pattern recognition ( cvpr ) , 2012 ieee conference on _ , pages 33543361 .ieee , 2012 .v. kumar b g , g. carneiro , and i. reid .learning local image descriptors with deep siamese and triplet convolutional networks by minimising global loss functions . in _ the ieee conference on computer vision and pattern recognition ( cvpr )_ , june 2016 .y. latif , c. cadena , and j. neira .robust graph slam back - ends : a comparative analysis . in _ intelligent robots and systems ( iros 2014 ) , 2014 ieee / rsj international conference on _ , pages 26832690 .ieee , 2014 .j. h. lee , g. zhang , j. lim , and i. h. suh .place recognition using straight lines for vision - based slam . 
in _robotics and automation ( icra ) , 2013 ieee international conference on _ , pages 37993806 .ieee , 2013 .j. h. lee , s. lee , g. zhang , j. lim , w. k. chung , and i. h. suh .outdoor place recognition in urban environments using straight lines . in _2014 ieee international conference on robotics and automation ( icra ) _ ,pages 55505557 , may 2014 .m. milford and g. wyeth .: visual route - based navigation for sunny summer days and stormy winter nights . in _ ieee international conference on robotics and automation ( icra ) _ , pages 16431649 , st .paul , mn , may 1418 , 2012 . d. nister and h. stewenius .scalable recognition with a vocabulary tree . in _computer vision and pattern recognition , 2006 ieee computer society conference on _ , volume 2 , pages 21612168 , 2006 .doi : 10.1109/cvpr.2006.264 .e. rosten and t. drummond .fusing points and lines for high performance tracking . in _ieee international conference on computer vision ( iccv ) _ , volume 2 , pages 15081515 , beijing , china , oct .17 - 20 , 2005 .h. sugiyama , t. tsujioka , and m. murata .collaborative movement of rescue robots for reliable and effective networking in disaster area . in _ international conference on collaborative computing : networking , applications and worksharing _ , san jose , ca , dec . 1921 , 2005 .n. snderhauf , f. dayoub , s. shirazi , b. upcroft , and m. milford . on the performance of convnet features for place recognition . in _ proc .ieee / rjs int .conference on intelligent robots and systems _, 2015 .n. sunderhauf , s. shirazi , a. jacobson , f. dayoub , e. pepperell , b. upcroft , and m. milford .place recognition with convnet landmarks : viewpoint - robust , condition - robust , training - free ._ proceedings of robotics : science and systems xii _ , 2015 . h. zhang , f. han , and h. wang . robust multimodal sequence - based loop closure detection via structured sparsity . in _ proceedings of robotics : science and systems _ , annarbor , michigan , june 2016 .doi : 10.15607/rss.2016.xii.043 .
It is essential for a robot to be able to detect revisits or _loop closures_ for long-term visual navigation. A key insight explored in this work is that the loop-closing event inherently occurs sparsely, i.e., the image currently being taken matches only a small subset (if any) of previous images. Based on this observation, we formulate the problem of loop-closure detection as a _sparse, convex_ ℓ1-minimization problem. By leveraging fast convex optimization techniques, we are able to efficiently find loop closures, thus enabling real-time robot navigation. This novel formulation requires no offline dictionary learning, as required by most existing approaches, and thus allows _online, incremental_ operation. Our approach ensures a _unique_ hypothesis by choosing only a single globally optimal match when making a loop-closure decision. Furthermore, the proposed formulation enjoys a _flexible_ representation with _no_ restriction imposed on how images should be represented, while requiring only that the representations are "close" to each other when the corresponding images are visually similar. The proposed algorithm is validated extensively using real-world datasets.
quantum fourier transform ( qft ) plays essential roles in various quantum algorithms such as shor s algorithms and hidden subgroup problems .inspired by the exponential speed - up of shor s polynomial algorithm for factorization , many people investigated the problem of efficient realization of qft in a quantum computer .up to now , many improvements have been made . in ,moore and nilsson showed that qft can be parallelized to linear depth in a quantum network , and upper bound of the circuit depth was obtained by cleve and watrous for computing qft with a fixed error . in the actual time - cost for performing qft in the quantum network was examined .further , blais designed an optimized quantum network with respect to time - cost for qft . in practice ,the decoherence problem induced by the unavoidable coupling of quantum system with the environment have to be considered in circuit design for qft over a quantum network .if no measure is taken , decoherence will destroy the encoded quantum information .many methods have been proposed to suppress decoherence in a quantum system , among which , an important scheme is to encode the quantum information into the decoherence - free subspaces or subsystems ( dfs ) of quantum system .theoretically , dfss are completely isolated from the noises .a large amount of discussions about dfs have appeared in the literature . in this paper, we will take advantage of the decoherence - free subspaces to develop a novel scheme for performing qft in a quantum computer .the circuits designed in this way have the robustness against noise in the procedure of implementing qft .the paper is organized as follows : in sec.ii , some notations and the preliminary knowledge on dfs and qft will be reviewed ; in sec.iii , general method will be introduced for implementing qft in dfs ; in sec.iv , circuits will be designed to perform qft in the dfs of a quantum network with respect to weak collective decoherence ( wcd ) and strong collective decoherence ( scd ) respectively ; in sec.v , the efficiency of the circuit and possible improvements will be discussed ; finally , a conclusion will be made in sec.vi .suppose the quantum system * s * under consideration is coupled to an environment * e*. the overall system is governed by the hamiltonian in the form of : where are operators acting on the state space of * s*(*e * ) , and the index set contains all the possible couplings between the system and the environment .assume span a -closed associate algebra . according to , is isomorphic to a direct sum of complex matrix algebras , each with multiplicity where the index set labels all the irreducible components of .correspondingly , the system hilbert space can be decomposed into a similar form all the subsystem spaces in the right hand side of eq.([eq3 ] ) correspond to decoherence - free subsystems of the quantum system * s*. particularly , gives a decoherence - free subspace of the quantum system * s * when .quantum network under collective decoherence ( cd ) provides a nice paradigm for the dfss . roughly speaking , all qubits of a quantum network under cdare coupled to the environment in the same manner . in the literature ,two types of cd , weak collective decoherence ( wcd ) and strong collective decoherence ( scd ) , are frequently discussed .scd is defined as the decoherence due to the interaction hamiltonian where , and represents the pauli matrix that corresponds to the local operation on the qubit .if only one term appears in the right hand side of eq.(4 ) , i.e. 
the system is coupled to the environment only in one direction , the induced decoherence is called wcd . without loss of generality , the hamiltonian can be written as next , we give a brief description of qft implemented over an -qubit quantum network .mathematically , the quantum fourier transformation can be expressed as : denote the state of the quantum network by the qubit string in which the qubit is at the state .the transformation can be realized by applying the following sequence of quantum gates ( all the gate sequences in this paper are operated from the right to left one by one ) where eq.([eq8 ] ) includes two classes of elementary quantum gates , and . the local hadamard gate represents \ ] ] over qubit .the controlled - phase - shift gate represents the action \ ] ] over ( control ) and ( target ) qubits .in addition , there are three important elementary gates that will be used in this paper .the controlled - not gate flips the qubit ( target qubit ) when the qubit ( control qubit ) is at the state , and nothing is done when the control qubit is at the state .the rotation gate realizes the unitary transformation over qubit : realizes the rotation over qubit ( target qubit ) when the state on the qubit ( control qubit ) is , and nothing is done when the state on the qubit ( control qubit ) is .more details about dfs and qft can be found in ref . and the references therein .in the following parts of this paper , we focus the study on implementing qft over decoherence - free subspaces .if not claimed , the abbreviation dfs will indicate only the decoherence - free subspace .suppose is an dimensional dfs of the quantum system * s * , then one can select } ] new qubits . for clarity ,we call these qubits logical - qubits , and the original qubits physical - qubits . next , we will discuss how to realize a robust qft algorithm over these logical - qubits .similar to ( [ eq6 ] ) , we define the qft over a dfs by are basis states of the logical - qubits . the basic idea for realizing is as follows .notice that the two classes of gates and play the central roles in the qft , we will construct correspondingly two similar classes of quantum gates for implementing qft in a dfs , denoted by one - logical - qubit gate acting on the logical - qubit and two - logical - qubit gate acting on the and logical - qubits respectively .these two classes of gates should fulfil two requirements : one is that they are invariant operators on the state space of the logical - qubits ; the other is that they operate on the logical - qubits in the same manner as and on the physical - qubits .similar to the general qft described in eq.([eq6 ] ) , we can realize by applying the following sequence of quantum gates where gate sequence ( [ eq13 ] ) provides us the general strategy for designing a circuit to implement qft over the dfs in a quantum system .concretely , let be orthonormal states in the dfs , and be orthonormal states in the orthogonal complementary space of .then between and the natural basis of space there exists an unitary transformation , i.e. where let be an integer no greater than .here we choose to construct logical - qubits for performing m - qubit qft over the dfs , and rewrite them as then similarly , we have and where . 
from eqs.([eq19]-[eq21 ] ) , we can see that if can be constructed by elementary gates , then and are feasible realizations for the two gates and .thus the realization of the unitary transformation is crucial for building the circuits to implement qft in a dfs .the remainder tasks , then , are to find the transformation in eq.([eq15 ] ) and build a circuit to realize it . from the theory of universal quantum computation , any unitary operator can be constructed by a sequence of universal elementary gates . in most casesit is not easy to obtain such explicit decompositions .whereas , as will be shown in the next section , it is possible to build up a circuit for over the quantum network under collective decoherence with a finite number of elementary gates .in the quantum networks under wcd , nontrivial dfs exists only when the original network has no less than two physical - qubits . for the simplest case ,the dfs in a two - qubit quantum network under wcd is spanned by the orthonormal states and , with which one can build up one logical - qubit , i.e. and for a -qubit quantum network under wcd , we use the orthonormal states , where represents the logical - qubit extracted from the and physical - qubits , to construct the circuit for robust qft .it can be verified that all these states are contained in the biggest dfs . over these logical - qubits ,it is observed that and can be directly constructed from a sequence of elementary gates as follows ( the circuits are given in fig.[fig1 ] and fig.[fig2 ] ) : in quantum network under wcd .the element with corresponds to a controlled - not gate with control on the filled circle and target on the .( in this paper , all the different logical - qubits are labelled by numbers in the first column of the figures , while the individual physical - qubits are labelled by the numbers in the second column . ) ] and over the and logical - qubits in quantum network under wcd . ]being able to perform the gates and introduced above with elementary gates , one can now integrate the circuit for an n - qubit qft over a 2n - physical - qubit quantum network under wcd .the transformation can be realized by replacing the and in the gate sequence ( [ eq13 ] ) with those in eqs.([eq24 ] ) and ( [ eq25 ] ) .let .observing that the term commutes with when and commutes with when or , we have and therefore , we can choose as the unitary transformation in eq.([eq15 ] ) : consider the three - qubit qft as a simple example , the transformation can be realized by applying and in the sequence as follows ( see the circuit in fig.[fig3 ] ) : ; the gates and are those given in fig.1 and fig.2 . ]it is more complicated to design the circuit for qft over quantum networks under scd than wcd .the corresponding condition for the existence of a dfs is more critical .quantum network with four physical - qubits is of the smallest scale to ensure the existence of a nontrivial dfs , which is spanned by two orthonormal states naturally , and form one logical - qubit . by dividing the physical qubits into 4-qubit units, one can use the canonical basis , where represents the logical - qubit extracted from the to physical - qubits , to construct logical - qubits in a -physical - qubit quantum network under scd . to perform qft over the logical - qubits obtained above , it is still crucial to design the circuits for the corresponding two classes of gates and . 
herewe directly give the form of unitary transformation in eq.([eq15 ] ) , then the gates and are obtained according to sec.iii .let be an unitary transformation on the physical - qubits from to , which is realized by applying the sequence of elementary gates as follows ( see the circuits for the transformation and its inverse in fig.[fig4 ] ) : where , , .then , in a -physical - qubit quantum network , one of the feasible realization of the transformation is : corresponding to the case that implementing qft in a quantum network under scd .( b ) circuit for the inverse transformation . ] with the help of the unitary transformation , the fundamental gates and for performing n - qubit qft over the dfs of a 4n - qubit quantum network under scd are easy to be obtained(the corresponding circuits are given in fig.([fig5 ] ) and fig.([fig6 ] ) respectively ) : where the gates and satisfy the requirements given section iii : and over the logical - qubit in quantum network under scd . ] over the and logical - qubits in quantum network under scd . ] the circuit for performing qft in the dfs of quantum network is constructed by substituting the operators and into the gate sequence in ( [ eq13 ] ) .the encoding efficiency of quantum algorithms over the dfs of an -qubit quantum network , say , is defined as the ratio of the number of logical - qubits to that of physical - qubits .the efficiency depends on the selection of dfs and the way of building logical - qubits . from section iii , it is obvious that the maximum encoding efficiency is : }{n}.\ ] ] it has been derived in that the efficiency of the quantum network under collective decoherence approaches to 1 when . for the circuits we designed for qft over the quantum network under collective decoherence , the encoding efficiency for wcd and for scd .therefore , it is possible to design a more efficient circuit for realizing qft over the dfs of some quantum network under collective decoherence .however , our circuits are scalable for they are relatively easy to be realized for large scale robust qft over quantum networks .consequently , there is a trade - off between the encoding efficiency and circuit complexity .for example , if we want to implement -qubit qft in a dfs of some quantum network under collective decoherence , then at least \geq m\}\ ] ] physical - qubits are required .the corresponding circuit for qft over this -qubit quantum network is the most efficient , but it will become much more complicated in using more elementary gates . the circuit design will be a formidable task .in this paper , strategies for performing qft in a quantum network coupled with the environment are discussed .we propose a scheme for noise - isolated qft over the decoherence - free subspaces . following the scheme , circuits for implementing qftare designed in quantum network under collective decoherence .the efficiency of these circuits and some possible improvements are discussed as well . in the future, a general designing methodology needs to be found for more efficient qft over arbitrary quantum network .also , it is worthwhile to reduce the number of elementary gates using in the relevant quantum circuits .moreover , it is interesting and useful to extend the problem from the decoherence - free subspaces to decoherence - free subsystems .
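As a numerical reference point for the unencoded gate sequence of Sec. II (the Hadamard and controlled-phase decomposition in eqs. (6)-(8)), the following sketch builds the n-qubit QFT operator from those elementary gates and checks it against the DFT matrix. The bit-ordering convention, the final bit-reversal (swap network), and the use of NumPy are standard textbook choices and assumptions of this sketch; it does not reproduce the DFS-encoded circuits of Figs. 1-6, whose logical gates play the roles of the H and controlled-phase operations used here.

```python
import numpy as np

def op_on_qubit(gate, q, n):
    """Embed a single-qubit gate on qubit q (qubit 0 is the most significant bit)."""
    return np.kron(np.kron(np.eye(2**q), gate), np.eye(2**(n - q - 1)))

def controlled_phase(c, t, phase, n):
    """Diagonal controlled-phase gate: multiply by exp(i*phase) when qubits c and t are both 1."""
    N = 2**n
    diag = np.ones(N, dtype=complex)
    for x in range(N):
        if (x >> (n - 1 - c)) & 1 and (x >> (n - 1 - t)) & 1:
            diag[x] = np.exp(1j * phase)
    return np.diag(diag)

def bit_reversal(n):
    """Permutation that reverses the qubit order (the final swap network)."""
    N = 2**n
    P = np.zeros((N, N))
    for x in range(N):
        r = int(format(x, f'0{n}b')[::-1], 2)
        P[r, x] = 1.0
    return P

def qft_from_gates(n):
    """n-qubit QFT built from Hadamards and controlled phase shifts."""
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    U = np.eye(2**n, dtype=complex)
    for j in range(n):
        U = op_on_qubit(H, j, n) @ U
        for m in range(j + 1, n):
            U = controlled_phase(m, j, 2 * np.pi / 2**(m - j + 1), n) @ U
    return bit_reversal(n) @ U

n = 3
N = 2**n
x, y = np.meshgrid(np.arange(N), np.arange(N), indexing='ij')
F = np.exp(2j * np.pi * x * y / N) / np.sqrt(N)     # the transform defined in eq. (6)
print(np.allclose(qft_from_gates(n), F))            # True
```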
quantum fourier transform is of primary importance in many quantum algorithms . in order to eliminate the destructive effects of decoherence induced by couplings between the quantum system and its environment , we propose a robust scheme for quantum fourier transform over the intrinsic decoherence - free subspaces . the scheme is then applied to the circuit design of quantum fourier transform over quantum networks under collective decoherence . the encoding efficiency and possible improvements are also discussed .
in radio astronomy , a pulsar s mean polarimetric pulse profile is measured by averaging the observed stokes parameters as a function of pulse longitude . by integrating many well - calibrated pulse profiles , a standard profile with high signal - to - noise ratio ( snr ) may be formed and used as a template to which individual observations are fit .for example , in appendix a of taylor ( 1992 ) , a method is presented for modeling the relationship between standard and observed total intensity profiles in the fourier domain . in the current treatment , the scalar equation that relates two total intensity profilesis replaced by an analogous matrix equation , which is expressed using the jones calculus .the polarization of the electromagnetic field is described by the coherency matrix , , where is the total intensity , is the stokes polarization vector , is the identity matrix , and are the pauli spin matrices ( britton 2000 ) . under a linear transformation of the electric field vector as represented by the jones matrix , , the coherency matrix is subjected to the congruence transformation , ( hamaker 2000 ) .let the coherency matrices , , represent the observed polarization as a function of discrete pulse longitude , , where and is the number of pulse longitude intervals .each observed polarimetric profile is related to the standard , , by the matrix expression , where is the dc offset between the two profiles , represents the system noise , is the polarimetric transformation and is the longitudinal shift between the two profiles .the jones matrix , , is analogous to the gain factor , , in equation ( 1 ) of taylor ( 1992 ) .however , as has seven non - degenerate degrees of freedom , the matrix formulation introduces six additional free parameters . the discrete fourier transform ( dft ) of equation ( [ eqn : model ] ) yields where is the discrete pulse frequencygiven the measured stokes parameters , , and their dfts , , the best - fit model parameters will minimize the objective merit function , |^2 \over \varsigma_k^2 } , \label{eqn : merit}\ ] ] where is calculated from the noise power and is the matrix trace . as in van straten ( 2004 ) , the partial derivatives of equation ( [ eqn : merit ] ) are computed with respect to both and the seven parameters that determine .the levenberg - marquardt method is then applied to find the parameters that minimize .dual - polarization observations of psrj0437 were made using the parkes multibeam receiver and cpsr - ii , the 128mhz baseband recording and real - time processing system at the parkes observatory .the data were observed during seven separate sessions between 5 june and 21 september 2003 , calibrated using the method described in van straten ( 2004 ) , and integrated to produce the polarimetric standard shown in figure 1 .[ fig : std ] in any high - precision pulsar timing experiment , the confidence limits placed on the derived physical parameters of interest are proportional to the precision with which pulse time - of - arrival ( toa ) estimates can be made . 
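As a minimal illustration of the frequency-domain fit, the sketch below implements the scalar (total-intensity) analogue of the template-matching procedure: the observed profile is modeled as a scaled, shifted copy of the standard plus a DC offset, and the shift is found by minimizing the residual power over pulse frequency. The profile shape, noise level, and use of SciPy's bounded scalar minimizer are illustrative assumptions; in the matrix formulation above, the scalar spectra become coherency matrices, the scalar gain becomes the Jones matrix J, and all seven of its parameters are fitted jointly with the shift by the Levenberg-Marquardt method.

```python
import numpy as np
from scipy.optimize import minimize_scalar

N = 1024
bins = np.arange(N)
template = np.exp(-0.5 * ((bins - 300.0) / 12.0) ** 2)   # standard profile
S = np.fft.rfft(template)
k = np.arange(len(S))

true_shift = 7.37                                        # longitudinal shift in bins
clean = np.fft.irfft(S * np.exp(-2j * np.pi * k * true_shift / N), n=N)
rng = np.random.default_rng(1)
obs = 1.8 * clean + 0.02 * rng.standard_normal(N)        # scaled, noisy observation
D = np.fft.rfft(obs)

def chi2(tau):
    rot = D * np.exp(2j * np.pi * k * tau / N)           # undo the trial shift
    num = np.sum((rot[1:] * np.conj(S[1:])).real)        # skip k = 0: the DC offset is free
    b = num / np.sum(np.abs(S[1:]) ** 2)                 # best-fit gain for this trial shift
    return np.sum(np.abs(rot[1:] - b * S[1:]) ** 2)

# Coarse estimate from the peak of the circular cross-correlation, then a bounded refinement.
xcorr = np.fft.irfft(D * np.conj(S), n=N)
coarse = int(np.argmax(xcorr))
res = minimize_scalar(chi2, bounds=(coarse - 1, coarse + 1), method='bounded')
print(f"estimated shift: {res.x:.3f} bins (true shift {true_shift})")
```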
aside from typical observational constraints such as system temperature , instrumental bandwidth , and allocated time , toa precision also fundamentally depends upon the physical properties of the pulsar , including its flux density , pulse period , and the shape of its mean pulse profile .when fully resolved , sharp features in the mean pulse profile generate additional power in the high frequency components of its fourier transform .as higher frequencies contribute stronger constraints on the linear phase gradient in the last term of equation ( [ eqn : fourier_rho ] ) , sharp profile features translate into greater arrival time precision .this important property may be exploited in order to significantly improve the precision of arrival time estimates derived from full polarimetric data . as noted by kramer et al .( 1999 ) , the mean profiles of stokes , , and may contain much sharper features than that of stokes , especially when the pulsar exhibits transitions between orthogonally polarized modes . although the polarized flux density is lower than the total intensity , these transitions lead to a greater snr in the high frequency components of the polarized power spectra , as shown in figure 2 . [fig : snr ] to demonstrate the potential for improved arrival time precision , toa estimates spanning over 100 days were derived from the 490 five - minute integrations used to produce the standard plotted in figure 1 .the arrival times derived from the mean total intensity profile have a post - fit residual r.m.s . of 198ns .by modeling the polarimetric pulse profile as described in section 1 , the resulting arrival times have a post - fit residual r.m.s . of only 146ns , an improvement in precision of approximately 36% .based on the assumption that the mean polarimetric pulse profile does not vary significantly with time , the standard profile and modeling method may also be used to determine the polarimetric response of the observatory instrumentation at other epochs . as a demonstration , a single , uncalibrated, five - minute integration of psrj0437 was fitted to the polarimetric standard shown in figure 1 .the model was solved independently in each of the 128 frequency channels , producing the instrumental parameters shown with their formal standard deviations in figure 3 . in each 500khz channel , it is possible to estimate the ellipticities and orientations of the feed receptors with an uncertainty of only one degree .this unique model of the instrumental response may be used to calibrate observations of other point sources .[ fig : fit ]when compared with the scalar equation used to model the relationship between total intensity profiles , the matrix equation presented in section 1 quadruples the number of observational constraints while introducing only six additional free parameters . by completely utilizing all of the information available in meanpolarimetric pulse profiles , arrival time estimates may be obtained with greater precision than those derived from the total intensity profile alone .in addition , the modeling method may be used to uniquely determine the polarimetric response of the observatory instrumentation using only a short observation of a well - known source .the swinburne university of technology pulsar group provided the cpsr - ii observations presented in this poster .the parkes observatory is part of the australia telescope which is funded by the commonwealth of australia for operation as a national facility managed by csiro .britton , m. c. 
, 2000, ApJ, 532, 1240
Hamaker, J. P., 2000, A&AS, 143, 515
Kramer, M., Doroshenko, O., & Xilouris, K. M., 1999, poster presented at "Pulsar Timing, General Relativity and the Internal Structure of Neutron Stars"
van Straten, W., 2004, ApJS, 152, _in press_
Taylor, J. H., 1992, Phil. Trans. R. Soc. A, 341, 117
a new method is presented for modeling the transformation between two polarimetric pulse profiles in the fourier domain . in practice , one is a well - determined standard with high signal - to - noise ratio and the other is an observation that is to be fitted to the standard . from this fit , both the longitudinal shift and the polarimetric transformation between the two profiles are determined . arrival time estimates derived from the best - fit longitudinal shift are shown to exhibit greater precision than those derived from the total intensity profile alone . in addition , the polarimetric transformation obtained through this method may be used to completely calibrate the instrumental response in observations of other sources .
quality requirements are usually seen as part of the _ non - functional _ requirements of a system .those non - functional requirements describe properties of the system that are not its primary functionality .`` think of these properties as the characteristics or qualities that make the product attractive , or usable , or fast , or reliable . '' although this notion of non - functional requirements is sometimes disputed , there always exist requirements that relate to specific qualities of the system .we call those demands _ quality requirements_. quality requirements are an often neglected issue in the requirements engineering of software systems .a main reason is that those requirements are generally difficult to express in a measurable way what also makes them difficult to analyse .one reason probably lies in the fact that quality itself `` [ ] is a complex and multifaceted concept . '' it is difficult to assess and thereby also the definition of quality requirements is a complex task . especially incorporating the various aspects of all the stakeholdersis often troublesome .hence , the problem is how to elicit and assess quality requirements in a structured and comprehensive way .we propose a 5-step approach for managing quality requirements using a two - dimensional quality model .this quality model uses activities as one dimension and describes the influences of system entities ( and their attributes ) on those activities .these two dimensions can conveniently be used as a structure for quality requirements as well .the stakeholders define the activities they perform on and with the system .they provide the most abstract level for quality requirements .refinements can be made by analysing which system entities are affected by which activities .finally , a direct traceability from the quality assurance to the quality requirements is given by the quality model .this is the case because the model can be used as basis for quality assurance techniques such as reviews .we start with describing related work in quality modelling and eliciting and structuring quality requirements in sec .[ sec : related ] .[ sec : quality_modelling ] introduces activity - based quality models and their advantages over traditional approaches . in sec .[ sec : elicitation ] , we propose an approach to elicit and refine quality requirements based on such activity - based quality models .the relation to assuring the requirements is described in sec .[ sec : assurance ] .then , in sec. [ sec : case_study ] , the approach is validated in a case study .final conclusions are given in sec .[ sec : conclusions ] .various approaches for non - functional requirements have been proposed .the standard ieee std 830 - 1998 considers requirements specifications in general .it concentrates strongly on functional issues and quality requirements play only a minor role .ebert discusses in an approach for managing non - functional requirements .he classifies them in _ user - oriented _ and _ development - oriented _ that gives them a first structure .however , this is still too coarse - grained to be applied fruitfully .also more general approaches such as do not impose a sufficient structure on the quality requirements that foster elicitation or assurance .more structure is provided by the umd approach .however , it focuses mainly on _ issues _ that should be avoided and hence do not provide enough connection to the stakeholders .finally , doerr et al . 
included the use of quality models in their approach to non - functional requirements .however , the used quality models themselves provide no direct connection to the stakeholders .to be able to efficiently manage quality requirements one needs a means to express them in a concise and consistent manner . since the 1970ies a number of _ quality models _ have been proposed to achieve this , as is argued in these approaches have a number of shortcomings .most importantly , they fail to make explicit the interrelation between system properties and the activities carried out on or with the system by the various stakeholders .we regard the omission of activities as a serious flaw as the activities performed on and with the system largely determine the overall life - cycle cost of a software system .moreover , the activities provide a natural criterion for the decomposition of the complex concept _ quality _ that many existing approaches lack . to address these problems , we propose a consequent separation of activities and system entities .this separation facilitates the identification of sound quality criteria and allows to reason about their interdependencies . to illustrate the activity - based quality model we use the quality attribute _maintainability _ that is known to have major influence on the total life - cycle cost of software systems .the 1^st^ dimension of the quality model consists of the activities carried out on or with the system by the various stakeholders . in the case of maintainabilitythe set of relevant activities depends on the particular development and maintenance process of the organisation that uses the quality model , e.g. the ieee 1219 standard maintenance process . as activitiescan be conveniently structured in activities and related sub - activities , the 1^st^ dimension of the model actually forms a tree : the _ activities tree_. the 2^nd^ dimension of the model , the _ entities tree _ describes a decomposition of the _ situation_. we use the term _ situation _ here to express that this tree is not limited to a description of the software system itself but also describes relevant aspects of the organisation that develops the system .this is necessary as organisational aspects like development processes and the provided infrastructure are known to have a major impact on the expected maintenance effort . to achieve or measure maintainability in a given project setting we need to establish the interrelation between entities and activitiesthis relationship is best expressed by a matrix as depicted in the simplified fig .[ fig : matrix ] . ]as the figure shows , to be able to express these relations , one needs to equip the entities with fundamental _ attributes _ like _ consistency , completeness , conciseness _ or _redundancy_. now entities and activities can be put into relation by the identification of _ impacts_. an impact is defined as a relation between an entity / attribute tuple and an activity where expresses a positive and a negative impact .an example in the figure is that expresses that the conciseness of identifier names has a positive influence on the activity _concept location_. the negative impact of superfluous variables on the modification of existing code is expressed as .the example shows that even the apparently simple attribute can be very powerful when we want to state that a proper infrastructure , e.g. 
, a debugger has an influence on specific activities .such a model is well - suited to classify quality requirements as the activities provide a straight - forward relation between the stakeholders , that ultimately define quality requirements , and the software system .for the example of maintainability , the related stakeholder is the developer .his main activity _ maintenance _ can be broken down in more tangible subactivities .this subactivities can then be related to situation entities via basic attributes .in it is presented how such a model can be used for the quality attribute _usability_. the central stakeholder is the user and his core activity _ usage _ can be decomposed in more specific subactivities like _ reading_.they can be related to concrete entities like the fonts used in the user interface .the main approaches to elicit quality requirements are either checking several requirements types and building prototypes or using positive and/or negative scenarios ( use cases and misuse cases ) .although we believe that both approaches are valid , important , and best used in combination , the incorporation of the quality model described in sec .[ sec : quality_modelling ] can improve the result by defining more structure .we use the structure induced by the quality model to elicit and refine the quality requirements .this elicitation and refinement process consists of 5 main steps that should be supported by the established elicitation techniques mentioned above .an overview is shown in fig . [fig : process ] .the steps are strongly oriented at using the two trees contained in the quality model and aim at refining the requirements to quantitative values as far as possible .obviously , the approach is influenced by the availability of a suitable quality model .ideally , an appropriate quality model exists that shows the needed activities and entities .however , this will often not be the case but many activities and the upper levels of the entities tree can usually be reused or found in the literature . then the quality model should be refined in parallel to the requirements .the first step is , similar as in other requirements elicitation approaches , to identify the stakeholders of the software system . for quality requirements ,this usually includes users , developers and maintainers , operators , and user trainers .obviously , other stakeholders can also be relevant for the quality requirements .when the stakeholders have been identified , the quality model can be used to derive the activities they perform on and with the system .for example , the activities for the maintainer include _ concept location _ , _ impact analysis _ , _ coding _ , or _ modification _ .especially the activities of the user can be further detailed by developing usage scenarios . in the next step ,we rank the activities of the relevant stakeholders according to their importance .if not all needed activities are defined in the model , it will be extended accordingly .this results in a list of all activities of the relevant stakeholders . 
on top of this listare the most important activities , the least important at the bottom .importance hereby means the activities that are expected to be performed most often and which are most elaborate .the justification can be given by expert opinion or experiences from similar projects .this list will be used in the following to focus the definition and refinement of the requirements .now , we need to answer the question how well we want the activities to be supported .the answers are in essence qualitative requirements on the software system .for example , if we expect rather complex and difficult concepts in the software because the problem domain already contains many concepts , the activity _ concept location _ is desired to be _these qualitative statements are needed for all the activities .depending on the amount of activities to be considered , the ones at the bottom of the list might be ignored and simply judged with _ do ntcare_. as described in sec .[ sec : quality_modelling ] , the entities tree contains the entities of the software and its environment that are in some way relevant for the quality of the system .the entities tree organises them in a well - defined , hierarchical manner that fits perfectly to the task of refining the requirements elicited based on the activities .the quality model itself is a valuable help for this .it actually captures the influences ( of properties ) of entities on the activities .hence , we only need to follow the impacts the other way round to find the entities that have an influence on a specific activity .if these influences are incomplete in the current model , this step can also be used to improve it .this way , consistency with the later quality assurance is significantly easier to achieve . for the definition of the refined quality requirements ,the attributes defined for the entities can be used .for example , a detailed requirement might be that each object must have a state accessible from outside because this has a positive effect on the _ test _ activity .we refine those higher - level requirements in more detail that have been judged to be important in the last steps . finally , the goal is to have quantitative and hence easily checkable requirements .we can quantify the requirements on the activity - level or on the entity - level . on the activity - level , this would be , for example , that an average _ modification _ activity should take 4 person - hours to complete. requirements on the entity - level might be quantitatively assessable ( cf . ) depending on the attribute concerned .for example , _ needless code variables _ can be counted and hence an upper limit can be given .we assume that it is theoretically possible for any requirement to define it quantitatively . yet, it is not always feasible in practice as either there is no known decomposition of the requirement or it is considered too elaborate .the model acts as a central knowledge base for the quality - related relationships in the product and process .therefore , it is also a well - suited basis for assuring that the defined quality requirements have been fulfilled .[ fig : assurance ] shows how the model can be used in several ways for constructive as well as analytical quality assurance ( qa ) .constructive qa is supported by the automatic generation of _ quality guidelines _ from the model .these guidelines define what _developers _ should do and what they should not do in order to meet the quality requirements expressed by the model . 
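A minimal sketch of how such a model could be represented and queried is given below; the impact entries are taken from the maintainability examples discussed earlier, while the data-structure layout, the assessability flags, and the helper names are illustrative assumptions rather than the authors' tooling. Following the impacts backwards yields the entities that influence a given activity, and filtering on manually assessable impacts yields review-checklist entries.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Impact:
    entity: str
    attribute: str
    activity: str
    positive: bool       # positive or negative influence on the activity
    assessable: str      # "auto", "semi-auto", or "manual"

MODEL = [
    Impact("identifier names", "conciseness",     "concept location", True,  "manual"),
    Impact("variables",        "superfluousness", "modification",     False, "auto"),
    Impact("debugger",         "existence",       "modification",     True,  "manual"),
]

def entities_influencing(activity):
    """Follow the impacts backwards: which entity/attribute pairs affect an activity?"""
    return [(i.entity, i.attribute, "+" if i.positive else "-")
            for i in MODEL if i.activity == activity]

def review_checklist(activity=None):
    """Checklist items restricted to impacts that require manual evaluation."""
    return [f"Check {i.attribute} of {i.entity} (impacts {i.activity})"
            for i in MODEL
            if i.assessable == "manual" and (activity is None or i.activity == activity)]

print(entities_influencing("modification"))
for item in review_checklist():
    print("-", item)
```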
for analytic qa , the _ quality engineer _, uses manual _ reviews _ as well as the _ quality reports _ generated by _ quality analysis tools _ to evaluate if quality requirements are satisfied . like the guidelines , _ checklists _ that support the manual reviewsare automatically generated from the model . to be used efficiently ,review checklists are required to be as short as possible .as we annotate the properties in the model , whether they are automatically , semi - automatically , or only manually assessable , review checklists can be limited to issues that require manual evaluation and thereby be kept as concise as possible .checklist length can be further limited by selecting only the subset of the quality model that is relevant for the artefact type being reviewed .quality analysis tools like coding convention checkers or our assessment toolkit conqat report their results with respect to the entities defined in the model .hence , the quality engineer can give concrete instructions to developers to correct quality issues .the results of quality analysis tools that require the execution of the system , e.g. usability tests , do usually not provide such a direct relation to the entities .however , the model supports the identification of entities responsible for quality defects via the explicitly stored relations between entities and activities . ]we show the applicability of our approach in an automotive case study .daimlerchrysler published a sample system specification of an instrument cluster .the instrument cluster is the system behind a vehicle s dashboard controlling the rev meter , the speedometer , indicator lights , etc .the specification is strongly focused on functional requirements but also contains various `` business '' requirements that consider quality aspects .the functional requirements are analysed in more detail in .we mainly look at the software requirements but also at how the software influences the hardware requirements .we can identify two stakeholders for the quality requirements stated in .the relevant requirements are mainly concerned with the user of the system , i.e. , the _ driver_. he needs to have a good view on all information , relevant information needs to be given directly and his safety has to be ensured . to derive the corresponding activities , we can use the quality model for usability described in .it contains a case study about the iso 15005 that defines ergonomic principles for the design of transport information and control systems ( tics ) .the instrument cluster is one example of such systems .hence , the identified activities can be used here .the distinction on the top level is in _ driving _ and _ tics dialog_. the former describes the activity of controlling the car in order to navigate and manoeuvre it .examples are steering , braking , or accelerating .the latter means the actual use of a tics system .it is divided into : ( 1 ) _ view _ , ( 2 ) _ perception _ , ( 3 ) _ processing _ , and ( 4 ) _input_. this level of granularity is sufficient to describe quality related relationships .the second important stakeholder is the manufacturer of the vehicle , the _ oem_. 
the concern is mainly in two directions : ( 1 ) reuse of proven hardware from the last series and ( 2 ) power consumption .the former is an oem concern because it allows decreased costs and ensures a certain level of reliability which in turn also reduces costs by less defect fixes .the power consumption is typically an important topic in automotive development because of the high amount of electronic equipment that needs to be served .hence , to avoid a larger or a second battery and thereby higher costs , the power consumption has to be minimised . therefore , the relevant activities of the oem are ( 1 ) _ system integration _ in which the software is integrated with the hardware and ( 2 ) _ defect correction _ which includes callbacks as well as repairs because of warranty .the above identified activities of the two relevant stakeholders need now be ranked according to their importance for those stakeholders .the decisive view is obviously the one from the payer , in this case the _oem_. only legal constraints can have a higher priority .although we do not know how daimlerchrysler would prioritise these activities , we assume that usually the safety of the driver should have the highest priority . hence , the _ driving _ activity is ranked above _ defect correction _ and _system integration_. the rationale for ranking _ defect correction _ higher than _ system integration _ is that the former is is extremely expensive , especially in case the system is in the field already .this is partly backed up by the commonly known fact that it is the more expensive to fix a defect , the later it is detected .the complete ranking is also shown in tab .[ tab : activities ] . having identified and prioritised the activities that are performed on and with the system , they can be used to define the requirements qualitatively .this way , the requirements are elicited and some requirements may not be possible to give in finer detail . for the case study ,we analyse the `` business requirements '' and their rationales ( if available ) from to derive qualitative ratings .the ratings for the activities are summarised in tab .[ tab : activities ] ..qualitatively defined requirements for the prioritised activities [ tab : activities ] [ cols="<,<",options="header " , ] for all parts of the instrument cluster , it is wanted that _ driving _ is still comfortable .the _ driver _ should not be distracted in the _ driving _ activity .hence , it must be safe .the most information can not surprisingly be found about the _ tics dialog _ itself .it is often stated that it should be possible to obtain information and that the dialog should be attractive for the _ driver_. the information displayed needs to be correct , current , accurate , and authentic . in general , it is also stated that the dialog needs to be `` well - known '' what we called `` traditional '' but it must also improve over the current systems .the _ defect correction _ should be minimal with a high robustness and life - span . finally , _ system integrationshould have minimal hardware requirements and use existing hardware components .it should also be able to use different hardware , especially in the case of different radio vendors .we can again use the quality model from for most of the entities tree .it provides a decomposition of the _ vehicle _ into the _ driver _ and _ tics_. the _ tics _is further divided into _ hardware _ and _ software_. 
the software is decomposed based on an abstract architecture of user interfaces from as depicted in fig .[ fig : architecture ] .the hardware is divided into operating devices , indicators / display , and the actual tics unit .the quality model gives us also the connection from activities to those entities .it shows which entities need to be considered w.r.t .the activities of the stakeholders that are important for our instrument cluster .we can not describe this completely for reasons of brevity but give some examples for refinements using the entities tree .for the _ driving _ activity , we have a documented influence from the the hardware , for example .more specifically , the appropriateness of the position of the display has an influence on _ driving_. in our more formal notation that is : hence , in order to reach the qualitative goals for the _ driving _ activity , we need to ensure that the display position is appropriate .a second example starts from the _ processing _ activity .it is influenced by the unambiguousness of the representation of the output data : therefore , we have a requirement on the representation of the output data that it must be unambiguous , i.e. , the driver understands the priority of the information . finally , _ perception _ is an activity in the _ tics dialog _ that is influenced by the adaptability of the output data representation : the representation should be adapted to different driving situations so that the time for _ perception _ is minimised .such a requirement is currently missing in the specification .for the quantification of the requirements , the quality model can only help if there are metrics defined for measuring the facts .then an appropriate value can be defined for that metric .otherwise , the model must be extended here with a metric , if possible .we can again not describe all necessary quantifications of the instrument cluster specification but provide some examples .the above identified requirement about the appropriateness of the display position can be given a quantification .the specification actually demands that `` the display tolerance [ ] amounts to degrees . ''furthermore , it is stated that `` the angle of deflection of the pointer of the rev meter display amounts to 162 degrees . ''the example of the unambiguous representation of the output data can not be described with some kind of numerical value . however , the specification demands that the engine control light must not be placed in the digital display with lots of other information `` because an own place in the instrument cluster increases its importance '' . for reasons of brevity , we are not able to describe the whole quality requirements elicitation and refinement for the instrument cluster . 
however , the examples show that our approach is applicable to such an automotive system .we observed that we have a clear guidance in eliciting and refining the requirements along the quality model .starting from the stakeholders , their activities down to the influencing system entities is a straight - forward thinking process .moreover , we found that several of the informations needed during the application of our approach was already contained in the specification but not consistently for all its parts .finally , we also noted that we were able to identify several requirements that were not considered in the specification that can have an influence on the relevant activities .although it has been acknowledged that quality requirements are difficult to handle and that they often have been neglected , a well - founded and agreed structuring of those requirements has not been established . in some way , most classifications are related to the iso 9126 standard that , however , does not consider the various activities of the stakeholders .our unique , two - dimensional quality model resolves this problem by explicitly modelling the influences of entities on activities . using the activities of the stakeholders and following these impacts in the opposite direction, we can employ a structured and well - founded process to elicit and refine the quality requirements .moreover , there is a direct connection and traceability to the quality assurance techniques used in the project .those techniques are responsible for assuring that the quality requirements have been fulfilled .the quality model serves as a basis of the quality assurance and thereby allows to relate the results to the requirements .the applicability of the approach was shown in a case study based on a published instrument cluster specification of daimlerchrysler .we were able to show that the approach allows a structured elicitation and refinement of quality requirements using the activities of the stakeholders and their relationships with entities in the system that are documented in a quality model .we found information that could be fitted into our approach but also could identify omissions in the specification .we plan to develop the approach in more detail and to validate it in more case studies .moreover , the quality model itself is continuously extended and improved .this in turn also amends the quality requirements approach .we are grateful to k. buhr , n. heumesser , f. houdek , h. omasreiter , f. rothermel , r. tavakoli , and t. zink for the specification of the instrument cluster and d. mendez for useful comments .k. buhr , n. heumesser , f. houdek , h. omasreiter , f. rothermel , r. tavakoli , and t. zink .daimlerchrysler demonstrator : system specification instrument cluster .http://www.empress-itea.org/deliverables/d5.1_appendix_b_v1.0_public_version.pdf , 2003 .accessed 2008 - 01 - 15 .f. deissenboeck , s. wagner , m. pizka , s. teuchert , and j .- f .girard . an activity - based quality model for maintainability . in _ proc .23rd international conference on software maintenance ( icsm 07 ) _ , pages 184193 .ieee cs , 2007 .j. doerr , d. kerkow , t. koenig , t. olsson , and t. suzuki .non - functional requirements in industry three case studies adopting an experience - based nfr method . in _ proc .13th international conference on requirements engineering ( re05 ) _ , pages 373382 .ieee cs , 2005 .
managing requirements on quality aspects is an important issue in the development of software systems . difficulties arise from expressing them appropriately what in turn results from the difficulty of the concept of quality itself . building and using quality models is an approach to handle the complexity of software quality . a novel kind of quality models uses the activities performed on and with the software as an explicit dimension . these quality models are a well - suited basis for managing quality requirements from elicitation over refinement to assurance . the paper proposes such an approach and shows its applicability in an automotive case study . [ software quality assurance ( sqa ) ]
the amount of information that a system is able to process ( and/or store ) plays an essential role when one tries to quantify the level of `` complexity '' of a system , and indeed often the mutual information [ 1 ] stored in the system ( or a concept derived from it , such as the past - future mutual information ) is used as a measure of its statistical complexity [ 2 ] . over the last decadea number of authors have carried out work towards understanding under what conditions can we expect to maximize the information procesing capabilities of different types of complex systems . for instance , langton and others [ 3,4 ] investigated the behavior of cellular automata ( ca ) , while crutchfield , young and others [ 5 ] have been concerned mainly with iterated function systems and computational complexity in this area .the definitions used for complexity were rather problem dependent , and not surprisingly two main approaches to measuring statistical complexity have been developed over the years , as well as a large number of other `` ad hoc '' methods for describing structure .the first line of work uses information theory [ 6 - 9 ] , whereas the second approach defines complexity using computation theoretic tools [ 5,10 ] . in spite of this model dependence ,the common picture that seemed to emerge from this work was that complex systems were able to show a maximally varied and self - organizative behaviour ( i.e. , maximally complex behaviour ) in the vicinity of sharp phase transitions [ 11 ] .since these transitions often belonged to the class commonly known in statistical mechanics as order - disorder phase transitions , this naturally led to the notion that maximally interesting behaviour of complex systems takes place `` at the edge of chaos '' , in an expression coined by langton [ 3 ] .( note however that the disordered phase does not neccesarily need to be chaotic in the strict sense of the word , i.e. , ergodyc . )the underlying reason was simple and appeling enough , neither very ordered systems with static structures , nor disordered systems in which information can not be persistently stored are capable of complex information processing tasks . the actual verification of the fact that the mutual information ( or definitions of statistical complexity based on other approaches ) had a maximum in the vicinity of the relevant phase transitions were a trickier business though .early results by langton for ca s [ 3,4 ] and by crutchfield [ 5,10 ] for iterated dynamics showing sharp peaks in complexity as a function of the degree of order in the system at what appeared to be phase transitions were subsequently shown to be critically dependent on the particular measure of order choosen [ 2 ] .after this , arnold [ 12 ] showed numerically that the 2-dimensional ising model indeed had a maximum of statistical complexity ( defined through past - future mutual information ) at its order - disorder transition . without wanting to go into the debate of what exactly constitutes a good measure of complexity -a debate often riddled with the specifics of the particular problem at hand-, it would seem clear though that complexity and information must bear a close relationship .we will thus concern ourselves in this paper with the mutual information contained in random boolean networks ( rbn ) [ 13 ] and its behavior as the networks undergo their order - disorder phase transition ( for a view point computational see [ 18 ] ) . 
by using a mean field approximation and assuming markovian behaviour of the automata, we will show both numerically and analytically that the mutual information stored in the network indeed has a maximum at the transition point .random boolean networks ( rbn ) [ 13 ] are systems composed of a number n of automata ( ) with only two states available ( say and for instance ) , each having associated a boolean function of boolean arguments that will be used to update the automaton state at each time step .each automaton will then have associated other automata ( the inputs or vicinity of ) , whose states will be the entries of . that is , the automaton will change its state at each time step according to the rule both and the identity of its inputs are initially assigned to the automaton at random .( in particular , the s are created by randomly generating outputs of value one with a probability , and of value zero with a probability , where is called the bias of the network ) .this initial assignation will be maintained throught the evolution of the system , so we will be dealing with a quenched system . even keeping this assignation fixed ,the number of possible networks that we can form for given values of and is extraordinarily high ( a total of possible networks ) .thus , if we want to study general characteristics of rbn systems we are inevitably led to an statistical approach .one fact that can be observed for all rbn s is that although the number of available states for a network of size grows like , the dynamics of the net separates the possible states into disjoint sets , attractor basins .each basin will lead the system to a different attractor .however , since the number of states available is finite and the quenched system is fully deterministic , we can be sure that the system will at some point retrace its steps in the form of periodic cycles .thus attractors will neccesarily be periodic sets of states .since after a transient any initial state will end up in one attractor or another , their period ( or rather their average period ) will set the typical time scale characterizing an rbn .it has been known for some time now [ 13 ] that rbn s show two different phases separated , for a given value of , by a critical value of , : this behaviour naturally induced the conjecture that at the rbn s undergo a second order phase transition .this conjecture has been prooven correct and some more information about the transition has been gained [ 15 ] .for instance , as we change the value of the critical value at which the transtion takes place also changes and a `` critical line '' appears , as shown in figure 1 . 
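The following is a minimal simulation sketch of such a quenched network: each automaton receives K randomly chosen inputs and a random lookup table whose outputs are 1 with probability p. The network size, run lengths, and the two bias values (one on each side of the transition for K = 3) are illustrative choices; the fraction of automata that keep their state between consecutive steps anticipates the self-overlap introduced below.

```python
import numpy as np

def make_rbn(N, K, p, rng):
    """Quenched wiring and Boolean lookup tables; outputs are 1 with probability p."""
    inputs = rng.integers(0, N, size=(N, K))
    tables = (rng.random((N, 2**K)) < p).astype(np.int8)
    return inputs, tables

def step(state, inputs, tables):
    """Synchronous update: each automaton reads its K inputs and looks up its function."""
    idx = np.zeros(len(state), dtype=np.int64)
    for k in range(inputs.shape[1]):
        idx = (idx << 1) | state[inputs[:, k]]
    return tables[np.arange(len(state)), idx]

rng = np.random.default_rng(0)
N, K = 2000, 3
for p in (0.5, 0.9):                       # for K = 3: disordered at p = 0.5, ordered at p = 0.9
    inputs, tables = make_rbn(N, K, p, rng)
    state = rng.integers(0, 2, size=N).astype(np.int8)
    for _ in range(200):                   # let the transient die out
        state = step(state, inputs, tables)
    overlaps = []
    for _ in range(50):                    # fraction of automata unchanged between steps
        nxt = step(state, inputs, tables)
        overlaps.append(np.mean(state == nxt))
        state = nxt
    print(f"p = {p}: self-overlap ~ {np.mean(overlaps):.3f}")
```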
as was shown in [16], this line corresponds to the condition recovered below from the stability analysis of the mean-field evolution of the self-overlap. in the insets of figure 1, three sets of states of a single network are also shown as we move from the disordered phase to the ordered one by changing the bias, showing a typical order-disorder transition. each set contains 50 consecutive states, with time running upwards along the vertical axis.

since rbn's appear to undergo an order-disorder phase transition, a useful way to characterize the state of the system is its "self-overlap". this is simply defined to be one minus the hamming distance between the state of an automaton at one time step and its state at the next, averaged over all automata and times. let us expand on this. let us suppose that we generate an rbn with a given bias and a random initial condition. we let the system evolve until the transient dies out and we are inside an attractor cycle, and then compute the states of the system for a number of time steps equal to the number of automata in the system. (each experimental point in all figures is an average over different networks with random initial conditions.) let us suppose that we count the fraction of times that an automaton is in the state 1 both at a given time step and at the next, averaged over all automata and time steps. this gives us the "1-state self-overlap". repeating this procedure with the state 0 will then obviously give us the "zero-state self-overlap". the self-overlap is then simply the sum of these two quantities. on the other hand, we can analogously define the joint probabilities of finding an automaton in state 1 at one time step and in state 0 at the next, and vice versa. note that by symmetry these two must be equal, even when the bias is not one half, since they are joint probability distributions, not the conditional probabilities of transitioning from one state to the other or vice versa. it is then fairly easy to find the equation that describes the evolution of the self-overlap: in a mean-field approximation one obtains a closed map giving the self-overlap at one time step in terms of its previous value, the bias and the connectivity of the net. this map forces the self-overlap to evolve towards a fixed point that depends on the bias and the connectivity. the stability analysis of the map around the frozen solution gives the critical line separating the ordered phase from the disordered one. this is shown in figure 2, where the evolution of the self-overlap given by the mean-field map (solid line) is plotted against the results of the numerical simulations (dots). the evolution lasts for as long as it takes the transient to die out, and once the system is in the attractor cycle the self-overlap takes on its fixed-point value (from now on, "self-overlap" refers to this fixed-point value).

let us now obtain analytical expressions for the joint probabilities from our knowledge of the self-overlap and the normalization conditions. by definition the two diagonal joint probabilities add up to the self-overlap, and by normalization all four add up to one, so each of the two off-diagonal probabilities equals one half of one minus the self-overlap. we still have two more normalization conditions, derived from the fact that the probability of finding a mean-field automaton in the state 1 is the bias and in the state 0 the complementary probability; these fix the two diagonal joint probabilities as well. figure 3 shows the analytical expressions for the joint probabilities (solid lines) together with the results from the numerical simulations (dots). so far we have simply approximated the whole network by a set of mean-field automata. however, since the joint probabilities are now known, we can calculate the conditional probabilities of finding an automaton in a given state one time step after it was in another. if we assume that our mean-field automata are markovian, these conditional probabilities completely characterize their transition probabilities [1].
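to make the mean-field picture concrete, the sketch below (python again; a hedged illustration rather than the authors' code) iterates the standard annealed-approximation map for the self-overlap, a' = a**k + (1 - a**k) * (1 - 2*p*(1 - p)), whose linear stability analysis around the frozen solution a = 1 yields the critical line 2*k*p*(1 - p) = 1 commonly quoted for rbn's. from the fixed-point overlap and the bias it then builds the joint probabilities described above and, anticipating the next section, evaluates the one-automaton entropy, the markov uncertainty and the past-future mutual information. the connectivity k = 3 is assumed purely for illustration:

```python
import numpy as np

def overlap_fixed_point(k, p, a0=0.999, tol=1e-12):
    """iterate the annealed mean-field map for the self-overlap a.
    the frozen solution a = 1 is reached only when it is stable,
    i.e. when 2*k*p*(1 - p) < 1 (the critical line)."""
    a = a0
    while True:
        a_new = a ** k + (1.0 - a ** k) * (1.0 - 2.0 * p * (1.0 - p))
        if abs(a_new - a) < tol:
            return a_new
        a = a_new

def information_measures(k, p):
    """joint probabilities, one-automaton entropy H, markov uncertainty h,
    and past-future mutual information I = H - h."""
    a = overlap_fixed_point(k, p)
    p01 = p10 = (1.0 - a) / 2.0            # off-diagonal joint probabilities
    p11 = p - p01                          # state 1 at two consecutive steps
    p00 = (1.0 - p) - p01                  # state 0 at two consecutive steps
    joint = np.array([[p00, p01], [p10, p11]])
    marg = np.array([1.0 - p, p])          # one-automaton distribution
    H = -sum(q * np.log2(q) for q in marg if q > 0.0)
    h = -sum(joint[i, j] * np.log2(joint[i, j] / marg[i])
             for i in range(2) for j in range(2) if joint[i, j] > 0.0)
    return H, h, H - h

# example for k = 3: the mutual information peaks at the critical bias
# p_c = (1 + sqrt(1 - 2/3)) / 2 ~ 0.789, given by 2*k*p*(1 - p) = 1
for p in (0.70, 0.789, 0.90):
    H, h, I = information_measures(3, p)
    print(f"p = {p:.3f}   H = {H:.3f}   h = {h:.3f}   I = {I:.3f}")
```

with these conventions the ordered phase gives h = 0 and I = H, while deep in the disordered phase h approaches H and I becomes small, so the mutual information is largest near the critical bias, in line with the behaviour described in the following section.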
therefore, the transition matrix for the mean-field markovian automaton is built from the conditional probabilities just obtained: its entries are the probabilities of transitioning from one state to the other (or remaining in it) in a single time step, and each of its rows is properly normalized. thus, we have now reduced the whole network to a set of mean-field automata evolving independently under markovian conditions, all the effects of their interactions being encoded in this transition matrix. to compute the past-future mutual information stored in the system we only have to apply information theory [2,17]. the one-automaton entropy is simply the shannon entropy of the single-automaton state distribution, whereas the shannon uncertainty associated to the markovian evolution of this automaton is the entropy of the transition probabilities averaged over the current state; the past-future mutual information is then the difference between the one-automaton entropy and this uncertainty. figure 4 shows the analytical expressions (solid lines) as well as the experimental results from the simulations for both the one-automaton entropy (dots) and the shannon uncertainty (triangles). note how the uncertainty is always smaller and decays faster than the one-automaton entropy. in particular, above the critical bias of our net the uncertainty vanishes while the one-automaton entropy does not; thus in the ordered phase the mutual information becomes simply the one-automaton entropy. given this discussion, it is obvious that the mutual information that can be stored in the system has to have a maximum precisely at the critical bias. this is shown in figure 4, where the mutual information is plotted against the bias (again, both the analytical expression above and the experimental results). finally, in figure 5 the mutual information is plotted against the one-automaton entropy. from the point of zero entropy, corresponding to a fully biased network, up to the point corresponding to the critical value, we see that the curve is just a straight line of unit slope. this is as it should be, since, as we just saw, the uncertainty is zero beyond the critical bias, and in this region the mutual information equals the one-automaton entropy. precisely at the critical point the mutual information reaches its maximum, and beyond this point it starts to decay non-linearly as the shannon uncertainty switches on.

by using a mean-field approximation and a markovian ansatz for the evolution of an rbn, we have been able to show, with a few back-of-the-envelope calculations, that the past-future mutual information contained in an rbn reaches a maximum at the point at which the system undergoes its order-disorder phase transition. also, in figure 5 we can see how the mutual information, as a function of the amount of disorder present in the system (the one-automaton entropy), reaches a maximum at the point that corresponds to the phase transition. similar results obtained in [3,4] (for ca's) and in [5] (for the symbolic dynamics of the logistic map) were criticized by li [2] on the grounds that the peak was an artifact created by the particular quantity chosen to measure the disorder of the system. thus, for instance, li criticizes langton by arguing that since in the ordered phase the mutual information equals the one-automaton entropy, it is only natural for him to find a straight line as the boundary of his plot of complexity against disorder (as we do). li surmises that if, instead of this quantity, one chooses the shannon uncertainty of the source as the measure of the disorder of the system, the left side of the plot would no longer be a straight line, and the maximum of the mutual information would not be reached for intermediate values of the disorder. rather, in that plot the maximum of the mutual information falls on the vertical axis, since it is reached where the uncertainty is zero, and the mutual information decreases monotonically as the uncertainty increases. thus, the intuitive picture of the relationship between complexity and disorder proposed by langton and others (i.e.,
a unimodal relationship between complexity and disorder, with complexity reaching its maximum at intermediate values of the latter) would no longer seem to be correct. this li takes as support for his conclusion that the dependence of the mutual information on the amount of disorder in the system can take many varied forms. we think that the argument just presented, although trivially correct, fails to capture the essence behind the idea of a unimodal dependence between the mutual information and the amount of disorder in the system. we should first note that the mutual information is not a single-valued function of the uncertainty. rather, since the uncertainty vanishes throughout the ordered phase, at zero uncertainty the mutual information grows from zero (corresponding to a fully biased network) up to its maximum value (corresponding to the critical point). that is, we have not got rid of the straight line in the graph; we have merely turned it into a vertical line placed at zero uncertainty. note, however, that the maximum of the mutual information would still be reached at the transition point between the two phases of the system. this is in fact the central point of the issue at hand. the postulated unimodal dependence between mutual information (or complexity) and disorder rests on the assumption that, as we vary the order parameter, the system goes from an ordered phase into a disordered one, with the mutual information attaining its maximum value in neither phase but precisely at the transition point between them. if the quantity chosen as the order parameter varies over both phases, then the mutual information will reach this maximum for intermediate values of the parameter. if, on the other hand, a whole phase of the system is mapped into a single value of the order parameter, then quite obviously the maximum will sit at one of the edges of the graph. thus one could say that the essence of "unimodality" lies not in the mutual information reaching its maximum for intermediate values of the order parameter, but in that maximum being located at the transition point between the ordered and the disordered phases.

the authors would like to thank juan pérez mercader and r. v. solé for help. this work has been supported by the centro de astrobiología.
during the last few years, an area of active research in the field of complex systems has been that of their information-storing and information-processing abilities. common opinion has it that the most interesting behaviour of these systems is found "at the edge of chaos", which would seem to suggest that complex systems may have inherently non-trivial information-processing abilities in the vicinity of sharp phase transitions. a comprehensive, quantitative understanding of why this is the case is however still lacking. indeed, even "experimental" (i.e., often numerical) evidence that this is so has been questioned for a number of systems. in this paper we investigate, both numerically and analytically, the behavior of random boolean networks (rbn's) as they undergo their order-disorder phase transition. we use a simple mean-field approximation to treat the problem and, without loss of generality, we concentrate on a particular value of the connectivity of the system. in spite of the simplicity of our arguments, we are able to reproduce analytically the amount of mutual information contained in the system as measured from numerical simulations.